Why AI might be the most effective weapon we have to fight COVID-19 – The Next Web

The novel coronavirus (COVID-19) may not be the most deadly disease to hit our planet in recent decades, but it is one of the most contagious. In a little over three months since the virus was first spotted in mainland China, it has spread to more than 90 countries, infected more than 185,000 people, and taken more than 3,500 lives.

As governments and health organizations scramble to contain the spread of coronavirus, they need all the help they can get, including from artificial intelligence. Though current AI technologies are far from replicating human intelligence, they are proving to be very helpful in tracking the outbreak, diagnosing patients, disinfecting areas, and speeding up the process of finding a cure for COVID-19.

Data science and machine learning might be two of the most effective weapons we have in the fight against the coronavirus outbreak.

Just before the turn of the year, BlueDot, an artificial intelligence platform that tracks infectious diseases around the world, flagged a cluster of unusual pneumonia cases happening around a market in Wuhan, China. Nine days later, the World Health Organization (WHO) released a statement declaring the discovery of a novel coronavirus in a hospitalized person with pneumonia in Wuhan.

BlueDot uses natural language processing and machine learning algorithms to peruse information from hundreds of sources for early signs of infectious epidemics. The AI looks at statements from health organizations, commercial flights, livestock health reports, climate data from satellites, and news reports. With so much data being generated on coronavirus every day, the AI algorithms can help home in on the bits that can provide pertinent information on the spread of the virus. It can also find important correlations between data points, such as the movement patterns of the people who are living in the areas most affected by the virus.
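BlueDot's actual models are proprietary, but the core idea described above (scan many text sources and flag unusual concentrations of disease-related language tied to a location) can be sketched in a few lines of Python. Everything here, from the term list to the `flag_locations` helper and the sample reports, is a hypothetical illustration, not BlueDot's code:

```python
from collections import Counter

# Hypothetical illustration of outbreak-signal scanning: count how often
# disease-related terms appear in reports tied to each location, and flag
# locations whose score crosses a threshold.

OUTBREAK_TERMS = {"pneumonia", "outbreak", "unexplained", "cluster", "fever"}

def flag_locations(reports, threshold=3):
    """Score each location by outbreak-related term hits and flag hot spots."""
    scores = Counter()
    for location, text in reports:
        hits = sum(1 for word in text.lower().split()
                   if word.strip(".,") in OUTBREAK_TERMS)
        scores[location] += hits
    return [loc for loc, score in scores.items() if score >= threshold]

reports = [
    ("Wuhan", "Cluster of unexplained pneumonia cases near market"),
    ("Wuhan", "Hospitals report more pneumonia patients with fever"),
    ("Lyon", "Seasonal flu within normal range"),
]
print(flag_locations(reports))  # ['Wuhan']
```

A real system would of course use trained language models, source weighting, and flight and climate data rather than a bare keyword count; the sketch only shows the shape of the signal-extraction step.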

The company also employs dozens of experts who specialize in a range of disciplines including geographic information systems, spatial analytics, data visualization, computer sciences, as well as medical experts in clinical infectious diseases, travel and tropical medicine, and public health. The experts review the information that has been flagged by the AI and send out reports on their findings.

Combined with the assistance of human experts, BlueDot's AI can not only predict the start of an epidemic, but also forecast how it will spread. In the case of COVID-19, the AI successfully identified the cities to which the virus would spread after it surfaced in Wuhan. Machine learning algorithms studying travel patterns were able to predict where the people who had contracted coronavirus were likely to travel.

Coronavirus (COVID-19) (Image source: NIAID)

You have probably seen the COVID-19 screenings at border crossings and airports. Health officers use thermometer guns and visually check travelers for signs of fever, coughing, and breathing difficulties.

Now, computer vision algorithms can perform the same screening at large scale. An AI system developed by Chinese tech giant Baidu uses cameras equipped with computer vision and infrared sensors to predict people's temperatures in public areas. The system can screen up to 200 people per minute and detect their temperature to within 0.5 degrees Celsius. The AI flags anyone who has a temperature above 37.3 degrees. The technology is now in use in Beijing's Qinghe Railway Station.
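The thresholding logic behind such a system is simple to illustrate. The 37.3-degree cutoff and 0.5-degree sensor tolerance come from the article; the `screen` function and its "recheck" band for borderline readings are assumptions added for this sketch, not a description of Baidu's implementation:

```python
FEVER_THRESHOLD_C = 37.3   # cutoff reported for the Baidu system
SENSOR_TOLERANCE_C = 0.5   # stated detection accuracy

def screen(readings):
    """Flag travelers whose reading exceeds the fever cutoff.

    Readings within sensor tolerance below the cutoff are marked for a
    manual re-check rather than an automatic pass (a design choice
    assumed here, not described in the article).
    """
    results = {}
    for traveler, temp_c in readings:
        if temp_c >= FEVER_THRESHOLD_C:
            results[traveler] = "flag"
        elif temp_c >= FEVER_THRESHOLD_C - SENSOR_TOLERANCE_C:
            results[traveler] = "recheck"
        else:
            results[traveler] = "pass"
    return results

print(screen([("A", 36.6), ("B", 37.0), ("C", 37.8)]))
# {'A': 'pass', 'B': 'recheck', 'C': 'flag'}
```

The hard part of the deployed system is not this decision rule but the computer vision that localizes faces and maps infrared pixels to body temperature at 200 people per minute.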

Alibaba, another Chinese tech giant, has developed an AI system that can detect coronavirus in chest CT scans. According to the researchers who developed the system, the AI has 96-percent accuracy. The AI was trained on data from 5,000 coronavirus cases and can perform the test in 20 seconds, as opposed to the 15 minutes it takes a human expert to diagnose patients. It can also tell the difference between coronavirus and ordinary viral pneumonia. The algorithm can give a boost to medical centers that are already under a lot of pressure to screen patients for COVID-19 infection. The system is reportedly being adopted in 100 hospitals in China.

A separate AI developed by researchers from Renmin Hospital of Wuhan University, Wuhan EndoAngel Medical Technology Company, and the China University of Geosciences purportedly shows 95-percent accuracy on detecting COVID-19 in chest CT scans. The system is a deep learning algorithm trained on 45,000 anonymized CT scans. According to a preprint paper published on medRxiv, the AI's performance is comparable to that of expert radiologists.
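A single "accuracy" figure can hide how a diagnostic test trades missed infections against false alarms, which is why studies also report sensitivity and specificity. The confusion-matrix counts below are invented for illustration and are not taken from either study:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard metrics for a binary diagnostic test from its confusion matrix."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,   # fraction of all calls that are correct
        "sensitivity": tp / (tp + fn),   # fraction of infected patients caught
        "specificity": tn / (tn + fp),   # fraction of healthy patients cleared
    }

# Illustrative numbers only: 1,000 scans, evenly split between classes.
m = diagnostic_metrics(tp=480, fp=20, tn=480, fn=20)
print(m)  # {'accuracy': 0.96, 'sensitivity': 0.96, 'specificity': 0.96}
```

The same 96-percent accuracy could also arise from, say, high specificity but lower sensitivity, which matters clinically when a missed COVID-19 case sends an infectious patient home.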

One of the main ways to prevent the spread of the novel coronavirus is to reduce contact between infected patients and people who have not contracted the virus. To this end, several companies and organizations have engaged in efforts to automate some of the procedures that previously required health workers and medical staff to interact with patients.

Chinese firms are using drones and robots to perform contactless delivery and to spray disinfectants in public areas to minimize the risk of cross-infection. Other robots are checking people for fever and other COVID-19 symptoms and dispensing free hand sanitizer foam and gel.

Inside hospitals, robots are delivering food and medicine to patients and disinfecting their rooms to obviate the need for the presence of nurses. Other robots are busy cooking rice without human supervision, reducing the number of staff required to run the facility.

In Seattle, doctors used a robot to communicate with and treat patients remotely to minimize exposure of medical staff to infected people.

At the end of the day, the war on the novel coronavirus is not over until we develop a vaccine that can immunize everyone against the virus. But developing new drugs and medicine is a very lengthy and costly process. It can cost more than a billion dollars and take up to 12 years. That's the kind of timeframe we don't have as the virus continues to spread at an accelerating pace.

Fortunately, AI can help speed up the process. DeepMind, the AI research lab acquired by Google in 2014, recently declared that it has used deep learning to find new information about the structure of proteins associated with COVID-19, a process that would otherwise have taken many more months.

Understanding protein structures can provide important clues to the coronavirus vaccine formula. DeepMind is one of several organizations engaged in the race to unlock the coronavirus vaccine. It has leveraged the results of decades of machine learning progress as well as research on protein folding.

"It's important to note that our structure prediction system is still in development and we can't be certain of the accuracy of the structures we are providing, although we are confident that the system is more accurate than our earlier CASP13 system," DeepMind's researchers wrote on the AI lab's website. "We confirmed that our system provided an accurate prediction for the experimentally determined SARS-CoV-2 spike protein structure shared in the Protein Data Bank, and this gave us confidence that our model predictions on other proteins may be useful."

Although it's too early to tell whether we're headed in the right direction, the efforts are commendable. Every day saved in finding the coronavirus vaccine can save hundreds or thousands of lives.

This story is republished from TechTalks, the blog that explores how technology is solving problems and creating new ones.

Published March 21, 2020 17:00 UTC


In science, it's better to be curious than correct – The Conversation CA

I'm a geneticist. I study the connection between information and biology, essentially what makes a fly a fly and a human a human. Interestingly, we're not that different. I've been a professional geneticist since the early 1990s. I'm reasonably good at this, and my research group has done some really good work over the years.

But one of the challenges of the job is coming to grips with the idea that much of what we think we know is in fact wrong. Sometimes we're just off a little, and we try to get a little closer to the answer. At some point, though, it's likely that we're just flat-out wrong in some aspect.

We can't know when we're wrong, but it's important to remain open-minded and adaptable so we can learn from our mistakes, especially because sometimes the stakes can be incredibly high, with lives on the line (more on this later).

In the late 1980s, cattle started wasting away. In the late stages of what was slowly recognized as a disease, cattle began acting in such a bizarre manner that their condition, bovine spongiform encephalopathy, became known as mad cow disease. Strikingly, the brains of the cattle were full of holes (hence "spongiform") caked with plaques of proteins clumped together; these were proteins that were found in the brains of healthy cattle, but now they had an unnatural shape.

Proteins are long chains that fold into specific, complex shapes, but the proteins in the cattle's brains were misfolded. Some time after, people started dying with the same symptoms, and a connection was made between eating infected cattle and contracting the disease. Researchers determined that the culprit was consumption of brain and spinal tissue, the only tissue that showed the physical effects of infection.

One of the challenges to explaining mad cow disease was the length of time from infection to disease to death. Diseases, we knew, were transmitted by viruses and bacteria, but no scientist could isolate one that would explain this disease. Further, no one knew of other viruses or bacteria whose infection would take this long to lead to death. Science leaned toward assuming a viral cause, and careers and reputations were built on finding the slow virus.

In the late 1980s, a pair of British researchers suggested that perhaps the misfolded proteins in the plaques were key. This proposal was soon championed by Stanley Prusiner, a young American researcher early in his career. The idea was simple: the misfolded protein was both the result and the cause of the infection.

The misfolded protein plaques killed brain tissue and caused correctly folded versions of the proteins to misfold. Prusiner's hypothesis was straightforward, but it didn't fit the way scientists understood diseases to work. Diseases are transmitted as DNA (and in rare cases, RNA) by viruses or bacteria; they are not transmitted by protein folding.

For holding this protein-based view of infection, Prusiner was literally and metaphorically shouted out of the room. Then he showed, experimentally and elegantly, that the misfolded proteins, which he called prions, were the cause of these diseases. For this accomplishment, he was awarded the 1997 Nobel Prize in medicine.

We now know that prions are responsible for a series of diseases in humans and other animals, including chronic wasting disease, whose spread poses a serious threat to deer and elk in Ontario. And the familiar advice to cook burgers thoroughly also comes down to these proteins: if you heat prions sufficiently, they lose their unnatural shape (all shape, actually) and they can't transmit the disease.

So in this case, the information necessary for disease transmission is carried in the shape of the protein, not in the genetic code of an infecting virus or bacterium. This fact is why this case specifically speaks to me as a geneticist. All my career, I've been trained to look for answers in DNA sequences. Prions remind me that sometimes really interesting answers are not where we expect them to be.

Where does this leave us? To me, the take-home message is that we need to remain skeptical but curious: examine the world around us with open eyes, and be ready to challenge and question our assumptions. We also shouldn't ignore what is in front of us simply because it doesn't fit our understanding of the world.

Climate change, for example, is real. It's another example of why it's important to be open to being wrong, and of the need to try to get it right. Medical science only started controlling mad cow disease after we understood the role of prions, and the years of denial cost an untold number of lives.

Similarly, our global refusal to accept the massive climate change around us, and our obvious role in it, is leading us into one weather-based disaster after another, and all the loss of life associated with these disasters.

I've spent a lot of time in my career putting together models of how the biological world works, but I know that pieces of these models are wrong. I can almost guarantee you that I have something as fundamentally wrong as those prion-deniers; I just don't know what it is. Yet.

But the important thing isn't to be right. Instead, it is to be open to seeing when you are wrong.



Got a graphics card? Put it to work fighting the coronavirus – TechSpot

The big picture: If you've got some spare GPU horsepower, now is the time to put it to good use; by simulating molecular dynamics, you can contribute to a dataset that could help researchers find a cure for the coronavirus.

The Pande Lab at Stanford University has been running the distributed computing network Folding@home for nearly twenty years, using the idle processing resources of home personal computers for disease research. Today the lab has put its network to work to better understand the coronavirus.

Folding@home combines the power of thousands of individual home systems and treats each of them as a node in a supercomputer. Each node runs simulations of molecular dynamics, such as protein folding, that feed into computational drug design. In the end, all the results are combined into a dataset of interactions that is made available to researchers. According to Wikipedia, Folding@home is one of the world's fastest computing systems, with a speed of approximately 98.7 petaFLOPS as of March 2020.
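The node model described above can be sketched as a split-compute-merge loop: a coordinating server divides a large simulation into independent work units, volunteer nodes each compute one, and the results are merged into a single dataset. The code below is a toy stand-in (real Folding@home work units are molecular dynamics trajectories, not sums, and the function names are invented for this sketch):

```python
# Toy sketch of a distributed work-unit pipeline. Each "unit" here is a
# slice of atom indices and the "computation" is a trivial sum, standing
# in for an expensive, independent simulation chunk.

def split_into_units(atoms, unit_size):
    """Divide a list of atom indices into independent work units."""
    return [atoms[i:i + unit_size] for i in range(0, len(atoms), unit_size)]

def compute_unit(unit):
    """Stand-in for a volunteer node running one work unit."""
    return sum(unit)

def merge(results):
    """Combine per-node results into one dataset."""
    return sum(results)

atoms = list(range(100))
units = split_into_units(atoms, unit_size=25)
merged = merge(compute_unit(u) for u in units)
print(len(units), merged)  # 4 4950
```

The design works because the units are independent: any node can fail or finish late without blocking the others, which is what lets a crowd of unreliable home PCs behave like one supercomputer.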

I personally signed up to fold this morning, and my system just finished computing one work unit. Despite a detailed explanation, I understood nothing about what my computer was doing, apart from the fact that the simulation involved exactly 62,227 atoms. The Folding@home project's reputation precedes it, though, so while I may not know how this is beneficial in the fight against the coronavirus, the project's website lists academic publications using data from the network's research into cancer, Parkinson's, Alzheimer's and other conditions, along with a list of very grateful researchers.

Do note, folding is not for every system; it's pretty intensive. There are three performance tiers, though as far as I can tell, medium and full do the same thing: both pin my hexacore processor at 100 percent utilization and my RTX graphics card at about 40 percent. The light setting reduces the processor's load to about 40 percent and seems to alternate between using the processor and the graphics card. Memory and storage utilization are low regardless of setting, and the process runs offline once a small initial file is downloaded.

There are no issues with system slowdown, however. The program can be set to turn on automatically when the system is idle and to pause upon user activity, although even at full power my system hasn't slowed down at all while writing this article; folding is designed to be an adaptable workload that scales down its utilization as other applications require resources. And of course, you can just turn it off through a simple browser interface.

I'm going to continue to run the software in the hope that it makes a difference. If you'd like to do so as well, you can download it from the Folding@home website.


What is Biophysical Analysis? – The John Innes Centre

Clare Stevenson runs the Biophysical Analysis facility at the John Innes Centre.

Recently, she explained why she is passionate about increasing the use of these biophysical techniques and making the science more accessible to a wider range of users.

The Biophysical Analysis (BA) facility has state-of-the-art instrumentation for looking at the structures of molecules, and it enables scientists to observe and measure the strength of the interactions between biomolecules.

Biomolecular interactions are central to biochemistry, and an understanding of these interactions can aid our knowledge across a broad range of science; the biomolecules we look at may be proteins, DNA, RNA, small molecules or drugs.

We recently added a third technique to our biophysical service: a Circular Dichroism (CD) spectrophotometer, maintained and run by Julia Mundy.

The CD spectrophotometer uses a xenon light source to collect CD data on biological macromolecules in solution at near- and far-UV wavelengths. This technique is commonly used to look at the folding of proteins and provide information on their secondary structure, and it can detect changes in structure during protein-protein or protein-ligand binding events. Julia runs samples for scientists as a service, or will train users to use the equipment themselves.

In addition to the CD, the BA facility has two complementary techniques for studying biomolecular interactions: Surface Plasmon Resonance (SPR) and Isothermal Titration Calorimetry (ITC).

In Surface Plasmon Resonance (SPR), one of the biomolecules you are investigating is attached to a surface and the other biomolecule is flowed over it. If there is an interaction, a response is observed. That interaction can then be watched on the computer in real time, and you see the biomolecules binding and unbinding. Once you know that an interaction occurs, you can do further measurements to determine how strong or weak it is.

The SPR instrument is highly automated, and once an experiment is optimised it can even be left testing multiple interactions while the scientist is at home sleeping. Recently I have been working with Dr Tung Le, looking at protein-DNA interactions. Once the experiment is planned, it can be run quickly, and within 20 minutes we can see if binding is happening. Tung's group can easily screen many different protein samples and DNA sequences in an automated manner, getting the results the next day.
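The binding and unbinding watched on screen is, in the simplest case, fitted with a 1:1 (Langmuir) model, in which the steady-state response at each analyte concentration reports the dissociation constant KD. The sketch below assumes that model; the Rmax and KD values are hypothetical, and real SPR data often need more elaborate fitting:

```python
# Minimal 1:1 Langmuir binding sketch (assumed model, illustrative values).
# At equilibrium the SPR response is R = Rmax * C / (C + KD), so titrating
# the analyte concentration C traces out a saturation curve that yields KD.

def equilibrium_response(conc, kd, rmax=100.0):
    """Steady-state SPR response (in response units) for analyte at `conc` M."""
    return rmax * conc / (conc + kd)

kd = 1e-8  # 10 nM, a hypothetical affinity
for conc in (1e-9, 1e-8, 1e-7):
    print(f"{conc:.0e} M -> {equilibrium_response(conc, kd):.1f} RU")
```

The useful landmark is that at C = KD the response is exactly half of Rmax, which is why screening a concentration series around the expected KD, as in the automated runs described above, pins down the affinity.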

Weve also recently been successful in obtaining some funding which will allow us to purchase a new instrument that will enhance our SPR capability even further.

Isothermal Titration Calorimetry (ITC) is a complementary technique to SPR, but rather than requiring one biomolecule to be immobilised, it has one in a cell and one in a syringe. The molecule in the syringe is injected into the one in the cell, and if binding occurs, the heat given out (exothermic) or taken in (endothermic) can be measured.

Changes in heat occur when molecules bind, and the ITC can measure these tiny changes with high sensitivity. This information can tell us whether the biomolecules interact and, if they do, how strong the interaction is. Plus, knowing the heat changes in the interaction can also give us a clue to how the molecules are binding.
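For a simple 1:1 interaction (an assumed model, as in the SPR sketch above), the heat measured after each injection tracks the amount of new complex formed, which follows from solving the binding equilibrium. The concentrations and Kd below are illustrative only:

```python
import math

# Single-site binding sketch: for protein at total concentration Pt and
# ligand at Lt with dissociation constant Kd, the complex [PL] solves
# [PL]^2 - (Pt + Lt + Kd)*[PL] + Pt*Lt = 0 (the physical root is the
# smaller one, so [PL] never exceeds either total concentration).

def complex_conc(pt, lt, kd):
    """Concentration of bound complex for a 1:1 interaction (all in M)."""
    b = pt + lt + kd
    return (b - math.sqrt(b * b - 4 * pt * lt)) / 2

# The heat of one injection is proportional to the *new* complex formed
# (enthalpy per mole times cell volume); values here are illustrative.
pt, kd = 10e-6, 1e-6           # 10 uM protein, 1 uM Kd
prev = 0.0
for lt in (2e-6, 4e-6, 6e-6):  # cumulative ligand after each injection
    pl = complex_conc(pt, lt, kd)
    print(f"Lt={lt:.0e} M: newly bound {pl - prev:.2e} M")
    prev = pl
```

Fitting the shape of this per-injection heat curve is what lets ITC report the binding enthalpy and stoichiometry as well as the affinity, the extra thermodynamic detail the text alludes to.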

The two techniques are complementary but have different advantages and disadvantages. As a facility, we speak to the scientists and recommend the best technique for the desired experiment, and we spend time training users, troubleshooting projects and analysing results.

We also run regular training courses where people interested in all three techniques can come and learn what is involved and what they need to understand before putting on their lab coat.

Our BA facility is available for anyone to book and use.

Predominantly we work with John Innes Centre scientists, but we have also worked with The Sainsbury Laboratory, University of East Anglia and external companies like Leaf Expression Systems. If you want to measure any interactions, we can help, wherever you are.

Our facility can run 24-hours-a-day, 365-days-a-year, thanks to being able to automate much of the process, which in turn means we have the instrument capacity to take on more than we are currently doing.

I enjoy my job and my career has had lots of highlights.

I am particularly proud of a method that I developed to study protein-DNA interactions by SPR. This is our Reusable DNA Capture Technique (ReDCaT), which enables DNA to be attached to a surface and easily removed. This means that the SPR instrument can be used to measure the interaction of multiple proteins with many DNA sequences in a high-throughput and automated manner.

I was always good at science and maths at school, but I think my parents expected me to become a doctor, rather than a scientist. I come from a medical family, full of doctors and nurses, so it was sort of assumed I would follow the family trade.

However, I was a bit of a rebellious teenager, so I found myself applying to do Biochemistry at Liverpool Polytechnic (now Liverpool John Moores University), where I realised that biochemistry was a fascinating subject and I wanted to learn more.

My degree included a year in industry, and I really enjoyed working in the lab. After graduation I found a job with a rival company, where I had a lot of fun and learnt a lot working in the area of drug metabolism.

Seeking a new challenge, in 1996 I applied for a job as a Research Technician for Professor David Lawson. His passion for solving the three-dimensional structures of proteins using crystals was inspiring, and it encouraged me to take a step sideways. I must admit I am hugely grateful to David, because he took a bit of a chance on me: when I first came, I had only done a little bit of protein work and knew very little about protein crystallography. However, I learnt quickly and had excellent technical skills, so I was able to become competent at growing protein crystals and solving structures.

I realised that in academia having a PhD is important (although I believe it is your competency and skills, rather than the qualification, that really matter), and I was lucky to have the support of the John Innes Centre to complete my PhD in 2007. It took over seven years of part-time study while I was working full time, and I also had a baby.

I solved several structures as part of my PhD, but I also had to study protein-DNA interactions and learn how to use SPR. The more I did it, the more I enjoyed it, and over time I became the most experienced person at the John Innes Centre in the technique, ending up training and advising others.

Over time SPR became the first technique in the facility, which then expanded to include ITC and CD. I now manage the facility, but I also continue working with David on the Protein Crystallography Facility.

There was, and I think remains, a feeling that biophysics is quite pedantic and difficult, but with the right training and support the machines are actually relatively easy to use. I love working with students and postdocs, showing them the techniques and designing their experiments with them.

Running a facility means every day is different and sometimes hectic, but I feel lucky to work with so many great scientists and to help make their experiments happen.


Thermodynamic probes of instability: application to therapeutic proteins – European Pharmaceutical Review

Developing a stable therapeutic protein formulation requires an intimate knowledge of the protein and its physical and chemical properties. In this article, Bernardo Perez-Ramirez and Robert Simler discuss the thermodynamic consequences that low temperature can have on the aggregation tendencies of a protein.

Proteins are dynamic entities, constantly adopting different partially-folded states as a function of temperature and other solution variables. These variables dictate the standard free energy differences between the native, unfolded and partially-folded, aggregation-prone states that lead to oligomerisation. As a result, not all oligomerisation events in proteins are alike. Altering variables such as temperature, pH, salt and ligands could induce a protein to aggregate as a consequence of unnatural folding, balancing the thermodynamically unfavourable interactions between solvent and exposed hydrophobic residues. In the same way, these variables may induce a protein to self-associate, mostly in the native state, to counteract unfavourable interactions with the solvent. Thus cold instability, even without inducing cold denaturation, could destabilise the native state of a protein, making it more prone to aggregation events.
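The link between standard free energy and state populations can be made concrete for a two-state model: the equilibrium constant is K = exp(-ΔG/RT), so the free energy gap between states sets how much of the higher-energy, aggregation-prone species is present. The sketch below assumes a constant ΔG, which deliberately ignores the curvature of the protein stability curve that actually produces cold instability; the numbers are illustrative:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def unfolded_fraction(delta_g_kj, temp_k):
    """Equilibrium fraction of the higher-energy state in a two-state
    model, where delta_g_kj = G_unfolded - G_native in kJ/mol."""
    k_eq = math.exp(-delta_g_kj * 1000 / (R * temp_k))  # K = [U]/[N]
    return k_eq / (1 + k_eq)

# Illustrative: a protein with 20 kJ/mol stability at 25 C vs 4 C.
for t in (298.0, 277.0):
    print(f"T={t} K: unfolded fraction = {unfolded_fraction(20.0, t):.2e}")
```

Even at these tiny equilibrium fractions, a rare excursion to a partially-folded state can seed aggregation, which is why the free-energy balance the article describes matters for formulation.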


2 tricked-out pies to be thankful for: pear with cranberries and pumpkin with ginger praline – The Gazette

By JeanMarie Brownson, Chicago Tribune

Homemade pie fillings prove easy; crust, not so much. Practice makes perfect, though: with every pie, our skills improve. It's an acquired art to turn out flaky, beautiful crust. My mother regularly reminds us of her early crust adventures, many of which ended in the garbage can. No worries, she says; the crust ingredients cost far less than the filling.

So, when time allows, we practice making pie crust, hearing her voice remind us to use a gentle hand when gathering the moist dough into a ball and later when rolling it out. Mom always uses a floured rolling cloth on the board and on the rolling pin; these days, I prefer to roll between two sheets of floured wax paper. We factor in plenty of time to refrigerate the dough so it's at the perfect stage for easy rolling. The chilly rest also helps prevent shrinkage in the oven.

I've been using the same pie dough recipe for years now. I like the flakiness I get from vegetable shortening and the flavor of butter, so I use some of each fat. A bit of salt in the crust helps balance sweet fillings. The dough can be made a few days in advance. Soften it at room temperature until pliable enough to roll, but not so soft that it sticks to your work surface.

Of course, when pressed for time, I substitute store-bought frozen crusts. Any freshly baked pie, with or without a homemade crust, is better than most store-bought versions.

I read labels to avoid ingredients I don't want to eat or serve my family. I'm a fan of Trader Joe's ready-to-roll pie crusts, sold in freezer cases, both for their clean ingredient line and their baked flavor. The 22-ounce box contains two generous crusts (or one bottom crust and one top or lattice). Other brands, such as Simple Truth Organics, taste fine, but at 15 ounces for two crusts, are best suited for smaller pies. The Wewalka brand sells one 9-ounce crust that's relatively easy to work with. Always thaw according to package directions, and use a rolling pin or your hands to repair any rips that occur when unwrapping.

Double-crust fruit pies challenge us to get the thickener amount just right so the pie is not soupy when cut. I'm a huge fan of instant tapioca in most fruit pies because it thickens the juices without adding flavor or a cloudy appearance. In general, I use one tablespoon of instant tapioca for every two cups of cut-up raw fruit.

Pretty lattice-topped pies have the added benefit of allowing more fruit juice to evaporate while the pie bakes. Precooking the fruit for any pie helps ensure that the thickener is cooked through; I especially employ this technique when working with cornstarch- or flour-thickened pie fillings. This also allows the cook to work in advance, a bonus around the busy holiday season.


We are loving the combination of juicy, sweet Bartlett pears with tart cranberries for a gorgeous pie with hues of pink; a few crisp apples and chewy dried cranberries contribute contrasting textures. Feel free to skip the lattice work and simply add a top crust; pierce the top crust in several places with a fork to allow steam to escape. For added flavor and texture, I brush the top crust with cream and sprinkle it generously with coarse sugar before baking.

The nut-free ginger praline recipe is a riff on a longtime favorite pumpkin pie from Jane Salzfass Freiman, a former Chicago Tribune recipe columnist. She taught us to gussy up the edge of pumpkin pie with nuts, brown sugar and butter. We are employing store-bought ginger snap cookies and crystallized ginger in place of pecans for a spicy, candied edge to contrast the creamy pie interior. Think of this pie as all your favorite coffee shop flavors in one: pumpkin pie spice and gingerbread, topped with whipped cream.

Happy pie days, indeed.

PEAR, DOUBLE CRANBERRY AND APPLE LATTICE PIE

Prep: 1 hour

Chill: 1 hour

Cook: 1 hour

Makes: 8 to 10 servings

1 recipe double crust pie dough, see recipe

2 1/2 pounds ripe, but still a bit firm, Bartlett pears, about 6

1 1/2 pounds Honeycrisp or Golden Delicious apples, about 4

2 cups fresh cranberries, about 8 ounces

3 tablespoons unsalted butter

3/4 cup sugar

3 tablespoons cornstarch

1 cup (4 ounces) dried cranberries

1/2 teaspoon grated fresh orange zest

1/8 teaspoon salt

Cream or milk, coarse sugar (or turbinado sugar)

Make pie dough and refrigerate it as directed. Working between two sheets of floured wax paper, roll out one disk into a 12-inch circle. Remove the top sheet of wax paper and use the bottom sheet to flip the crust into a 10-inch pie pan. Gently smooth the crust into the pan, without stretching it. Roll the edge of the dough under so it sits neatly on the edge of the pie dish. Refrigerate.

Roll the second disk of pie dough between the sheets of floured wax paper into an 11-inch circle. Slide onto a cookie sheet and refrigerate while you make the filling.

Peel and core the pears. Slice into 1/4-inch wide wedges; put into a bowl. You should have 6 generous cups. Peel and core the apples. Cut into 3/4-inch chunks; you should have about 3 1/2 cups. Add to the pears. Stir in fresh cranberries.

Heat butter in large deep skillet over medium-high until melted; add pears, apples and fresh cranberries. Cook, stirring, until nicely coated with butter, about 2 minutes. Cover and cook to soften the fruit, 3 minutes. Add sugar and cornstarch; cook and stir until glazed and tender, about 5 minutes. Remove from heat; stir in dried cranberries, orange zest and salt. Spread on a rimmed baking sheet; cool to room temperature. While the fruit mixture cools, heat oven to 425 degrees.

Pile the cooled fruit into the prepared bottom crust. Use a very sharp knife to cut the rolled top crust into 18 strips, each about 1/2 inch wide. Place 9 of those strips over the fruit filling, positioning them about 1/2 inch apart. Arrange the other 9 strips over the first set in a diagonal pattern. (If you want a woven lattice, fold back alternating strips each time you lay a cross strip, then unfold them over it.)

Crimp the edge of the bottom crust and the lattice strips together with your fingers. Use a fork to make a decorative edge all the way around the pie. Use a pastry brush to brush each of the strips and the edge of the pie with cream. Sprinkle strips and the edge with the coarse sugar.


Place pie on a baking sheet. Bake at 425 degrees, 25 minutes. Reduce oven temperature to 350 degrees. Use strips of foil to lightly cover the outer edge of the pie. Continue baking until the filling is bubbling hot and the crust richly golden, about 40 minutes more.

Cool completely on a wire rack. Serve at room temperature topped with whipped cream or ice cream. To rewarm the pie, simply set it in a 350-degree oven for about 15 minutes.

Nutrition information per serving (for 10 servings): 540 calories, 24 g fat, 11 g saturated fat, 34 mg cholesterol, 80 g carbohydrates, 43 g sugar, 4 g protein, 270 mg sodium, 7 g fiber

DOUBLE CRUST PIE DOUGH

Prep: 20 minutes

Chill: 1 hour

Makes: Enough for a double crust 10-inch pie

This is our family's favorite pie crust for ease of use with a flaky outcome. We use vegetable shortening for easy dough handling and maximum flakiness; unsalted butter adds rich flavor.

2 1/2 cups flour

1 tablespoon sugar

1 teaspoon salt

1/2 cup unsalted butter, very cold

1/2 cup trans-fat free vegetable shortening, frozen

Put flour, sugar and salt into a food processor. Pulse to mix well. Cut butter and shortening into small pieces; sprinkle them over the flour mixture. Pulse to blend the fats into the flour. The mixture will look like coarse crumbs.

Put ice cubes into about 1/2 cup water and let the water chill. Remove the ice cubes and drizzle about 6 tablespoons of the ice water over the flour mixture. Briefly pulse the machine just until the mixture gathers into a dough.

Dump the mixture out onto a sheet of wax paper. Gather into two balls, one slightly larger than the other. (Use this one later for the bottom crust.) Flatten the balls into thick disks. Wrap in plastic and refrigerate until firm, about 1 hour. (Dough will keep in the refrigerator for several days.)

Nutrition information per serving (for 10 servings): 291 calories, 20 g fat, 8 g saturated fat, 24 mg cholesterol, 25 g carbohydrates, 1 g sugar, 3 g protein, 235 mg sodium, 1 g fiber

GINGER PRALINE PUMPKIN PIE

Prep: 40 minutes

Cook: 1 1/2 hours

Makes: 8 servings

Prebaking the crust helps ensure the proper texture in the finished pie. You can replace the ginger snap cookies here with just about any spice cookie; I also like to use speculoos cookies or homemade molasses cookies. The recipe calls for canned pumpkin pie mix, which has sugar and spice already.

Half recipe double crust pie dough, see recipe

Filling

2 large eggs

1 can (30 ounces; or two 15-ounce cans) pumpkin pie mix (with sugar and spices)

1/2 teaspoon each ground: cinnamon, ginger

1/4 teaspoon ground cloves

2/3 cup heavy whipping cream

2 tablespoons dark rum or 1 teaspoon vanilla

Topping

3 tablespoons butter, softened

2 tablespoons dark brown sugar

1/4 cup finely chopped crystallized ginger, about 1 1/2 ounces

1 cup roughly chopped or broken ginger snap cookies, about 2 ounces or 12 cookies

Whipped cream for garnish

For crust, heat oven to 425 degrees. Roll pie dough between 2 sheets of floured wax paper to an 11-inch circle. Remove the top sheet of paper. Use the bottom sheet to help you flip the dough into a 9-inch pie pan. Gently ease the dough into the pan, without stretching it; roll the edge of the dough under so it sits neatly on the edge of the pie dish; flatten attractively with a fork.

Line the bottom of the pie crust with a sheet of foil; fill the foil with pie weights or dried beans. Bake, 8 minutes. Remove the beans using the foil to lift them out of the crust. Return pie crust to the oven; bake until light golden in color, about 2 minutes. Cool. (Crust can be prebaked up to 1 day in advance; store in a cool, dry place.)

Reduce oven temperature to 350 degrees. For filling, whisk eggs in a large bowl until smooth. Whisk in pumpkin mix, cinnamon, ginger and cloves until smooth. Whisk in cream and rum or vanilla.

For topping, mix soft butter and brown sugar in a small bowl until smooth. Stir in crystallized ginger; gently stir in the cookies to coat them with the butter mixture.

Carefully pour pie filling into cooled crust. Set the pie pan on a baking sheet; slide into the center of the oven. Bake, 40 minutes. Remove pie from oven. Gently distribute the topping evenly around the outer rim of the pie, near the crust. Return the pie to the oven; bake until a knife inserted near the center is withdrawn clean, about 40 more minutes. Cool on a wire rack. Serve cold or at room temperature with whipped cream.

Nutrition information per serving: 481 calories, 27 g fat, 13 g saturated fat, 96 mg cholesterol, 58 g carbohydrates, 9 g sugar, 6 g protein, 433 mg sodium, 9 g fiber

2 tricked-out pies to be thankful for: pear with cranberries and pumpkin with ginger praline - The Gazette

From Mediterranean Lentil Salad to Cinnamon Raisin Bread: Our Top 10 Vegan Recipes of the Day! – One Green Planet

Ready, set, recipes! Here are our just-published, fresh-out-the-mill recipes in one convenient place! These are the top vegan recipes of the day, and are now a part of the thousands of recipes on our Food Monster App! We have cauliflower chocolate mousse, Mediterranean lentil salad, and the ultimate guacamole recipe, so if you're looking for something new and delicious, you are sure to find a new favorite!

Source: Cauliflower Chocolate Mousse

If you're looking for a much healthier and more nutritious version of chocolate mousse, you've got to try this Cauliflower Chocolate Mousse by Mitra Shirmohammadi. There's more than a full serving of vegetables in each cup, it's free from refined sugar, and also super allergy-friendly (no dairy, eggs, soy, or gluten). Cauliflower is one of those miraculous vegetables that works incredibly well in both sweet and savory dishes. There's no way anyone can ever tell there are vegetables hidden in your chocolate mousse! If you have picky eaters at home, this cauliflower chocolate mousse is a great way to get them to eat more veggies without even realizing it!

Source: Cinnamon Raisin Bread

Vegan and gluten-free cinnamon raisin bread that's refined sugar free and packed with raisins and texture. Made extra cinnamon-y with a secret ingredient! It's easy to make and only takes 1 bowl (and plenty of cinnamon and raisins). This Gluten-Free Cinnamon Raisin Bread is a little less table bread and a lot more dense bakery-style bread. It's hearty, naturally sweet, and packed with protein and fiber. The combination of ground cinnamon and cinnamon kombucha adds the perfect amount of spice. It's hearty enough to serve for a breakfast bread or nutritious snack but sweet enough to also function as a healthful dessert. For even more texture, try adding walnuts or almonds when folding in the raisins. Serve your slices of gluten-free Cinnamon Raisin Bread by Lauren Kirchmaier warm with a big smear of almond butter slathered on top.

Source: Vegan Scampi in Lemon Garlic White Wine Sauce

This is a really simple dish that tastes elegant. Hearts of palm are the perfect stand-in for scallops. They have a similar look when sliced and a briny quality reminiscent of seafood, great for those with a shellfish allergy. This Vegan Scampi in Lemon Garlic White Wine Sauce by Jenn Sebestyen is elegant and delicious!

Source: Savory Granola

The amazing thing about granola is that you can, just like banana ice cream, use the basic recipe, exchange the spices and make your own combination. This savory granola recipe is perfect for topping your salads and soups or just munching on as a midday snack. It only takes about 20-30 minutes to make and the ingredients are pretty simple. You have to try this Savory Granola by True Foods Blog!

Source: Chocolate Pie

Chocolate and pie: what could be better? This Chocolate Pie by Lenia Patsi is the ideal dessert to make this weekend. Simple and decadent!

Source: Protein Apple Berry Crumble

Vegan apple berry crumble is the perfect Autumn dessert: it's cozy, comforting, cinnamon-spiced & absolutely irresistible! This isn't, however, a standard crumble recipe. This is a healthy, refined-sugar-free and gluten-free crumble with added protein. You are going to love this Protein Apple Berry Crumble by Vicky Coates!

Source: Mediterranean Lentil Salad

This Mediterranean Lentil Salad by Medha Swaminathan is the perfect reset after a weekend of eating things that seemingly all contain massive amounts of vegan cream, cheese, and/or cream cheese. It's super easy to make, so your food-fatigued body doesn't have to do much work. Plus, it's really good for you.

Source: Gado Gado Salad With Nut-Free Sauce

A mixed vegetable Indonesian-style salad served with a nut-free sauce dressing. The medley of vegetables with potato and tofu added makes this salad a tasty, nutritious, attractive and colorful dish. This Gado Gado Salad by Daphne Goh is not only naturally gluten free but also vegan, egg free, nut free and refined sugar free. For a soy-free version, simply omit the fried tofu and add some pumpkin for protein.

Source: Crispy Flavorful Pickles

A super quick recipe for yummy, full-of-flavor pickles. Have these Crispy Flavorful Pickles by Caroline Ginolfi on the side of your preferred vegan dinner!

Source: The Ultimate Guacamole

This is The Ultimate Guacamole by Christina Bedetta. Serve it with chips and fresh vegetables or enjoy it in wrap form. It's easy to make and is a great addition to so many dishes. Made with creamy avocado, red onion, tomatoes, and spices, there is no arguing that this is the ultimate guacamole recipe. With such delicious flavors and versatility, the possibilities are endless!

We also highly recommend downloading our Food Monster App, which is available for iPhone, and can also be found on Instagram and Facebook. The app has more than 15,000 plant-based, allergy-friendly recipes, and subscribers gain access to new recipes every day. Check it out!

For more Vegan Food, Health, Recipe, Animal, and Life content published daily, don't forget to subscribe to the One Green Planet Newsletter!

Being publicly funded gives us a greater chance to continue providing you with high-quality content. Please support us!


IBM vs. Google and the race to quantum supremacy – Salon

Google's quantum supremacy claim has now been disputed by its close competitor IBM. Not because Google's Sycamore quantum computer's calculations are wrong, but because Google had underestimated what IBM's Summit, the most powerful supercomputer in the world, could do. Meanwhile, Google's paper, which had accidentally been leaked by a NASA researcher, has now been published in the prestigious science journal Nature. Google's claims are official now, and therefore can be examined in the way any new science claim should be examined: skeptically, until all the doubts are addressed.

Previously, I have covered what quantum computing is; in this article, I will move on to the key issue of quantum supremacy, the claim that IBM has challenged, and what it really means. IBM concedes that Google has achieved an important milestone, but does not accept that it has achieved quantum supremacy.

IBM refuted Google's claim around the same time as Google's Nature paper was published. Google had claimed that IBM's supercomputer, Summit, would take 10,000 years to solve the problem Google's Sycamore had solved in a mere 200 seconds. IBM showed that Summit, with clever programming and using its huge disk space, could actually solve the problem in only 2.5 days. Sycamore still beat Summit on this specific problem by solving it 1,100 times faster, but not 157 million times faster, as Google had claimed. According to IBM, this does not establish quantum supremacy, as that requires solving a problem a conventional computer cannot solve in a reasonable amount of time. Two and a half days is reasonable; therefore, according to IBM, quantum supremacy is yet to be attained.

The original definition of quantum supremacy was given by John Preskill, who now has second thoughts about the term. Recently he wrote that supremacy, "through its association with white supremacy, evokes a repugnant political stance." The other reason is that the word exacerbates the already overhyped reporting on the status of quantum technology.

Regarding IBM's claim that quantum supremacy has not yet been achieved, Scott Aaronson, a leading quantum computing scientist, wrote that though Google should have foreseen what IBM has done, it does not invalidate Google's claim. The key issue is not that Summit had a special way to solve the specific quantum problem Google had chosen, but that Summit cannot scale: if Google's Sycamore goes from 53 to 60 qubits, IBM will require 33 Summits; if to 70 qubits, a supercomputer the size of a city!

Why does Summit have to increase at this rate to match Sycamore's extra qubits? To demonstrate quantum supremacy, Google chose the simulation of quantum circuits, which is similar to generating a sequence of truly random numbers. Classical computers can produce numbers that appear to be random, but it is only a matter of time before they repeat the sequence.
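That deterministic repetition can be seen directly with a toy pseudo-random generator. The sketch below (an illustrative linear congruential generator with a deliberately tiny modulus, my own example rather than any real library's generator) cycles through its whole state space and then repeats exactly:

```python
# A toy linear congruential generator (LCG). It is fully deterministic:
# with modulus m it has at most m distinct states, so its output must
# eventually cycle. The tiny parameters make the cycle obvious.
def lcg(seed, a=5, c=3, m=16):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=7)
seq = [next(gen) for _ in range(32)]
print(seq[:16])
print(seq[16:])
# The second 16 outputs repeat the first 16 exactly.
assert seq[:16] == seq[16:]
```

Real pseudo-random generators use vastly larger state spaces, so the cycle is astronomically long, but the principle is the same: a classical algorithm's "randomness" is ultimately periodic.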

The resources classical computers require to solve this problem in a reasonable time (disk space, memory, computing power) increase exponentially with the size of the problem. For quantum computers, adding qubits linearly (simply adding more qubits) increases computing capacity exponentially. Therefore, just 7 extra qubits in Sycamore mean IBM needs to increase the size of Summit 33 times, and a 17-qubit increase in Sycamore needs Summit to increase by thousands of times. This is the key difference between Summit and Sycamore. For each extra qubit, a conventional computer has to scale its resources exponentially, and this is a losing game for the conventional computer.
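A back-of-the-envelope calculation (my own illustration, not Google's or IBM's actual accounting, which also involves disk storage and circuit-cutting tricks) shows why: a brute-force classical simulation stores 2^n complex amplitudes for n qubits, so every added qubit doubles the memory bill.

```python
# Memory needed to hold a full n-qubit quantum state vector: 2**n
# complex amplitudes at 16 bytes each (complex128). Illustrative only;
# real simulators trade memory for disk space and extra computation.
def state_vector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

for n in (53, 60, 70):
    print(f"{n} qubits: {state_vector_bytes(n) / 2**50:.0f} PiB")

# One more qubit doubles the requirement...
assert state_vector_bytes(54) == 2 * state_vector_bytes(53)
# ...so 7 more qubits multiply it by 2**7 = 128.
assert state_vector_bytes(60) == 2**7 * state_vector_bytes(53)
```

This exponential blow-up is why a fixed-size supercomputer falls behind once a quantum chip gains even a handful of qubits.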

We have to give Google the victory here, not because IBM is wrong, but because the principle of quantum supremacy has been established: a quantum computer can work as designed, solve a specific problem, and beat a conventional computer in computational time. The actual demonstration, with a more precise definition of reasonable time, is only of academic value. If 53 qubits can solve the problem with IBM's Summit still in the race, even if much slower, it is just a matter of time before Summit is well and truly beaten.

Of course, there are other ways that this particular test could fail. A new algorithm can be discovered that solves this problem faster, starting a fresh race. But the principle here is not a specific race but the way quantum computing will scale in solving a certain class of problems that classical or conventional computers cannot.

For problems that do not increase exponentially with size, classical computers work better, are way cheaper, and do not require the near-absolute-zero temperatures that quantum computers need. In other words, classical computers will coexist with quantum computers and not follow typewriters and calculators to the technology graveyards.

The key issue in creating viable quantum computers should not be confused with a race between classical computers and the new kid on the block. If we see the race as between two classes of computers only in terms of solving a specific problem, we are missing the big picture. It is simply that for classical computers, the solution time for a certain class of problems increases exponentially with the size of the problem, and beyond a certain size, we just can't solve them in any reasonable time. Quantum computers have the potential to solve such large problems requiring exponential computing power. This opens a way to solve these classes of problems other than the iffy route of finding new algorithms.

Are there such problems, and will they yield worthwhile technological applications? The Google problem, computing the future states of quantum circuits, was not chosen for any practical application. It was simply chosen to showcase quantum supremacy, defined as a quantum computer solving a problem that a classical computer cannot solve in a reasonable time.

Recently, a Chinese team led by Pan Jianwei published a paper showing that another problem, a boson sampling experiment with 20 photons, can also be a pathway to demonstrating quantum supremacy. Both these problems are constructed not to showcase real-world applications, but simply to show that quantum computing works and can potentially solve real-world problems.

What are the classes of problems that quantum computers can solve? The first are those for which the late Nobel laureate Richard Feynman had originally postulated quantum computers: simulating the quantum world. Why do we need such simulations? After all, we live in the macro-world, in which quantum effects are not visible. Though such effects may not be visible to us, they are indeed all around us and affect us in different ways.

A number of such phenomena arise out of the interaction of the quantum world with the macro-world. It is now clear that using classical computers we cannot simulate, for instance, protein folding, as it involves the quantum world intersecting with the macro-world. A quantum computer could simulate the probability of how many possible ways such proteins could fold and the likely shapes they could take. This would allow us to build not only new materials but also medicines known as biologics. Biologics are large molecules used for treating cancer and auto-immune diseases. They work due to not only their composition but also their shapes. If we could work out their shapes, we could identify new proteins or new biological drug targets; or complex new chemicals for developing new materials. The other examples are solving real-life combinatorial problems such as searching large databases, cracking cryptographic problems, improved medical imaging, etc.

The business world (IBM, Google, Microsoft) is gung-ho on the possible use of quantum computers for such applications, and that is why they are all investing in it big time. Nature reported that in 2017 and 2018, at least $450 million was invested by venture capital in quantum computing, more than four times the amount invested in the preceding two years. Nation-states, notably the United States and China, are also investing billions of dollars each year.

But what if quantum computers do not lead to commercial benefits; should we then abandon them? What if they are useful only for simulating quantum mechanics and understanding that world better? Did we build the Large Hadron Collider, investing $13.25 billion with an annual running cost of $1 billion, only because we expected discoveries that would have commercial value? Or should society invest in knowing the fundamental properties of space and time, including those of the quantum world? Even if quantum computers only give us a window to the quantum world, the benefit would be knowledge.

What is the price of this knowledge?


That Junk DNA Is Full of Information! – Advanced Science News

It should not surprise us that even in parts of the genome where we don't obviously see a functional code (i.e., one that's been evolutionarily fixed as a result of some selective advantage), there is a type of code, but not like anything we've previously considered as such. And what if it were doing something in three dimensions as well as the two dimensions of the ATGC code? A paper just published in BioEssays explores this tantalizing possibility.

Isn't it wonderful to have a really perplexing problem to gnaw on, one that generates almost endless potential explanations? How about this one: what is all that non-coding DNA doing in genomes? That is, the 98.5% of human genetic material that doesn't produce proteins. To be fair, the deciphering of non-coding DNA is making great strides via the identification of sequences that are transcribed into RNAs that modulate gene expression, may be passed on transgenerationally (epigenetics), or set the gene expression program of a stem cell or specific tissue cell. Massive amounts of repeat sequences (remnants of ancient retroviruses) have been found in many genomes, and again, these don't code for protein, but at least there are credible models for what they're doing in evolutionary terms (ranging from genomic parasitism to symbiosis and even exploitation by the very host genome for producing the genetic diversity on which evolution works). Incidentally, some non-coding DNA makes RNAs that silence these retroviral sequences, and retroviral ingression into genomes is believed to have been the selective pressure for the evolution of RNA interference (so-called RNAi). Repetitive elements of various named types and tandem repeats abound, and introns (many of which contain the aforementioned types of non-coding sequences) have transpired to be crucial in gene expression and regulation, most strikingly via alternative splicing of the coding segments that they separate.

Still, there's plenty of problem to gnaw on, because although we are increasingly understanding the nature and origin of much of the non-coding genome and are making major inroads into its function (defined here as an evolutionarily selected, advantageous effect on the host organism), we're far from explaining it all, and, more to the point, we're looking at it with a very low-magnification lens, so to speak. One of the intriguing things about DNA sequences is that a single sequence can encode more than one piece of information depending on what is reading it and in which direction: viral genomes are classic examples, in which genes read in one direction to produce a given protein overlap with one or more genes read in the opposite direction (i.e., from the complementary strand of DNA) to produce different proteins. It's a bit like making simple messages with reverse-pair words (a so-called emordnilap). For example: REEDSTOPSFLOW, which, by an imaginary reading device, could be divided into REED STOPS FLOW. Read backwards, it would give WOLF SPOTS DEER.
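The two-directional reading can be sketched in code. The snippet below (using an invented sequence for illustration, not a real viral gene) shows how the complementary strand, read in the opposite direction, presents an entirely different series of three-letter codons to the reading machinery:

```python
# Complementary DNA strands are read in opposite directions, so one
# physical stretch of sequence offers two different codon series.
# (The sequence here is invented for illustration; it is not a real gene.)
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq):
    # Complement each base, then reverse, giving the 5'->3' reading
    # of the opposite strand.
    return seq.translate(COMPLEMENT)[::-1]

forward = "ATGGCATTC"
reverse = reverse_complement(forward)
codons_fwd = [forward[i:i + 3] for i in range(0, len(forward), 3)]
codons_rev = [reverse[i:i + 3] for i in range(0, len(reverse), 3)]
print(codons_fwd)  # ['ATG', 'GCA', 'TTC']
print(codons_rev)  # ['GAA', 'TGC', 'CAT']
```

Two completely different codon messages, one piece of DNA: the molecular counterpart of REEDSTOPSFLOW read forwards and backwards.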

Now, if it is of evolutionary advantage for two messages to be coded so economically, as is the case in viral genomes (which tend to evolve towards minimum complexity in terms of information content, hence reducing the resources necessary for reproduction), then the messages themselves evolve with a high degree of constraint. What does this mean? Well, we could word our original example message as RUSH-STEM IMPEDES CURRENT, which would embody the same essential information as REED STOPS FLOW. However, that message, if read in reverse (or even in the same sense, but in different chunks), does not encode anything additional that is particularly meaningful. Probably the only way of conveying both pieces of information in the original messages simultaneously is the very wording REEDSTOPSFLOW: that's a highly constrained system! Indeed, if we studied enough examples of reverse-pair phrases in English, we would see that they are, on the whole, made up of rather short words, and the sequences are missing certain units of language such as articles (the, a); if we looked more closely, we might even detect a greater-than-average representation of certain letters of the alphabet in such messages. We would see these as biases in word and letter usage that would, a priori, allow us to have a stab at identifying such dual-function pieces of information.

Now let's return to the letters, words, and information encoded in genomes. For two distinct pieces of information to be encoded in the same piece of genetic sequence we would, similarly, expect the constraints to be manifest in biases of word and letter usage: the analogies, respectively, for the amino acid sequences constituting proteins, and their three-letter code. Hence a sequence of DNA can code for a protein and, in addition, for something else. This something else, according to Giorgio Bernardi, is information that directs the packaging of the enormous length of DNA in a cell into the relatively tiny nucleus. Primarily it is the code that guides the binding of the DNA-packaging proteins known as histones. Bernardi refers to this as the genomic code: a structural code that defines the shape and compaction of DNA into the highly condensed form known as chromatin.

But didn't we start with an explanation for non-coding DNA, not protein-coding sequences? Yes, and in the long stretches of non-coding DNA we see information in excess of mere repeats, tandem repeats, and remnants of ancient retroviruses: there is a type of code at the level of preference for the GC pair of chemical DNA bases compared with AT. As Bernardi reviews, synthesizing his and others' groundbreaking work, in the core sequences of the eukaryotic genome the GC content in structural organizational units of the genome termed isochores increased during the evolutionary transition between so-called cold-blooded and warm-blooded organisms. And, fascinatingly, this sequence bias overlaps with sequences that are much more constrained in function: these are the very protein-coding sequences mentioned earlier, and they, more than the intervening non-coding sequences, are the clue to the genomic code.

Protein-coding sequences are also packed and condensed in the nucleus, particularly when they're not in use (i.e., being transcribed, and then translated into protein), but they also contain relatively constant information on precise amino acid identities; otherwise they would fail to encode proteins correctly, and evolution would act on such mutations in a highly negative manner, making them extremely unlikely to persist and be visible to us. But the amino acid code in DNA has a little catch that evolved in the simplest of unicellular organisms (bacteria and archaea) billions of years ago: the code is partly redundant. For example, the amino acid threonine can be coded in eukaryotic DNA in no fewer than four ways: ACT, ACC, ACA or ACG. The third letter is variable and hence available for the coding of extra information. This is exactly what happens to produce the genomic code, in this case creating a bias for the ACC and ACG forms in warm-blooded organisms. Hence, the high constraint on this additional code (which is also seen in parts of the genome that are not under such constraint as protein-coding sequences) is imposed by the packaging of protein-coding sequences that embody two sets of information simultaneously. This is analogous to our example of the highly constrained dual-information sequence REEDSTOPSFLOW.
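A minimal sketch of this third-position freedom is shown below. The codon assignments are the standard genetic code; the toy GC3 measure is my own simplification for illustration, not Bernardi's full isochore analysis:

```python
# All four codons below encode threonine (standard genetic code), so the
# third base is free to carry a second layer of information, such as a
# GC bias. gc3_fraction is a simplified illustrative metric.
THREONINE_CODONS = {"ACT", "ACC", "ACA", "ACG"}

def gc3_fraction(codons):
    # Fraction of codons whose third ("wobble") base is G or C.
    return sum(c[2] in "GC" for c in codons) / len(codons)

at_biased = ["ACT", "ACA", "ACT", "ACA"]  # AT-rich third positions
gc_biased = ["ACC", "ACG", "ACC", "ACG"]  # GC-rich third positions

# Both spell the same protein segment (Thr-Thr-Thr-Thr)...
assert all(c in THREONINE_CODONS for c in at_biased + gc_biased)
# ...yet differ completely in third-position GC content.
assert gc3_fraction(at_biased) == 0.0
assert gc3_fraction(gc_biased) == 1.0
```

The protein message is untouched while the third positions carry an independent, superimposed signal, which is exactly the room the genomic code is proposed to exploit.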

Importantly, however, the constraint is not as strict as in our English language example, because of the redundancy of the third position of the triplet code for amino acids: a better analogy would be SHE*ATE*STU*, where the asterisk stands for a variable letter that doesn't make any difference to the machine that reads the three-letter component of the four-letter message. One could then imagine a second level of information formed by adding D at these asterisk points, to make SHEDATEDSTUD (SHE DATED STUD). Next imagine a second reading machine that looks for meaningful phrases of a sensitive nature containing a greater-than-average concentration of Ds. This reading machine carries a folding machine with it that places a kind of peg at each D, marking a point where the message should be bent by 120 degrees in the same plane; we would end up with a more compact, triangular version. In eukaryotic genomes, the GC sequence bias proposed to be responsible for structural condensation extends into non-coding sequences, some of which have identified activities, though they are less constrained in sequence than protein-coding DNA. There it directs their condensation via histone-containing nucleosomes to form chromatin.

Figure. Analogy between condensation of a word-based message and condensation of genomic DNA in the cell nucleus. Panel A: Information within information: a sequence of words with a variable fourth space which, when filled with particular letters, generates a further message. One message is read by a three-letter reading machine; the other by a reading machine that can interpret information extending to the 4th (variable) position of the sequence. The second reader recognizes sensitive information that should be concealed, and at the points where a D appears in the 4th position, it folds the string of words, hence compressing the sensitive part and taking it out of view. This is an analogy for the principle of genomic 3D compression via chromatin, as depicted in Panel B: a fluorescence image (via Fluorescence In-Situ Hybridization, FISH) of the cell nucleus. H2/H3 isochores, which increased in GC content during evolution from cold-blooded to warm-blooded vertebrates, are compressed into a chromatin core, leaving L1 isochores (with lower GC content) at the periphery in a less-condensed state. The genomic code embodied in the high-GC tracts of the genome is, according to Bernardi [1], read by the nucleosome-positioning machinery of the cell and interpreted as sequence to be highly compressed in euchromatin. Acknowledgements: Panel A: concept and figure production: Andrew Moore; Panel B: a FISH pattern of H2/H3 and L1 isochores from a lymphocyte induced by PHA, courtesy of S. Saccone, as reproduced in Ref. [1].

These regions of DNA may then be regarded as structurally important elements in forming the correct shape and separation of condensed coding sequences in the genome, regardless of any other possible function that those non-coding sequences have: in essence, this would be an explanation for the persistence in genomes of sequences to which no function (in terms of evolutionarily selected activity) can be ascribed, or, at least, no substantial function.

A final analogy, this time much more closely related, might be the very amino acid sequences in large proteins, which make a variety of twists, turns, folds, etc. We may marvel at such complicated structures and ask: do they really need to be quite so complicated for their function? Well, maybe they do, in order to condense and position parts of the protein in the exact orientation and place that generates the three-dimensional structure that has been successfully selected by evolution. But with the knowledge that the genomic code overlaps protein-coding sequences, we might even start to become suspicious that there is another selective pressure at work as well.

Andrew Moore, Ph.D., Editor-in-Chief, BioEssays

Reference:

1. G. Bernardi. 2019. The genomic code: a pervasive encoding/moulding of chromatin structures and a solution of the non-coding DNA mystery. BioEssays 41:12. 1900106.


Tenure-Track or Tenure-Eligible Position in the Laboratory of Chemical Physics job with National Institutes of Health | 28302 – Chemical &…

A tenure track (equivalent to Assistant Professor) or tenure-eligible (equivalent to Associate or Full Professor) position is available for an experimental or theoretical biophysical scientist to establish an independent research program in the Laboratory of Chemical Physics (LCP), NIDDK, NIH. We are especially interested in candidates who will develop a vigorous independent research program involving the application of physical methods to biomedical problems and have a demonstrated track record of research excellence that is complementary to ongoing research in LCP. Current research includes: solution state NMR spectroscopy with an emphasis on methods development, structural and kinetic characterization of sparsely-populated states and molecular assembly (Ad Bax and Marius Clore); solid state NMR spectroscopy with an emphasis on the study of amyloid fibrils, protein self-assembly, and protein folding (Robert Tycko); single molecule fluorescence spectroscopy with applications to protein folding, binding, and aggregation (Hoi Sung Chung); picosecond X-ray crystallography and scattering, as well as femtosecond spectroscopy (Philip Anfinrud); theory of single molecule force and fluorescence spectroscopies (Attila Szabo); theory and simulations with emphasis on models for protein folding, misfolding and self-assemblies (Robert Best); and drug discovery for sickle cell disease (William Eaton). Four of the eight LCP principal investigators are members of the US National Academy of Sciences.

The Laboratory is located on the main campus of the NIH in Bethesda, Maryland (https://www.niddk.nih.gov/research-funding/at-niddk/labs-branches/laboratory-chemical-physics/about) and is part of the intramural program of NIDDK (http://www.niddk.nih.gov/research-funding/at-niddk/labs-branches/Pages/default.aspx). The NIH Intramural Program provides a highly interactive and interdisciplinary environment that is conducive for carrying out high risk, basic research with state-of-the-art core facilities and access to collaborators in both the basic and clinical sciences in almost every major area of biology and medicine. Stable research support for NIDDK intramural scientists is based on accomplishments.

Applicants must have a PhD, MD/PhD or equivalent degree and a demonstrated record of scientific achievement. Applicants may be U.S. citizens, resident aliens, or non-resident aliens with, or eligible to obtain, a valid employment-authorization visa. Applicants should electronically submit a curriculum vitae, a bibliography, a summary of research accomplishments, copies of their three most significant publications, and a brief statement of future research goals. Junior applicants should arrange for three letters of reference to be sent directly to the Chair of the Search Committee. Senior applicants should provide the names of three reference letter writers. All applications should be submitted electronically to:

Dr. Wei Yang

Chair, LCP Search Committee

danica.day@nih.gov

Please include in your CV a description of mentoring and outreach activities, especially those involving women and racial/ethnic groups that are underrepresented in biomedical research.

The LCP Search Committee will review received applications on or around December 7, 2019. Applications will be accepted until the position is filled. Salary and benefits are commensurate with the experience of the applicant.

DHHS and NIH are equal opportunity employers


Bulls-Eye: Imaging Technology Could Confirm When a Drug Is Going to the Right Place – On Cancer – Memorial Sloan Kettering

Summary

Doctors and scientists from Memorial Sloan Kettering report on an innovative technique for noninvasively watching where a targeted therapy is going in the body. It also allows them to see how much of the drug reaches the tumor.

Targeted therapy has become an important player in the collection of treatments for cancer. But sometimes it's difficult for doctors to determine whether a person's tumor has the right target or how much of a drug is actually reaching it.

A multidisciplinary team of doctors and scientists from Memorial Sloan Kettering has discovered an innovative technique for noninvasively visualizing where a targeted therapy is going in the body. This method can also measure how much of it reaches the tumor. What makes this development even more exciting is that the drug they are studying employs an entirely new approach for stopping cancer growth. The work was published on October 24 in Cancer Cell.

"This paper reports on the culmination of almost 15 years of research," says first author Naga Vara Kishore Pillarsetty, a radiochemist in the Department of Radiology. "Everything about this drug, from the concept to the clinical trials, was developed completely in-house at MSK."

"Our research represents a new role for the field of radiology in drug development," adds senior author Mark Dunphy, a nuclear medicine doctor. "It's also a new way to provide precision oncology."


The drug being studied, called PU-H71, was developed by the study's co-senior author, Gabriela Chiosis. Dr. Chiosis is a member of the Chemical Biology Program in the Sloan Kettering Institute. PU-H71 is being evaluated in clinical trials for breast cancer and lymphoma, and the early results are promising.

"We always hear about how DNA and RNA control a cell's fate," Dr. Pillarsetty says. "But ultimately it is proteins that carry out the functions that lead to cancer. Our drug is targeting a unique network of proteins that allow cancer cells to thrive."

Most targeted therapies affect individual proteins. In contrast, PU-H71 targets something called the epichaperome. Discovered and named by Dr. Chiosis, the epichaperome is a communal network of proteins called chaperones.

Chaperone proteins help direct and coordinate activities in cells that are crucial to life, such as protein folding and assembly. The epichaperome, on the other hand, does not fold proteins. It reorganizes the function of protein networks in cancer, which enables cancer cells to survive under stress.

Previous research from Dr. Chiosis and Monica Guzman of Weill Cornell Medicine provided details on how PU-H71 works. The drug targets a protein called the heat shock protein 90 (HSP90). When PU-H71 binds to HSP90 in normal cells, it rapidly exits. But when HSP90 is incorporated into the epichaperome, the PU-H71 molecule becomes lodged and exits more slowly. This phenomenon is called kinetic selectivity. It helps explain why the drug affects the epichaperome. It also explains why PU-H71 appears to have fewer side effects than other drugs aimed at HSP90.
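The kinetic selectivity described above can be pictured with a toy first-order washout model. This is a sketch only: the residence times below are invented for illustration and are not figures from the study.

```python
import math

# Toy first-order model of kinetic selectivity: the fraction of drug
# still bound after t hours decays as exp(-t / residence_time).
def fraction_bound(t_hours, residence_time_hours):
    return math.exp(-t_hours / residence_time_hours)

t = 24.0  # one day after dosing
# Hypothetical residence times: short for HSP90 in normal cells,
# long for HSP90 lodged in the epichaperome.
normal = fraction_bound(t, residence_time_hours=1.0)
epichaperome = fraction_bound(t, residence_time_hours=48.0)
print(f"normal HSP90: {normal:.2e}, epichaperome: {epichaperome:.2f}")
```

With these toy numbers the drug is essentially gone from normal HSP90 after a day, while more than half of it is still lodged in the epichaperome, which is the qualitative behavior the article describes.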

At the same time, this means that PU-H71 works only in tumors where an epichaperome has formed. This circumstance led to the need for a diagnostic method to determine which tumors carry the epichaperome and, ultimately, who might benefit from PU-H71.


In the Cancer Cell paper, the investigators report the development of a precision medicine tactic that uses a PET tracer with radioactive iodine. It is called [124I]-PU-H71, or PU-PET. PU-PET is the same molecule as PU-H71 except that it carries radioactive iodine instead of nonradioactive iodine. The radioactive version binds selectively to HSP90 within the epichaperome in the same way that the regular drug does. On a PET scan, PU-PET displays the location of the tumor or tumors that carry the epichaperome and therefore are likely to respond to the drug. Additionally, when it's given along with PU-H71, PU-PET can confirm that the drug is reaching the tumor.

"This research fits into an area that is sometimes called theranostics or pharmacometrics," Dr. Dunphy says. "We have found a very different way of selecting patients for targeted therapy."

He explains that with traditional targeted therapies, a portion of a tumor is removed with a biopsy and then analyzed. Biopsies can be difficult to perform if the tumor is located deep in the body. Additionally, people with advanced disease that has spread to other parts of the body may have many tumors, and not all of them may be driven by the same proteins. "By using this imaging tool, we can noninvasively identify all the tumors that are likely to respond to the drug, and we can do it in a way that is much easier for patients," Dr. Dunphy says.

The researchers explain that this type of imaging also allows them to determine the best dose for each person. For other targeted therapies, doctors look at how long a drug stays in the blood. "But that doesn't tell you how much is getting to the tumor," Dr. Pillarsetty says. "By using this imaging agent, we can actually quantify how much of the drug will reach the tumor and how long it will stay there."

Plans for further clinical trials of PU-H71 are in the works. In addition, the technology reported in this paper may be applicable for similar drugs that also target the epichaperome.


Yumanity Therapeutics Initiates Phase 1 Clinical Trial of Lead Candidate YTX-7739 for the Treatment of Parkinson’s Disease | Small Molecules | News…

Category: Small Molecules | Published on Tuesday, 08 October 2019 10:09 | Hits: 417

YTX-7739 represents a novel, first-in-class, potentially disease-modifying therapy

Data from Phase 1 study expected in the first quarter of 2020

CAMBRIDGE, MA, USA | October 07, 2019 | Yumanity Therapeutics, a company focused on protecting the vitality of the mind by discovering and developing transformative brain-penetrating small molecule drugs to treat neurodegenerative diseases, today announced that the first subject cohort has been dosed in a Phase 1 clinical trial evaluating the safety and tolerability of YTX-7739 in healthy volunteers. YTX-7739, the company's lead investigational therapy, is designed to inhibit stearoyl-CoA desaturase (SCD), a validated biological target that has recently shown potential in neurodegenerative diseases by protecting cells from α-synuclein toxicity, a major driver of Parkinson's disease.

"Developing effective therapies for patients with devastating neurodegenerative diseases has been challenging because too few hypotheses and novel targets have been explored," said Kenneth Rhodes, Ph.D., chief scientific officer at Yumanity Therapeutics. "We advanced YTX-7739, an orally active SCD inhibitor, into clinical development because of recent evidence established at Yumanity Therapeutics demonstrating its promise to protect cells from α-synuclein toxicity. We look forward to fully characterizing the potential clinical use of YTX-7739, which is clearly differentiated from currently available Parkinson's disease therapies that only address the symptoms, not the underlying causes."

The double-blind, placebo-controlled, dose-escalation, crossover study is intended to evaluate the safety, tolerability, and pharmacokinetics of single ascending doses of YTX-7739 in healthy adult volunteers. A second study, exploring multiple ascending doses in healthy adult volunteers and patients with Parkinson's disease, will follow. Approximately 40 participants will be enrolled in this Phase 1 single-ascending-dose study. Following completion of the Phase 1 studies, Yumanity Therapeutics expects to advance YTX-7739 into a Phase 1b proof-of-concept clinical trial in the second half of 2020.

"Since Yumanity Therapeutics' inception, our goal has been to uncover novel pathways and targets to tackle significant medical challenges," said Richard Peters, M.D., Ph.D., chief executive officer of Yumanity Therapeutics. "Moving from target identification of SCD to initial clinical development of YTX-7739 in just three years is a testament to the enormous potential of our discovery platform to reproducibly identify previously unexplored biology and new, druggable targets that have the potential to protect cells from neurodegeneration. This Phase 1 trial will provide important validation for the broad application of our technology to help address arguably the most important therapeutic challenges of our time."

About YTX-7739

YTX-7739 is Yumanity Therapeutics' proprietary lead investigational therapy designed to penetrate the blood-brain barrier and inhibit the activity of a novel target that plays an important and previously unrecognized role in the neurotoxicity caused by the α-synuclein protein, a major driver of Parkinson's disease and related neurodegenerative disorders. Misfolding and aggregation of the α-synuclein protein trigger a cascade of events, ultimately resulting in neurotoxicity and the subsequent disorders in movement and cognition that affect patients living with these diseases. YTX-7739 has been shown to inhibit many of the key aspects of α-synuclein toxicity, and the company is assessing its potential utility in Parkinson's disease.

About Parkinson's Disease

Parkinson's disease is a progressive neurological disorder that affects the central nervous system and impacts both motor and non-motor functions. It is one of the most common age-related neurodegenerative diseases, affecting an estimated 0.5 to 1 percent of people 65 to 69 years of age, rising to 1 to 3 percent of the population over the age of 80 [1]. Symptom severity and disease progression differ between individuals, but symptoms typically include slowness of movement (bradykinesia), trembling in the extremities (tremor), stiffness (rigidity), cognitive or behavioral abnormalities, sleep disturbances, and sensory dysfunction [2]. There is no laboratory or blood test for Parkinson's disease, so diagnosis is made based on clinical observation [3]. Currently, there is no cure, and available treatments only address the symptoms of Parkinson's disease, not the underlying causes.

About Yumanity Therapeutics

Yumanity Therapeutics is transforming drug discovery for neurodegenerative diseases caused by protein misfolding. Formed in 2014 by renowned biotech industry leader Tony Coles, M.D., and protein folding science pioneer Susan Lindquist, Ph.D., the company is focused on discovering disease-modifying therapies for patients with Parkinson's disease and related disorders, amyotrophic lateral sclerosis (ALS), and Alzheimer's disease. Leveraging its proprietary discovery engine, Yumanity Therapeutics' innovative approach to drug discovery and development concentrates on reversing the cellular phenotypes and disease pathologies caused by protein misfolding. For more information, please visit yumanity.com.

1. N Engl J Med. 2003;348:1356-1364. doi: 10.1056/NEJMra020003
2. J Neurol Neurosurg Psychiatry. 2008;79:368-376. doi: 10.1136/jnnp.2007.131045
3. Cold Spring Harb Perspect Med. 2012;2:a008870

SOURCE: Yumanity Therapeutics


IBM vs. Google and the Race to Quantum Supremacy – Citizen Truth

Though IBM contests Google's claim of quantum supremacy, it concedes that Google passed an important milestone. For the science of computing, that is all that matters.

Google's quantum supremacy claim has now been disputed by its close competitor IBM, not because Google's Sycamore quantum computer's calculations are wrong, but because Google had underestimated what IBM's Summit, the most powerful supercomputer in the world, could do. Meanwhile, Google's paper, which had accidentally been leaked by a NASA researcher, has now been published in the prestigious science journal Nature. Google's claims are official now, and therefore can be examined in the way any new science claim should be examined: skeptically, until all the doubts are addressed.

Previously, I have covered what quantum computing is; in this article, I will move on to the key issue of quantum supremacy: the claim that IBM has challenged and what it really means. IBM concedes that Google has achieved an important milestone but does not accept that it has achieved quantum supremacy.

IBM disputed Google's claim around the same time as Google's Nature paper was published. Google had claimed that IBM's supercomputer, Summit, would take 10,000 years to solve the problem Google's Sycamore had solved in a mere 200 seconds. IBM showed that Summit, with clever programming and using its huge disk space, could actually solve the problem in only 2.5 days. Sycamore still beat Summit on this specific problem by solving it 1,100 times faster, but not 157 million times faster, as Google had claimed. According to IBM, this does not establish quantum supremacy, as that requires solving a problem a conventional computer cannot solve in a reasonable amount of time. Two and a half days is reasonable; therefore, according to IBM, quantum supremacy is yet to be attained.

The term quantum supremacy was originally defined by John Preskill, who now has second thoughts about it. Recently he wrote that "supremacy, through its association with white supremacy, evokes a repugnant political stance." His other reason is that the word "exacerbates the already overhyped reporting on the status of quantum technology."

Regarding IBM's claim that quantum supremacy has not yet been achieved, Scott Aaronson, a leading quantum computing scientist, wrote that though Google should have foreseen what IBM has done, it does not invalidate Google's claim. The key issue is not that Summit had a special way to solve the specific quantum problem Google had chosen, but that Summit cannot scale: if Google's Sycamore goes from 53 to 60 qubits, IBM will require 33 Summits; if it goes to 70 qubits, a supercomputer the size of a city!

Why does Summit have to grow at this rate to match Sycamore's extra qubits? To demonstrate quantum supremacy, Google chose the simulation of quantum circuits, which is similar to generating a sequence of truly random numbers. Classical computers can produce numbers that appear to be random, but it is only a matter of time before they repeat the sequence.
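The point that classical pseudo-random sequences must eventually repeat can be made concrete with a deliberately tiny linear congruential generator; the constants below are illustrative, chosen so the cycle is short enough to see.

```python
# A deliberately tiny linear congruential generator (LCG).
# Classical PRNGs are deterministic, so their output must eventually
# cycle; a small modulus makes the cycle visible immediately.
def lcg(seed, a=5, c=3, m=16):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

gen = lcg(seed=7)
first_pass = [next(gen) for _ in range(16)]
second_pass = [next(gen) for _ in range(16)]
print(first_pass == second_pass)  # True: the sequence repeats exactly
```

Real generators use enormous moduli so the period is astronomically long, but the principle is the same: a deterministic machine cycling through hidden state, unlike a quantum device sampling genuinely random measurement outcomes.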

The resources (disk space, memory, computing power) that classical computers require to solve this problem in a reasonable time increase exponentially with the size of the problem. For quantum computers, adding qubits linearly (that is, simply adding more qubits) increases computing capacity exponentially. Therefore, just 7 extra qubits for Sycamore means IBM needs to increase the size of Summit 33 times. A 17-qubit increase for Sycamore needs Summit to grow by thousands of times. This is the key difference between Summit and Sycamore. For each extra qubit, a conventional computer has to scale its resources exponentially, and this is a losing game for the conventional computer.
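As a rough sketch of why the classical cost explodes, consider only the memory needed to hold the full state vector of an n-qubit system (one 16-byte complex amplitude per basis state). This is illustrative arithmetic, not the exact accounting Google or IBM used; IBM's 2.5-day estimate exploited Summit's disk cleverly, which is why its 33-Summits figure differs from the raw doubling shown here.

```python
# Memory to hold the full state vector of an n-qubit system:
# 2**n complex amplitudes at ~16 bytes each (two 64-bit floats).
def state_vector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

for n in (53, 60, 70):
    print(f"{n} qubits -> {state_vector_bytes(n) / 1e15:,.1f} PB")

# Each extra qubit doubles the requirement, so 7 extra qubits
# multiply it by 2**7 = 128.
growth = state_vector_bytes(60) // state_vector_bytes(53)
print(growth)  # 128
```

Fifty-three qubits already demand on the order of a hundred petabytes; each additional qubit doubles that, which is the losing game for classical simulation described above.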

We have to give Google the victory here, not because IBM is wrong, but because the principle of quantum supremacy (that a quantum computer can work as designed, solve a specific problem, and beat a conventional computer in computational time) has been established. The actual demonstration (a more precise definition of reasonable time and its physical demonstration) is only of academic value. If 53 qubits can solve the problem with IBM's Summit still in the race, even if much slower, it is just a matter of time before Summit is well and truly beaten.

Of course, there are other ways that this particular test could fail. A new algorithm can be discovered that solves this problem faster, starting a fresh race. But the principle here is not a specific race but the way quantum computing will scale in solving a certain class of problems that classical or conventional computers cannot.

For problems whose cost does not increase exponentially with size, classical computers work better, are far cheaper, and do not require the near-absolute-zero temperatures that quantum computers do. In other words, classical computers will coexist with quantum computers rather than follow typewriters and calculators to the technology graveyard.

The key issue in creating viable quantum computers should not be confused with a race between classical computers and the new kid on the block. If we see the race as between two classes of computers only in terms of solving a specific problem, we are missing the big picture. It is simply that for classical computers, the solution time for a certain class of problems increases exponentially with the size of the problem, and beyond a certain size, we just can't solve them in any reasonable time. Quantum computers have the potential to solve such large problems requiring exponential computing power. This opens a way to solve these classes of problems other than the iffy route of finding new algorithms.

Are there such problems, and will they yield worthwhile technological applications? The Google problem, computing the future states of quantum circuits, was not chosen for any practical application. It was simply chosen to showcase quantum supremacy, defined as a quantum computer solving a problem that a classical computer cannot solve in a reasonable time.

Recently, a Chinese team led by Pan Jianwei has published a paper that shows another problem (a boson sampling experiment with 20 photons) can also be a pathway to show quantum supremacy. Both these problems are constructed not to showcase real-world applications, but simply to show that quantum computing works and can potentially solve real-world problems.

What are the classes of problems that quantum computers can solve? The first are those for which the late Nobel laureate Richard Feynman had postulated quantum computers in the first place: simulating the quantum world. Why do we need such simulations? After all, we live in the macro-world, in which quantum effects are not visible. Though such effects may not be visible to us, they are indeed all around us and affect us in different ways.

A number of such phenomena arise out of the interaction of the quantum world with the macro-world. It is now clear that using classical computers we cannot simulate, for instance, protein folding, as it involves the quantum world intersecting with the macro-world. A quantum computer could simulate the probabilities of the many possible ways such proteins could fold and the likely shapes they could take. This would allow us to build not only new materials but also medicines known as biologics. Biologics are large molecules used for treating cancer and autoimmune diseases. They work due to not only their composition but also their shapes. If we could work out their shapes, we could identify new proteins, new biological drug targets, or complex new chemicals for developing new materials. Other examples include solving real-life combinatorial problems such as searching large databases, cracking cryptographic problems, improved medical imaging, and so on.

The business world (IBM, Google, Microsoft) is gung-ho on the possible use of quantum computers for such applications, and that is why they are all investing in it big time. Nature reported that in 2017 and 2018, at least $450 million was invested by venture capital in quantum computing, more than four times the amount invested in the preceding two years. Nation-states, notably the United States and China, are also investing billions of dollars each year.

But what if quantum computers do not lead to commercial benefits? Should we then abandon them? What if they are useful only for simulating quantum mechanics and understanding that world better? Did we build the Large Hadron Collider (investing $13.25 billion, with an annual running cost of $1 billion) only because we expected discoveries that would have commercial value? Or should society invest in knowing the fundamental properties of space and time, including those of the quantum world? Even if quantum computers only give us a window into the quantum world, the benefit would be knowledge.

What is the price of this knowledge?

This article was produced in partnership by Newsclick and Globetrotter, a project of the Independent Media Institute.


Microprotein ID’d Affecting Protein Folding and Cell Stress Linked to Diseases Like Huntington’s, Study Finds – Huntington’s Disease News

PIGBOS, a newly discovered mitochondrial microprotein involved in a cellular stress-response mechanism called the unfolded protein response (UPR), might be a treatment target for neurodegenerative diseases like Huntington's, a study suggests.

The study, "Regulation of the ER stress response by a mitochondrial microprotein," was published in the journal Nature Communications.

Maintenance of protein balance, including the production, shaping (folding), and degradation of proteins, is essential for a cell's function and survival.

Dysfunction in protein balance has been associated with the build-up of toxic protein aggregates and the development of neurodegenerative diseases, including Alzheimer's, Parkinson's, and Huntington's disease.

The endoplasmic reticulum (ER) is a key cellular structure in the production, folding, modification, and transport of proteins. An excessive amount of unfolded or misfolded proteins (proteins with abnormal 3D structures) in the ER results in ER stress and activates the unfolded protein response (UPR), a stress-response mechanism that acts to mitigate the damage caused by this protein build-up.

UPR promotes the reduction of protein production and an increase in protein folding and in the degradation of unfolded proteins in the ER. If this fails to restore cellular balance and UPR activation is prolonged, cell death is induced.

"UPR dysfunction contributes to accumulation of key disease-related proteins, and thus plays an essential role in the [development] of many neurodegenerative disorders, including Alzheimer's disease, Parkinson's disease, and Huntington's disease," the researchers wrote.

During UPR, mitochondria, the cell's powerhouses, are known to provide energy for protein folding in the ER and to activate cell death pathways if the cellular balance is not restored. However, how mitochondria and the ER communicate in this context remains unclear.

Researchers at the Salk Institute for Biological Studies, in California, discovered a mitochondrial microprotein, called PIGBOS, that regulates UPR at the sites of contact between mitochondria and the ER.

While the average human protein contains around 300 amino acids (the building blocks of proteins), microproteins have fewer than 100 amino acids. Microproteins were only recently found to be functional and important in the regulation of several cellular processes.
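The size cutoff is simple enough to state directly in code. The sketch below uses the article's figures (54 residues for PIGBOS, roughly 300 for an average protein); the sequences themselves are made up for illustration.

```python
MICROPROTEIN_MAX_LEN = 100  # residues; shorter chains count as microproteins

def is_microprotein(sequence):
    """Return True for chains of fewer than 100 amino acids."""
    return len(sequence) < MICROPROTEIN_MAX_LEN

# Hypothetical sequences with realistic lengths.
pigbos_like = "M" + "A" * 53       # 54 residues, like PIGBOS
average_protein = "M" + "A" * 299  # ~300 residues, an average protein

print(is_microprotein(pigbos_like))      # True
print(is_microprotein(average_protein))  # False
```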

By conducting protein-binding experiments, the team found that the 54-amino-acid microprotein PIGBOS, present in the outer membrane of mitochondria, interacts with a protein called CLCC1 at the ER-mitochondria contact sites.

CLCC1, whose low levels were previously associated with increased UPR and neurodegeneration, is found at the portion of the ER that contacts the mitochondria, called the mitochondria-associated ER membrane.

Further analyses showed that inducing ER stress in cells genetically modified to lack CLCC1 or PIGBOS increased the levels of UPR-related proteins, while the opposite effect was observed in cells overproducing PIGBOS. Lower levels of PIGBOS were also associated with greater cell death.

Researchers noted that these findings suggest that "loss of PIGBOS increases cellular sensitivity to ER stress, which in turn increases [cell death] and links PIGBOS levels to the ability of cells to survive stress," emphasizing that modulating PIGBOS levels can in turn modulate cellular sensitivity towards ER stress.

Results also showed that PIGBOSs UPR regulation is dependent on its interaction with CLCC1, and that modulating the number of ER-mitochondria contacts regulates the levels of PIGBOS-CLCC1 interactions.

"These data identified PIGBOS as a [previously] unknown mitochondrial regulator of UPR, and the only known microprotein linked to the regulation of cell stress or inter-organelle signaling," the team emphasized.

These findings may help in developing treatment approaches targeting ER stress and cell death.

"Given the importance of UPR in biology and disease, future studies on PIGBOS's role in UPR should afford additional insights and may provide methods for regulating this pathway for therapeutic applications," the researchers concluded.


Ana holds a PhD in Immunology from the University of Lisbon and worked as a postdoctoral researcher at Instituto de Medicina Molecular (iMM) in Lisbon, Portugal. She graduated with a BSc in Genetics from the University of Newcastle and received a Masters in Biomolecular Archaeology from the University of Manchester, England. After leaving the lab to pursue a career in Science Communication, she served as the Director of Science Communication at iMM.


The Science Behind Foldit | Foldit

Foldit is a revolutionary crowdsourcing computer game enabling you to contribute to important scientific research. This page describes the science behind Foldit and how your playing can help.

What is a protein? Proteins are the workhorses in every cell of every living thing. Your body is made up of trillions of cells, of all different kinds: muscle cells, brain cells, blood cells, and more. Inside those cells, proteins are allowing your body to do what it does: break down food to power your muscles, send signals through your brain that control the body, and transport nutrients through your blood. Proteins come in thousands of different varieties, but they all have a lot in common. For instance, they're made of the same stuff: every protein consists of a long chain of joined-together amino acids.

What are amino acids? Amino acids are small molecules made up of atoms of carbon, oxygen, nitrogen, sulfur, and hydrogen. To make a protein, the amino acids are joined in an unbranched chain, like a line of people holding hands. Just as the line of people has their legs and feet "hanging" off the chain, each amino acid has a small group of atoms (called a sidechain) sticking off the main chain (backbone) that connects them all together. There are 20 different kinds of amino acids, which differ from one another based on what atoms are in their sidechains. These 20 amino acids fall into different groups based on their chemical properties: acidic or alkaline, hydrophilic (water-loving) or hydrophobic (greasy).
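A simplified version of this grouping can be written down directly. The mapping below covers only some of the 20 amino acids (one-letter codes) and glosses over borderline cases such as glycine and proline; it is a sketch, not a complete biochemical classification.

```python
# Partial grouping of amino acids by side-chain chemistry (one-letter codes).
GROUPS = {
    "hydrophobic": set("AVLIFMW"),  # greasy sidechains
    "acidic": set("DE"),
    "basic": set("KRH"),            # alkaline sidechains
    "polar": set("STNQ"),           # hydrophilic, water-loving
}

def hydrophobic_fraction(sequence):
    """Fraction of residues with greasy (hydrophobic) sidechains."""
    hits = sum(1 for aa in sequence if aa in GROUPS["hydrophobic"])
    return hits / len(sequence)

print(hydrophobic_fraction("MALWIVDE"))  # 6 of 8 residues are hydrophobic
```

A fraction like this is a crude proxy for how much of a chain will want to bury itself in the greasy core of the folded blob described below.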

What shape will a protein fold into? Even though proteins are just a long chain of amino acids, they don't like to stay stretched out in a straight line. The protein folds up to make a compact blob, but as it does, it keeps some amino acids near the center of the blob, and others outside; and it keeps some pairs of amino acids close together and others far apart. Every kind of protein folds up into a very specific shape -- the same shape every time. Most proteins do this all by themselves, although some need extra help to fold into the right shape. The unique shape of a particular protein is the most stable state it can adopt. Picture a ball at the top of a hill -- the ball will always roll down to the bottom. If you try to put the ball back on top it will still roll down to the bottom of the hill because that is where it is most stable.
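The ball-on-a-hill picture is energy minimization, which can be sketched with gradient descent on a one-dimensional toy landscape. Real folding landscapes are vastly higher-dimensional and full of local minima, so this only illustrates the principle that the chain settles into its most stable (lowest-energy) state.

```python
# Toy 1-D "energy landscape" with its minimum (most stable state) at x = 2.
def energy(x):
    return (x - 2.0) ** 2

def gradient(x):
    return 2.0 * (x - 2.0)

x = 10.0  # start the ball high up the hill
for _ in range(200):
    x -= 0.1 * gradient(x)  # roll a small step downhill each iteration

print(round(x, 3))  # settles at the bottom of the hill, x = 2.0
```

However the ball is nudged back up the slope, repeating the descent returns it to the same bottom, just as a protein refolds to the same shape.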

Why is shape important? This structure specifies the function of the protein. For example, a protein that breaks down glucose so the cell can use the energy stored in the sugar will have a shape that recognizes the glucose and binds to it (like a lock and key) and chemically reactive amino acids that will react with the glucose and break it down to release the energy.

What do proteins do? Proteins are involved in almost all of the processes going on inside your body: they break down food to power your muscles, send signals through your brain that control the body, and transport nutrients through your blood. Many proteins act as enzymes, meaning they catalyze (speed up) chemical reactions that wouldn't take place otherwise. But other proteins power muscle contractions, or act as chemical messages inside the body, or hundreds of other things. Here's a small sample of what proteins do:

Proteins are present in all living things, even plants, bacteria, and viruses. Some organisms have proteins that give them their special characteristics:

You can find more information on the rules of protein folding in our FAQ.

What big problems is this game tackling?

How does my game playing contribute to curing diseases?

With all the things proteins do to keep our bodies functioning and healthy, they can be involved in disease in many different ways. The more we know about how certain proteins fold, the better new proteins we can design to combat the disease-related proteins and cure the diseases. Below, we list three diseases that represent different ways that proteins can be involved in disease.

What other good stuff am I contributing to by playing?

Proteins are found in all living things, including plants. Certain types of plants are grown and converted to biofuel, but the conversion process is not as fast and efficient as it could be. A critical step in turning plants into fuel is breaking down the plant material, which is currently done by microbial enzymes (proteins) called "cellulases". Perhaps we can find new proteins to do it better.

Can humans really help computers fold proteins?

We're collecting data to find out if humans' pattern-recognition and puzzle-solving abilities make them more efficient than existing computer programs at protein-folding tasks. If this turns out to be true, we can then teach human strategies to computers and fold proteins faster than ever!

You can find more information about the goals of the project in our FAQ.

Brian Koepnick, Jeff Flatten, Tamir Husain, Alex Ford, Daniel-Adriano Silva, Matthew J. Bick, Aaron Bauer, Gaohua Liu, Yojiro Ishida, Alexander Boykov, Roger D. Estep, Susan Kleinfelter, Toke Nørgård-Solano, Linda Wei, Foldit Players, Gaetano T. Montelione, Frank DiMaio, Zoran Popović, Firas Khatib, Seth Cooper and David Baker. De novo protein design by citizen scientists. Nature (2019). [link]

Thomas Muender, Sadaab Ali Gulani, Lauren Westendorf, Clarissa Verish, Rainer Malaka, Orit Shaer and Seth Cooper. Comparison of mouse and multi-touch for protein structure manipulation in a citizen science game interface. Journal of Science Communication (2019). [link]

Lorna Dsilva, Shubhi Mittal, Brian Koepnick, Jeff Flatten, Seth Cooper and Scott Horowitz. Creating custom Foldit puzzles for teaching biochemistry. Biochemistry and Molecular Biology Education (2019). [link]

Seth Cooper, Amy L. R. Sterling, Robert Kleffner, William M. Silversmith and Justin B. Siegel. Repurposing citizen science games as software tools for professional scientists. Proceedings of the 13th International Conference on the Foundations of Digital Games (2018). [link]

Robert Kleffner, Jeff Flatten, Andrew Leaver-Fay, David Baker, Justin B. Siegel, Firas Khatib and Seth Cooper. Foldit Standalone: a video game-derived protein structure manipulation interface using Rosetta. Bioinformatics (2017). [link]

Jacqueline Gaston and Seth Cooper. To three or not to three: improving human computation game onboarding with a three-star system. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2017). [link]

Scott Horowitz, Brian Koepnick, Raoul Martin, Agnes Tymieniecki, Amanda A. Winburn, Seth Cooper, Jeff Flatten, David S. Rogawski, Nicole M. Koropatkin, Tsinatkeab T. Hailu, Neha Jain, Philipp Koldewey, Logan S. Ahlstrom, Matthew R. Chapman, Andrew P. Sikkema, Meredith A. Skiba, Finn P. Maloney, Felix R. M. Beinlich, Foldit Players, University of Michigan students, Zoran Popović, David Baker, Firas Khatib and James C. A. Bardwell. Determining crystal structures through crowdsourcing and coursework. Nature Communications 7, Article number: 12549 (2016). [link]

Dun-Yu Hsiao, Min Sun, Christy Ballweber, Seth Cooper and Zoran Popović. Proactive sensing for improving hand pose estimation. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2016). [link]

Dun-Yu Hsiao, Seth Cooper, Christy Ballweber and Zoran Popović. User behavior transformation through dynamic input mappings. Proceedings of the 9th International Conference on the Foundations of Digital Games (2014). [link]

George A. Khoury, Adam Liwo, Firas Khatib, Hongyi Zhou, Gaurav Chopra, Jaume Bacardit, Leandro O. Bortot, Rodrigo A. Faccioli, Xin Deng, Yi He, Pawel Krupa, Jilong Li, Magdalena A. Mozolewska, Adam K. Sieradzan, James Smadbeck, Tomasz Wirecki, Seth Cooper, Jeff Flatten, Kefan Xu, David Baker, Jianlin Cheng, Alexandre C. B. Delbem, Christodoulos A. Floudas, Chen Keasar, Michael Levitt, Zoran Popović, Harold A. Scheraga, Jeffrey Skolnick, Silvia N. Crivelli and Foldit Players. WeFold: a coopetition for protein structure prediction. Proteins (2014). [link]

Christopher B. Eiben, Justin B. Siegel, Jacob B. Bale, Seth Cooper, Firas Khatib, Betty W. Shen, Foldit Players, Barry L. Stoddard, Zoran Popović and David Baker. Increased Diels-Alderase activity through backbone remodeling guided by Foldit players. Nature Biotechnology (2012). [link]

Erik Andersen, Eleanor O'Rourke, Yun-En Liu, Richard Snider, Jeff Lowdermilk, David Truong, Seth Cooper and Zoran Popović. The impact of tutorials on games of varying complexity. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (2012). [link]

Firas Khatib, Seth Cooper, Michael D. Tyka, Kefan Xu, Ilya Makedon, Zoran Popović, David Baker and Foldit Players. Algorithm discovery by protein folding game players. Proceedings of the National Academy of Sciences of the United States of America (2011). [link]

Miroslaw Gilski, Maciej Kazmierczyk, Szymon Krzywda, Helena Zábranská, Seth Cooper, Zoran Popović, Firas Khatib, Frank DiMaio, James Thompson, David Baker, Iva Pichová and Mariusz Jaskolski. High-resolution structure of a retroviral protease folded as a monomer. Acta Crystallographica (2011). [link]

Firas Khatib, Frank DiMaio, Foldit Contenders Group, Foldit Void Crushers Group, Seth Cooper, Maciej Kazmierczyk, Miroslaw Gilski, Szymon Krzywda, Helena Zábranská, Iva Pichová, James Thompson, Zoran Popović, Mariusz Jaskolski and David Baker. Crystal structure of a monomeric retroviral protease solved by protein folding game players. Nature Structural and Molecular Biology (2011). [link]

Seth Cooper, Firas Khatib, Ilya Makedon, Hao Lu, Janos Barbero, David Baker, James Fogarty, Zoran Popović and Foldit Players. Analysis of social gameplay macros in the Foldit cookbook. Proceedings of the 6th International Conference on the Foundations of Digital Games (2011). [link]

Seth Cooper, Firas Khatib, Adrien Treuille, Janos Barbero, Jeehyung Lee, Michael Beenen, Andrew Leaver-Fay, David Baker, Zoran Popović and Foldit Players. Predicting protein structures with a multiplayer online game. Nature (2010). [link]

Seth Cooper, Adrien Treuille, Janos Barbero, Andrew Leaver-Fay, Kathleen Tuite, Firas Khatib, Alex Cho Snyder, Michael Beenen, David Salesin, David Baker, Zoran Popović and Foldit players. The challenge of designing scientific discovery games. Proceedings of the 5th International Conference on the Foundations of Digital Games (2010). [link]

Foldit has been in dozens of publications over the years - to list them all would take a page of their own. For a sampling, please see our Center for Game Science page.

Check out the Rosetta@Home Screensaver to see how computers fold proteins using distributed computing.

Thank you for using Foldit in your classroom! We have put together a set of instructions to assist you in setting up your students to play Foldit.

You can find the researchers and supporters associated with this study on the game's credits page.


UCI vision scientist Krzysztof Palczewski elected to National Academy of Medicine – UCI News

Irvine, Calif., Oct. 21, 2019 Krzysztof Palczewski, the Irving H. Leopold Chair in Ophthalmology and a professor of physiology & biophysics at the University of California, Irvine, has been elected to the National Academy of Medicine, one of the highest distinctions accorded to professionals in the medical sciences, healthcare and public health. He is one of 100 new U.S.-based members announced today.

The National Academy of Medicine recognizes leaders in diverse fields including health and medicine; the natural, social and behavioral sciences; and beyond. Through its domestic and global initiatives, the academy works to address critical issues in health, medicine and related policy and inspire positive action across sectors.

"Congratulations to Dr. Palczewski on this exceptional achievement, which illustrates the academic excellence of UCI faculty," said Enrique Lavernia, UCI provost and executive vice chancellor. "With the election of Dr. Palczewski to the National Academy of Medicine, UCI is now home to 42 members of the National Academies of Sciences, Engineering and Medicine; 35 members of the American Academy of Arts & Sciences; nine members of the National Academy of Inventors; and four members of the National Academy of Education."

"I feel deeply honored by the National Academy of Medicine election," Palczewski said. "Such recognition reflects on our colleagues, collaborators and trainees who contributed to impactful research on eye diseases. Clearly, this distinction further encourages us to give our very best efforts in the next stage of our research: developing therapeutics against blinding diseases."

The internationally renowned chemist, pharmacologist and vision scientist has made critical additions to the understanding of the molecular basis of age-related macular degeneration and inherited retinal degeneration, illuminating the path toward the creation of new vision treatments.

Palczewski has studied the pharmacology of vision for more than 30 years, and his work has had a tremendous impact on efforts to restore vision in people suffering from retinitis pigmentosa and other congenital mutations that result in blindness.

He is best known for discovering the structure, folding and binding properties of rhodopsin, a light-sensitive photoreceptor protein. His findings profoundly increased comprehension of the molecular basis of vision and the structure of photoreceptor cells in the retina. They also contributed to the ability to originate new molecular therapies for age-related macular degeneration and other retinopathies.

Palczewski came to UCI last year from Case Western Reserve University in Cleveland to establish the Center for Translational Vision Research at the Gavin Herbert Eye Institute, which is part of the UCI School of Medicine. There, he collaborates with a team of noted vision scientists to maximize opportunities to translate insights from basic science investigations into clinical treatments.

He holds 29 issued and nine pending patents and has received several prestigious accolades, including the 2015 Bressler Prize in Vision Science and the inaugural 2014 Beckman-Argyros Award in Vision Research.

In addition, Palczewski is the only person to have won both the Cogan Award (1996) for the most promising young vision scientist and the Friedenwald Award (2014) for continuously outstanding ophthalmology research from the Association for Research in Vision and Ophthalmology. His work has been cited more than 46,000 times, with an h-index impact factor of 115, according to Google Scholar.

Palczewski earned a Ph.D. in biochemistry at the Wrocław University of Science and Technology in Poland.

About the University of California, Irvine: Founded in 1965, UCI is the youngest member of the prestigious Association of American Universities. The campus has produced three Nobel laureates and is known for its academic achievement, premier research, innovation and anteater mascot. Led by Chancellor Howard Gillman, UCI has more than 36,000 students and offers 222 degree programs. It's located in one of the world's safest and most economically vibrant communities and is Orange County's second-largest employer, contributing $5 billion annually to the local economy. For more on UCI, visit http://www.uci.edu.



Discover: Science is often wrong and that’s actually a really good thing – Sudbury.com

I'm a geneticist. I study the connection between information and biology, essentially what makes a fly a fly and a human a human. Interestingly, we're not that different. It's a fantastic job and I know, more or less, how lucky I am to have it.

I've been a professional geneticist since the early 1990s. I'm reasonably good at this, and my research group has done some really good work over the years. But one of the challenges of the job is coming to grips with the idea that much of what we think we know is, in fact, wrong.

Sometimes, we're just off a little, and the whole point of a set of experiments is simply trying to do a little better, to get a little closer to the answer. At some point, though, in some aspect of what we do, it's likely that we're just flat out wrong. And that's okay. The trick is being open-minded enough, hopefully, to see that someday, and then to make the change.

One of the amazing things about being a modern geneticist is that, generally speaking, people have some idea of what I do: work on DNA (deoxyribonucleic acid). When I ask a group of school kids what a gene is, the most common answer is DNA. And this is true, with some interesting exceptions. Genes are DNA and DNA is the information in biology.

For almost 100 years, biologists were certain that the information in biology was found in proteins and not DNA, and there were geneticists who went to the grave certain of this. How they got it wrong is an interesting story.

Genetics, microscopy (actually creating the first microscopes), and biochemistry were all developing together in the late 1800s. Not surprisingly, one of the earliest questions that fascinated biologists was how information was carried from generation to generation. Offspring look like their parents, but why? Why your second daughter looks like the postman is a question that came up later.

Early cell biologists were using the new microscopes to peer into the cell in ways that simply hadn't been possible previously. They were finding thread-like structures in the interior of cells that passed from generation to generation, were similar within a species, but different between them. We now know these threads as chromosomes. Could these hold the information that scientists were looking for?

Advances in biochemistry paralleled those in microscopy and early geneticists determined that chromosomes were primarily made up of two types of molecules: proteins and DNA. Both are long polymers (chains) made up of repeated monomers (links in the chains). It seemed very reasonable that these chains could contain the information of biological complexity.

By analogy, think of a word as just a string of letters, a sentence as a chain of words, and a paragraph as a chain of sentences. We can think of chromosomes, then, as chapters, and all of our genetic information, what we now call our genome (all our genetic material), as the chapters that make up a novel. The question to those early geneticists, then, was: Which string made up the novel? Was it protein or DNA?

You and I know the answer: DNA. Early geneticists, however, got it wrong and then passionately defended this wrong stance for eight decades. Why? The answer is simple. Protein is complicated. DNA is simple. Life is complicated. The alphabet of life, then, should be complicated and protein fits that.

Proteins are made up of 20 amino acids: there are 20 different kinds of links in the protein chain. DNA is made up of only four nucleotides: there are only four different links in the DNA chain. Given the choice between a complicated alphabet and a simple one, the reasonable choice was the complicated one, namely protein. But biology doesn't always follow the obvious path, and the genetic material was, and is, DNA.
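Either alphabet, it turns out, is expressive enough: the number of distinct chains grows exponentially with chain length, so even DNA's four-letter alphabet yields an enormous space of possible sequences. A minimal sketch of the arithmetic (the chain lengths here are illustrative, not from the article):

```python
# Number of distinct chains of a given length over an alphabet:
# alphabet_size ** length, since each link is chosen independently.

def chain_count(alphabet_size: int, length: int) -> int:
    return alphabet_size ** length

# A chain of just 10 links:
dna = chain_count(4, 10)       # 4 nucleotides -> 1,048,576 possible chains
protein = chain_count(20, 10)  # 20 amino acids -> 10,240,000,000,000

print(dna)      # 1048576
print(protein)  # 10240000000000
```

Protein's bigger alphabet makes the numbers larger, but the simple four-letter DNA alphabet is already more than capable of encoding biological complexity.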

It took decades of experiments to disprove conventional wisdom and convince most people that biological information was in DNA. For some, it took James Watson and Francis Crick (http://www.pbs.org/wgbh/aso/databank/entries/do53dn.html), using data misappropriated from Rosalind Franklin (https://www.nature.com/scitable/topicpage/rosalind-franklin-a-crucial-contribution-6538012/), deciphering the structure of DNA in 1953 to drive the nail into the protein coffin. It just seemed too obvious that protein, with all its complexity, would be the molecule that coded for complexity.

These were some of the most accomplished and thoughtful scientists of their day, but they got it wrong. And that's okay if we learn from their mistakes.

It is too easy to dismiss this example as the foolishness of the past. We wouldn't make this kind of mistake today, would we? I can't answer that, but let me give you another example that suggests we would, and I'll argue at the end that we almost certainly are.

I'm an American, and one of the challenges of moving to Canada was having to adapt to overcooked burgers (my mother still can't accept that she can't get her burger medium when she visits). This culinary challenge is driven by a phenomenon that is one of the more interesting recent cases of scientists having it wrong and refusing to see it.

In the late 1980s, cows started wasting away and, in the late stages of what was slowly recognized as a disease, acting in such a bizarre manner that their disease, bovine spongiform encephalopathy, became known as Mad Cow Disease. Strikingly, the brains of the cows were full of holes (hence spongiform), and the holes were caked with plaques of proteins clumped together.

Really strikingly, the proteins were ones that are found in healthy brains, but now in an unnatural shape. Proteins are long chains, but they function because they have complex 3D shapes (think origami). Proteins fold and fold into specific shapes. But these proteins found in sick cow brains had a shape not normally seen in nature; they were misfolded.

Sometime after, people started dying from the same symptoms and a connection was made between eating infected cows and contracting the disease (cows could also contract the disease, but likely through saliva or direct contact, and not cannibalism). Researchers also determined the culprit was consumption only of neural tissue, brain and spinal tissue, the very tissue that showed the physical effects of infection (and this is important).

One of the challenges of explaining the disease was the time-course from infection to disease to death; it was long and slow. Diseases, we knew, were transmitted by viruses and bacteria, but no scientist could isolate one that would explain this disease. Further, no one knew of other viruses or bacteria whose infection would take this long to lead to death. For various reasons, people leaned toward assuming a viral cause, and careers and reputations were built on finding the slow virus.

In the late 1980s, a pair of British researchers suggested that perhaps the shape, the folding, of the proteins in the plaques was key. Could the misfolding be causing the clumping that led to the plaques? This proposal was soon championed by Stanley Prusiner, a young scientist early in his career.

The idea was simple. The misfolded protein was itself both the result and the cause of the infection. Misfolded proteins clumped, forming plaques that killed the brain tissue; they also caused correctly folded versions of the proteins to misfold. The concept was straightforward, but completely heretical. Disease, we knew, did not work that way. Diseases are transmitted by viruses or bacteria, and the information is transmitted as DNA (or, rarely, RNA, a closely related molecule). Disease is not transmitted in protein folding (although in 1963 Kurt Vonnegut had predicted such a model for world-destroying ice formation in his amazing book Cat's Cradle).

For holding this protein-based view of infection, Prusiner was literally and metaphorically shouted out of the room. Then he showed, experimentally and elegantly, that misfolded proteins, which he called prions, were the cause of these diseases, of both symptoms and infection.

For this accomplishment, he was awarded the 1997 Nobel Prize in Medicine. He, and others, were right. Science, with a big S, was wrong. And that's okay. We now know that prions are responsible for a series of diseases in humans and other animals, including Chronic Wasting Disease, the spread of which poses a serious threat to deer and elk here in Ontario.

Circling back, the overcooked burger phenomenon exists because of these proteins. If you heat the prions sufficiently, they lose their unnatural shape (all shape, actually) and the beef is safe to eat. A well-done burger will guarantee no infectious prions, while a medium one will not. We don't have this issue in the U.S. because cows south of the border are less likely to have been infected with the prions than their northern counterparts (or at least Americans are willing to pretend this is the case).

Where does this leave us? To me, the take-home message is that we need to remain skeptical, but curious. Examine the world around you with curious eyes, and be ready to challenge and question your assumptions.

Also, don't ignore the massive things in front of your eyes simply because they don't fit your understanding of, or wishes for, the world around you. Climate change, for example, is real and will likely make this a more difficult world for our children. I've spent a lot of time in my career putting together models of how the biological world works, but I know pieces of these models are wrong.

I can almost guarantee you that I have something as fundamentally wrong as those early geneticists stuck on protein as the genetic material of cells, or the prion-deniers; I just don't know what it is. Yet.

And this situation is okay. The important thing isn't to be right. Instead, it is to be open to seeing when you are wrong.

Dr. Thomas Merritt is the Canada Research Chair in Genomics and Bioinformatics at Laurentian University.


Antibiotics with novel mechanism of action discovered – Drug Target Review

A new family of synthetic antibiotics that possess broad anti-Gram-negative antimicrobial activity has been discovered.

Researchers have reported the discovery and characterisation of a new family of synthetic antibiotics that possess broad-spectrum anti-Gram-negative antimicrobial activity.

The research teams were headed by the University of Zurich (UZH) and Polyphor AG, both Switzerland.

"The new antibiotics interact with essential outer membrane proteins in Gram-negative bacteria," said John Robinson from the UZH Department of Chemistry, who co-led the study. "According to our results, the antibiotics bind to complex fat-like substances called lipopolysaccharides and to BamA, an essential protein of the outer membrane of Gram-negative bacteria."

E. coli cells treated with a novel chimeric peptidomimetic antibiotic. Cells in blue are alive, while green cells have already been killed by the peptidomimetic. As the antibiotic destroys the integrity of the bacterial membranes, the scientists observed explosive cell lysis (cells indicated by arrows), which leads to the release of DNA (diffuse green) (credit: Matthias Urfer, UZH).

BamA is the main component of the so-called β-barrel folding complex (BAM), which is essential for outer membrane synthesis. By targeting this essential outer membrane protein, the antibiotics destroy the integrity of the bacterial membranes and the cells burst.

"The outer membrane of Gram-negative bacteria protects the cells from toxic environmental factors, such as antibiotics. It is also responsible for the uptake and export of nutrients and signalling molecules. Despite its critical importance, so far no clinical antibiotics target these key proteins required for outer membrane biogenesis," Robinson continued.

"The plan now is to progress one compound into human clinical trials. POL7306, a first lead molecule of the novel antibiotics class, is now in pre-clinical development," added Daniel Obrecht, chief scientific officer at Polyphor and co-head of the work.


The top AI lighthouse projects to watch in biopharma – FierceBiotech

So-called lighthouse projects are typically defined as small efforts focused on deliverables in a narrow area, designed to establish a pathway for larger enterprises down the line. But as the biopharma industry's digital transformation continues apace, with the rapid acceptance of tools such as artificial intelligence and machine learning, certain projects promise to yield impacts on a much wider scale.

At the same time, AI expertise is being quickly diffused throughout the industry through collaborations, with numerous tech firms offering different approaches and services as drug and device makers shop around for those that best suit their needs.

In just the past few years, biopharma has built a large, interconnected network of partnerships as it aims to transform and digitize its processes to maximize their value across everything from molecule design to clinical trial planning, as well as in supply chain, quality control and sales strategies.

In a survey released earlier this month by Optum of 500 healthcare industry leaders and professionals, the number of respondents who said their organizations had an AI implementation strategy in place had increased by nearly 88% compared to the year before.

Additionally, the survey found that organizations plan to spend about $40 million apiece on average on AI-related projects over the next five years, and half of respondents said they expect to see positive returns on AI investments in three years or less.

"It's encouraging to see executives' growing trust in, and adoption of, AI to make data more actionable in making the health system work better for everyone," Optum President and COO Dan Schumacher said in a statement, although higher levels of trust in AI were seen with regard to administrative applications compared to clinical ones, and the automation of business processes was ranked higher as a priority.

RELATED: Novartis to put AI on every employee's desk through Microsoft partnership

Still, this sets the stage for potentially rapid adoptions in R&D and care delivery as new methods are validated and become available.

Forming collaborations and partnerships will be paramount, as many companies lack the expertise needed to make the transformation on their own in the coming years. Among medtech companies specifically, a report from Deloitte predicts that the current illness-focused system will be completely overhauled by 2040 and replaced by a proactive one that integrates data to personalize a continuum of care spanning before and after a procedure.

One of the ripest sectors for advancement is in digital pathology and assisting diagnosis. Many research projects aim to use machine learning processes to spot the patterns of diseases or conditions in images or scans. This can help ease the burden on hospital departments by screening chest X-rays, MRI scans, tissue slides or pictures of the eye.

In April 2018, the FDA approved the first medical device in the U.S. to use artificial intelligence to detect cases of diabetic retinopathy, the most common cause of vision loss among people with diabetes. Using digital images uploaded from a retinal camera, the AI, dubbed IDx-DR, can detect those with more than a mild case of the disease and refer them to a healthcare professional.

RELATED: AstraZeneca enlists artificial intelligence for sales rep coaching

The benefits of automating healthcare and research processes are similar in other areas of medicine: not just in mining data for insight, but also in sharing those insights.

Take the MELLODDY project, for example: short for Machine Learning Ledger Orchestration for Drug Discovery, the initiative hopes to share preclinical data among a network of Big Pharma companies and research partners using a blockchain-based infrastructure to protect confidentiality and proprietary information.

Meanwhile, the newest center of the NIH is working to build a universal translator for medical data, with the goal of bringing together researchers from different fields across the healthcare enterprise, on top of redefining our current definitions of disease based on the findings.

Machine learning can also spot patterns in a flood of data from multiple sources, where certain changes, no matter how small, can herald the onset of Alzheimer's disease and dementia. Evidation Health, along with Apple and Eli Lilly, is looking to develop digital biomarkers for neurodegenerative disease by tracking people's daily routines, device usage and changes in speech.

Using all the information provided by patients from different angles, and then feeding that data back into care delivery to potentially improve outcomes, is where machine learning can excel, and Verb Surgical, the joint venture between Verily and Johnson & Johnson, aims to use those tenets to drive a new generation of surgery.

RELATED: FDA delivers regulatory guidance on AI software and clinical decision-making aids

Elsewhere in the Google/Alphabet sphere, DeepMind has developed an AI program to take on one of the most challenging problems in medical science: predicting protein folding, a mathematical problem with a spectacular number of possibilities. The field once relied on human intuition to solve the puzzle. Now we're teaching machines to follow similar instincts.
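The "spectacular number of possibilities" is the classic Levinthal back-of-the-envelope argument: if each link in the protein chain can settle into only a handful of local conformations, the total number of folds explodes exponentially. A rough sketch (the three-states-per-bond figure is a standard textbook simplification, not a number from the article):

```python
# Levinthal-style estimate: a chain of n residues has n-1 peptide bonds;
# if each bond can adopt k local conformations (k = 3 is a common
# illustrative assumption), the chain has k**(n-1) possible folds.

def conformation_count(n_residues: int, states_per_bond: int = 3) -> int:
    return states_per_bond ** (n_residues - 1)

# Even a modest 100-residue protein yields an astronomical search space,
# on the order of 10**47 conformations:
print(conformation_count(100))
```

Brute-force search over such a space is hopeless, which is why the field leaned on human intuition, and why learned heuristics like DeepMind's are attractive.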

But the biggest changes will come when these methods, tools and knowledge can be made widely available, and the public-private ATOM consortium is looking to do just that. Spun out of the U.S. government's cancer moonshot efforts, the initiative by GlaxoSmithKline, UC San Francisco and federal research laboratories hosts a series of research projects aimed at accelerating preclinical development to a timeline under one year.

This small handful of projects hopes to have an outsized impact, and to serve as a beacon for the industry as a whole. Conor Hale


RNA Folding Insights Lead to New Therapeutics and Synthetic Biology Technologies – Technology Networks

A Northwestern Engineering research team led by Professor Julius Lucks has uncovered a new understanding of how RNA molecules act as cellular 'biosensors' to monitor and respond to changes in the environment by controlling gene expression. The findings could impact the design of future RNA-specific therapeutics as well as new synthetic biology tools that measure the presence of toxins in the environment.

RNA molecules play a pivotal role in storing and propagating genetic information like DNA, as well as performing functions critical to living systems like proteins. At the core of its function is its ability to undergo origami-style folding into intricate shapes inside the cell.

Using high-throughput next-generation sequencing technology developed in his lab that chemically images the dynamic shapes RNAs fold into, Lucks found similarities in the folding tendencies among a family of RNA molecules, called riboswitches. Riboswitches act as natural biosensors to monitor the internal and external state of cells. When a riboswitch binds to a molecule, it changes its shape, causing a change in gene expression.

"These riboswitches have evolved to fold into very specific shapes so they can recognize other compounds, change their shape when they bind to them, and ultimately induce a change in gene expression," said Lucks, associate chair and professor of chemical and biological engineering at the McCormick School of Engineering. "There's been little studied about how exactly they can fold and adjust those shapes, especially since they do so before the RNAs are fully made. We learned that there is an evolutionary pressure on RNAs to not only fold into the final structure, but to have a pathway to do so similarly and efficiently."

A paper outlining the work, titled "A Ligand Gated Strand Displacement Mechanism for ZTP Riboswitch Transcription Control," was published on October 21 in the journal Nature Chemical Biology. The study was also featured in the journal's "News & Views" section.

Lucks served as the paper's corresponding author, while Eric Strobel, a Beckman Postdoctoral Fellow in Lucks's group, served as the study's lead author. PhD students Katherine Berman and Luyi Cheng, and visiting predoctoral scholar Paul Carlson, all from the Lucks Lab, also contributed to the research.

The study builds on past research in which Lucks and his team developed a platform that provides super high-resolution representations of RNA shape changing as the RNAs are synthesized.

Finding Folding Similarities

Previously, Lucks and his team used their high-resolution system to study how a riboswitch sensed the fluoride ion. In the Nature Chemical Biology paper, he applied the system to a riboswitch responsible for sensing a natural cellular alarmone molecule called ZTP, which Lucks said functions as an "alarm trigger" in cells.

Despite structural and functional differences between the riboswitches and their respective target compounds, Lucks discovered that in both instances the riboswitches followed the same folding pathway -- the series of shapes the RNA molecule progresses through as it is synthesized.


"Once RNAs are made, they immediately fold into a shape that recognizes the molecule. If the molecule is there, the shape locks in and preserves the structure," Lucks said. "If the molecule isn't present, the RNA unravels itself. We found that happened in both instances.

"Whether you're trying to make an origami crane or frog, the first several steps are pretty much the same," he added. "While these RNAs look different, they're amazingly similar when you break them down into their sequence of folding instructions. Finding links to these common features lays the groundwork for coding these principles as design elements for when we want to harness them for our own uses."
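The mechanism Lucks describes, shared early folding steps followed by a ligand-dependent choice to lock in or unravel, can be sketched as a toy model. The function, state names, and the simple boolean switch below are invented for illustration only; real riboswitch folding is a kinetic, cotranscriptional process, not a branch in a program:

```python
# Toy model of the ligand-gated folding decision described above.
# All names and states are illustrative, not the lab's analysis code.

def folding_pathway(ligand_present: bool) -> list:
    """Return the sequence of shapes a ZTP-style riboswitch passes through."""
    # Early steps are shared, like the first folds of an origami crane or frog.
    pathway = ["nascent RNA emerges", "sensor shape folds"]
    if ligand_present:
        # Ligand binding locks the sensor in place; expression proceeds.
        pathway += ["ligand binds: structure locks in", "gene expression ON"]
    else:
        # Without the ligand, strand displacement unravels the sensor.
        pathway += ["no ligand: sensor unravels", "gene expression OFF"]
    return pathway
```

The shared prefix of the two pathways mirrors the origami analogy: the early folding instructions are identical, and only the final ligand-dependent steps diverge.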

Those uses could include future drug delivery strategies. While many therapeutics are designed to treat diseases caused by protein misfolding, such as Alzheimer's or Parkinson's, Lucks believes his lab's work could inform efforts to treat diseases believed to be triggered at the RNA level, including spinal muscular atrophy, a neuromuscular disorder caused by the mis-splicing of the SMN gene.

"You may not only want to target the final structure of an RNA molecule, because they all fold in some sort of structure, but also the folding process to get into that structure," he said.

The findings also represent a positive step toward harnessing RNA's capability as a natural biosensor. Working with Northwestern's Center for Synthetic Biology and Center for Water Research, Lucks and his lab are pursuing how riboswitches could be used within low-cost synthetic biology platforms to detect toxins in the environment, impacting areas like crop health and water quality.

"As we learn more about the architecture behind how RNAs work, we'll seek to understand how to make them work better," Lucks said. "Nature may have evolved to make them do one thing, but we want them to work for us faster or more sensitively. We're still learning how to do that, but we're nearing that level of detail where we can truly design around these principles."

Reference: Strobel, E. J., et al. (2019). A ligand-gated strand displacement mechanism for ZTP riboswitch transcription control. Nature Chemical Biology. DOI: https://doi.org/10.1038/s41589-019-0382-7.

This article has been republished from the following materials. Note: material may have been edited for length and content. For further information, please contact the cited source.

Excerpt from:

RNA Folding Insights Lead to New Therapeutics and Synthetic Biology Technologies - Technology Networks