Cancer to Be Treated as Easily as Common Cold When Humans Crack Quantum Computing – Business Wire

DUBAI, United Arab Emirates--(BUSINESS WIRE)--Breakthroughs in quantum computing will enable humans to cure diseases like cancer, Alzheimer's, and Parkinson's as easily as we treat the common cold.

That was one of the major insights to emerge from the Dubai Future Forum, with renowned theoretical physicist Dr. Michio Kaku telling the world's largest gathering of futurists that humanity should brace itself for major transformations in healthcare.

The forum concluded with a call for governments to institutionalize foresight and engrain it within decision making.

Speaking at the forum, held at the Museum of the Future in Dubai, UAE, Amy Webb, CEO of the Future Today Institute, criticized nations for being too preoccupied with the present and too focused on creating white papers, reports and policy recommendations instead of taking action.

"Nowism is a virus. Corporations and governments are infected," she said.

One panel session heard how humans could be ready to test life on the Moon in just 15 years and be ready for life on Mars in another decade. Sharing his predictions for the future, Dr. Kaku also said there is a very good chance humans will pick up a signal from another intelligent life form this century.

Dr. Jamie Metzl, Founder and Chair, OneShared.World, urged people to eat more lab-grown meat to combat global warming and food insecurity.

"If we are treating them like a means to an end of our nutrition, wouldn't it be better, instead of growing the animal, to grow the meat?" he said.

Among the 70 speakers participating in sessions were several UAE ministers. HE Mohammad Al Gergawi, UAE Minister of Cabinet Affairs, Vice Chairman, Board of Trustees and Managing Director of the Dubai Future Foundation, said ministers around the world should think of themselves as designers of the future. "Our stakeholders are 7.98 billion people around the world," he noted.

Dubai's approach to foresight was lauded by delegates, including HE Omar Sultan Al Olama, UAE Minister of State for Artificial Intelligence, Digital Economy, and Remote Work Applications, who said: "What makes our city and nation successful is not natural resources, but a unique ability to embrace all ideas and individuals."

More than 30 sessions covered topics including immortality, AI sentience, climate change, terraforming, genome sequencing, legislation, and the energy transition.

*Source: AETOSWire


New laboratory to explore the quantum mysteries of nuclear materials – EurekAlert

Replete with tunneling particles, electron wells, charmed quarks and zombie cats, quantum mechanics takes everything Sir Isaac Newton taught about physics and throws it out the window.

Every day, researchers discover new details about the laws that govern the tiniest building blocks of the universe. These details not only increase scientific understanding of quantum physics, but they also hold the potential to unlock a host of technologies, from quantum computers to lasers to next-generation solar cells.

But there's one area that remains a mystery even in this most mysterious of sciences: the quantum mechanics of nuclear fuels.

Until now, most fundamental scientific research of quantum mechanics has focused on elements such as silicon because these materials are relatively inexpensive, easy to obtain and easy to work with.

Now, Idaho National Laboratory researchers are planning to explore the frontiers of quantum mechanics with a new synthesis laboratory that can work with radioactive elements such as uranium and thorium.

An announcement about the new laboratory appears online in the journal Nature Communications.

Uranium and thorium, which are part of a larger group of elements called actinides, are used as fuels in nuclear power reactors because they can undergo nuclear fission under certain conditions.

However, the unique properties of these elements, especially the arrangement of their electrons, also mean they could exhibit interesting quantum mechanical properties.

In particular, the behavior of particles in special, extremely thin materials made from actinides could increase our understanding of phenomena such as quantum wells and quantum tunneling (see sidebar).

To study these properties, a team of researchers has built a laboratory around molecular beam epitaxy (MBE), a process that creates ultra-thin layers of materials with a high degree of purity and control.

"The MBE technique itself is not new," said Krzysztof Gofryk, a scientist at INL. "It's widely used. What's new is that we're applying this method to actinide materials, uranium and thorium. Right now, this capability doesn't exist anywhere else in the world that we know of."

The INL team is conducting fundamental research, science for the sake of knowledge, but the practical applications of these materials could make for some important technological breakthroughs.

"At this point, we are not interested in building a new qubit [the basis of quantum computing], but we are thinking about which materials might be useful for that," Gofryk said. "Some of these materials could be potentially interesting for new memory banks and spin-based transistors, for instance."

Memory banks and transistors are both important components of computers.

To understand how researchers make these very thin materials, imagine an empty ball pit at a fast-food restaurant. Blue and red balls are thrown in the pit one at a time until they make a single layer on the floor. But that layer isn't a random assortment of balls. Instead, they arrange themselves into a pattern.

During the MBE process, the empty ball pit is a vacuum chamber, and the balls are highly pure elements, such as nitrogen and uranium, that are heated until individual atoms can escape into the chamber.

The floor of our imaginary ball pit is, in reality, a charged substrate that attracts the individual atoms. On the substrate, atoms order themselves to create a wafer of very thin material, in this case uranium nitride.

Back in the ball pit, we've created a layer of blue and red balls arranged in a pattern. Now we make another layer of green and orange balls on top of the first layer.

To study the quantum properties of these materials, Gofryk and his team will join two dissimilar wafers of material into a sandwich called a heterostructure. For instance, the thin layer of uranium nitride might be joined to a thin layer of another material such as gallium arsenide, a semiconductor. At the junction between the two different materials, interesting quantum mechanical properties can be observed.

"We can make sandwiches of these materials from a variety of elements," Gofryk said. "We have lots of flexibility. We are trying to think about the novel structures we can create with maybe some predicted quantum properties."

"We want to look at electronic properties, structural properties, thermal properties and how electrons are transported through the layers," he continued. "What will happen if you lower the temperature and apply a magnetic field? Will it cause electrons to behave in a certain way?"

INL is one of the few places where researchers can work with uranium and thorium for this type of science. The amounts of radioactive material involved, and the consequent safety concerns, will be comparable to the radioactivity found in an everyday smoke alarm.

"INL is the perfect place for this research because we're interested in this kind of physics and chemistry," Gofryk said.

In the end, Gofryk hopes the laboratory will result in breakthroughs that help attract attention from potential collaborators as well as recruit new employees to the laboratory.

"These actinides have such special properties," he said. "We're hoping we can discover some new phenomena or new physics that hasn't been found before."

In 1900, German physicist Max Planck first described how light emitted from heated objects, such as the filament in a light bulb, behaved like particles.

Since then, numerous scientists including Albert Einstein and Niels Bohr have explored and expanded upon Plancks discovery to develop the field of physics known as quantum mechanics. In short, quantum mechanics describes the behavior of atoms and subatomic particles.

Quantum mechanics is different from classical physics, in part, because subatomic particles simultaneously have characteristics of both particles and waves, and their energy and movement occur in discrete amounts called quanta.

More than 120 years later, quantum mechanics plays a key role in numerous practical applications, especially lasers and transistors, a key component of modern electronic devices. Quantum mechanics also promises to serve as the basis for the next generation of computers, known as quantum computers, which will be much more powerful at solving certain types of calculations.

Uranium, thorium and the other actinides have something in common that makes them interesting for quantum mechanics: the arrangement of their electrons.

Electrons do not orbit around the nucleus the way the earth orbits the sun. Rather, they zip around somewhat randomly. But we can define areas where there is a high probability of finding electrons. These clouds of probability are called orbitals.

For the smallest atoms, these orbitals are simple spheres surrounding the nucleus. However, as the atoms get larger and contain more electrons, orbitals begin to take on strange and complex shapes.

In very large atoms like uranium and thorium (92 and 90 electrons respectively), the outermost orbitals are a complex assortment of party balloon, jelly bean, dumbbell and hula hoop shapes. The electrons in these orbitals are high energy. While scientists can guess at their quantum properties, nobody knows for sure how they will behave in the real world.

Quantum tunneling is a key part of any number of phenomena, including nuclear fusion in stars, mutations in DNA and diodes in electronic devices.

To understand quantum tunneling, imagine a toddler rolling a ball at a mountain. In this analogy, the ball is a particle. The mountain is a barrier, most likely a semiconductor material. In classical physics, there's no chance the ball has enough energy to pass over the mountain.

But in the quantum realm, subatomic particles have properties of both particles and waves. The wave's peak represents the highest probability of finding the particle. Thanks to a quirk of quantum mechanics, while most of the wave bounces off the barrier, a small part of that wave travels through if the barrier is thin enough.

For a single particle, the small amplitude of this wave means there is a very small chance of the particle making it to the other side of the barrier.

However, when large numbers of waves are travelling at a barrier, the probability increases, and sometimes a particle makes it through. This is quantum tunneling.
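The exponential sensitivity to barrier thickness described above can be made concrete with the standard textbook wide-barrier approximation, T ≈ e^(−2κL), where κ depends on the particle's mass and the energy deficit. This is an illustrative sketch, not a calculation from the article; the function name and the example barrier heights and widths are hypothetical.

```python
import math

# Physical constants (SI units)
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # one electronvolt in joules

def tunneling_probability(energy_ev, barrier_ev, width_m, mass=M_E):
    """Approximate transmission probability for a particle meeting a
    rectangular barrier, in the wide-barrier limit T ~ exp(-2*kappa*L)."""
    if energy_ev >= barrier_ev:
        return 1.0  # classically allowed: the particle passes over the barrier
    # Decay constant inside the barrier, kappa = sqrt(2m(V - E)) / hbar
    kappa = math.sqrt(2 * mass * (barrier_ev - energy_ev) * EV) / HBAR
    return math.exp(-2 * kappa * width_m)

# A 1 eV electron meeting a 5 eV barrier (hypothetical numbers):
thin = tunneling_probability(1.0, 5.0, 1e-10)   # 0.1 nm barrier
thick = tunneling_probability(1.0, 5.0, 1e-9)   # 1 nm barrier
# Making the barrier ten times thicker shrinks the odds enormously,
# which is why only very thin layers show appreciable tunneling.
```

This is why the ultra-thin layers produced by MBE matter: at nanometer thicknesses the transmission probability becomes large enough to observe and exploit.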

Quantum wells are also important, especially for devices such as light emitting diodes (LEDs) and lasers.

As with quantum tunneling, building quantum wells requires alternating layers of very thin (about 10 nanometers) material, where one layer acts as a barrier.

While electrons normally travel in three dimensions, quantum wells trap electrons in two dimensions within a barrier that is, for practical purposes, impossible to overcome. These electrons exist at specific energies, such as the precise energies needed to generate specific wavelengths of light.
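The discrete energies mentioned above can be estimated with the textbook particle-in-a-box formula, E_n = n²h²/(8mL²), for an idealized, infinitely deep well. This is an illustration under that simplifying assumption, not the laboratory's actual model; the function name is hypothetical, and the 10 nm width is borrowed from the layer thickness quoted above.

```python
import math

H = 6.62607015e-34       # Planck constant, J*s
M_E = 9.1093837015e-31   # electron mass, kg
EV = 1.602176634e-19     # one electronvolt in joules

def well_energy_ev(n, width_m, mass=M_E):
    """Energy (in eV) of level n for a particle confined in an idealized,
    infinitely deep one-dimensional quantum well: E_n = n^2 h^2 / (8 m L^2)."""
    return (n ** 2) * H ** 2 / (8 * mass * width_m ** 2) / EV

# Electron in a 10 nm well: only these discrete energies are allowed,
# and they grow as n^2 (E2 = 4*E1, E3 = 9*E1).
levels = [well_energy_ev(n, 10e-9) for n in (1, 2, 3)]
```

An electron dropping from one allowed level to another releases a photon of a fixed energy, which is how quantum wells pin down the wavelengths emitted by LEDs and lasers.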

About Idaho National Laboratory: Battelle Energy Alliance manages INL for the U.S. Department of Energy's Office of Nuclear Energy. INL is the nation's center for nuclear energy research and development, and also performs research in each of DOE's strategic goal areas: energy, national security, science and the environment. For more information, visit www.inl.gov. Follow us on social media: Twitter, Facebook, Instagram and LinkedIn.


General officers in the Confederate States Army – Wikipedia

Senior military leaders of the Confederate States of America

The general officers of the Confederate States Army (CSA) were the senior military leaders of the Confederacy during the American Civil War of 1861–1865. They were often former officers from the United States Army (the regular army) prior to the Civil War, while others were given the rank based on merit or when necessity demanded. Most Confederate generals needed confirmation from the Confederate Congress, much like prospective generals in the modern U.S. armed forces.

Like all of the Confederacy's military forces, these generals answered to their civilian leadership, in particular Jefferson Davis, the South's president and therefore commander-in-chief of the Army, Navy, and the Marines of the Confederate States.

Much of the design of the Confederate States Army was based on the structure and customs of the U.S. Army[1] when the Confederate Congress established their War Department on February 21, 1861.[2] The Confederate Army was composed of three parts: the Army of the Confederate States of America (ACSA, intended to be the permanent, regular army), the Provisional Army of the Confederate States (PACS, or "volunteer" Army, to be disbanded after hostilities), and the various Southern state militias.

Graduates from West Point and Mexican War veterans were highly sought after by Jefferson Davis for military service, especially as general officers. Like their Federal counterparts, the Confederate Army had both professional and political generals within it. Ranks throughout the CSA were roughly based on the U.S. Army in design and seniority.[3] On February 27, 1861, a general staff for the army was authorized, consisting of four positions: an adjutant general, a quartermaster general, a commissary general, and a surgeon general. Initially the last of these was to be a staff officer only.[2] The post of adjutant general was filled by Samuel Cooper (the position he had held as a colonel in the U.S. Army from 1852 until resigning) and he held it throughout the Civil War, as well as the army's inspector general.[4]

Initially, the Confederate Army commissioned only brigadier generals in both the volunteer and regular services;[2] however, the Congress quickly passed legislation allowing for the appointment of major generals as well as generals, thus providing clear and distinct seniority over the existing major generals in the various state militias.[5] On May 16, 1861, when there were only five officers at the grade of brigadier general, this legislation was passed, which stated in part:

That the five general officers provided by existing laws for the Confederate States shall have the rank and denomination of 'general', instead of 'brigadier-general', which shall be the highest military grade known to the Confederate States ...[6]

As of September 18, 1862, when lieutenant generals were authorized, the Confederate Army had four grades of general officers; they were (in order of increasing rank) brigadier general, major general, lieutenant general, and general.[7] As officers were appointed to the various grades of general by Jefferson Davis (and were confirmed), he would create the promotion lists himself. The dates of rank, as well as seniority of officers appointed to the same grade on the same day, were determined by Davis "usually following the guidelines established for the prewar U.S. Army."[8]

These generals were most often infantry or cavalry brigade commanders, aides to other higher ranking generals, and War Department staff officers. By war's end the Confederacy had at least 383 different men who held this rank in the PACS, and three in the ACSA: Samuel Cooper, Robert E. Lee, and Joseph E. Johnston.[9] The organization of regiments into brigades was authorized by the Congress on March 6, 1861. Brigadier generals would command them, and these generals were to be nominated by Davis and confirmed by the Confederate Senate.[2]

Though similar to the Union Army in assignments, Confederate brigadiers mainly commanded brigades, while Federal brigadiers sometimes led divisions as well as brigades, particularly in the first years of the war. These generals also often led sub-districts within military departments, with command over soldiers in their sub-district. These generals outranked Confederate Army colonels, who commonly led infantry regiments.

This rank is equivalent to brigadier general in the modern U.S. Army.

These generals were most commonly infantry division commanders, aides to other higher ranking generals, and War Department staff officers. They also led the districts that made up military departments and had command over the troops in their districts. Some major generals also led smaller military departments. By war's end, the Confederacy had at least 88 different men who had held this rank, all in the PACS.[10]

Divisions were authorized by the Congress on March 6, 1861, and major generals would command them. These generals were to be nominated by Davis and confirmed by the Senate.[2] Major generals outranked brigadiers and all other lesser officers.

This rank was not synonymous with the Union's use of it, as Northern major generals led divisions, corps, and entire armies. This rank is equivalent in most respects to major general in the modern U.S. Army.

Not further promoted

Evander McIver Law was promoted to the rank of major general on March 20, 1865, on the recommendation of Generals Johnston and Hampton just before the surrender. The promotion came too late to be confirmed by the Confederate Congress, however.

There were 18 lieutenant generals in the Confederate Army, and these general officers were often corps commanders within armies or military department heads, in charge of geographic sections and all soldiers in those boundaries. All of the Confederacy's lieutenant generals were in the PACS.[10] The Confederate Congress legalized the creation of army corps on September 18, 1862, and directed that lieutenant generals lead them. These generals were to be nominated by President Davis and confirmed by the C.S. Senate.[7] Lieutenant generals outranked major generals and all other lesser officers.

This rank was not synonymous with the Federal use of it; Ulysses S. Grant (1822–1885) was one of only two Federal lieutenant generals during the war. The other was Winfield Scott (1786–1866), General-in-Chief of the United States Army from 1841 to 1861, who had served in the War of 1812 (1812–1815), led an army in the field during the Mexican–American War (1846–1848), and received a promotion to brevet lieutenant general by a special Act of Congress in 1855. Grant was, by the time of his promotion on March 9, 1864, the only Federal lieutenant general in active service. Grant became General-in-Chief, commander of the United States Army and of all the Union armies, answering directly to President Abraham Lincoln and charged with leading the Federal armies to victory over the southern Confederacy. The CSA lieutenant general rank is also roughly equivalent to lieutenant general in the modern U.S. Army.

The Congress passed legislation in May 1864 to allow for "temporary" general officers in the PACS, to be appointed by President Jefferson Davis and confirmed by the C.S. Senate and given a non-permanent command by Davis.[12] Under this law, Davis appointed several officers to fill open positions. Richard H. Anderson was appointed a "temporary" lieutenant general on May 31, 1864, and given command of the First Corps in the Army of Northern Virginia commanded by Gen. Lee (following the wounding of Lee's second-in-command, Lt. Gen. James Longstreet, on May 6 in the Battle of the Wilderness). With Longstreet's return that October, Anderson reverted to a major general. Jubal Early was appointed a "temporary" lieutenant general on May 31, 1864, and given command of the Second Corps (following the reassignment of Lt. Gen. Richard S. Ewell to other duties) and led the Corps as an army into the third Southern invasion of the North in July 1864, with battles at the Monocacy near Frederick, Maryland, and Fort Stevens outside the Federal capital city of Washington, D.C., until December 1864, when he too reverted to a major general. Likewise, both Stephen D. Lee and Alexander P. Stewart were appointed to fill vacancies in the Western Theater as "temporary" lieutenant generals and also reverted to their prior grades as major generals as those assignments ended. However, Lee was nominated a second time for lieutenant general on March 11, 1865.[13]

Originally five officers in the South were appointed to the rank of general, and only two more would follow. These generals occupied the senior posts in the Confederate Army, mostly entire army or military department commanders, and advisers to Jefferson Davis. This rank is equivalent to general in the modern U.S. Army, and the grade is often referred to in modern writings as "full general" to help differentiate it from the generic term "general" meaning simply "general officer".[15]

All Confederate generals were enrolled in the ACSA to ensure that they outranked all militia officers,[5] except for Edmund Kirby Smith, who was appointed general late in the war and into the PACS. Pierre G.T. Beauregard, who had also initially been appointed a PACS general, was elevated to ACSA two months later with the same date of rank.[16] These generals outranked all other grades of generals, as well as all lesser officers in the Confederate States Army.

The first group of officers appointed to general was Samuel Cooper, Albert Sidney Johnston, Robert E. Lee, Joseph E. Johnston, and Pierre G.T. Beauregard, with their seniority in that order. This ordering caused Cooper, a staff officer who would not see combat, to be the senior general officer in the CSA. That seniority strained the relationship between Joseph E. Johnston and Jefferson Davis. Johnston considered himself the senior officer in the Confederate States Army and resented the ranks that President Davis had authorized. However, his previous position in the U.S. Army was staff, not line, which was evidently a criterion for Davis regarding establishing seniority and rank in the subsequent Confederate States Army.[17]

On February 17, 1864, legislation was passed by Congress to allow President Davis to appoint an officer to command the Trans-Mississippi Department in the Far West, with the rank of general in the PACS. Edmund Kirby Smith was the only officer appointed to this position.[18] Braxton Bragg was appointed a general in the ACSA with a date of rank of April 6, 1862, the day his commanding officer Gen. Albert Sidney Johnston died in combat at Shiloh/Pittsburg Landing.[19]

The Congress passed legislation in May 1864 to allow for "temporary" general officers in the PACS, to be appointed by Davis and confirmed by the C.S. Senate and given a non-permanent command by Davis.[12] John Bell Hood was appointed a "temporary" general on July 18, 1864, the date he took command of the Army of Tennessee in the Atlanta Campaign, but this appointment was never confirmed by the Congress, and he reverted to his rank of lieutenant general in January 1865.[20] Later in March 1865, shortly before the end of the war, Hood's status was spelled out by the Confederate States Senate, which stated:

Resolved, That General J. B. Hood, having been appointed General, with temporary rank and command, and having been relieved from duty as Commander of the Army of Tennessee, and not having been reappointed to any other command appropriate to the rank of General, he has lost the rank of General, and therefore cannot be confirmed as such.[21]

Note that during 1863, Beauregard, Cooper, J. Johnston, and Lee all had their ranks re-nominated on February 20 and then re-confirmed on April 23 by the Confederate Congress.[13] This was in response to debates on February 17 about whether confirmations made by the provisional legislature needed re-confirmation by the permanent legislature, which was done by an Act of Congress issued two days later.[22]

The position of General in Chief of the Armies of the Confederate States was created on January 23, 1865. The only officer appointed to it was Gen. Robert E. Lee, who served from February 6 until April 12.

The Southern states had had militias in place since Revolutionary War times consistent with the U.S. Militia Act of 1792. They went by varied names such as State "Militia" or "Armies" or "Guard" and were activated and expanded when the Civil War began. These units were commanded by "Militia Generals" to defend their particular state and sometimes did not leave native soil to fight for the Confederacy. The Confederate militias used the general officer ranks of Brigadier General and Major General.

The regulations in the Act of 1792 provided for two classes of militia, divided by age. Class one was to include men from 22 to 30 years old, and class two would include men from 18 to 20 years as well as from 31 to 45 years old.[23] The various southern states were each using this system when the war began.

All Confederate generals wore the same uniform insignia regardless of which rank of general they were,[24] except for Robert E. Lee, who wore the uniform of a Confederate colonel. The only visible difference was the button groupings on their uniforms: groups of three buttons for lieutenant and major generals, and groups of two for brigadier generals. In either case, a general's buttons were also distinguished from other ranks by their eagle insignia.

To the right is a picture of the CSA general's full uniform, in this case of Brig. Gen. Joseph R. Anderson of the Confederacy's Ordnance Department. All of the South's generals wore uniforms like this regardless of which grade of general they were, and all with gold-colored embroidering.

The general officers of the Confederate Army were paid for their services, and exactly how much (in Confederate dollars (CSD)) depended on their rank and whether they held a field command or not. On March 6, 1861, when the army only contained brigadier generals, their pay was $301 CSD monthly, and their aide-de-camp lieutenants would receive an additional $35 CSD per month beyond regular pay. As more grades of the general officer were added, the pay scale was adjusted. By June 10, 1864, a general received $500 CSD monthly, plus another $500 CSD if they led an army in the field. Also, by that date, lieutenant generals got $450 CSD and major generals $350 CSD, and brigadiers would receive $50 CSD in addition to regular pay if they served in combat.[25]

The CSA lost proportionally more general officers killed in combat than the Union Army did throughout the war: roughly one in five Southern generals, compared with roughly one in twelve in the North.[26] The most famous of them is General Thomas "Stonewall" Jackson, probably the best-known Confederate commander after General Robert E. Lee.[27] Jackson's death was the result of pneumonia that developed after a friendly fire incident at Chancellorsville on the night of May 2, 1863. Replacing these fallen generals was an ongoing problem during the war, which often meant men were promoted beyond their abilities (a common criticism of officers such as John Bell Hood[28] and George E. Pickett,[29] but an issue for both armies) or were kept in command despite grave combat wounds, such as Richard S. Ewell.[30] The problem was made more difficult by the South's dwindling manpower, especially near the war's end.

The last Confederate general in the field, Stand Watie, surrendered on June 23, 1865, and the war's last surviving full general, Edmund Kirby Smith, died on March 28, 1893.[31] James Longstreet died on January 2, 1904, and was considered "the last of the high command of the Confederacy".[32]

The Confederate Army's system of using four grades of general officers is currently the same rank structure used by the U.S. Army (in use since shortly after the Civil War) and is also the system used by the U.S. Marine Corps (in use since World War II).


1836, the Slaveholder Republic’s Birthday – The Texas Observer

History, it's often said, is written by the victors. While that isn't always true, it's certainly borne out by many popular accounts of the Texas Revolution of 1835-36, which often tell a very black-and-white story of the virtuous Texans (the victors) fighting against the evil Mexicans. The San Jacinto Monument inscription, for instance, blames the rebellion on the "unjust acts and despotic decrees" of unscrupulous rulers in Mexico. A pamphlet produced by the Republican Party-sponsored 1836 Project says that Anglo settlers fought to preserve "constitutional liberty and republican government."

University of Houston history professor Gerald Horne tells a very different story. In The Counter-Revolution of 1836: Texas Slavery & Jim Crow and the Roots of American Fascism, published earlier this year, Horne contends that the motivation behind the Anglo-American rebellion was anything but virtuous: to make Texas safe for slavery and white supremacy. For others (Blacks, the Indigenous peoples, and many Tejanos), the Anglo victory meant slavery, oppression, dispossession, and in many cases, death.

The Counter-Revolution of 1836 is a big, sprawling book (over 570 pages), as befits its scope: It takes the reader from the lead-up to the Texas rebellion, through independence, annexation, the Civil War, Reconstruction, and Jim Crow, to the early 1920s. It is scrupulously researched, drawing not only on other scholars but also on a wide range of sources from the times, including letters, speeches, newspaper articles, and diplomatic posts.

Given current right-wing efforts to expel discussions of systemic racism from Texas classrooms, Horne's book is an important contribution to the ongoing debate over our collective history.

Recently, Horne discussed the book and its implications with the Texas Observer via email.

As the title indicates, you contend that the Texas Revolution was in fact a counter-revolution. What does "counter-revolution" mean to you, and why do you think it's a more accurate designation?

The title suggests that the 1836 revolt was in response to abolitionism south of the border and thus was designed to stymie progress. A revolution, properly understood, should advance progress. [The] counter-revolution in 1836 assuredly was a step forward for many European settlers, not so much for Africans and the Indigenous.

This book continues the story you begin in your 2014 book on the American rebellion against England (1775-83). In that book, you similarly contend that the American Revolution was a counter-revolution. Why do you think so?

Similarly, 1776 was designed to stymie not only the prospect of abolitionism, but as well to sweep away what was signaled by the Royal Proclamation of 1762-3, which expressed London's displeasure at continuing to expend blood and treasure ousting Indigenous peoples for the benefit of real estate speculators, e.g., George Washington. Not coincidentally, nationals from the post-1776 republic [the United States] were essential to the success of the 1836 counter-revolution.

You refer to the pre-emancipation United States and the pre-annexation Republic of Texas as "slaveholder republics." Some readers may bristle at this label, especially those who believe, as anti-critical race theory Senate Bill 3 puts it, that slavery was "not central to the American founding" but was merely a "failur[e] to live up to the authentic founding principles of the United States." Why do you think the term "slaveholder republic" is a more accurate description?

"Slaveholding republic" is actually a term popularized by the late Stanford historian, and Pulitzer Prize winner, Don Fehrenbacher. It is an indicator of regression (an offshoot of counter-revolution) that this accurate descriptor is now deemed to be verboten. This ruse of suggesting that every blemish, or atrocity, is inconsistent with founding principles is akin to the thief and embezzler telling the judge when caught red-handed, "Your honor, this is not who I am." Contrary to the delusions of the delirious, slaveholding was not an accident post-1776: How else to explain the exponential increase in the number of enslaved leading up to the Civil War? How else to explain U.S. and Texian nationals coming to dominate the slave trade in Cuba, Brazil, etc.?

You write that 1836 "was a civil war over slavery" and that, "like a precursor of Typhoid Mary," Texas "seemed to bring the virulent bacteria that was war to whatever jurisdiction it joined." Of course, Texas ultimately joined the United States. How did slave-owning Texas infect the United States?

Texas was a bulwark of the so-called Confederate States of America, which seceded from the U.S. in 1861 not least to preserve (and extend) the enslavement of Africans in the first place. The detritus of Texas slaveholders became a bulwark of the Ku Klux Klan, which served to drown Reconstruction (the post-Civil War steps to deliver a measure of freedom to the formerly enslaved) in blood. This Texas detritus was a stalwart backer in the 20th century of the disastrous escapades of McCarthyism, which routed not just communists but numerous labor organizers and anti-Jim Crow advocates. Texas also supplied a disproportionate percentage of the insurrectionists who stormed Capitol Hill on 6 January 2021.

Your book is subtitled The Roots of U.S. Fascism. There's a growing awareness among pundits and some political leaders (President Biden, for instance) of the rise of fascist or fascist-like politics in the United States: a politics of racist nationalism, trading in perceived grievances and centered on devotion to an autocratic leader. Your book argues that today's American fascism has roots as far back as the Anglo settlement of Mexican Texas. Why do you think so?

The genocidal and enslaving impulse has been essential to fascism whenever it has reared its ugly head globally. In Texas, as in the wider republic, this involved class collaboration between and among a diverse array of settlers for mutual advantage. This class collaboration persists to this very day and could be espied on 6 January 2021 and thereafter.

Read more:

1836, the Slaveholder Republic's Birthday - The Texas Observer

President Biden Announces Key Appointments to Boards and Commissions – The White House

WASHINGTON Today, President Biden announced his intent to appoint the following individuals to serve in key roles:

Council of the Administrative Conference of the United States

The Administrative Conference of the United States (ACUS) is an independent federal agency charged with convening expert representatives from the public and private sectors to recommend improvements to administrative process and procedure. ACUS initiatives promote efficiency, participation, and fairness in the promulgation of federal regulations and in the administration of federal programs. The ten-member ACUS Council is composed of government officials and private citizens.

Kristen Clarke, Member, Council of the Administrative Conference of the United States

Kristen Clarke is the Assistant Attorney General for Civil Rights at the U.S. Department of Justice. In this role, she leads the Justice Department's broad federal civil rights enforcement efforts and works to uphold the civil and constitutional rights of all who live in America. Clarke is a lifelong civil rights lawyer who has spent her entire career in public service. She most recently served as President and Executive Director of the Lawyers' Committee for Civil Rights Under Law, one of the nation's leading civil rights organizations, founded at the request of John F. Kennedy.

Fernando Raul Laguarda, Member, Council of the Administrative Conference of the United States

Fernando Laguarda is General Counsel at AmeriCorps. Prior to his current role, he was Faculty Director of the Program on Law and Government and a Professor at American University Washington College of Law, where he taught and developed courses in administrative law, legislation, and antitrust, and launched the law school's LLM in Legislation. Laguarda also founded the nation's first student-centered initiative to study the work of government oversight entities and was faculty advisor to the Latino Law Students Association. Fernando has worked in the telecommunications industry and as a partner at two different Washington, D.C. law firms focusing on technology and competition law. He was a founder of the National Network to End Domestic Violence, served as its General Counsel, and eventually became its Board Chair. Laguarda has also served as a member of numerous non-profit, civil rights, academic, and advisory boards. Laguarda received his J.D. cum laude from Georgetown University Law Center and his A.B. cum laude in government from Harvard College.

Anne Joseph O'Connell, Member, Council of the Administrative Conference of the United States

Anne Joseph O'Connell, a lawyer and social scientist, is the Adelbert H. Sweet Professor of Law at Stanford University. Her research and teaching focus on administrative law and public administration. She is a three-time recipient of the American Bar Association's Scholarship Award in Administrative Law for the best article or book published in the preceding year, and a two-time winner of the Richard D. Cudahy Writing Competition on Regulatory and Administrative Law from the American Constitution Society. O'Connell joined Gellhorn and Byse's Administrative Law: Cases and Comments casebook as a co-editor with the twelfth edition. Most recently, her work has focused on acting officials and delegations of authority in federal agencies. Her research has been cited by Congress, the Supreme Court, lower federal courts, and the national media. She is an elected fellow of the American Academy of Arts and Sciences and the National Academy of Public Administration.

Before entering law school teaching, O'Connell clerked for Justice Ruth Bader Ginsburg and Judge Stephen F. Williams and served as a trial attorney for the Federal Programs Branch of the Department of Justice's Civil Division. A Truman Scholar, she worked for a number of federal agencies in earlier years. O'Connell received a B.A. in Mathematics from Williams College, an M.Phil. in the History and Philosophy of Science from Cambridge University, a J.D. from Yale Law School, and a Ph.D. in Political Economy and Government from Harvard University.

Jonathan Su, Member, Council of the Administrative Conference of the United States

Jonathan Su most recently served as Deputy Counsel to the President. Prior to his service at the White House, Su was the Deputy Office Managing Partner of the Washington, D.C. office of Latham & Watkins LLP, where he was also a partner in the White Collar Defense & Investigations practice. During the Obama-Biden Administration, Su served as Special Counsel to the President. Su was also a federal prosecutor at the United States Attorney's Office for the District of Maryland. He served as a law clerk for U.S. Circuit Judge Ronald M. Gould and U.S. District Judge Julian Abele Cook, Jr. Su is a graduate of the University of California at Berkeley and Georgetown University Law Center.

National Capital Planning Commission

Established by Congress in 1924, the National Capital Planning Commission (NCPC) is the federal government's central planning agency for the National Capital Region. Through planning, policymaking, and project review, NCPC protects and advances the federal government's interest in the region's development. The Commission provides overall planning guidance for federal land and buildings in the region by reviewing the design of federal and certain local projects, overseeing long-range planning for future development, and monitoring capital investment by federal agencies. The 12-member Commission represents federal and local constituencies with a stake in planning for the nation's capital.

Bryan Clark Green, Commissioner, National Capital Planning Commission

Bryan Green leverages his expertise as an educator, writer, and practicing preservationist to embrace the role of architecture in America's larger story. He began his career at the Virginia Historical Society, worked for the Virginia Department of Historic Resources, and was a Senior Associate and Director of Historic Preservation at Commonwealth Architects. He later joined the Tidewater and Big Bend Foundation as Executive Director. Green is the author of the forthcoming work In Jefferson's Shadow: The Architecture of Thomas R. Blackburn, co-author of Lost Virginia: Vanished Architecture of the Old Dominion, and co-author, with Kathleen James-Chakraborty and Katherine Kuenzli, of After the Monuments Fall: The Removal of Confederate Monuments from the American South (LSU Press). Green graduated from the University of Notre Dame with a Bachelor's in History and obtained his Master's and Ph.D. in Architectural History at the University of Virginia.

He serves as Chair, Preservation Officer, and ex officio member of the Board at the Heritage Conservation Committee of the Society of Architectural Historians. He co-chairs the Publications Committee of the Association for Preservation Technology International and serves on the Commonwealth of Virginia's Citizens Advisory Council on Furnishing and Interpreting the Executive Mansion, and he formerly served on the City of Richmond Commission of Architectural Review and Urban Design committees. Green's longstanding commitment to this work led him to Honorary Membership in both the Virginia Society and the Richmond Chapter of the American Institute of Architects.

Elizabeth M. Hewlett, Commissioner, National Capital Planning Commission

Elizabeth M. Hewlett is an attorney and servant of the public interest. She recently retired from her second tenure as the Chairman of the Prince George's County Planning Board and the Maryland-National Capital Park and Planning Commission. She has represented Maryland on the Washington Metropolitan Area Transit Authority and served as a Principal at Shipley, Horne and Hewlett, P.A., a law firm where she represented individuals, businesses, and real estate clients while also rendering many community-centric pro bono services. Hewlett has participated in or led dozens of public boards, civic groups, and key initiatives, including the Prince George's County Census effort, the Maryland State Board of Law Examiners, and as a member of the Governor's Drug and Alcohol Abuse Commission.

Throughout her career, Hewlett has also been a contributor to several legal and professional organizations, including the National Bar Association, the Women's Bar Association of Maryland, the J. Franklyn Bourne Bar Association, the National Association for the Advancement of Colored People, and Delta Sigma Theta Sorority, Inc. She has received many awards, including the Wayne K. Curry Distinguished Service Award, the National Bar Association Presidential Lifetime Achievement Award, and the J. Joseph Curran Award for Public Service. She is a graduate of Tufts University, Boston College Law School, and the John F. Kennedy School of Government Executive Program at Harvard University.

President's Intelligence Advisory Board

The President's Intelligence Advisory Board is an independent element within the Executive Office of the President. It exists exclusively to assist the President by providing an independent source of advice on the effectiveness with which the Intelligence Community is meeting the nation's intelligence needs, and the vigor and insight with which the community plans for the future. The President is able to appoint up to 16 members of the Board.

Anne M. Finucane, Member, President's Intelligence Advisory Board

Anne Finucane currently serves as Chairman of the Board for Bank of America Europe. She also serves on the board of Bank of America Securities Europe SA, the bank's EU broker-dealer in Paris. Finucane served as the first woman Vice Chairman of Bank of America. She led the company's strategic positioning; its global sustainable and climate finance work; environmental, social, and governance (ESG) efforts; capital deployment; and public policy. She is widely recognized for pioneering sustainable finance in the banking industry. For most of her career, Finucane also oversaw marketing, communications, and data and analytics at the company, and she is credited with leading Bank of America's successful efforts to reposition the company and repair its reputation after the 2008 financial crisis.

Finucane serves on a variety of corporate and nonprofit boards of directors, including CVS Health, Williams Sonoma, Mass General Brigham Healthcare, Special Olympics, the (RED) Advisory Board, and the Carnegie Endowment for International Peace. She previously served on the U.S. State Department's Foreign Affairs Policy Board and is a member of the Council on Foreign Relations. Finucane has consistently been highlighted in "most powerful women" lists, including in American Banker, Fortune, and Forbes. In 2021, she received the Carnegie Hall Medal of Honor; in 2019, she was inducted into the American Advertising Federation's Advertising Hall of Fame and received the Edward M. Kennedy Institute Award for Inspired Leadership.

###

Continued here:

President Biden Announces Key Appointments to Boards and Commissions - The White House

Inside Lake Lanier’s Deaths And Why People Say It’s Haunted – All That’s Interesting

Constructed right atop the historically Black town of Oscarville, Georgia in 1956, Lake Lanier has become one of the most dangerous bodies of water in America, with the remains of buildings just below the surface ensnaring hundreds of boats and swimmers.

Each year, more than 10 million people visit Lake Lanier in Gainesville, Georgia. As placid as the massive lake might look, it's considered one of the deadliest in America; indeed, there have been some 700 deaths at Lake Lanier since its construction in 1956.

This shocking number of accidents at the lake has led many to theorize that the site may, in fact, be haunted.

And given the controversial circumstances surrounding the lake's construction, and the history of racial violence in the former town of Oscarville, whose ruins lie beneath the lake's surface, there might be some truth to this idea.

In 1956, the United States Army Corps of Engineers was tasked with creating a lake to provide water and power to parts of Georgia and help to prevent the Chattahoochee River from flooding.

They chose to construct the lake near Oscarville, in Forsyth County. Named after the poet and Confederate soldier Sidney Lanier, Lake Lanier has 692 miles of shoreline, making it the largest lake in Georgia and far, far larger than the town of Oscarville, which the Corps of Engineers forcibly emptied so that the lake could be built.

In total, 250 families were displaced, roughly 50,000 acres of farmland were destroyed, and 20 cemeteries were either relocated or otherwise engulfed by the lake's waters over its five-year construction period.

The town of Oscarville, however, was strangely not demolished before the lake was filled, and its ruins still rest at the bottom of Lake Lanier.

Divers have reported finding fully intact streets, walls, and houses, making the lakebed one of the most dangerous underwater environments in the United States.

The flooded structures, coupled with declining water levels, are presumed to be a major factor in the high number of deaths that occur yearly at Lake Lanier, catching swimmers and holding them under or damaging boats with debris.

The deaths at Lake Lanier aren't the typical sort, though. While there are many instances of people drowning, there are also reports of boats randomly going up in flames, freak accidents, missing persons, and inexplicable tragedies.

Some believe the region's dark past is responsible for these incidents. Legend asserts that the vengeful and restless spirits of those whose graves were flooded (many of whom were Black, or were persecuted and driven out by violent white mobs) are behind this curse.

The town of Oscarville was once a bustling, turn-of-the-century community and a beacon for Black culture in the South. At the time, 1,100 Black people owned land and operated businesses in Forsyth County alone.

But on Sept. 9, 1912, an 18-year-old white woman named Mae Crow was raped and murdered near Browns Bridge, on the banks of the Chattahoochee River, right by Oscarville.

According to the Oxford American, Mae Crow's murder was pinned on four young Black people who happened to live in the area nearby: siblings Oscar and Trussie Jane Daniel, only 18 and 22 respectively, and their 16-year-old cousin Ernest Knox. With them was Robert "Big Rob" Edwards, 24.

Edwards was arrested for Crow's rape and murder and taken to jail in Cumming, Georgia, the seat of Forsyth County.

A day later, a white mob invaded Edwards' jail cell. They shot him, dragged him through the streets, and hanged him from a telephone pole outside the courthouse.

A month later, Ernest Knox and Oscar Daniel appeared in court for the rape and murder of Mae Crow. They were found guilty by the jury in just over an hour.

Some 5,000 people gathered to watch the teenagers be hanged.

Trussie Daniel's charges were dismissed, but it's widely believed that all of the accused were innocent of the crimes.

Following Edwards' lynching, white mobs known as "night riders" started going door to door across Forsyth County with torches and guns, burning down Black businesses and churches and demanding that all Black citizens vacate the county.

As Narcity reported, to this day less than five percent of Forsyth County's population is Black.

But perhaps Lake Lanier is haunted by some other force?

The most popular legend surrounding Lake Lanier is that of the "Lady of the Lake."

As the story goes, in 1958, two young girls named Delia May Parker Young and Susie Roberts were at a dance in town but had decided to leave early. On the way home, they stopped to get gas and then left without paying for it.

They were driving across a bridge over Lake Lanier when they lost control of the car, spiraling off the edge and crashing into the dark waters below.

A year later, a fisherman out on the lake came across a decomposed, unrecognizable body floating near the bridge. At the time, no one could identify whom it belonged to.

It wasn't until 1990, when officials discovered a 1950s Ford sedan at the bottom of the lake with the remains of Susie Roberts inside, that they were able to identify the body found three decades earlier as Delia May Parker Young's.

But locals already knew who she was. They had reportedly seen her, still in her blue dress, wandering near the bridge at night with handless arms, waiting to drag unsuspecting lake-goers to the bottom.

Other people have reported seeing a shadowy figure sitting on a raft, inching himself across the water with a long pole and holding up a lantern to see.

Besides these ghost stories of yore, there are those who claim that the lake is haunted by the spirits of the 27 victims who have died in Lake Lanier over the years, but whose bodies were never found.

In the end, though, ghost stories are perhaps nothing more than a fun way to write off an otherwise tragic history littered with racist violence as well as unsafe and poorly planned construction.

Regardless of its size, for 700 people to have died in the lake in less than 70 years, something must be wrong. The Army Corps of Engineers initially believed that the submerged town of Oscarville wouldn't cause any harm, but the lake also wasn't constructed to be recreational; it was meant to supply water from the Chattahoochee River to towns and cities in Georgia.

Many of the deaths can likely be attributed to things as simple as not wearing a life jacket, drinking alcohol while out on the lake, accidents, or incorrectly assuming that shallow water is always safe.

Perhaps the only thing that truly haunts Lake Lanier is its bigoted past.

After reading about the deaths and history of Lake Lanier, learn about Ohio's Franklin Castle, which quickly became a house of horrors. Then, see the twisted, dark history of the Myrtles Plantation in Louisiana.

Read this article:

Inside Lake Lanier's Deaths And Why People Say It's Haunted - All That's Interesting

8 Best Canadian Whiskies of 2022 – HICONSUMPTION

In a world of whiskies where identity is key, Canadian whisky might just suffer by its ability to do everything. From making a Scotch-style single malt to an American-style bourbon, distilleries from our northern neighbors thrive on that very versatility, opening the doors to creativity and innovation. Luckily, in recent years, Canadian liquor has been on the rise Stateside. It's yet to build up the exotic cachet of Scotch or Japanese whisky, but we're confident that it's only a matter of time. To help you get started, we've compiled a guide to the best Canadian whiskies to drink right now.

And Rye Is It So Good?

Although it's often called rye whisky, Canadian rye whisky is quite different from American rye whiskey (apart from the added "e"), which can contain as much as 100% rye in the mashbill. For one, the "rye" in Canadian whisky refers to the grain being added to a predominantly-corn mashbill. Whereas most popular whisky-making regions (think Scotland, Ireland, Japan, and the United States) specialize in a certain style or styles, brought on by the prominence of a specific grain or still type, Canada is known for its eclectic variety, and its whisky is frequently blended from different styles.

That said, there are some legal stipulations pinned to making Canadian whisky, thanks to the nation's Food and Drugs Act. Most importantly, the liquor is required to be mashed, distilled, and aged in Canada. Additionally, it must be aged in small wood vessels for at least three years and bottled at no less than 40% ABV. Unlike in many other regions, caramel may be added for flavoring, as long as the whisky doesn't lose the aroma and taste generally attributed to Canadian whisky.
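Taken literally, those stipulations read like a short checklist, which the toy validator below encodes purely for illustration (the field names and dictionary shape are assumptions, not anything from the actual regulations):

```python
# Toy checker for the Canadian whisky rules summarized above:
# mashed, distilled, and aged in Canada; at least three years in
# small wood vessels; bottled at a minimum of 40% ABV.
# Field names and structure are hypothetical, not official.

def meets_canadian_whisky_rules(w: dict) -> bool:
    made_in_canada = all(
        w.get(step) == "Canada"
        for step in ("mashed_in", "distilled_in", "aged_in")
    )
    return (
        made_in_canada
        and w.get("years_in_wood", 0) >= 3      # minimum three years aging
        and w.get("abv", 0.0) >= 40.0           # minimum bottling strength
    )

candidate = {
    "mashed_in": "Canada",
    "distilled_in": "Canada",
    "aged_in": "Canada",
    "years_in_wood": 3,
    "abv": 40.0,
}
print(meets_canadian_whisky_rules(candidate))  # True
```

A bottle aged abroad or bottled below 40% ABV would simply fail the check; the real regulations are, of course, more detailed than this sketch.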

Slow But Steady

Around since the 1700s, Canadian whisky mostly began as a wheat spirit, since that's what primarily grew in the country at the time. Rye was added for flavor, creating what would become the profile and identity of the spirit for some time. The liquor really started to boom in the 19th century thanks to England, which was having trouble sourcing its whisky elsewhere. Later on, during the Civil War in the United States, the North looked to Canada to supply its liquor, since it refused to buy products from the Confederate states, which happened to be the source of most of the whiskey in the country.

Canada was the first nation to enact an aging requirement (just one year in 1887, before eventually increasing to three), and Canadian whisky was able to capitalize on the repeal of Prohibition, since many U.S. distilleries had shut down and consumers wanted something besides the bootleg whiskey they had been drinking for 13 years. Likewise, a lot of Canadian product had been aging in barrels, waiting for demand to return. Like most spirits (other than vodka), Canadian whisky then lost ground to wine and beer, the favored drinks of the '70s and '80s, until 1992, when Forty Creek reclaimed what Canadian whisky could be.

Launched in 1946, Alberta Distillers started making rye whisky a couple of decades after it went out of style and long before it came back into fashion. A few years ago, the number-one rye producer in North America decided to do something a little different. Where its contemporaries were finishing their whiskies in former wine casks, Alberta was putting it straight in the batch, blending 91% rye, 8% bourbon, and 1% sherry to make its Dark Batch, which rides on a profile of vanilla, oak, dried stone fruit, citrus, and baking spices.

Lot 40 was created by Corby Spirit and Wine in 1998 as a limited-edition homage to pre-Prohibition-style rye whisky. After the resurgence of rye, it was launched as its own brand in 2012 and has since become one of the most decorated Canadian whiskies. Utilizing a mashbill of 100% unmalted rye, Lot 40, which takes its name from the plot of land owned by one of its founders, is distilled in copper pot stills one batch at a time and aged in new American oak barrels, much like bourbon. The result is a dry and complex profile of spice, dark fruit, and citrus.

Since 2011, British Columbia-based distillery Shelter Point has made all of its whiskies with barley grown on its own 380-acre property and water from a river that runs through the estate. Its highly popular small-batch Smoke Point expression takes after the peated single malts of Scotland. Made in pot stills, Batch #3 has already won a plethora of awards this year, including Double Gold at the San Francisco World Spirits Competition and Best Single Malt at the Canadian Whisky Awards.

With 165 years of whisky-making experience, J.P. Wiser's is one of the oldest operating distilleries in the nation. Made from a low-rye mashbill, this 18-year-old corn whisky (the brand's highest age statement) is double column distilled, blended, and aged for nearly two decades in Canadian oak casks. Perfect for sipping neat, this expression goes down super smooth with a dynamic palate of pine, oak, apple, and floral notes and a long finish. And with a sub-$60 price point, this is one of the best deals you'll find in any liquor category.

What Jack Daniel's is to American whiskey, Crown Royal is to our neighbors to the north. Easily Canada's most recognizable brand, the Gimli giant has been heading in a more premium direction as of late. However, that purple bag and those picturesque bottles have always come underpinned with an air of elegance. The most recent version of the Noble Collection's Winter Wheat blended whisky has been the brand's hottest batch, even winning Best Whisky Overall at the Canadian Whisky Awards back in February.

Since its launch in 1992, Forty Creek has been paving the way for Canadian whisky with its resilient approach to thinking outside the box. Credited with helping revive the national spirit, Forty Creek's small-batch Confederation Oak Reserve, named after the Canadian Confederation of 1867, blends together three spirits of different ages, made from a mashbill of corn, rye, and barley, and then finishes the blend for two years in Canadian oak casks. The colder weather imparts a profile of vanilla, buttercream, pepper, and walnut.

Billed as Canada's first single-barrel whisky, this marvelous expression from Caribou Crossing comes from one of around 200,000 casks in the distillery's collection. Bourbon lovers might compare its caribou bottle topper to Blanton's galloping horse, but the flavor profile can stand toe-to-toe as well. Easily one of the most prolific top-shelf options from the Great White North, Caribou Crossing's Single Barrel soars with a slightly-fluctuating medium-body profile of vanilla, honey, pepper, and fruit.

Rye typically matures much faster than corn- or barley-based whiskies. Nevertheless, the folks at Lock Stock & Barrel have found magic in their process, utilizing a mashbill of 100% rye. The brand's top-shelf 21 Year was double distilled in copper pot stills before being aged for over two decades in new charred American oak barrels. Bottled at 111 proof, this whisky has a definite heat undergirding notes of cinnamon, caramel, cocoa, anise, and treacle, giving way to a long finish of leather, oak, and spice.

Original post:

8 Best Canadian Whiskies of 2022 - HICONSUMPTION

Gene therapy can make a real impact on global health but we need equitable access, say experts – World Economic Forum

Low- and middle-income countries (LMICs) can and should play a leading role in dictating the future of the world's most advanced healthcare technologies, according to the World Economic Forum's Accelerating Global Access to Gene Therapies: Case Studies from Low- and Middle-Income Countries white paper.

Gene therapy is at the forefront of modern medicine. By making precise changes to the human genome, these sophisticated technologies can potentially lead to one-time lifelong cures for infectious and non-communicable diseases (e.g. HIV, sickle cell disease) that affect tens of millions of people around the globe, most of whom live in LMICs. However, too often the benefits of advanced healthcare technologies remain restricted to high-income countries (HICs), a fate that could also befall gene therapies.

The narrative that new healthcare technologies are unsuitable for LMICs is a long-standing rationale for excluding a majority of the world from the benefits of modern medicine. Without concerted efforts to build gene therapy capacity in LMICs, the global health divide will continue to widen.

The gene therapy industry is in its infancy, but early clinical successes and substantial funding have generated enormous momentum. This is an ideal moment for LMICs to enter the global market, prioritizing the needs of communities carrying the highest disease burdens.

We asked five clinical researchers from LMICs, who are all co-authors on the recent white paper, what innovations on the ground and changes at policy-level need to happen for gene therapy to make a real impact on global health.

Dr. Cissy Kityo Mutuluza, Executive Director, Joint Clinical Research Centre, Uganda

Although gene therapy has the potential to treat or even cure life-limiting diseases and infections, the full impact will only be realized if we deliver it for the benefit of all people, instead of fueling more health inequity between and within countries.

An essential first step towards maximizing the global impact of gene therapies is to build research and development (R&D) capacity in LMICs. Current gene therapy R&D has mainly excluded LMICs, instead centering pre-clinical and clinical work in HICs. Gene therapy R&D needs to be performed in regions where target diseases are prevalent to ensure that these therapies are safe and effective for those populations. Manufacturing technologies and healthcare infrastructure, which are the cost drivers for gene therapy products in HICs, need to be replaced with innovative and simplified platforms and workflows that bring down costs and are functional and cost-effective within LMIC health systems.

As for policy and regulation, individual countries must establish gene therapy frameworks that enable R&D. The construction of such frameworks should be guided by recommendations from the World Health Organization, emphasizing safety, effectiveness and ethics.

A critical component in effective global health interventions is community outreach. Treatment acceptability is essential for future clinical trials, thus it is important for scientists and clinicians to be clear about the risks and benefits of gene therapies. Communication and education activities should be made accessible to a broad range of stakeholders. Gene therapy and gene editing technologies are complex and it can be difficult for the public to understand their possible benefits or side effects. However, patient and public support is critical for the successful adoption of any new technology.

Professor Johnny Mahlangu, University of the Witwatersrand, South Africa

The ongoing COVID-19 pandemic is accelerating innovation, implementation and acceptance of molecular therapeutics (e.g. mRNA vaccines) globally. As a result, there is escalating interest in developing molecular interventions for many other conditions, such as gene therapies for genetic diseases. Strategically leveraging infrastructure that is being developed for molecular therapeutics will be critical in manufacturing, testing, and delivering gene therapies across diverse settings. Three critical areas of consideration include:

The application of precision medicine to save and improve lives relies on good-quality, easily accessible data on everything from our DNA to lifestyle and environmental factors. The opposite of a one-size-fits-all healthcare system, it has vast, untapped potential to transform the treatment and prediction of rare diseases, and disease in general.

But there is no global governance framework for such data and no common data portal. This is a problem that contributes to the premature deaths of hundreds of millions of rare-disease patients worldwide.

The World Economic Forum's Breaking Barriers to Health Data Governance initiative is focused on creating, testing and growing a framework to support effective and responsible access across borders to sensitive health data for the treatment and diagnosis of rare diseases.

The data will be shared via a federated data system: a decentralized approach that allows different institutions to access each other's data without that data ever leaving the organization it originated from. This is done via an application programming interface and strikes a balance between simply pooling data (posing security concerns) and limiting access completely.
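The federated pattern described above can be sketched in a few lines. This is an illustrative toy, not the initiative's actual API: all class names, endpoints, and the variant identifier are invented for the example. The key property it demonstrates is that only aggregate statistics cross institutional boundaries, never patient-level records.

```python
# Hypothetical sketch of a federated query. Each institution exposes a small
# API that returns only aggregates, so patient-level records never leave the
# organization they originated from. All names here are illustrative.
from dataclasses import dataclass

@dataclass
class InstitutionNode:
    """One participant in the federation, holding its own patient records."""
    name: str
    records: list  # patient-level data; never shared directly

    def count_variant(self, variant: str) -> dict:
        """API endpoint: return only an aggregate, not raw records."""
        carriers = sum(1 for r in self.records if variant in r["variants"])
        return {"site": self.name, "carriers": carriers, "total": len(self.records)}

def federated_prevalence(nodes: list, variant: str) -> float:
    """Coordinator combines per-site aggregates into a global estimate."""
    results = [n.count_variant(variant) for n in nodes]
    carriers = sum(r["carriers"] for r in results)
    total = sum(r["total"] for r in results)
    return carriers / total if total else 0.0

# Two toy sites; the variant label is a made-up example.
uk = InstitutionNode("Genomics England", [{"variants": ["VAR-X"]}, {"variants": []}])
ca = InstitutionNode("Genomics4RD", [{"variants": ["VAR-X"]}])
prevalence = federated_prevalence([uk, ca], "VAR-X")  # 2 carriers among 3 records
```

In a real deployment the `count_variant` call would be an authenticated network request, and the aggregates themselves might be further protected (e.g., minimum cohort sizes) before release.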

The project is a collaboration between entities in the UK (Genomics England), Australia (Australian Genomics Health Alliance), Canada (Genomics4RD), and the US (Intermountain Healthcare).

Professor Vikram Mathews, Christian Medical College, Vellore, India

Gene therapy is on course to revolutionize medical care for several conditions. The hope is that gene therapy will be a one-time curative therapeutic intervention for diseases ranging from inherited hemoglobinopathies, such as sickle cell disease and thalassemia, to acquired diseases such as HIV.

A primary challenge limiting access to these life-saving therapies is their astronomical cost, making them inaccessible even in the developed countries where most gene therapies have originated. Due to economic challenges, there is often a mismatch between the regions of the world where development and clinical research happen and the regions where the incidence of the target disease is highest. Classic examples are sickle cell disease and HIV, both of which have their highest incidence rates in Africa.

Moving the manufacturing of gene therapy products to local regions and point-of-care settings (within hospitals) is a strategy that can both significantly reduce the cost of these products and improve accessibility. Additionally, current gene therapy approaches use expensive ex vivo procedures that require removal of a patient's cells from their body. Instead, researchers must develop novel in vivo methods that simplify the procedure to a single injection directly into the patient, saving time and money.

Professor Julie Makani, Muhimbili University of Health and Allied Sciences, Tanzania

In order for gene therapy to have an impact on global health, changes in innovation and policy must occur at several levels: individual, institutional, national, continental and global.

At the individual level, patients and personnel are the primary focal points. Taking a patient-centered approach will ensure that the community is involved in research and will have a say in receiving a particular health intervention when it is available. For personnel working in areas pertinent to gene therapy, including healthcare, research and education, there is a need to increase knowledge and to change perspectives regarding the advancements and achievements made within the field of gene therapy.

At the national, continental and global levels, genomic research is catalyzed by strategic partnerships and often occurs in Centers of Excellence (CoEs). Many countries in Africa have established CoEs in academic settings, which integrate health and science programmes. These innovative environments help maximize resources (physical and human) and provide settings where research and the translation of research findings into health interventions can happen contemporaneously, in the appropriate population and geographical region.

At the policy level, investments in global health and research in gene therapy must change. This can be done in three ways: direct investment in institutions in Africa; an increase in the level of investment through funding partnerships; and recognition that the duration of investment needs to be longer than the normal funding cycles of three to five years.

Professor Suradej Hongeng, Mahidol University, Thailand

Gene therapy has received global attention over the last few years, recognition that continues to grow with each new clinical success. The field is constantly evolving, with disruptive innovation across public and private sectors. However, access to these life-saving treatments remains restricted due to a number of technical and policy challenges.

First, researchers must continue to develop cost-effective ways to administer gene therapies into patients, an area of R&D where the private sector can play an important role. Yet many LMICs have weak ecosystems to support the emergence of new companies or entice collaborations with multinational companies. Stronger private sector involvement will be critical for penetration into emerging markets.

Second, the unique nature of these personalized treatments makes them difficult to regulate within traditional frameworks, meaning that agencies must update current policies and regulations. As regulation evolves, it must also converge with the frameworks of other countries. This will make it easier for companies to navigate regulations and interact with agencies when performing clinical trials or bringing a therapy to multiple markets.

See the article here:
Gene therapy can make a real impact on global health but we need equitable access, say experts - World Economic Forum

Editas Rumored to be in Advanced Discussions around Potential Sale of Oncology Assets – BioSpace

From left: Editas CMO Baisong Mei and CEO Gilmore O'Neill/courtesy of Editas Medicine

CRISPR gene editing leader Editas Medicine often makes biotech headlines for its therapies for sickle cell and retinal diseases. Less often does it make the news for its preclinical cancer pipeline, which could be why the company is reportedly considering a sale of those assets.

Editas is in "advanced discussions" regarding the sale of its preclinical oncology lineup, according to reporting from Endpoints News. When asked to confirm the rumors, Cristi Barnett, VP & head of corporate communications at Editas, told BioSpace, "We have long shared our plans to pursue development and commercialization opportunities through partnerships, specifically with oncology and our iNK program."

Barnett added that with a new leadership team onboard, Editas undertook a strategic review to inform opportunities.

Investors seemed to agree with the notion, as Editas stock rose 4.2% following the report.

Editas has given its C-Suite a makeover this year. In April, the company appointed genetic medicine veteran Gilmore O'Neill as president and CEO.

O'Neill wasted no time in bringing on board Sanofi veteran Baisong Mei to serve as the company's new chief medical officer. Mei has deep experience in the hemophilia space at both Sanofi and Bayer. He replaced Lisa Michaels, who was terminated by the company in February.

Editas presented data on one of its oncology assets, EDIT-202, last week at the European Society of Gene and Cell Therapy 29th Annual Meeting in Edinburgh, Scotland. EDIT-202 is a gene-edited, iPSC-derived NK cell therapy that maintains prolonged persistence, high cytotoxicity, and enhanced in vivo control of solid tumors, according to Editas.

"Currently, there is no change to our program or plans. EDIT-202 is advancing toward IND-enabling studies," Barnett said. She added that Editas will share additional updates on this program later this year, including additional preclinical data at an upcoming medical meeting.

Also at ESGCT, Editas presented preclinical data from another program, EDIT-103, which is being developed to treat rhodopsin-associated autosomal dominant retinitis pigmentosa (RHO-adRP), a progressive type of retinal degeneration.

In a non-human primate model, the therapy demonstrated nearly 100% knockout of the endogenous RHO gene. Additionally, the replacement RHO gene produced over 30% of normal RHO protein levels in the treated area of subretinal injection, the company reported.

Original post:
Editas Rumored to be in Advanced Discussions around Potential Sale of Oncology Assets - BioSpace

Editas Medicine Presents Preclinical Data on EDIT-103 for Rhodopsin-associated Autosomal Dominant Retinitis Pigmentosa at the European Society of Gene…

Studies in non-human primates demonstrated nearly 100% gene editing and knockout of endogenous RHO gene and more than 30% replacement protein levels using a dual vector AAV approach

Treated eyes showed morphological and functional photoreceptor preservation

EDIT-103 advancing towards IND-enabling studies

CAMBRIDGE, Mass., Oct. 13, 2022 (GLOBE NEWSWIRE) -- Editas Medicine, Inc. (Nasdaq: EDIT), a leading genome editing company, today announced ex vivo and in vivo preclinical data supporting its experimental medicine EDIT-103 for the treatment of rhodopsin-associated autosomal dominant retinitis pigmentosa (RHO-adRP). The Company reported these data in an oral presentation today at the European Society of Gene and Cell Therapy 29th Annual Meeting in Edinburgh, Scotland, UK.

EDIT-103 is a mutation-independent, CRISPR/Cas9-based, dual-AAV5-vector knockout-and-replace (KO&R) therapy to treat RHO-adRP. This approach has the potential to treat any of the more than 150 dominant gain-of-function rhodopsin mutations that cause RHO-adRP with a one-time subretinal administration.

"These promising preclinical data demonstrate the potential of EDIT-103 to efficiently remove the defective RHO gene responsible for RHO-adRP while replacing it with an RHO gene capable of producing sufficient levels of RHO to preserve photoreceptor structure and function. The program is progressing towards the clinic," said Mark S. Shearman, Ph.D., Executive Vice President and Chief Scientific Officer, Editas Medicine. "EDIT-103 uses a dual AAV gene editing approach, and also provides initial proof of concept for the treatment of other autosomal dominant disease indications where a gain of negative function needs to be corrected."

Key findings include:

Full details of the Editas Medicine presentations can be accessed in the Posters & Presentations section of the Company's website.

About EDIT-103
EDIT-103 is a CRISPR/Cas9-based experimental medicine in preclinical development for the treatment of rhodopsin-associated autosomal dominant retinitis pigmentosa (RHO-adRP), a progressive form of retinal degeneration. EDIT-103 is administered via subretinal injection and uses two adeno-associated virus (AAV) vectors to knock out and replace the mutated rhodopsin gene and thereby preserve photoreceptor function. This approach can potentially address the more than 150 gene mutations that cause RHO-adRP.

About Editas Medicine
As a leading genome editing company, Editas Medicine is focused on translating the power and potential of the CRISPR/Cas9 and CRISPR/Cas12a genome editing systems into a robust pipeline of treatments for people living with serious diseases around the world. Editas Medicine aims to discover, develop, manufacture, and commercialize transformative, durable, precision genomic medicines for a broad class of diseases. Editas Medicine is the exclusive licensee of Harvard's and the Broad Institute's Cas9 patent estates and the Broad Institute's Cas12a patent estate for human medicines. For the latest information and scientific presentations, please visit http://www.editasmedicine.com.

Forward-Looking Statements
This press release contains forward-looking statements and information within the meaning of The Private Securities Litigation Reform Act of 1995. The words "anticipate," "believe," "continue," "could," "estimate," "expect," "intend," "may," "plan," "potential," "predict," "project," "target," "should," "would," and similar expressions are intended to identify forward-looking statements, although not all forward-looking statements contain these identifying words. The Company may not actually achieve the plans, intentions, or expectations disclosed in these forward-looking statements, and you should not place undue reliance on these forward-looking statements. Actual results or events could differ materially from the plans, intentions and expectations disclosed in these forward-looking statements as a result of various factors, including: uncertainties inherent in the initiation and completion of preclinical studies and clinical trials and clinical development of the Company's product candidates; availability and timing of results from preclinical studies and clinical trials; whether interim results from a clinical trial will be predictive of the final results of the trial or the results of future trials; expectations for regulatory approvals to conduct trials or to market products; and availability of funding sufficient for the Company's foreseeable and unforeseeable operating expenses and capital expenditure requirements. These and other risks are described in greater detail under the caption "Risk Factors" included in the Company's most recent Annual Report on Form 10-K, which is on file with the Securities and Exchange Commission, as updated by the Company's subsequent filings with the Securities and Exchange Commission, and in other filings that the Company may make with the Securities and Exchange Commission in the future.
Any forward-looking statements contained in this press release speak only as of the date hereof, and the Company expressly disclaims any obligation to update any forward-looking statements, whether because of new information, future events or otherwise.

Read the original here:
Editas Medicine Presents Preclinical Data on EDIT-103 for Rhodopsin-associated Autosomal Dominant Retinitis Pigmentosa at the European Society of Gene...

Developing New Tools to Fight Cancer – Duke University School of Medicine

For decades, medical cancer treatment has generally meant chemotherapy, radiation, or surgery, alone or in combination. But things are changing rapidly. Today, new approaches such as immunotherapies and targeted therapies are becoming available, with many more in research and development. In many cases, the new treatments are more effective, with fewer side effects.

"It's an exciting time to be in cancer research and cancer discovery," said Colin Duckett, PhD, professor of pathology, interim chair of the Department of Pharmacology and Cancer Biology, and vice dean for basic science.

"We're moving into this era where we have a new set of tools we can use to treat cancer." - Colin Duckett, PhD

Researchers in the Duke Cancer Institute (DCI) and across the School of Medicine are helping to create these new tools, fueled by the knowledge and experience of experts from a wide range of disciplines.

Indeed, cancer research has always been a team-based endeavor at DCI.

"DCI was specifically created a decade ago to break down barriers between disciplines to stimulate collaborative research and multidisciplinary interaction," said DCI Executive Director Michael Kastan, MD, PhD, the William and Jane Shingleton Distinguished Professor of Pharmacology and Cancer Biology.

Adding fuel to the fire is the Duke Science and Technology (DST) initiative, which aims to catalyze and support collaborative research in service of solving some of the world's most pressing problems, including cancer.

The new tools, though varied, all represent advances in personalized cancer medicine. Targeted treatments are chosen based on the genetic signature of a patients tumor. Some immunotherapies take personalization even further, by manipulating a patients own immune cells to create a treatment for that individual alone.

To match treatments to patients, the multidisciplinary Duke Molecular Tumor Board, led by John Strickler, MD, HS'11, and Matthew McKinney, MD'06, HS'06-09, HS'10-13, helps providers identify best practices, newly approved treatments, or clinical trials for advanced cancer patients based on genetic sequencing of their tumors.

"In precision cancer medicine, the right therapy for the right patient at the right time, all these things come together: the targeted therapies, the immunotherapy, even standard chemotherapy. All of that is part of precision cancer medicine." - Michael Kastan, MD, PhD

Immunotherapy aims to harness the power of the immune system to fight cancer. That can mean activating the immune system, energizing exhausted immune cells, or helping immune cells find cancer cells by guiding them there or by removing cancer's "good guy" disguises.

Duke's Center for Cancer Immunotherapy supports these efforts by identifying promising basic science discoveries and building teams to translate those ideas into treatments.

"There are so many world-class basic research scientists here making discoveries..."-Scott Antonia, MD, PhD

"...discoveries that are potentially translatable as immunotherapeutic strategies," said Scott Antonia, MD, PhD, professor of medicine and the center's founding director. "That's what motivated me to come to Duke, because of the great opportunity to interact with basic scientists to develop new immunotherapeutics and get them into the clinic."

Antonia believes immunotherapy has the potential to revolutionize cancer treatment, but more work remains to be done to realize its promise. "The proof of principle is there," he said, "but still only a relatively small fraction of people enjoy long-term survival. If we can hone immunotherapeutic approaches, that's our best opportunity."

Some of the most exciting immunotherapy work being facilitated by the center involves removing a patient's own T cells (a type of lymphocyte), manipulating them in the lab to make them more effective against tumors, then injecting them back into the patient.

T cells can be manipulated in the lab in a number of different ways. In one approach, called CAR T-cell therapy, the T cells are engineered with the addition of synthetic antibody fragments that bind to the patient's tumor, effectively directing the T cells to the tumor cells.

In another approach, called tumor-infiltrating lymphocyte (TIL) adoptive cell therapy, the subset of a patient's T cells that have already managed to find their way into the tumor are extracted and then grown to large numbers before being returned to the patient. Antonia and his colleagues recently published a paper demonstrating the effectiveness of TIL expansion in lung cancer. "We're now doing the preparative work to develop clinical trials using this approach in brain tumors, and our intention is to expand into many other cancers as well," he said.

Antonia points out that innovations in CAR T-cell therapy and TIL therapy happening at Duke are possible because of collaborations with scientists in an array of disciplines, including antibody experts like Barton Haynes, MD, HS'73-75, the Frederic M. Hanes Professor of Medicine, and Wilton Williams, PhD, associate professor of medicine and surgery, at the Duke Human Vaccine Institute, and biomedical engineers like Charles Gersbach, PhD, the John W. Strohbehn Distinguished Professor of Biomedical Engineering at the Pratt School of Engineering.

Furthermore, clinical trials for these kinds of cellular therapies require special facilities to engineer or expand the cells, which are provided by Duke's Marcus Center for Cellular Cures, led by Joanne Kurtzberg, MD, the Jerome S. Harris Distinguished Professor of Pediatrics, and Beth Shaz, MD, MBA, professor of pathology. "It's been a very productive collaboration, highlighting how Duke is uniquely positioned to develop immunotherapeutic strategies," Antonia said.

Targeted therapies exploit a tumor's weak spot: a genetic mutation, for example. The benefit is that the treatment kills only cancer cells and not healthy cells. The prerequisite is knowing the genetics and biology of the specific tumor, no simple task.

Trudy Oliver, PhD'05, who joined the Department of Pharmacology and Cancer Biology faculty as a Duke Science and Technology Scholar, studies cancer development and the biology of tumor subtypes, particularly squamous cell lung cancer and small cell lung cancer.

"Even within small cell lung cancer, there are subsets that behave differently from each other," she said.

"Our work suggests that when you tailor therapy to those subsets, you can make a difference in outcome." - Trudy Oliver, PhD'05

Some of the treatments she's identified are in clinical trials.

Sandeep Dave, MD, Wellcome Distinguished Professor of Medicine, is leading an ambitious project to analyze the genomics of the more than 100 different types of blood cancer. His project will streamline the diagnosis of blood cancer and uncover potential therapy targets.

"All cancers arise from genetic alterations that allow cancer to survive and thrive at the expense of the host," he said. "These genetic alterations are a double-edged sword: they allow these cancer cells to grow, but on the other hand they do confer specific vulnerabilities that we can potentially exploit."

Dave said his background in computer science, genetics, and oncology helped him as he designed the project, which uses huge datasets.

"We've done the heavy lifting in terms of tool development and methodology, which is ripe to be applied to every other type of cancer." - Sandeep Dave, MD

Cancer disparities are caused by a complex interplay of elements, including access to health care and other resources, institutional barriers, structural racism, and biology, such as ancestry-related genetics.

"Cancer treatment is approaching this personalized space where patients are no longer treated with a one-size-fits-all paradigm." - Tammara Watts, MD, PhD

"It's becoming increasingly apparent that there are differences in outcome with respect to race and ethnicity," said Tammara Watts, MD, PhD, associate professor of head and neck surgery & communication sciences and associate director of equity, diversity, and inclusion at DCI. "The very broad hypothesis is that there are genetic ancestry-related changes that may play a critical role in the disparate clinical outcomes we see every day in our cancer patients."

For example, self-identified white patients with throat cancer associated with the human papilloma virus (HPV) have better outcomes compared to self-identified Black patients, even when controlling for elements such as health care access, education, and socioeconomic status.

Watts is collaborating with bioinformatics experts at DCI to try to identify significant differences in gene expression between the two groups.

"I'm trying to tease out differences that may be impactful for disadvantaged patients based on race and ethnicity," she said. "But there could be differences that emerge that could be useful for designing targeted treatments for a broad group of patients."

That's because a targeted treatment for a particular genetic expression that might occur more commonly in Black people would help all patients with that expression, regardless of race or ethnicity.

Watts is far from alone in doing cancer disparity research at DCI. Tomi Akinyemiju, PhD, associate professor in population health sciences, uses epidemiology to study both biological factors and social elements that contribute to disparities in many types of cancer.

Jennifer Freedman, PhD, associate professor of medicine, Daniel George, MD'92, professor of medicine, and Steven Patierno, PhD, professor of medicine and deputy director of DCI, are studying the molecular basis for why prostate, breast, and lung cancer tend to be more aggressive and lethal in patients who self-identify as Black. Patierno, who has been a national leader in cancer disparities research for more than 20 years, leads the Duke Cancer Disparities SPORE (Specialized Program of Research Excellence), funded by the National Cancer Institute. The SPORE grant supports these researchers as well as other DCI teams working on cancers of the breast, lung, stomach, and head and neck.

"One of the things that impresses me is that [cancer disparities research] is a high priority within DCI," said Watts, who joined the faculty in 2019. "These groups are actively engaged and collaborating and asking the questions that will drive change for patients who have worse outcomes that are related to ancestry."

Even better than a cancer cure is avoiding cancer altogether.

At DCI, Meira Epplein, PhD, associate professor in population health sciences, and Katherine Garman, MD'02, MHS'02, HS'02-06, HS'09, associate professor of medicine, are looking to decrease the incidence of stomach cancer by improving detection and treatment of the bacterium Helicobacter pylori, which can set off a cascade leading to stomach cancer. Epplein and Garman, also funded by the Duke Cancer Disparities SPORE grant, hope their work will reduce disparities because H. pylori infections and stomach cancer are both more prevalent among African Americans than whites.

When preventing cancer isn't successful, the next best thing is to detect and treat early. A relatively new concept in cancer care is interception, which means catching cancer just as, or even just before, it begins.

"The point is to prevent it from progressing to full-blown malignancy," said Patierno. In other words, stop the cancer from getting over its own goal line.

Patierno envisions a future where patients with pre-cancerous conditions or early cancer could take a pill to halt cancer development without killing cells: in other words, a non-cytotoxic treatment, unlike standard chemotherapy.

"We know it's there, but we're not going to poison it or burn it or cut it out, because all of those have side effects. We're going to find a non-cytotoxic way to prevent it from progressing. That's the goal." - Steven Patierno, PhD

Read About Alumni Making a Difference in Cancer Research and Care:

Changing the Status Quo: Lori Pierce, MD'85

Treating the Whole Person: Arif Kamal, MD, HS'12, MHS'15

Targeting the Seeds of Cancer Growth: Eugenie S. Kleinerman, MD'75, HS'75

A Discovery That Comes Out of Nowhere: Bill Kaelin, BS'79, MD'82

Story originally published in DukeMed Alumni News, Fall 2022.

Read more from DukeMed Alumni News

More here:
Developing New Tools to Fight Cancer - Duke University School of Medicine

Mathematical model could bring us closer to effective stem cell therapies – Michigan Medicine

Until recently, researchers could not see gene expression in an individual cell. Thanks to single cell sequencing techniques, they now can. But the timing of changes is still hard to visualize, as measuring the cell destroys it.

"To address this, we developed an approach based on models in basic physics," explained Welch, "treating the cells like they are masses moving through space, and we are trying to estimate their velocity."

The model, dubbed MultiVelo, predicts the direction and speed of the molecular changes the cells are undergoing.
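The physics analogy can be made concrete with a deliberately simplified caricature, not the published MultiVelo model: chromatin accessibility changes first, transcription is driven by accessibility, and spliced RNA follows unspliced RNA. All rate constants below are illustrative, and the equations are a toy stand-in for the paper's actual kinetic model.

```python
# Toy illustration (NOT the published MultiVelo model) of "chromatin first,
# expression second": accessibility c opens toward 1, unspliced RNA u is
# produced at a rate proportional to c, and spliced RNA s is made from u.
# Integrated with a simple forward-Euler step; all rates are illustrative.

def simulate(steps=2000, dt=0.01, k_open=1.0, alpha=2.0, beta=1.0, gamma=0.8):
    c = u = s = 0.0
    traj = []
    for _ in range(steps):
        dc = k_open * (1.0 - c)    # chromatin opens toward full accessibility
        du = alpha * c - beta * u  # transcription driven by accessibility
        ds = beta * u - gamma * s  # splicing converts u into s
        c, u, s = c + dc * dt, u + du * dt, s + ds * dt
        traj.append((c, u, s))
    return traj

traj = simulate()
# Early on, accessibility leads expression (c > u); at steady state the
# velocities vanish and u settles near alpha/beta, s near alpha/gamma.
c_early, u_early, _ = traj[20]
```

A velocity method works this logic in reverse: given snapshot measurements of quantities like c, u, and s in single cells, it fits the rate parameters and infers each cell's direction and speed of change, including which layer is ramping up the other.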


"Our model can tell us which things are changing first (epigenome or gene expression) and how long it takes for the first to ramp up the second," said Welch.

They were able to verify the method using four types of stem cells from the brain, blood and skin, and identified two ways in which the epigenome and transcriptome can be out of sync. The technique provides an additional, and critical, layer of insight to so-called cellular atlases, which are being developed using single cell sequencing to visualize the various cell types and gene expression in different body systems.

By understanding the timing, Welch noted, researchers are closer to steering the development of stem cells for use as therapeutics.

"One of the big problems in the field is the artificially differentiated cells created in the lab never quite make it to full replicas of their real-life counterparts," said Welch. "I think the biggest potential for this model is better understanding what the epigenetic barriers are to fully converting the cells into whatever target you want them to be."

Additional authors on this paper include Chen Li, Maria C. Virgilio, and Kathleen L. Collins.

Paper cited: "Single-cell multi-omic velocity infers dynamic and decoupled gene regulation," Nature Biotechnology. DOI: 10.1038/s41587-022-01476-y


More here:
Mathematical model could bring us closer to effective stem cell therapies - Michigan Medicine

CANbridge-UMass Chan Medical School Gene Therapy Research in Oral Presentation at the European Society of Gene and Cell Therapy (ESGCT) 29th Annual…

BEIJING & BURLINGTON, Mass.--(BUSINESS WIRE)--CANbridge Pharmaceuticals Inc. (HKEX:1228), a leading global biopharmaceutical company, with a foundation in China, committed to the research, development and commercialization of transformative rare disease and rare oncology therapies, announced that data from its gene therapy research agreement with the Horae Gene Therapy Center, at the UMass Chan Medical School, was presented at the 29th European Society of Gene and Cell Therapy Annual Congress in Edinburgh, Scotland, today.

In an oral presentation, Guangping Gao, Ph.D., Co-Director of the Li Weibo Institute for Rare Diseases Research, Director of the Horae Gene Therapy Center and Viral Vector Core, Professor of Microbiology and Physiological Systems, and Penelope Booth Rockwell Professor in Biomedical Research at UMass Chan Medical School, discussed a study led by investigator Jun Xie, Ph.D., and his team from Dr. Gao's lab, titled "Endogenous human SMN1 promoter-driven gene replacement improves the efficacy and safety of AAV9-mediated gene therapy for spinal muscular atrophy (SMA) in mice."

The study showed that a novel second-generation self-complementary AAV9 gene therapy, expressing a codon-optimized human SMN1 gene under the control of its endogenous promoter (scAAV9-SMN1p-co-hSMN1), demonstrated superior safety, potency, and efficacy across several endpoints in an SMA mouse model, when compared to the benchmark vector, scAAV9-CMVen/CB-hSMN1, which is similar to the vector used in the gene therapy approved by the US Food and Drug Administration for the treatment of SMA. The benchmark vector expresses a human SMN1 transgene under a cytomegalovirus enhancer/chicken β-actin promoter for ubiquitous expression in all cell types, whereas the second-generation vector utilizes the endogenous SMN1 promoter to control gene expression in different tissues. Compared to the benchmark vector, the second-generation vector resulted in a longer lifespan, better restoration of muscle function, and more complete neuromuscular junction innervation, without the liver toxicity seen with the benchmark vector.

This, the first data to be presented from the gene therapy research collaboration between CANbridge and the Gao Lab at the Horae Gene Therapy Center, was also presented at the American Society of Gene and Cell Therapy (ASGCT) Annual Meeting in May 2022. Dr. Gao is a former ASGCT president.

Oral Presentation #: OR57

Category: AAV next generation vectors

Presentation Date and Time: Thursday, October 13, 5:00 PM BST

Authors: Qing Xie, Hong Ma, Xiupeng Chen, Yunxiang Zhu, Yijie Ma, Leila Jalinous, Qin Su, Phillip Tai, Guangping Gao, Jun Xie

Abstracts are available on the ESGCT website: https://www.esgctcongress.com/

About the Horae Gene Therapy Center at UMass Chan Medical School

The faculty of the Horae Gene Therapy Center is dedicated to developing therapeutic approaches for rare inherited diseases for which there is no cure. We utilize state-of-the-art technologies to either genetically modulate mutated genes that produce disease-causing proteins or introduce a healthy copy of a gene if the mutation results in a non-functional protein. The Horae Gene Therapy Center faculty is interdisciplinary, including members from the departments of Pediatrics, Microbiology & Physiological Systems, Biochemistry & Molecular Pharmacology, Neurology, Medicine and Ophthalmology. Physicians and PhDs work together to address the medical needs of rare diseases, such as alpha-1 antitrypsin deficiency, Canavan disease, Tay-Sachs and Sandhoff diseases, retinitis pigmentosa, cystic fibrosis, amyotrophic lateral sclerosis, TNNT1 nemaline myopathy, Rett syndrome, NGLY1 deficiency, Pitt-Hopkins syndrome, maple syrup urine disease, sialidosis, GM3 synthase deficiency, Huntington disease, and others. More common diseases such as cardiac arrhythmia and hypercholesterolemia are also being investigated. The hope is to treat a wide spectrum of diseases through various gene therapeutic approaches. Additionally, the University of Massachusetts Chan Medical School conducts clinical trials on site, and some of these trials are conducted by investigators at the Horae Gene Therapy Center.

About CANbridge Pharmaceuticals Inc.

CANbridge Pharmaceuticals Inc. (HKEX:1228) is a global biopharmaceutical company, with a foundation in China, committed to the research, development and commercialization of transformative therapies for rare disease and rare oncology. CANbridge has a differentiated drug portfolio, with three approved drugs and a pipeline of 11 assets, targeting prevalent rare disease and rare oncology indications that have unmet needs and significant market potential. These include Hunter syndrome and other lysosomal storage disorders, complement-mediated disorders, hemophilia A, metabolic disorders, rare cholestatic liver diseases and neuromuscular diseases, as well as glioblastoma multiforme. CANbridge is also building next-generation gene therapy development capability through a combination of collaboration with world-leading researchers and biotech companies and internal capacity. CANbridge's global partners include Apogenix, GC Pharma, Mirum, Wuxi Biologics, Privus, the UMass Chan Medical School and LogicBio.

For more on CANbridge Pharmaceuticals Inc., please go to: http://www.canbridgepharma.com.

Forward-Looking Statements

The forward-looking statements made in this article relate only to the events or information as of the date on which the statements are made in this article. Except as required by law, we undertake no obligation to update or revise publicly any forward-looking statements, whether as a result of new information, future events or otherwise, after the date on which the statements are made or to reflect the occurrence of unanticipated events. You should read this article completely and with the understanding that our actual future results or performance may be materially different from what we expect. In this article, statements of, or references to, our intentions or those of any of our Directors or our Company are made as of the date of this article. Any of these intentions may alter in light of future developments.

Here is the original post:
CANbridge-UMass Chan Medical School Gene Therapy Research in Oral Presentation at the European Society of Gene and Cell Therapy (ESGCT) 29th Annual...

Winners of ninth annual Vision Research Workshop named – Wayne State University

The poster and oral presentation winners of the Wayne State University School of Medicine's ninth annual Vision Research Workshop have been announced.

The workshop, held Oct. 12, was presented by the Department of Ophthalmology, Visual and Anatomical Sciences, and the Kresge Eye Institute.

Presentation winners included:

Poster Presentations

First place: Nicholas Pryde, "Assessment of Nanodropper™ eyedropper attachment"

Second place: Bing Ross, "Mechanism of Preferential Calcification in Hydrophilic Versus Hydrophobic Acrylic Intraocular Lens"

Third place: Pratima Suvas, "Expression, Localization, and Characterization of CXCR4 and its ligand CXCL12 in herpes simplex virus-1 infected corneas"

Oral Presentations

First place: Ashley Kramer, "A comparative analysis of gene and protein expression in a zebrafish model of chronic photoreceptor degeneration"

Second place: Jeremy Bohl, "Long-distance cholinergic signaling contributes to direction selectivity in the mouse retina"

Third place: Zain Hussain, "Diagnostic and Treatment Patterns of Age-Related Macular Degeneration among Asian Medicare Beneficiaries"

Mark Juzych, M.D., chair of the Department of Ophthalmology, Visual and Anatomical Sciences, and director of the Kresge Eye Institute, gave welcome remarks. Linda Hazlett, Ph.D., vice dean of Research and Graduate Programs and vice chair of the department, provided an overview of research.

The keynote speaker, giving the annual Robert N. Frank, M.D., Clinical Translational Lecture, was Reza Dana, M.D., M.P.H., the Claes H. Dohlman Chair and vice chair for Academic Programs in Ophthalmology at Harvard Medical School, who presented "New Ways of Doing Old Things: Translational Investigations in Management of Common Corneal and Ocular Surface Disorders."

See the original post:
Winners of ninth annual Vision Research Workshop named - Wayne State University

How does the genomic naive public perceive whole genomic testing for health purposes? A scoping review | European Journal of Human Genetics -…

Study characteristics

Sixteen studies were included in the analysis [46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61]. Most were quantitative (n=12), using questionnaires to assess public perceptions [46,47,48,49,50,51,52, 54, 56, 57, 59, 61]. Three studies conducted focus groups [53, 55, 60] while one study used both focus groups and a survey [58]. The US has contributed the most to this field thus far, undertaking six of the 16 studies identified in the literature search [49, 50, 52, 53, 55, 58]. This is followed by Canada (n=2) [48, 51] and Japan (n=2) [54, 59]. Each of the following countries contributed one study: Jordan [56], Korea [57], The Netherlands [61], Singapore [60], Qatar [46] and the UK [47]. Ten of the studies attempted to recruit a representative sample [46,47,48,49, 51, 53, 54, 56, 59, 61]. Higher educated participant populations (compared to the general population) were noted in four studies [48, 59,60,61]. Three studies recruited participants from specific sites [52, 55, 57]. No studies attempted to discern the views of underrepresented populations aside from Mallow et al. [58], who conducted focus groups with a rural community (Table 3, Supplementary File 3).

Education level influenced decisions to hypothetically partake in genomic testing in different ways [49, 51, 56, 59, 61]. Three studies found that more educated individuals were more likely to be interested in testing [49, 56, 59], while two other studies found that being more educated led to more critical attitudes towards testing [51, 61]. One study found no association between education level and attitude towards testing [57]. Khadir, Al-Qerem and Jarrar [56] found that having a low perceived knowledge of genomic testing's social consequences reduced the likelihood of having a reserved attitude. Abdul Rahim et al. [46] found genetic/genomic knowledge did not impact whether a participant would engage in testing.

The age of the participant was reported to influence decision-making [49, 54, 56, 57, 59], with no consensus on attitudes of older versus younger adults. Lee et al. [57] found that older adults were more likely to approve of integrating personalised medicine testing into standard healthcare. Two other studies also found that older adults were slightly more interested in genomic testing [54, 56]. In contrast, Okita et al. [59] found that older adults were less willing to partake in genomic testing, while Dodson et al. [49] found no association between age and likelihood of having testing.

Abdul Rahim et al. [46] found that marital status was not significantly associated with willingness to partake in testing in Qatari adults, while Dodson et al. [49] found American participants planning to have children in the next five years had significantly increased interest in testing. Dodson et al. [49] was the only study to investigate whether ethnicity influenced decision-making, showing no association.

Okita et al. [59] assessed the influence of employment status on willingness to partake, reporting that students had significantly more positive attitudes towards testing compared to employed respondents. Bombard et al. [48] found that having an income of more than CAD$80,000 led to an 11-12% decrease in the likelihood of believing parents have a responsibility to have their child tested via expanded NBS. No study assessed the impact of sex on attitude towards testing; however, Lee et al. [57] found that sex did not significantly influence whether the participant had heard of personalised medicine.

Using the NASSS domains we were able to map primary source data to technology (Domain 2), value proposition (Domain 3), the adopter system (Domain 4) and the wider context (Domain 6) (Fig. 2). Greenhalgh et al. [39] do not provide specific definitions for their domains; rather, they frame these domains in the form of questions that need to be answered. We replicated this approach and adapted the questions to align with our study questions (Supplementary File 4).

The NASSS Framework considers the influences on adoption, nonadoption, abandonment, spread, scale-up, and sustainability of healthcare technologies. Domains 2 (Technology), 3 (Value proposition), 4 (Adopter system) and 6 (Wider context) of the NASSS Framework have been addressed in this scoping review to consider how public perceptions are incorporated in the framework.

Domain 2 considers the technical aspects of the technology that will influence its implementation [39]. Questions 2B ("types of data generated"), 2C ("knowledge needed to use the technology") and 2E ("Who owns the IP generated by the technology?") are addressed in the primary sources.

This question considers the knowledge generated by the technology and how this is perceived by patients and/or caregivers. Two studies cited the accuracy of genetic information as an issue for their participants [54, 58].

Greenhalgh et al. [39] define this as the type of knowledge needed by both healthcare providers and patients to use the technology. However, we will only focus on the views of the general public. Although patients undergoing genomic testing do not necessarily need knowledge to undertake testing, the informed consent process is essential. To gain informed consent from patients, understanding the baseline genomic knowledge of the public is beneficial for those taking consent. Knowledge of genetics and genomics was assessed in several different ways across the included articles [46, 52,53,54, 56, 58, 60]. These included asking participants if they had heard of various genetic and/or genomic terms, how they had heard about genomic testing, how participants describe genomics (in a focus group setting) and questions on genetics knowledge.

Abdul Rahim et al. [46] found that less than a third (n=245) of survey respondents had heard of genomic testing while just over half (n=447) had heard of genetic testing. Gibson, Hohmeier and Smith [52] found that 54% (n=7) of their participants had heard the term "pharmacogenomics". Hishiyama, Minari and Suganuma [54] found that more than two-thirds of their participants had heard of classic genetic terminology (e.g. DNA, gene, chromosome), whereas fewer participants had heard of newer genomics terminology (e.g. personal genome and pharmacogenomics). Hahn et al. [53] found that the majority of their participants had not heard the terms "genomic medicine" and "personalised medicine". Ong et al. [60] found that English and Mandarin-speaking participants had heard of the term "personalised medicine" but not "precision medicine", while Malay-speaking participants had not heard of either term.

Three studies questioned participants on how they had heard about genomics [46, 52, 53]. Abdul Rahim et al. [46] asked about both genetic and genomic testing whereas Gibson, Hohmeier and Smith [52] asked their participants where they had heard certain terms. Abdul Rahim et al. [46] found that 30% (n=69) of participants who knew of genomic testing heard about it through word of mouth. Gibson, Hohmeier and Smith [52] found that 54% (n=7) of participants had heard of pharmacogenomic testing, and other key terms associated with genomics, from the internet. Hahn et al. [53] used focus groups to discern participant understanding of the term "genomic medicine", and found that some college students had heard of the term on the news and in biology classes.

Two studies used focus groups to discern genomic understanding [53, 58]. Mallow et al. [58] used a community-based participatory research approach. Community leaders suggested they use terms like "genes" and "family health history" rather than scientific terminology to assist discussions with the community. They found that participants were more likely to describe inheriting disease rather than inheriting health and wellness [58]. Hahn et al. [53] found that their focus group participants described genomic medicine in terms of genetics, family history, the genome project, using genetics to heal people, and cloning. Ong et al. [60] also used focus groups to discuss baseline understanding of "personalised medicine" and "precision medicine", divided by the primary language spoken by the participants, allowing for discussions on terminology specific to each language.

Knowledge of genetic and/or genomic facts was directly assessed in two studies [46, 56]. Abdul Rahim et al. [46] and Khadir, Al-Qerem and Jarrar [56] both questioned respondents on their basic genetic literacy via survey questions. Abdul Rahim et al. [46] found that 56.1% of survey respondents (n=464) were able to answer at least 5 out of 8 genetic literacy questions correctly, while Khadir, Al-Qerem and Jarrar [56] found that participants were knowledgeable in hereditary genetic information but not other scientific facts. Khadir, Al-Qerem and Jarrar [56] also gave participants the opportunity to self-report their knowledge of genetics. Many participants reported having sufficient knowledge on basic medical uses of testing and potential social consequences, such as refusing testing and the rights of third parties to request genetic test results of individuals [56].

For genomic testing, we have interpreted this question to mean whether patients own their genetic information or if it belongs to the group that conducts sequencing. Four studies found that participants had concerns about the privacy of their or their child's genetic information [46, 53, 55, 57]. Hishiyama, Minari and Suganuma [54] also found that 37.1% (n=1112) of their participants were concerned about management and storage of genetic information.

Greenhalgh et al. [39] use this domain to consider the value placed on the technology by healthcare professionals and the patient. Question 3B, demand-side value (to patient), is addressed in the primary sources.

Greenhalgh et al. [39] define this question as the downstream value of the technology, including the evidence of benefit to patients and affordability. Willingness to pay for genomic testing was directly assessed in three studies [50, 52, 57]. Gibson, Hohmeier and Smith [52] found that if the entire cost of the pharmacogenomic test was covered by insurance, 89% of participants (n=24) would undertake testing. Lee et al. [57] determined that age, gender, income, inconvenience of testing and prior knowledge all influenced whether participants would pay extra for personalised medical testing. Cost of testing was a concern for 44.8% of participants (n=316) [57]. Edgar et al. [50] found that most adoptees (72.4%) and non-adoptees (80.3%) were willing to pay between US$1 and US$499. Education level was a predictor for adoptee willingness to pay, while income predicted willingness to pay in non-adoptees [50]. Abdul Rahim et al. [46] did not directly assess willingness to pay; however, they noted that a high income was associated with participant willingness to partake in testing.

Hahn et al. [53] and Ong et al. [60] did not directly assess willingness to pay for genomic sequencing, but participants did express concerns about the cost of testing to the individual and whether there would be equitable access to testing.

Greenhalgh et al. [39] use this domain to consider the adoption of the technology. The adopter system includes caregivers, healthcare professionals and patients. Question 4B addresses whether patients will adopt a technology, while 4C addresses if lay caregivers are available to facilitate adoption. As we did not include patients or lay caregivers in our review, we have adapted these definitions to incorporate hypothetical patients and/or carers under the term genomic naive public. Greenhalgh et al. [39] also emphasise patient acceptance and family conflict as influencing factors on use of technology.

Several personal values were identified across the included studies [46, 48,49,50,51,52,53,54, 56, 59]. Abdul Rahim et al. [46] and Hishiyama, Minari and Suganuma [54] found that contributing to science and medical research were reasons to partake [46, 54]. Other reasons for partaking in genomic testing suggested by Qatari adults included improved health knowledge and prevention of future health conditions [46]. This was also suggested by participants in Etchegary et al. [51], Hahn et al. [53] and Khadir, Al-Qerem and Jarrar [56].

Bombard et al. [48] found that most of their participants preferred using scientific evidence (82%, n=994) and receiving expert advice (74%, n=897) when making important healthcare decisions. However, only half (53%) of participants had trust in healthcare (n=639). Hahn et al. [53] also found that many participants were sceptical of genomic medicine specifically, and often associated it with genetic engineering and cloning despite these not being directly related to genomic testing. Some participants felt they did not need the information genomic testing could provide, while others who would hypothetically want testing, believed it could promote the development of new treatments and provide more information on family history [53].

Primary reasons for unwillingness to partake in testing, as noted by Abdul Rahim et al. [46], were lack of time, information or knowledge, and privacy concerns. Similar concerns were suggested by Hahn et al. [53] and Lee et al. [57]. Fear of the unknown was also suggested in Hahn et al. [53] and Mallow et al. [58]. Participants in Hahn et al. [53] also noted they may be uncomfortable with the results, and that the results may be too deterministic.

Aside from general concerns about the nature of genomic testing, concern regarding communication of genetic information among family members was also highlighted [47, 51, 53, 56, 58, 61]. Ballard et al. [47] noted that most participants, whether asked to imagine that they or a family member had a genetic condition, believed other family members who might also be affected should be notified. Etchegary et al. [51] and Khadir, Al-Qerem and Jarrar [56] also found that most participants would share genomic test results with family members. Participants in Hahn et al. [53] generally had a positive view of learning about genetic information if it would help other family members, as some had family members who had passed away without explanation. Mallow et al. [58], however, found that communicating genetic information to family members may be an issue. Participants cited several reasons for this, including upsetting children and creating family issues, older family members being unwilling to disclose information, and stigmatisation by the community, particularly if the information in question regarded mental illness or substance abuse disorders [58]. Participants also suggested they would only discuss genetic risk if there was a health crisis in the family [58]. Etchegary et al. [51], although noting that many participants would want to share information, found that those with the highest education levels and income were less likely to share results with family members. Vermeulen et al. [61] also found that 17% of their participants (n=160) were worried about causing friction within their families. However, participants who believed family history assessments were worthwhile cited disease prevention as a benefit to involving family members [61].

Greenhalgh et al. [39] describe the wider context as the institutional and sociocultural contexts. Examples of the wider context include health policy, fiscal policy, statements and positions of professional and peak bodies, as well as law and regulation. Here, in order to respond to our research questions, we focus on the socio-cultural aspects of the public.

Societal concerns were noted in many studies [51, 53,54,55,56, 58, 60, 61]. Twenty-two percent of participants (n=1425) in the Hishiyama, Minari and Suganuma [54] study noted employment and insurance discrimination as a concern. This was also noted in Etchegary et al. [51] and Khadir, Al-Qerem and Jarrar [56]. Participants in Hahn et al. [53] and Mallow et al. [58] noted discrimination and segregation as key societal issues that may arise. One-third of participants (n=311) in Vermeulen et al. [61] thought that individuals may be coerced into testing if it is normalised.

Cultural context may influence participant responses. For example, Abdul Rahim et al. found that 45.1% of their respondents (n=241) were in consanguineous relationships [46]. No other study reported on consanguinity, demonstrating that different cultures prioritise different elements when reporting. Abdul Rahim et al. found that 70.9% of their population (n=584) were willing to undergo genomic testing [46], whereas Dodson et al. found that 39.5% of their US population (n=805) were somewhat interested and 19.1% (n=389) were definitely interested in genomic testing [49]. These papers demonstrate that cultural differences can influence perceptions of genomic testing. However, the Caucasian US population in Gibson et al. was more willing to undergo testing, at 81.0% (n=21) [52], showing that even within the same country there can be cultural differences that may lead to differences in perception.

Read more here:
How does the genomic naive public perceive whole genomic testing for health purposes? A scoping review | European Journal of Human Genetics -...

Global Genetic Testing Market Research Report 2022 Featuring Major Players – Abbott Laboratories, Myriad Genetics, F. Hoffmann-La Roche, Illumina, and…

DUBLIN--(BUSINESS WIRE)--The "Global Genetic Testing Market Research and Forecast, 2022-2028" report has been added to ResearchAndMarkets.com's offering.

The global genetic testing market is growing at a significant CAGR during the forecast period. A genetic disorder can be caused by a change in a single gene (monogenic disorder), by changes in multiple genes combined with environmental factors and gene mutations, or by damage to chromosomes. Genetic testing is a medical test that is used to identify mutations in genes or chromosomes.

The key benefit of genetic testing is the chance to learn the risk for a certain disease that can possibly be prevented, to identify a disease or a type of disease, to identify the cause of a disease, and to determine treatment options for a disease. Diseases that can be identified by genetic testing include breast and ovarian cancer, Age-Related Macular Degeneration (AMD), bipolar disorder, Parkinson's disease, celiac disease, and psoriasis.

The global genetic testing market is projected to grow considerably in the coming years due to the prevalence of genetic disorders, cancer, and chronic disease. Moreover, continuous advancement by medical companies in the genetic diagnostics field is also augmenting market growth.

These companies are developing new and better tests for the accurate diagnosis of the most prevalent as well as rare diseases. In addition, increasing health awareness among people and the rising mortality rate from genetic diseases across the globe are also major factors driving demand for genetic testing.

Moreover, the adoption of direct-to-consumer (DTC) genetic testing kits in countries such as the US, China, and Japan is increasing rapidly. With growing technological acceptance, awareness programs, and a drop in costs, the market for DTC-GT kits is likely to witness a significant boost over the forecast period. However, the lack of diagnostic infrastructure in emerging economies is a challenging factor for market growth.

Regional Outlooks

North America is estimated to contribute a significant share in the global genetic testing market due to the high awareness among the people about advanced treatment for healthcare, well-developed healthcare infrastructure, presence of key players, and availability of drugs.

Moreover, an increase in government initiatives for the enhancement of healthcare facilities and funding for research in the region is also a major factor in the region's significant market share. In the US, government initiatives such as the CDC's Evaluation of Genomic Applications in Practice and Prevention (EGAPP) are also motivating market growth.

One of the key goals of the initiative is to offer timely, objective, and credible information linked to available scientific evidence. This information will allow healthcare workers, payers, consumers, policymakers, and others to distinguish genetic tests that are safe and useful.

Asia-Pacific will have considerable growth in the global Genetic Testing Market

In Asia-Pacific, the market is growing due to government initiatives in research and the increasing prevalence of chronic diseases. Apart from cancer, genetic testing has also become readily accessible for the diagnosis of inherited cardiovascular diseases such as cardiac amyloidosis, Brugada syndrome, and familial dilated cardiomyopathy. As the region has a high incidence of cardiovascular diseases, significant scope for genetic testing can be expected in the region during the forecast period.

Market Players Outlook

The report covers the analysis of various players operating in the global genetic testing market. Some of the major players covered in the report include Abbott Laboratories, Myriad Genetics, Inc., F. Hoffmann-La Roche Ltd., Illumina, Inc., and Thermo Fisher Scientific, Inc. To survive in the market, these players adopt different marketing strategies such as mergers and acquisitions, product launches, and geographical expansion.

The Report Covers

Market Segmentation

Global Genetic Testing Market by Technology

Global Genetic Testing Market by Type

Global Genetic Testing Market by Disease

Company Profiles

For more information about this report visit https://www.researchandmarkets.com/r/fvmuud

About ResearchAndMarkets.com

ResearchAndMarkets.com is the world's leading source for international market research reports and market data. We provide you with the latest data on international and regional markets, key industries, the top companies, new products and the latest trends.

More:
Global Genetic Testing Market Research Report 2022 Featuring Major Players - Abbott Laboratories, Myriad Genetics, F. Hoffmann-La Roche, Illumina, and...

Tracking Transcripts in Biologics and Cell Therapies – Genetic Engineering & Biotechnology News

The outcomes from biologics and cell therapies hinge on what they secrete. "More specifically, protein secretion has important impacts for both the quality and quantity of a therapeutic produced using bioprocessing," says Dino Di Carlo, PhD, the Armond and Elena Hairapetian chair in engineering and medicine at the University of California, Los Angeles.

"For biologics," he continues, "the rate at which producer cells secrete protein therapeutics (for example, monoclonal antibodies) drives the amount of therapeutic that can be produced per batch and the ultimate costs of production; for cell therapies, secreted proteins are a key product attribute that defines a high-quality product."

A previous GEN story explained how secretion-based screening could improve cell therapies. Here, Di Carlo describes how he and his colleagues used single-cell sequencing information (SEC-seq) to link the secretions and transcriptomes for individual antibody-secreting cells.

"The key finding from this work," Di Carlo says, "is that gene transcripts are not necessarily correlated to secreted proteins, which makes the community rethink genetic modification approaches that just focus on overexpressing the gene for the target secreted protein to achieve an improved therapeutic effect." For example, the SEC-seq study showed that the levels of mRNA transcripts for immunoglobulin G (IgG) proteins did not correlate with the amount of assembled and secreted IgG (heavy and light chain) in human plasma cells.

"Instead, in highly secreting cells, we found pathways upregulated that drive energy production, protein translation, protein trafficking, and response to misfolded proteins," Di Carlo explains. "This suggests that having enough transcripts around to produce the secreted protein is not the bottleneck for high levels of secretion." As he adds, "Other pathways are the likely bottleneck, and they are needed to make and traffic a lot of protein to the membrane to secrete it, as well as deal with misfolded proteins that result from translation of large quantities of proteins."

Given the importance of secretions in biologics and cell-based therapies, what can a commercial bioprocessor do about it? "One implication is for bioprocessors interested in the genetic modification of therapeutic cells to secrete more of a therapeutic protein," Di Carlo says. "Our results suggest it is not sufficient to genetically modify your cell type of interest with just the gene to produce a secreted protein, because you may also need to drive these other associated pathways to enhance the level of secretion."

The SEC-seq method could also be applied in other ways. "Bioprocessors could use this technique to identify the pathways that drive secretion of critical cytokines and growth factors for their therapeutic cell type of interest," Di Carlo says. "For example, there may be differences in what drives high levels of secretion between the human plasma cells secreting IgG and natural killer cells secreting cytokines. This information could be used by a bioprocessor to improve the base cell types used for a therapeutic product or to perform quality control or production-batch sorting based on these factors."

"For biologics, bioprocessors can use SEC-seq to uncover what drives higher secretion of a therapeutic protein by producer cell types, like CHO cells, HEK293 cells, etc.," Di Carlo explains. "The information obtained could help engineer the next generation of more efficient and productive producer cell lines."

So, making an effective biologic or cell therapy depends on what cells secrete. Fortunately, Di Carlo's work will help bioprocessors better understand and improve that secretion.

Continued here:
Tracking Transcripts in Biologics and Cell Therapies - Genetic Engineering & Biotechnology News

Depression Treatment: How Genetic Testing Can Help Find the Right Medication – Dunya News

17 October,2022 08:42 am

ISLAMABAD, (Online) - That's according to a new study conducted by the U.S. Department of Veterans Affairs (VA) and published today in the Journal of the American Medical Association.

In it, researchers report that pharmacogenetic testing might help medical professionals by providing helpful information on how a person metabolizes a medication. This information can help doctors and others avoid prescribing antidepressants that could produce undesirable outcomes.

Depression medication is sometimes determined through trial and error to find the best drug and dosage. The researchers say they hope genetic testing can minimize this by giving insight into how a person may metabolize a drug.

Researchers said genetic testing did not show how a person would react to a particular medication but instead looked at how a person metabolized a drug. A drug-gene interaction is an association between a drug and a genetic variation that may impact a person's response to that drug. Learning more about drug-gene interactions could potentially provide information on whether to prescribe medication and whether a dosage adjustment is needed.

In the study, around 2,000 people diagnosed with clinical depression at 22 VA medical centers received medications to treat their symptoms. The participants were randomized, with one-half receiving usual care and one-half undergoing pharmacogenetic testing.

For those who received usual care, doctors prescribed medication without the benefit of seeing a genetic testing result. The researchers found that 59 percent of the patients whose doctors received the genetic testing results used medications with no drug-gene interaction. Only 26 percent of the control group received drugs with no drug-gene interaction.

The researchers said the findings show that doctors avoided medications with a predicted drug-gene interaction.

"Most often, patients get tested after at least one or two drugs haven't worked or they had severe side effects," said Dr. David A. Merrill, a psychiatrist and director of the Pacific Neuroscience Institute's Pacific Brain Health Center at Providence Saint John's Health Center in California. "There are real genetically driven differences in how people metabolize drugs. Knowing about their genetics ahead of time helps select more tolerable options."

Researchers interviewed participants about their depression symptoms at 12 weeks and 24 weeks.

Through 12 weeks, the participants who had genetic testing were more likely to have depression remission than those in the control group.

At 24 weeks, the difference was less pronounced. The researchers said this suggests that genetic testing could relieve depressive symptoms faster than if a person did not receive the testing.

What experts think

There is a place for pharmacogenetic testing when treating people with depression, according to Dr. Alex Dimitriu, an expert in psychiatry and sleep medicine and founder of Menlo Park Psychiatry & Sleep Medicine in California and BrainfoodMD.

Some situations that might call for genetic testing include treatment-resistant depression and more complex cases.

"It tells me if someone will either rapidly or slowly metabolize a drug, meaning the level of the drug will either be too low or too high depending on the person's metabolism," Dimitriu told Healthline. "I have used it in a few rare cases to see what options remain."

"To me, more important than pharmacogenetic testing is watching the symptoms and response in my patients," he continued. "I see my patients often, especially when starting a new medicine, and we can go slow and watch how the patient is doing. If you start at a low dose and raise the dose slowly, with good monitoring and charting, you can readily see who responds too fast or too slow and at what dose."

Some doctors don't think the science is there yet and aren't going to rush into using pharmacogenetic testing based on this study.

"I used pharmacogenetic testing about ten years ago and the science is accurate. It tells you the person's genetic makeup," said Dr. Ernest Rasyida, a psychiatrist at Providence St. Joseph's Hospital.

"From a scientific point of view," he told Healthline, "this was a great study. It showed that the doctor used the data 60 percent of the time."

"That means that the doctor looked at the data and the medications in the green zone and chose not to use them because of side effects or other reasons. Instead, they chose a drug in the red zone because of their clinical experience."

"I would argue that if 40 percent of the time you are going to use your judgment, and you should use your judgment, then why get the test?" he concluded.

In addition to depression, pharmacogenetic testing can also be used in the treatment of other non-mental health conditions, such as cancer and heart disease.

Experts say there is no risk to the patient in getting the test, and the researchers said they believe it will likely benefit some patients substantially.

"Pharmacogenetic results are well known and have been for years, but the clinical practice of medicine is very conservative, so it takes a long time for clearly beneficial changes to become common practice," Merrill told Healthline. "If 15 to 20 percent of patients started on a new drug can avoid a major gene-drug interaction by knowing their results, doing the test seems like a no-brainer to me."


Read this article:
Depression Treatment: How Genetic Testing Can Help Find the Right Medication - Dunya News

Empyrean Neuroscience Launches with $22M Series A and Genetic Engineering Platform to Advance Pipeline of Neuroactive Compounds Targeting CNS…

NEW YORK & CAMBRIDGE, England--(BUSINESS WIRE)--Empyrean Neuroscience, Inc., a leading genetic engineering company dedicated to developing neuroactive compounds to treat neuropsychiatric and neurologic disorders, today announced that it has launched with a $22 million Series A financing and a genetic engineering platform to advance a pipeline of neuroactive compounds targeting disorders of the central nervous system (CNS). The company is founded on a proprietary platform designed to genetically engineer small molecule therapeutics from fungi and plants. Veteran biotech executives Usman "Oz" Azam, M.D., Chief Executive Officer, and Fred Grossman, D.O., FAPA, Chief Medical Officer, lead the company.

Through precision targeting and engineering of the fungal and plant genomes, Empyrean is working to enhance and modulate neuroactive compounds produced by these kingdoms. The platform is being used to identify therapeutic fungal alkaloids, cannabinoids, and other small molecules that may exhibit enhanced efficacy and safety. In addition, the platform is designed to discover novel small molecules that may exhibit a therapeutic benefit.

"There is an enormous medical need for safe and effective therapeutics that treat neuropsychiatric and neurologic disorders, and we believe genetic engineering provides the answer," said Dr. Azam, Empyrean's Chief Executive Officer. "By applying our genetic engineering platform to make precise modifications to the genomes of fungi and plants, we can change the amount and kind of neuroactive small molecules they produce, with the goal of developing safe and effective treatments for difficult-to-treat diseases of the CNS."

The company's developmental pipeline includes fungal alkaloids, cannabinoids, and other neuroactive compounds, such as N,N-Dimethyltryptamine (DMT), for the potential treatment of major depressive disorder (MDD), post-traumatic stress disorder (PTSD), neurologic disorders, substance abuse and dependence, and chronic pain. Investigational New Drug (IND) enabling studies of the company's first genetically engineered encapsulated mushroom drug product are currently underway, and the company aims to enter the clinic for MDD in 2023.

"Fungal alkaloids and cannabinoids have shown promise in treating depression, PTSD, anxiety, and other neuropsychiatric and neurologic disorders," said Dr. Grossman, Empyrean's Chief Medical Officer. "We believe our approach of genetically engineering fungi and plants can improve their safety and efficacy and will ultimately help to address the substantial unmet medical need in patients who suffer from these diseases."

As part of its genetic engineering platform, the company has licensed CRISPR/Cas9 technology from ERS Genomics for genetic engineering applications related to its therapeutic pipeline.

Dr. Azam was previously President and Chief Executive Officer of Tmunity Therapeutics, a biotech developing genetically engineered cell therapies for applications in cancer. Before Tmunity, he was Global Head of Cell & Gene Therapies at Novartis, where he was responsible for commercial operations, business development licensing, new product commercialization, clinical development, regulatory affairs, and other aspects of the global cell and gene therapies business. He was Chief Executive Officer of Novaccel Therapeutics, Chief Medical Officer of Aspreva Pharmaceuticals, and earlier in his career, held positions at Johnson & Johnson, GSK, and Pfizer. Dr. Azam received his M.D. from the University of Liverpool School of Medicine and is board certified in obstetrics and gynecology in the United Kingdom.

Before joining Empyrean, Dr. Grossman was Chief Medical Officer of Mesoblast Ltd. and President and Chief Medical Officer of Glenmark Pharmaceuticals. He has held executive leadership positions in large pharmaceutical companies, including Eli Lilly, Johnson & Johnson, Bristol Myers Squibb, and Sunovion. He has been responsible for leading the development, approval, and supporting the launch of numerous global medications addressing significant unmet medical needs across therapeutic areas, particularly in the CNS. He has held academic appointments and has authored numerous scientific publications. He was trained in psychiatry at Hahnemann University in Philadelphia and at the National Institute of Mental Health in Bethesda, Maryland and completed a Fellowship in the Section on Clinical Pharmacology at the National Institutes of Health. Dr. Grossman is a board-certified psychiatrist and Fellow of the American Psychiatric Association.

About Empyrean Neuroscience

Empyrean Neuroscience is a genetic engineering company developing a pipeline of neuroactive therapeutics to treat a range of neuropsychiatric and neurologic disorders. Through precision genetic modification, transformation, and regeneration of fungi and plants, the platform allows for the creation of small molecule therapeutics. In addition, the platform enables the discovery of novel small molecules that may exhibit therapeutic properties. The company is based in New York City and Cambridge, UK.

Original post:
Empyrean Neuroscience Launches with $22M Series A and Genetic Engineering Platform to Advance Pipeline of Neuroactive Compounds Targeting CNS...

A Decade of Breast Cancer at the Molecular Level: Pioneering Personalized Medicine – Targeted Oncology

Breast cancer treatment options have significantly expanded in the past decade, welcoming new classes of agents as well as treatments directed at specific patient populations (TIMELINE).

Many believe that these advancements in breast cancer care over the past 10 years owe much to the increased understanding of molecular factors contributing to breast cancer pathogenesis and heterogeneity.1-3

In looking back at the past decade of targeted therapy in breast cancer, Targeted Therapies in Oncology (TTO) spoke with 2 medical oncologists with extensive expertise in breast cancer about how biomarker advancements have transformed the practice of breast cancer management.

"I think it's fair to say that breast cancer in particular has led the way in molecular therapeutics in oncology," Dennis J. Slamon, MD, director of clinical/translational research at the UCLA Jonsson Comprehensive Cancer Center, told TTO. "In part, that's because of all the investment that was made in research [and] because of defining this disease not just at a tissue level, but at a molecular level."

Classification of breast cancers not only by hormone receptor and HER2 positivity or negativity, but also into luminal/basal subtypes, has helped to identify treatments that may be more helpful for large groups of patients.1,3 For example, patients with basal-like disease, which accounts for about 15% to 20% of all breast cancers, have triple-negative breast cancer (TNBC) and a poor prognosis. These patients tend to respond to chemotherapy.1

The fact that molecular targets did not consistently translate to all breast cancers has become a key underpinning of our understanding of cancer.1 Not all patients benefit from molecularly targeted treatments. For instance, HER2-positive breast cancer accounts for only about 25% of all breast cancer cases, so HER2-targeted therapies may benefit only that 25% of patients with breast cancer.

"The same story is coming up again and again, not necessarily the same genes or the same targets or the same pathways, but the fact that there is a diversity of these diseases that's far beyond the way we used to classify cancers, by the tissue in which they arose," Slamon said.

Our understanding of cancer as a potentially more complex disease than previously supposed began to develop well before 2012, explained Slamon.

"That started in breast cancer before molecular medicine, as far back as 1899 or '98, when a surgeon recognized the fact that this disease occurred in women and the fact that it may have some hormonal component," said Slamon.4 "After we found HER2, the methods of dissecting a tumor molecularly became much more sophisticated and widespread in their use, and now, today, there are 14 molecular subtypes of breast cancer. And that is the underpinning of how breast cancer has led the way [in determining that patients with breast cancer] should not be treated with a one-size-fits-all approach. They should be treated with therapeutics that are directed to the appropriate subtype or the class in which they sit."

These molecular subtype characterizations have also shaped the therapeutic strategies within different breast cancer settings. "Just thinking about advances in targeted therapies and how we use them to treat breast cancer in the last decade, I separate it into 2 categories; 1 is how we treat localized breast cancer, when our goal is to cure the cancer, so stages I to III. Most patients are being diagnosed with those earlier stages of breast cancer," Marina Sharifi, MD, PhD, assistant professor and medical oncologist at the University of Wisconsin Carbone Cancer Center, told TTO.

"I think one of the major themes over the last 10 years for these nonmetastatic breast cancers is what I refer to as right-sizing therapy. We know that some of the women who have these early breast cancers can have recurrence down the road, and we want to try and prevent that. So, 10 to 15 years ago, all of those women got chemotherapy, but even back then we knew that not every woman needs chemotherapy, and we knew that there were some breast cancers that could potentially benefit from more targeted types of therapies. But in the past 10 years, there have been a few developments that have allowed us to determine which women need chemotherapy and which women we can safely avoid exposing to the [adverse] effects of chemotherapy," said Sharifi.

This new prognostic ability has been fueled by advances in genomic testing.5,6 In addition to hormone receptors and molecular subtypes, other prognostic biomarkers that have been incorporated into practice include transcriptomic and proteomic levels and Ki-67 levels. Other biomarkers utilize combinations of genes to determine potential responses to treatment as well as the possibility of recurrence.

And more recently, research has turned to the use of circulating DNA and circulating tumor cells to help identify further prognostic and predictive biomarkers for patients with breast cancer.6

"Specifically, for estrogen-driven (estrogen receptor [ER] positive) breast cancers, which are the most common type of breast cancer, we have genomic tests that are now used routinely to help us identify women who can safely avoid chemotherapy with that type of breast cancer. Both MammaPrint and Oncotype DX are genomic tests that we know are effective in identifying which women do need chemotherapy to help maximize their chances of cure and which women have lower-risk breast cancers where the chemotherapy actually won't help them because they don't need it.7,8 That has been a huge development in the field in the last 10 years: to go from knowing that these tests were out there, but not having confirmation that they predict chemotherapy benefit, to having 2 major trials come out in the last 10 years that demonstrate that they can predict chemotherapy benefit, both in women who have those ER-positive breast cancers without lymph node involvement and also in women who have ER-positive breast cancer with lymph node involvement. That has been a major advance for the most common type of breast cancer that's diagnosed across the country," said Sharifi.9,10

Both the TAILORx (NCT00310180) and RxPONDER (NCT01272037) trials validated the usefulness of the 21-gene Oncotype DX recurrence score assay in patients with hormone receptor-positive, HER2-negative breast cancer. The TAILORx trial showed that among patients with node-negative disease, those with an intermediate Oncotype DX score, or intermediate risk of recurrence, could benefit from treatment with endocrine therapy alone and avoid receiving chemotherapy. Younger patients (≤50 years) with a recurrence score of 16 to 25 still showed some benefit from the combination of chemotherapy and endocrine therapy.9 In RxPONDER, adjuvant chemotherapy was not considered necessary in most postmenopausal women with node-positive disease and recurrence scores between 0 and 25. Premenopausal women, however, were more likely to benefit from adjuvant chemotherapy.10

On the flip side, said Sharifi, "some have high-risk TNBC or high-risk HER2-positive breast cancer; those are types of breast cancer where historically we have struggled to cure women. There we've had a number of different advances. In TNBC, we've had the introduction of immunotherapies into our treatment. The KEYNOTE-522 trial [NCT03036488] showed that if we combine pembrolizumab [Keytruda] with chemotherapy, that has significantly increased the number of women we're able to cure of that higher-risk TNBC."11

The FDA's 2021 approval of neoadjuvant pembrolizumab in combination with chemotherapy for patients with high-risk, early-stage TNBC, followed by single-agent adjuvant pembrolizumab, was a significant advancement for the treatment of patients with TNBC.12 Data from the KEYNOTE-522 trial were considered practice changing early on, showing a pathological complete response in 64.8% of patients treated with the regimen.11

"Likewise, for HER2-positive breast cancer, we have seen the development of multiple drugs that target HER2, from trastuzumab [Herceptin] and pertuzumab [Perjeta] to ado-trastuzumab emtansine [T-DM1; Kadcyla], that have increased the number of women we're able to cure of their HER2-positive breast cancers," Sharifi said.

Slamon also commented on the proliferation of HER2-targeting therapies, in addition to the expansion of other types of targeted agents benefiting patients in the TNBC space. "[Since] our initial finding of HER2 and trastuzumab, now there's a ton of HER2 targeting: trastuzumab deruxtecan [Enhertu] and emtansine [Kadcyla], margetuximab [Margenza]; the list goes on and on of anti-HER2 therapeutics. Then there are new therapeutics for TNBC; they look at the TROP-2 target on tumor cells, and sacituzumab govitecan [Trodelvy] is the new therapeutic for that.13 As we identify new targets that we can approach with an antibody that'll attach to it, [it could be possible to] make an antibody-drug conjugate [ADC] to allow that antibody to go right to the target protein on the tumor cell and have it released internally, and that takes away the systemic effect of the chemotherapy and delivers it right into the cell. That's a whole new strategy that's coming into its own in a big way now," Slamon told TTO.

The phase 3 ASCENT study (NCT02574455) showed that sacituzumab produced a progression-free survival (PFS) and overall survival (OS) benefit over physician's choice of chemotherapy in patients with relapsed or refractory metastatic TNBC. The median PFS with sacituzumab was 5.6 months compared with 1.7 months with chemotherapy. Median OS was 12.1 months with the ADC and 6.7 months with chemotherapy.13

The emergence of these newer targeted therapies has permitted a risk-based tailoring of neoadjuvant and adjuvant therapies in the nonmetastatic breast cancer space, observed Sharifi. "Another major development over the last 10 years, particularly for the [patients with] TNBC and HER2-positive breast cancers, is a shift toward neoadjuvant chemotherapy, which allows us to identify women with higher risk of recurrence after our standard preoperative chemotherapy, and then add additional therapy after surgery to reduce their risk. For instance, that is how ado-trastuzumab emtansine is used in HER2-positive breast cancer, and there are other targeted options in this space, including olaparib [Lynparza] for women with germline BRCA mutations," she said.

"We have also made great strides in precision oncology in the metastatic breast cancer space, with an expansion of different types of targeted approaches, including mutation-targeted inhibitors, immunotherapy, and ADCs. While all of these developments have helped patients live longer and better with metastatic breast cancer, I think ADCs are the most game-changing new development for treating metastatic breast cancer," Sharifi told TTO. "As an example, the ADC trastuzumab deruxtecan is a HER2-targeting agent [encompassing] trastuzumab linked to a chemotherapy that was initially found to be extremely effective for HER2-positive metastatic breast cancer, even in women who have had multiple prior treatments with different other agents. Even more importantly, however, it has recently been shown to be effective also in women who have low HER2 expression, who would previously have been classified as HER2 negative.14 This has dramatically expanded the group of women with metastatic breast cancer who can benefit from trastuzumab deruxtecan to include what we are now calling HER2-low breast cancers, which are far more common than HER2-positive breast cancers. So that's been an important advance for us in the ADC space just in the last year," said Sharifi.

Data from the phase 3 DESTINY-Breast04 trial (NCT03734029) showed that patients with low HER2 expression may still benefit from HER2-targeted therapy. The trial demonstrated a median progression-free survival (PFS) of 10.1 months with trastuzumab deruxtecan therapy vs 5.4 months with physician's choice of therapy in patients with HER2-low (IHC 1+/IHC 2+, ISH-) metastatic breast cancer who had received 1 to 2 prior lines of chemotherapy. The median OS was 23.9 months with trastuzumab deruxtecan and 17.5 months with physician's choice of chemotherapy.14 These findings led to the FDA approval of trastuzumab deruxtecan in this disease setting just this year.15

The Importance of Individualization

"Turning to mutation-targeted therapies, this has also been an active area in metastatic breast cancer treatment in the past 5 years, including the first FDA approval of a drug targeting PIK3CA mutations, [which] are common in many types of cancer and found in almost half of women who have ER-positive metastatic breast cancer, where the drug alpelisib [Piqray] has been approved for women with this type of mutation," Sharifi told TTO.

Approval for alpelisib in breast cancer was supported by findings from the phase 3 SOLAR-1 trial (NCT02437318), which showed that the PI3K inhibitor in combination with fulvestrant led to a median PFS of 11.0 months vs 5.7 months with fulvestrant alone in patients with PIK3CA-mutant, HR-positive, HER2-negative advanced breast cancer.16

"[Every] patient with metastatic breast cancer should be getting molecular profiling to identify possible targeted therapy options, and many patients will now have ADC treatment options that they may be eligible for at some point in their disease trajectory. For patients with localized breast cancer, I think we've also come a long way in being able to individualize therapy and avoid exposing patients to unnecessary [adverse] effects, while also being able to augment treatment for patients who are at higher risk of recurrence and cure more women with this diagnosis," Sharifi said.

The basis of this personalized therapy derives from breast cancer-based research, observed Slamon. "The game-changer clearly was [the molecular advancements]. [When] looking at what is big in oncology, it's this appreciation, which originated in breast cancer and has now spread throughout the field of human oncology, of this molecular diversity defining a) different subtypes, and b) new potential therapeutic targets or pathways," he said.

Sharifi looks to the continued development of ADCs as a cancer treatment modality. "There's a real untapped well of potential targets that we're just starting to explore in terms of developing new ADCs and combining them with targeted and immunotherapy approaches, and I think this will move the bar in how we're able to combat treatment resistance," said Sharifi.

Slamon's view of the future also comprises targeted strategies: "As we identify more targets...there'll probably be more and newer, perhaps even better, therapeutics than we have currently. Breast cancer has led this field."

REFERENCES:

1. Bettaieb A, Paul C, Plenchette S, Shan J, Chouchane L, Ghiringhelli F. Precision medicine in breast cancer: reality or utopia? J Transl Med. 2017;15(1):139. doi:10.1186/s12967-017-1239-z

2. Cocco S, Piezzo M, Calabrese A, et al. Biomarkers in triple-negative breast cancer: state-of-the-art and future perspectives. Int J Mol Sci. 2020;21(13):4579. doi:10.3390/ijms21134579

3. Low SK, Zembutsu H, Nakamura Y. Breast cancer: The translation of big genomic data to cancer precision medicine. Cancer Sci. 2018;109(3):497-506. doi:10.1111/cas.13463

4. Beatson GT. On the treatment of inoperable cases of carcinoma of the mamma: suggestions for a new method of treatment, with illustrative cases. Trans Med Chir Soc Edinb. 1896;15:153-179.

5. Hou Y, Peng Y, Li Z. Update on prognostic and predictive biomarkers of breast cancer. Semin Diagn Pathol. 2022;39(5):322-332. doi:10.1053/j.semdp.2022.06.015

6. Nicolini A, Ferrari P, Duffy MJ. Prognostic and predictive biomarkers in breast cancer: past, present and future. Semin Cancer Biol. 2018;52(Pt 1):56-73. doi:10.1016/j.semcancer.2017.08.010

7. Cardoso F, van't Veer LJ, Bogaerts J, et al; MINDACT Investigators. 70-gene signature as an aid to treatment decisions in early-stage breast cancer. N Engl J Med. 2016;375(8):717-729. doi:10.1056/NEJMoa1602253

Go here to see the original:
A Decade of Breast Cancer at the Molecular Level: Pioneering Personalized Medicine - Targeted Oncology