AI can automatically rewrite outdated text in Wikipedia articles – Engadget

The machine learning-based system is trained to recognize the differences between a Wikipedia article sentence and a claim sentence with updated facts. If it sees any contradictions between the two sentences, it uses a "neutrality masker" to pinpoint both the contradictory words that need deleting and the ones it absolutely has to keep. After that, an encoder-decoder framework determines how to rewrite the Wikipedia sentence using simplified representations of both that sentence and the new claim.
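
To make the first stage concrete, here is a toy sketch of the masking idea in Python. It is an illustration only, not the researchers' actual model: the function name, tokenization, and example sentences are all invented for this demo. It keeps tokens the outdated sentence shares with the claim and masks the rest, leaving a skeleton a trained encoder-decoder could then fill in.

```python
# Toy sketch of the masking idea (not the researchers' implementation):
# keep tokens the outdated sentence shares with the updated claim, and
# mask the sentence-specific tokens a rewriter would need to replace.

def neutrality_mask(outdated, claim):
    """Return the outdated sentence with claim-contradicting tokens masked."""
    claim_vocab = {w.strip(".,") for w in claim.lower().split()}
    masked = []
    for tok in outdated.split():
        if tok.lower().strip(".,") in claim_vocab:
            masked.append(tok)          # shared with the claim: keep
        else:
            masked.append("[MASK]")     # contradicts the claim: mask for rewriting
    return " ".join(masked)

outdated = "The tower is 300 meters tall."
claim = "The tower is 330 meters tall."
print(neutrality_mask(outdated, claim))
# -> "The tower is [MASK] meters tall."
# A trained encoder-decoder would then fill the mask using the claim.
```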

The system can also be used to supplement datasets meant to train fake news detectors, potentially reducing bias and improving accuracy.

As is, the technology isn't quite ready for prime time. Humans rating the AI's accuracy gave it average scores of 4 out of 5 for factual updates and 3.85 out of 5 for grammar. That's better than other text-generating systems, but it still suggests you might notice the difference. If researchers can refine the AI, though, it might be useful for making minor edits to Wikipedia, news articles (hello!) or other documents in those moments when a human editor isn't practical.

Beyond the AI hype cycle: Trust and the future of AI – MIT Technology Review

There's no shortage of promises when it comes to AI. Some say it will solve all problems, while others warn it will bring about the end of the world as we know it. Both positions regularly play out in Hollywood plotlines like Westworld, Altered Carbon, Minority Report, Her, and Ex Machina. Those stories are compelling because they require us as creators and consumers of AI technology to decide whether we trust an AI system or, more precisely, trust what the system is doing with the information it has been given.

This content was produced by Nuance. It was not written by MIT Technology Review's editorial staff.

Joe Petro is CTO at Nuance.

Those stories also provide an important lesson for those of us who spend our days designing and building AI applications: trust is a critical factor in determining the success of an AI application. Who wants to interact with a system they don't trust?

Even as a nascent technology, AI is incredibly complex and powerful, delivering benefits by performing computations and detecting patterns in huge data sets with speed and efficiency. But that power, combined with "black box" perceptions of AI and its appetite for user data, introduces a lot of variables, unknowns, and possible unintended consequences. Hidden within practical applications of AI is the fact that trust can have a profound effect on the user's perception of the system, as well as on the associated companies, vendors, and brands that bring these applications to market.

Advancements such as ubiquitous cloud and edge computational power make AI more capable and effective while making it easier and faster to build and deploy applications. Historically, the focus has been on software development and user-experience design. But it's no longer a case of simply designing a system that solves for x. It is our responsibility to create an engaging, personalized, frictionless, and trustworthy experience for each user.

The ability to do this successfully is largely dependent on user data. System performance, reliability, and user confidence in AI model output are affected as much by the quality of the model design as by the quality of the data going into it. Data is the fuel that powers the AI engine, virtually converting the potential energy of user data into kinetic energy in the form of actionable insights and intelligent output. Just as filling a Formula 1 race car with poor or tainted fuel would diminish performance and the driver's ability to compete, an AI system trained with incorrect or inadequate data can produce inaccurate or unpredictable results that break user trust. Once broken, trust is hard to regain. That is why rigorous data stewardship practices by AI developers and vendors are critical for building effective AI models as well as for creating customer acceptance, satisfaction, and retention.

Responsible data stewardship establishes a chain of trust that extends from consumers to the companies collecting user data and those of us building AI-powered systems. It's our responsibility to know and understand privacy laws and policies and consider security and compliance during the primary design phase. We must have a deep understanding of how the data is used and who has access to it. We also need to detect and eliminate hidden biases in the data through comprehensive testing.

Treat user data as sensitive intellectual property (IP). It is the proprietary source code used to build AI models that solve specific problems, create bespoke experiences, and achieve targeted desired outcomes. This data is derived from personal user interactions, such as conversations between consumers and call agents, doctors and patients, and banks and customers. It is sensitive because it creates intimate, highly detailed digital user profiles based on private financial, health, biometric, and other information.

User data needs to be protected and used as carefully as any other IP, especially for AI systems in highly regulated industries such as health care and financial services. Doctors use AI speech, natural-language understanding, and conversational virtual agents created with patient health data to document care and access diagnostic guidance in real time. In banking and financial services, AI systems process millions of customer transactions and use biometric voiceprint, eye movement, and behavioral data (for example, how fast you type, the words you use, which hand you swipe with) to detect possible fraud or authenticate user identities.

Health-care providers and businesses alike are creating their own branded digital front door that provides efficient, personalized user experiences through SMS, web, phone, video, apps, and other channels. Consumers also are opting for time-saving real-time digital interactions. Health-care and commercial organizations rightfully want to control and safeguard their patient and customer relationships and data in each method of digital engagement to build brand awareness, personalized interactions, and loyalty.

Every AI vendor and developer needs to be aware not only of the inherently sensitive nature of user data but also of the need to operate with high ethical standards to build and maintain the required chain of trust.

Here are key questions to consider:

Who has access to the data? Have a clear and transparent policy that includes strict protections such as limiting access to certain types of data, and prohibiting resale or third-party sharing. The same policies should apply to cloud providers or other development partners.

Where is the data stored, and for how long? Ask where the data lives (cloud, edge, device) and how long it will be kept. The implementation of the European Union's General Data Protection Regulation, the California Consumer Privacy Act, and the prospect of additional state and federal privacy protections should make data storage and retention practices top of mind during AI development.

Is the data free of bias? AI applications must be tested with diverse data sets that reflect the intended real-world applications, eliminate unintentional bias, and ensure reliable results.

How does the data manifest within the system? Understand how data will flow through the system. Is sensitive data accessed and essentially processed by a neural net as a series of 0s and 1s, or is it stored in its original form with medical or personally identifying information? Establish and follow appropriate data retention and deletion policies for each type of sensitive data.

Who can realize commercial value from user data? Consider the potential consequences of data-sharing for purposes outside the original scope or source of the data. Account for possible mergers and acquisitions, possible follow-on products, and other factors.

Is the system secure and compliant? Design and build for privacy and security first. Consider how transparency, user consent, and system performance could be affected throughout the product or service lifecycle.

Biometric applications help prevent fraud and simplify authentication. HSBC's VoiceID voice biometrics system has successfully prevented the theft of nearly £400 million (about $493 million) by phone scammers in the UK. It compares a person's voiceprint with thousands of individual speech characteristics in an established voice record to confirm a user's identity. Other companies use voice biometrics to validate the identities of remote call center employees before they can access proprietary systems and data. The need for such measures is growing as consumers conduct more digital and phone-based interactions.
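
The general shape of such a check can be sketched in a few lines. This is a hedged illustration of voiceprint matching in general, not a description of VoiceID's internals (which the article does not cover); the feature vectors and threshold are invented for the example.

```python
# Illustrative voiceprint comparison: represent a voice as a feature vector
# and authenticate when it is close enough to the enrolled voiceprint.
# Real systems use thousands of speech characteristics; four stand in here.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

enrolled = np.array([0.90, 0.10, 0.40, 0.70])   # stored voiceprint features
attempt  = np.array([0.88, 0.12, 0.42, 0.69])   # features from the live call

ACCEPT_THRESHOLD = 0.98  # tuned to balance fraud risk against false rejections
similarity = cosine_similarity(enrolled, attempt)
print("authenticated" if similarity >= ACCEPT_THRESHOLD else "rejected")
```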

Intelligent applications deliver secure, personalized, digital-first customer service. A global telecommunications company is using conversational AI to create consistent, secure, and personalized customer experiences across its large and diverse brand portfolio. With customers increasingly engaging across digital channels, the company looked to technology partners to expand its own in-house expertise while ensuring it would retain control of its data in deploying a virtual assistant for customer service.

A top-three retailer uses voice-powered virtual assistant technology to let shoppers upload photos of items they've seen offline, then presents items for them to consider buying based on those images.

Ambient AI-powered clinical applications improve health-care experiences while alleviating physician burnout. EmergeOrtho in North Carolina is using the Nuance Dragon Ambient eXperience (DAX) application to transform how its orthopedic practices across the state engage with patients and document care. The ambient clinical intelligence application accurately captures each doctor-patient interaction in the exam room or on a telehealth call, then automatically updates the patient's health record. Patients get the doctor's full attention, while the application streamlines the burnout-causing electronic paperwork physicians must complete to get paid for delivering care.

AI-driven diagnostic imaging systems ensure that patients receive necessary follow-up care. Radiologists at multiple hospitals use AI and natural language processing to automatically identify and extract recommendations for follow-up exams for suspected cancers and other diseases seen in X-rays and other images. The same technology can help manage a surge of backlogged and follow-up imaging as covid-19 restrictions ease, allowing providers to schedule procedures, begin revenue recovery, and maintain patient care.

As digital transformation accelerates, we must solve the challenges we face today while preparing for an abundance of future opportunities. At the heart of that effort is the commitment to building trust and data stewardship into our AI development projects and organizations.

Scaling AI While Navigating the Current Uncertainty – Dice Insights

The amount of uncertainty and complexity the recent economic difficulties have introduced into the business landscape has left many businesses reeling. While adjusting to the new normal, businesses are under pressure to find new efficiencies and discover previously untapped sources of economic opportunity, making A.I. and machine learning models more important than ever for critical and often time-sensitive business decisions.

The time for A.I. experimentation is over. We have arrived at the point where A.I. has to produce results and drive real revenue, while safeguarding the business from the potential risks that can jeopardize the bottom line. This expectation only becomes more challenging at a time when data is changing by the hour and previous historical patterns are not reliable. Furthermore, the complexities compound as businesses decide to rely more on A.I. in these trying times as a way to stay ahead of the competition.

Newly emerging best practices, commonly referred to as MLOps (ML Operations), underpinned by a new layer of technologies with the same name, are the missing piece of the puzzle for many organizations looking to fast-track and scale their A.I. capabilities without putting their businesses at risk during this time of economic uncertainty. With MLOps technology and practices in place, businesses can bridge the inherent gap between data and operations teams, and get a scalable and governed means to deploy and manage A.I. applications in real-world production environments.

MLOps can be broken down into four key areas of the process required to derive value from machine learning, each of which must be well resourced and well understood to work in your business:

With MLOps, the goal is to make model deployment easy, regardless of which platform or language the models were created in, or where they eventually need to be deployed. MLOps essentially serves as an automation abstraction layer: data teams point their models at it, MLOps or Ops teams manage those models within it, and it provides role-based visibility and actionability based on the needs of your organization.

Removing production-environment ownership from the data teams, while still giving them the required visibility, takes a lot of work off their plates and frees them up to do their real jobs: solving complex business problems using data.

To ensure visibility and remove the unnecessary risk of models going haywire, MLOps solutions need monitoring designed from the ground up for ML models. Such monitoring covers data drift, concept drift, feature importance, and model accuracy, as well as overall service health, coupled with proactive alerts sent to various stakeholders through channels such as email, Slack, and PagerDuty (based on severity and role). With MLOps monitoring in place, teams can deploy and manage thousands of models, and businesses will be ready to scale production A.I.
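
As a concrete illustration of one such check, the sketch below runs a two-sample Kolmogorov-Smirnov test to flag data drift between a feature's training distribution and recent production traffic. It is a minimal example of the kind of test an MLOps monitor might automate, not any particular product's API; the threshold and data are invented.

```python
# Minimal data-drift check: compare a feature's training distribution with
# live production values and flag drift when they diverge significantly.
import numpy as np
from scipy import stats

def drift_detected(train_values, live_values, p_threshold=0.01):
    """Two-sample KS test; True means the live data has likely drifted."""
    _, p_value = stats.ks_2samp(train_values, live_values)
    return p_value < p_threshold

rng = np.random.default_rng(42)
train = rng.normal(loc=0.0, scale=1.0, size=5000)  # training-time feature values
live  = rng.normal(loc=0.6, scale=1.0, size=1000)  # shifted production values

if drift_detected(train, live):
    print("Data drift detected: send an alert (email, Slack, PagerDuty).")
```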

MLOps recognizes that models need to be updated frequently and seamlessly. Model lifecycle management supports the testing and warm-up of replacement models, A/B testing of new models against older versions, seamless rollout of updates, failover procedures, and full version control for simple rollback to prior model versions, all wrapped in configurable approval workflows.
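
One small building block of such lifecycle management is deterministic traffic splitting for A/B tests. The sketch below shows one common way to do it, hash-based bucket assignment; the names "champion" and "challenger" are illustrative conventions, not a specific platform's API.

```python
# Deterministic A/B routing: hash the user ID into a bucket so each user
# consistently sees either the current (champion) or new (challenger) model.
import hashlib

def route_model(user_id, challenger_share=0.10):
    """Return which model version should serve this user."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "challenger" if bucket < challenger_share * 100 else "champion"

for uid in ["user-17", "user-42", "user-99"]:
    print(uid, "->", route_model(uid))
```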

MLOps provides the integrations and capabilities you need to ensure consistent, repeatable, and reportable processes for your models in production. Key capabilities include access control for production models and systems, such as integration with LDAP and role-based access control (RBAC) systems, as well as approval flows, logging, storage of each version of each model, and traceability of results for legal and regulatory compliance.

With the right processes, tools, and training in place, businesses will be able to reap many benefits from MLOps. It'll provide insight into areas where the data might be skewed. One of the many frustrating parts of running A.I. models, especially right now, is that the data is constantly shifting. With MLOps, businesses can quickly identify and act on that information in order to retrain production models on newer data, using the data pipeline, algorithms, and code leveraged to create the original.

Users can also scale production while minimizing risk. Scaling A.I. across the enterprise is easier said than done: numerous roadblocks can stand in the way, such as lack of communication between the IT and data science teams, or lack of visibility into A.I. outcomes. With MLOps, you can support multiple types of machine learning models created by different tools, as well as the software dependencies those models need.

Sivan Metzger is Managing Director, MLOps and Governance at DataRobot.

Samsung wants to spend $1bn on AI – TrustedReviews

Samsung is ready to make an even bigger splash in the artificial intelligence space in 2017, beyond introducing its upcoming "Bixby" AI assistant.

The South Korean company has earmarked around $1 billion for AI acquisitions, according to a Samsung US employee quoted here.

AI is an area on which Samsung is placing a great deal of emphasis. It was only last year that the company acquired AI specialist Viv Labs, and its soon-to-be-released virtual assistant should be based on technology from that very company.

Bixby, which is likely to launch on the Galaxy S8, will support eight languages at launch, putting it ahead of the Google Assistant, which currently supports five.

One tipped feature that has a lot of potential to cause a stir is the ability for S8 users to scan objects using the onboard camera and receive useful information as feedback.

Smartphones won't be the only devices to benefit from Bixby, with the company explaining last November that the plan is to bring voice assistant services to connected home appliances and wearables.

All the rhetoric from Samsung points to a concerted push into everything AI, and it would hardly be a surprise to see an Amazon Echo/Google Home rival in the near future.

‘Swarm AI’ predicts winners for the 2017 Academy Awards – TechRepublic

Wondering who will win the 2017 Oscars? Instead of turning to industry experts, film critics, or polls, you can try something else this year: Artificial intelligence.

A startup called Unanimous A.I. has been making predictions (like who will win the Super Bowl, March Madness, US presidential debates, or the Kentucky Derby) for the last two years. It uses a software platform called UNU to assemble people at their computers, who make a real-time prediction together.

UNU's algorithm is built to harness the concept of "swarm" intelligence: the power of a group to make an intelligent, collective decision. It's how flocks of birds or swarms of bees decide where to travel, for instance, a decision that no single entity could make on its own. The decisions are made quickly, in under a minute each.

When UNU first predicted the Oscars in 2015, it took a group of non-experts to guess the Academy Award winners, and the results were better than those from FiveThirtyEight, The New York Times, and a slew of other experts. When it predicted the 2016 Oscars last year, the platform achieved 76% accuracy, outperforming Rolling Stone and the LA Times.

This week, it met the challenge again, assembling a group of 50 movie fans to make real-time predictions.

The method produces answers that are better than any individual selection; it's not an average. Each user on the platform has a virtual "puck" that they can drag to the answer they choose, like a digital Ouija board. Because users can see the other picks, they have the opportunity to change their minds in the middle of the question, and each member of the group influences the others this way. If the group decision is heading toward one of two selections that a user did not originally pick, there's an opportunity to advocate for a different choice.

The reason polls, surveys, prediction markets, and expert opinions are different from the swarm? In all of the previous methods, decisions are made individually, sequentially. In a swarm, the decision is made simultaneously.
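
To make the dynamic concrete, here is a toy simulation of swarm-style convergence. It is an illustration only (the article does not detail UNU's actual algorithm, and every parameter here is invented): each participant repeatedly pulls toward either their own preference or the option currently leading, so support can snowball in real time rather than being tallied once.

```python
# Toy swarm-convergence simulation (illustrative only, not UNU's algorithm).
# Each round, every participant pulls toward their own preference or, with
# some probability, toward the option the group is currently converging on.
import random

def swarm_decide(preferences, options, rounds=50, sway=0.3, seed=7):
    random.seed(seed)
    pull = {opt: 0.0 for opt in options}
    for _ in range(rounds):
        leader = max(pull, key=pull.get)  # option the puck is drifting toward
        for pref in preferences:
            choice = leader if random.random() < sway else pref
            pull[choice] += 1.0
    return max(pull, key=pull.get)

prefs = ["La La Land"] * 30 + ["Moonlight"] * 20  # 50 movie fans
print(swarm_decide(prefs, ["La La Land", "Moonlight"]))
```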

Unanimous A.I. CEO Louis Rosenberg previously told TechRepublic that most people in the swarms have not seen all of the movies. Still, the swarm is successful because participants "fill in each other's gaps in knowledge."

Here are Unanimous A.I.'s predictions for the winners of the major awards in the 2017 Academy Awards (click the hyperlinks to see the swarms in action):

Best Picture: La La Land
Best Actress in a Leading Role: Emma Stone (La La Land)
Best Actor in a Leading Role: Denzel Washington (Fences)
Best Director: Damien Chazelle (La La Land)
Best Actress in a Supporting Role: Viola Davis (Fences)
Best Actor in a Supporting Role: Mahershala Ali (Moonlight)
Best Foreign Language Film: The Salesman

Most of the predictions are in line with industry experts and polls, which show La La Land to be the favorite. But there are three categories here to watch, in which the swarm was not confident in its predictions; it was conflicted between two options. These categories are Best Actor, Best Original Screenplay, and Best Foreign Film.

For instance, many experts predict that Casey Affleck will win for Best Actor, but the swarm chose Denzel Washington. "The experts are weighing previous results heavily, most notably the Golden Globes, which Casey Affleck won last month," Rosenberg told TechRepublic about the new predictions. "But the Golden Globes is composed of the Hollywood Foreign Press, a very narrow demographic compared to the Academy." Rosenberg said he thinks the Swarm's pick shows that it's more in line with the Academy.

Beyond predicting sports games and entertainment, the swarm method has bigger implications. Rosenberg has seen a lot of interest from marketing companies who want to learn how customers would respond to a certain advertisement or product. A new tool offered by Unanimous A.I. called Swarm Insight could help businesses assess how effective their messages are, how they should think about pricing, and when it's worth taking a risk.

AI Can Help Life Insurers Hear What the Customers Are Saying – ThinkAdvisor

Over the years, banks, retailers, and telecommunications companies have embraced new technologies such as artificial intelligence (AI) and machine learning (ML) to enhance their customer communication and, ultimately, customer experience. While there may have been a time to sit back and see what works and what doesn't, these are now proven technologies. Organizations in the life insurance space need to adopt this thinking or risk being left behind.

Not only is today's customer more tech-savvy than ever, expecting the same high levels of convenience from insurers as from other companies, but the last few months have also put a spotlight on the need for a better digital communication customer experience.

The renewed battle over federal and state annuity sales standards further emphasizes the value of addressing customers' concerns.

By meeting and, hopefully, exceeding these expectations, insurers can ensure customer loyalty in an increasingly competitive market.

That in turn comes with several major business benefits, including increased revenue.

Of course, this kind of transformation won't happen overnight.

Historically, life insurers haven't done well when it comes to customer communication. Within the last few years, more than 90% of insurers worldwide did not communicate with their customers even once a year, and the interactions that did occur were often limited to claims and related advice.

In a world where highly personalized products and services, supported by relevant, easy-to-understand, and contextual information, have become the norm, that's no longer viable.

In order to overcome that deficit, it won't be enough for life insurers to simply retrain their staff and hope for the best. Instead, they need to embrace the kind of technologies that will have an immediate positive impact on customer experience.

Enter AI and ML. While some life insurers have adopted these technologies to great effect in the back office (to speed up claims processing and fraud detection, for instance), they haven't been used to anywhere near the same effect in customer communications, which is a missed opportunity.

AI improves customer experience by analyzing the data on hand to decide the next message best suited to each customer, based on actions taken with the insurer, demographics, and life-stage changes that alter a customer's needs.

By delivering the right message to the right person, at the right time, an organization can dramatically improve the customer experience.

That relevance and timeliness, meanwhile, is most likely to result in the response the business wants: a policy renewal, an upsell or a new sale.

As a subset of AI focused on automating tasks, ML can help decide which content is suited to a customer based on data on hand, such as past behavior, demographics, and location, making it easier to deliver truly hyper-personalized communication.
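
A next-best-action decision of this kind can be as simple as a supervised classifier over profile features. The sketch below is a deliberately tiny, hypothetical example; the feature names, labels, and training rows are invented for illustration, not an insurer's real schema.

```python
# Toy next-best-action model: pick the message most relevant to a customer
# from simple profile features. Real systems use far richer data.
from sklearn.tree import DecisionTreeClassifier

# Features per customer: [age, years_as_customer, recent_life_event (0/1)]
X = [[34, 2, 1], [58, 20, 0], [41, 10, 1], [29, 1, 0]]
y = ["new_parent_coverage", "retirement_review", "coverage_increase", "welcome_series"]

model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(model.predict([[36, 3, 1]])[0])  # suggested next-best message
```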

There are several ways organizations incorporate AI in their customer communications beyond hyper-personalization and next-best-action messaging. These include customizing customer journey touch-points (ensuring that tone is appropriate when the customer has suffered a loss, for instance) and allowing customers to interact with communications using voice-enabled tech (such as smart devices). Integrating chatbots into customer communications additionally allows for one-way communication to become a conversation.

By making these changes, life insurers can give their customers a markedly improved experience.

In doing so, they encourage loyalty and additional spend, which can ultimately benefit their bottom line.

Simply put, there are a multitude of reasons why insurers should embrace AI in their customer communications.

Mia Papanicolaou is the chief operating officer at Striata, a digital communications strategy company based in Johannesburg.

MIT creates an AI to predict urban decay – TNW

Facebook volunteers and work-at-home moms might be making city planning decisions, thanks to AI research conducted by MIT scientists. Researchers from MIT's Media Lab have been feeding computers a steady stream of data for the last four years to build an AI capable of determining why some cities grow and others decay.

The data the researchers are using has been compiled from people, regular Joes and Janes, who choose between two randomly selected pictures to determine which one seems less dangerous or more appealing. Currently it's all common-sense driven: most of us would agree that a typically beautiful environment will foster growth better than a landscape of derelict buildings.
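
One standard way to turn such pairwise picks into per-image scores is an Elo-style rating update. The sketch below is offered only as an assumption about how votes could be aggregated; the article does not describe MIT's exact scoring method.

```python
# Elo-style aggregation of pairwise image votes: each "this one looks safer"
# click nudges the winner's score up and the loser's score down.
def elo_update(winner, loser, scores, k=32):
    expected_win = 1.0 / (1.0 + 10 ** ((scores[loser] - scores[winner]) / 400))
    scores[winner] += k * (1 - expected_win)
    scores[loser]  -= k * (1 - expected_win)

scores = {"image_a": 1000.0, "image_b": 1000.0}
elo_update("image_a", "image_b", scores)  # a volunteer picked image_a as safer
print(scores)  # image_a now rates slightly higher than image_b
```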

With enough data, the AI began returning results that could be compared with human responses to the same image pairings. The researchers validated their data by comparing responses from Amazon Mechanical Turk workers. According to MIT, the machines got it right a little more than 70% of the time, which was better than expected. In the future, MIT researchers plan on increasing the number of people contributing data, going so far as to say they may need to advertise on Facebook to draw more participants.

At first glance it doesn't sound very impressive: they're just feeding data into an algorithm by hand based on thousands of different human interpretations. People decide which Google Map image in a pair looks like a nicer neighborhood, scientists determine if the machine agrees, and vice versa.

The ultimate goal is for us to glean insight into our problems by learning what the machines can teach us about ourselves. Professor César Hidalgo, the director of the Collective Learning group at the MIT Media Lab, told Co.Design:

"I do hope that this research starts helping us understand how the urban environment affects people and how it's affected by people, so that when we do policy in the context of urban planning, we have a more scientific understanding of the effect different designs have on the behaviors of the populations that use them."

Until the machines learn to define beauty for themselves (which is a scary thought), we'll need to explain to them why one city's streets are lined with despair while another shows the promise of growth and renewal. Once AI is up to speed, however, we'll be able to start saving dying communities with machine learning. Computers can draw exponentially more pattern-based conclusions than humans.

The better we can understand an issue, the more connections we can determine and the better our solutions will be. Thanks to MIT we may be on the verge of solving decades-old urban rejuvenation problems.

AI Is Reshaping What We Know About Cities on Co.Design

How AI Can Help Manage Mental Health In Times Of Crisis – Forbes

Much has been written in the past few weeks about the COVID-19 crisis and the ripple effects that will impact human society. Beyond the immediate effect of the virus on health and mortality, it is clear that we are also facing a global, massive financial crisis that is likely to affect our lives for years to come. These changes, along with the expected prolonged social isolation, are bound to have a devastating effect on our mental health, collectively and individually, and, in turn, cause a dramatic deterioration in overall health and an increase in the prevalence of chronic illness.

From research conducted by the World Health Organization, we know that most people affected by emergency situations experience immediate psychological distress, hopelessness and sleep issues -- and that 22% of people are expected to develop depression, anxiety, post-traumatic stress disorder, bipolar disorder or schizophrenia. This escalation comes on top of a baseline of 19.1% of U.S. adults experiencing mental illness (47.6 million people in 2018, according to the Substance Abuse and Mental Health Services Administration). We further know that rising depression rates are associated with a variety of chronic health conditions, including obesity, coronary heart disease and diabetes, so the domino effect does not end with mental health.

This prediction may sound like an eschatological prophecy of dystopia, but there are good reasons to be optimistic too. At our disposal, we now have myriad clinical-grade digital tools and applications designed to treat and prevent anxiety and depression. All it takes is a Wi-Fi connection and a mobile phone to provide digital treatment that can reach everyone. Even more encouraging are the recent advances in the use of artificial intelligence in mental health -- more specifically augmented intelligence, the ability to embed the collective knowledge and care of humans into digital applications.

Such an approach attempts to make the best of both worlds: the human connection along with the rich, often gamified digital experience driven by data science. For example, research scientists at the University of Utah founded Lyssn, whose product uses deep learning algorithms to analyze and share recordings of psychotherapy conversations for training and quality assurance purposes, normally a manual and expensive process conducted by a panel of psychotherapists. Lyssn's product is trained using a broad range of therapists, so the advantages are not only cost and time but also consistency and reduced bias toward any particular attitude or approach.

Other companies offer a range of therapy chatbots: X2AI's bot, Sara, uses natural language processing to engage users in conversations on Facebook Messenger, helping them manage stress and anxiety. Another example is Lark Health, a chatbot directed at managing diabetes and hypertension that gathers and analyzes sleep, weight, and nutrition information from users in daily conversation.

Such applications distill collective human knowledge into a digital experience, providing users with 24/7 access to a therapist representing a cohort of hundreds of clinicians, who are trained in a variety of different disciplines.

The challenge ahead is to go beyond the mechanics of therapeutic conversation and to model the human alliance or bond that human therapists establish with clients. For this purpose, joint teams of data scientists, clinicians and writers (like those working on team Anna at my company) need to work on creating conversational experiences that have the capacity to express curiosity about users, develop a deeper understanding of their lives, and be emotionally sensitive and attuned.

Going beyond the mechanics of interaction and attempting to build a superhuman digital therapist requires:

Establishing A Single Transdisciplinary Team: Data scientists, clinicians and content editors speak different languages. To avoid creating a modern Tower of Babel, it is critical to help them establish the same language by working closely in a single nimble and cohesive team.

Starting With A Clear Model Of A Therapist: Empathy, care, and listening are the result of patterns of interaction that need to be explicitly modeled. Prior to any work on specific interventions, it is critical to specify these patterns in detail. For example, what are the personality traits of the digital therapist? What triggers in the conversation does it respond to? Which goals is it trying to accomplish? What language does it avoid using? What are the ways in which it shows interest? Curiosity? Support? How does it manage the trade-off between persisting with its own agenda and being flexible enough to let the user take the lead? (A minimal sketch of one way to encode such a model appears after this list.)

Defining A Few Simple Criteria For Success: Beyond the standard quantitative methods for assessing efficacy, define clearly how you assess the degree to which the AI was successful in establishing an alliance with the user. What does the ideal user feedback sound like? What would you want users to say when they describe the digital therapist?

Talking To Users On A Daily Basis: The experience of digital therapy is created by assembling multiple, often complex, algorithms and mechanisms. To make sure you are investing in the right places, always talk to users and ask them to recall specific parts of the interaction that made them feel a sense of alliance and bond. You may discover that the simplest patterns of interaction are the most important ones.
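
As promised above, here is a minimal sketch of one way the "clear model of a therapist" could be encoded before any interventions are built. Every field name and value is an invented placeholder meant to make the questions in that step concrete, not a description of any real product.

```python
# Hypothetical specification of a digital therapist's interaction patterns:
# traits, conversational triggers, goals, language to avoid, and how firmly
# it holds its own agenda versus following the user's lead.
from dataclasses import dataclass, field

@dataclass
class TherapistModel:
    personality_traits: list = field(default_factory=lambda: ["warm", "curious", "patient"])
    triggers: dict = field(default_factory=lambda: {
        "sleep_problems": "explore_sleep_hygiene",
        "hopelessness": "escalate_to_human_clinician",
    })
    goals: list = field(default_factory=lambda: ["build_alliance", "reduce_anxiety"])
    avoid_language: list = field(default_factory=lambda: ["diagnostic labels", "blame"])
    agenda_flexibility: float = 0.7  # 0 = rigid agenda, 1 = fully user-led

anna = TherapistModel()
print(anna.triggers.get("hopelessness"))  # -> "escalate_to_human_clinician"
```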

These tools will not replace the couch, tissue box and innate professionalism of the therapist's office, but they may very well keep us healthier in times when we can't make it to that office. In times of crisis, like our current situation and those that will inevitably crop up in the future, it's important to know what our options are and work toward a healthier future in whatever ways we can.

human genetics | Description, Chromosomes, & Inheritance …

human genetics, study of the inheritance of characteristics by children from parents. Inheritance in humans does not differ in any fundamental way from that in other organisms.

The study of human heredity occupies a central position in genetics. Much of this interest stems from a basic desire to know who humans are and why they are as they are. At a more practical level, an understanding of human heredity is of critical importance in the prediction, diagnosis, and treatment of diseases that have a genetic component. The quest to determine the genetic basis of human health has given rise to the field of medical genetics. In general, medicine has given focus and purpose to human genetics, so the terms medical genetics and human genetics are often considered synonymous.

A new era in cytogenetics, the field of investigation concerned with studies of the chromosomes, began in 1956 with the discovery by Jo Hin Tjio and Albert Levan that human somatic cells contain 23 pairs of chromosomes. Since that time the field has advanced with amazing rapidity and has demonstrated that human chromosome aberrations rank as major causes of fetal death and of tragic human diseases, many of which are accompanied by intellectual disability. Since the chromosomes can be delineated only during mitosis, it is necessary to examine material in which there are many dividing cells. This can usually be accomplished by culturing cells from the blood or skin, since only the bone marrow cells (not readily sampled except during serious bone marrow disease such as leukemia) have sufficient mitoses in the absence of artificial culture. After growth, the cells are fixed on slides and then stained with a variety of DNA-specific stains that permit the delineation and identification of the chromosomes. The Denver system of chromosome classification, established in 1959, identified the chromosomes by their length and the position of the centromeres. Since then the method has been improved by the use of special staining techniques that impart unique light and dark bands to each chromosome. These bands permit the identification of chromosomal regions that are duplicated, missing, or transposed to other chromosomes.

Micrographs showing the karyotypes (i.e., the physical appearance of the chromosome) of a male and a female have been produced. In a typical micrograph the 46 human chromosomes (the diploid number) are arranged in homologous pairs, each consisting of one maternally derived and one paternally derived member. The chromosomes are all numbered except for the X and the Y chromosomes, which are the sex chromosomes. In humans, as in all mammals, the normal female has two X chromosomes and the normal male has one X chromosome and one Y chromosome. The female is thus the homogametic sex, as all her gametes normally have one X chromosome. The male is heterogametic, as he produces two types of gametes: one type containing an X chromosome and the other containing a Y chromosome. There is good evidence that the Y chromosome in humans, unlike that in Drosophila, is necessary (but not sufficient) for maleness.

A human individual arises through the union of two cells, an egg from the mother and a sperm from the father. Human egg cells are barely visible to the naked eye. They are shed, usually one at a time, from the ovary into the oviducts (fallopian tubes), through which they pass into the uterus. Fertilization, the penetration of an egg by a sperm, occurs in the oviducts. This is the main event of sexual reproduction and determines the genetic constitution of the new individual.

Human sex determination is a genetic process that depends basically on the presence of the Y chromosome in the fertilized egg. This chromosome stimulates a change in the undifferentiated gonad into that of the male (a testicle). The gonadal action of the Y chromosome is mediated by a gene located near the centromere; this gene codes for the production of a cell surface molecule called the H-Y antigen. Further development of the anatomic structures, both internal and external, that are associated with maleness is controlled by hormones produced by the testicle. The sex of an individual can be thought of in three different contexts: chromosomal sex, gonadal sex, and anatomic sex. Discrepancies between these, especially the latter two, result in the development of individuals with ambiguous sex, often called hermaphrodites. Homosexuality is unrelated to the above sex-determining factors. It is of interest that in the absence of a male gonad (testicle) the internal and external sex anatomy is always female, even in the absence of a female ovary. A female without ovaries will, of course, be infertile and will not experience any of the female developmental changes normally associated with puberty. Such a female will often have Turner syndrome.

If X-containing and Y-containing sperm are produced in equal numbers, then according to simple chance one would expect the sex ratio at conception (fertilization) to be half boys and half girls, or 1 : 1. Direct observation of sex ratios among newly fertilized human eggs is not yet feasible, and sex-ratio data are usually collected at the time of birth. In almost all human populations of newborns, there is a slight excess of males; about 106 boys are born for every 100 girls. Throughout life, however, there is a slightly greater mortality of males; this slowly alters the sex ratio until, beyond the age of about 50 years, there is an excess of females. Studies indicate that male embryos suffer a relatively greater degree of prenatal mortality, so the sex ratio at conception might be expected to favour males even more than the 106 : 100 ratio observed at birth would suggest. Firm explanations for the apparent excess of male conceptions have not been established; it is possible that Y-containing sperm survive better within the female reproductive tract, or they may be a little more successful in reaching the egg in order to fertilize it. In any case, the sex differences are small: at a 106 : 100 ratio, the probability of a boy at any single birth is 106/206, or about 0.515, still close to one out of two.

During gestation, the period of nine months between fertilization and the birth of the infant, a remarkable series of developmental changes occurs. Through the process of mitosis, the total number of cells changes from 1 (the fertilized egg) to about 2 × 10¹¹. In addition, these cells differentiate into hundreds of different types with specific functions (liver cells, nerve cells, muscle cells, etc.). A multitude of regulatory processes, both genetically and environmentally controlled, accomplish this differentiation. Elucidation of the exquisite timing of these processes remains one of the great challenges of human biology.

People in the News: Baylor’s Thomas Caskey Dies; New Appointments at UK Biobank, CS Genetics, More – GenomeWeb

Baylor College of Medicine: C. Thomas Caskey

C. Thomas Caskey, professor of molecular and human genetics at Baylor College of Medicine, has died at the age of 83. Caskey began his career with Baylor College of Medicine in 1971, when he also founded the Institute for Molecular Genetics, currently known as the Department of Molecular and Human Genetics. In 1994 Caskey moved on to Merck Research Laboratories, where he was senior vice president of human genetics and vaccines discovery. He later returned to Houston to become CEO of the Brown Foundation Institute of Molecular Medicine at the University of Texas Health Science Center, and in 2011 came back to Baylor to work in his current role. In addition, in 2019 he became chief medical officer at Human Longevity.

His research identified the genetic basis of 25 major inherited diseases and clarified the understanding of "anticipation" in the triplet repeat diseases fragile X syndrome and myotonic muscular dystrophy, Baylor said. His personal identification patent is the basis of worldwide application for forensic science, and he was a consultant to the FBI in forensic science. His recent publications addressed the utility of genome-wide sequencing to prevent adult-onset diseases, and his research focused on the application of whole-genome sequencing and metabolomics of individuals to understand disease risk and its prevention, the school noted.

Caskey was a member of the National Academy of Sciences, the National Academy of Medicine (serving as chair of the Board of Health Sciences Policy), and the Royal Society of Canada. He was a past president of the American Society of Human Genetics, the Human Genome Organization, and the Texas Academy of Medicine, Engineering and Science.

UK Biobank: Mahesh Pancholi

Mahesh Pancholi has joined the UK Biobank as chief information officer. Previously, he was an enterprise account manager for genomics and life sciences research at Amazon Web Services, and prior to that, a business development manager at OCF. Before that, he was head of research computing at Queen Mary University of London, where he also received a bachelor's degree in genetics.

CS Genetics: Jeremy Preston

Genomics technology company CS Genetics has named Jeremy Preston as chief commercial officer. Preston joins the company from Illumina, most recently serving as VP of regional and segment marketing. Earlier roles at Illumina included VP of specialty sales and marketing and senior director of product marketing. Prior to Illumina, Preston was associate director of product marketing at Affymetrix. He completed his postdoc in molecular biology at Japan's Riken, and his Ph.D. in molecular biology at La Trobe University in Melbourne.

For additional recent items on executive appointments and promotions in omics and molecular diagnostics, please see the People in the News page on our website.

Blood proteins could be the key to a long and healthy life, study finds – EurekAlert

Research suggests that two blood proteins influence how long and healthy a life we live.

Developing drugs that target these proteins could be one way of slowing the ageing process, according to the largest genetic study of ageing.

As we age, our bodies begin to decline after we reach adulthood, which results in age-related diseases and death. This latest research investigates which proteins could influence the ageing process.

Many complex and related factors determine the rate at which we age and die, and these include genetics, lifestyle, environment and chance. The study sheds light on the part proteins play in this process.

Some people naturally have higher or lower levels of certain proteins because of the DNA they inherit from their parents. These protein levels can, in turn, affect a person's health.

University of Edinburgh researchers combined the results of six large genetic studies into human ageing, each containing genetic information on hundreds of thousands of people.

Among 857 proteins studied, researchers identified two that had significant negative effects across various ageing measures.

People who inherited DNA that causes raised levels of these proteins were frailer, had poorer self-rated health, and were less likely to live an exceptionally long life than those who did not.

The first protein, called apolipoprotein(a) (LPA), is made in the liver and is thought to play a role in clotting. High levels of LPA can increase the risk of atherosclerosis, a condition in which arteries become clogged with fatty substances; heart disease and stroke are possible outcomes.

The second protein, vascular cell adhesion molecule 1 (VCAM1), is primarily found on the surfaces of endothelial cells, the single-cell layer that lines blood vessels. The protein helps control the vessels' expansion and retraction and functions in blood clotting and the immune response.

Levels of VCAM1 increase when the body sends signals indicating it has detected an infection; VCAM1 then allows immune cells to cross the endothelial layer.

The researchers say that drugs used to treat diseases by reducing levels of LPA and VCAM1 could have the added benefit of improving quality and length of life, mirroring what is seen in people who naturally have low levels of these proteins.

One such example is a clinical trial that is testing a drug to lower LPA as a way of reducing the risk of heart disease.

There are currently no clinical trials involving VCAM1, but studies in mice have shown how antibodies lowering this protein's level improved cognition during old age.

The findings have been published in the journal Nature Aging.

Dr Paul Timmers, lead researcher at the MRC Human Genetics Unit at the University of Edinburgh, said: "The identification of these two key proteins could help extend the healthy years of life. Drugs that lower these protein levels in our blood could allow the average person to live as healthy and as long as individuals who have won the genetic lottery and are born with genetically low LPA and VCAM1 levels."

Professor Jim Wilson, Chair of Human Genetics at the University of Edinburgh's Usher Institute, said: "This study showcases the power of modern genetics to identify two potential targets for future drugs to extend lifespan."

Research details: an observational study using human tissue samples. Paper: "Mendelian randomization of genetically independent aging phenotypes identifies LPA and VCAM1 as biological targets for human aging," published 20 January 2022.

Evolution: Revealing the influence of viruses – Medical News Today

In classifying all living organisms, scientists use taxonomy, a naming system that groups similar organisms. One of the largest groupings is the eukaryotes, which includes humans, all other animals, plants, fungi, and many single-celled organisms.

Eukaryotic cells all have one important commonality: they house their DNA in a nucleus. The nucleus of the cell is centrally located and membrane-bound.

Prokaryotes include bacteria and archaea, single-celled organisms whose DNA is loosely packed and surrounded by a cell membrane.

Viruses are even simpler. They comprise only DNA or RNA surrounded by a single protective protein coat, called a capsid.

What do these distinct organisms have to do with each other and evolution? Quite a bit, according to Oxford University evolutionary biologist and the new study's first author, Dr. Nicholas A. T. Irwin.

Viruses and eukaryotes depend on one another. Viruses use host-derived genes for replication and cellular control, often encoding cellular-derived informational and operational genes that allow them to adapt and survive.

Eukaryotes can incorporate viral DNA into their genomes. This new DNA, previously thought to be inactive, has now been found to provide new functionality to their eukaryote hosts.

Colleagues at the Department of Botany at the University of British Columbia in Vancouver, Canada, and the Department of Zoology at the University of Oxford, United Kingdom, collaborated with Dr. Irwin to reveal groundbreaking findings about gene movement between viruses and eukaryotes, known as horizontal gene transfer.

In the journal Nature Microbiology, Dr. Irwin and his colleagues explained how they used complex computational analyses to search for evidence of identical genes present in viruses and eukaryotes. After studying 201 eukaryotes and 108,842 viruses, the team identified distinct trends in viral-eukaryote gene transfer.

Using well-established computer analyses of the evolutionary development and diversification of species, called phylogenetics, the researchers could delineate how virus and eukaryote bidirectional gene transfers have driven species diversification.

Dr. Irwin explained to Medical News Today that the researchers used computational analyses to search for evidence of transferred genes in the genomes of around 200 eukaryotes and thousands of viruses, which covered the diversity of eukaryotic and viral species whose genomes had been sampled.

"We were not only interested in identifying viral genes within eukaryotic genomes but also in detecting the presence of eukaryotic genes in viral genomes."

Medical News Today asked Dr. Irwin how they were able to arrive at such sweeping conclusions about genetic relatedness between eukaryotes and viruses. Dr. Irwin recounted:

"One of the important factors that allowed us to conduct this analysis was the enormous amount of genomic data that has now become available from eukaryotes, viruses, and prokaryotes (including bacteria and archaea). These new resources have resulted from major DNA sequencing efforts trying to understand the diversity of genomes across the tree of life.

"In addition to this, recent technological advances in high-throughput DNA sequencing and metagenomics, which is the sequencing and assembly of genomes from mixed communities of organisms, such as seawater samples, have accelerated the rate at which these data have become available."

"Having a large diversity of high-quality genomic datasets was crucial, as it allowed us to infer which species were participating in these gene transfers," Dr. Irwin added.

The scientists found that both viruses and eukaryotes hijack each others DNA.

But they found that eukaryotic genes transferred to viruses approximately twice as frequently as viral genes transferred to eukaryotes.

Dr. Irwin explained there might be a few reasons why viruses were the big winners in the gene competition. He noted that genes may frequently transfer from the virus to the eukaryote, but they might not stick around because of natural selection.

But viruses may retain the genes they acquire from their hosts because those genes are beneficial to the virus. And for a gene to persist, the organism must survive and propagate, a trait at which viruses are very skilled.

The researchers then applied all their knowledge of the genetics of these many eukaryotes and viruses and compared them to well-established evolutionary trees. In this way, they could approximate the timing of gene transfer events relative to when species diverged or speciated, which refers to becoming a new type of species. For Medical News Today, Dr. Irwin illustrated:

"If we observed a viral gene in a human genome, we would predict that the gene was acquired after humans speciated from other primates. In contrast, if a viral gene was present in all animals, say from sponges to chimps, we would infer that gene to have been derived in the last common ancestor of animals.

"Of course, there are different ways to interpret these patterns, but we base our interpretations on the assumption that gaining a gene through gene transfer is more difficult and unlikely than losing a transferred gene."
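
The inference Dr. Irwin describes can be caricatured in a few lines of code. This is only a toy restatement of the logic under the stated assumption (a transferred gene is more easily lost than independently gained), not the study's phylogenetic pipeline; the clade names and data are invented.

```python
# Toy version of the timing inference: place a viral gene's acquisition at
# the last common ancestor of the most inclusive clade that carries it.
CLADES = ["all animals", "all mammals", "all primates", "humans only"]  # broadest first

def infer_acquisition(presence):
    """presence maps clade name -> True if the gene is found throughout it."""
    for clade in CLADES:
        if presence.get(clade):
            return f"acquired in the last common ancestor of {clade}"
    return "gene not detected"

print(infer_acquisition({"all animals": False, "all primates": True}))
# -> "acquired in the last common ancestor of all primates"
```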

Dr. Irwin described three separate instances in evolution where viral genes are present and exemplify virus-influenced evolution.

Medical News Today asked Dr. Irwin what intrigued him most about his results. He mused,

The most interesting result of the study was being able to identify and visualize the patterns of gene transfer across the eukaryotic tree of life.

One of my main interests is understanding how cellular diversity and complexity have evolved, and I believe that this work has provided strong evidence that host-virus interactions have played an important part in generating the diversity of life that we see today.

I also think this study has interesting implications for how we view viruses. Similar to how the discovery and characterization of the microbiome changed our view of bacteria, I think that revealing the influence that viruses have had on the evolution of life could encourage more nuanced thoughts about the importance of viruses in nature.

Dr. Irwin

Regarding where this research might lead future scientific endeavors, principal author Professor Patrick Keeling added: "A lot of progress in understanding [h]orizontal gene transfer (HGT) in eukaryotes has focused on the pattern of gene transfers on the tree of eukaryotes; now we also have some insights into the process that led to that pattern and the likelihood that viruses are a major route for transfers."

"It would be useful to take a few of the lineages where we see a lot of viral HGT and dig deeper, looking at more closely related hosts and viruses to see the process unfolding at different time scales."

And finally, Dr. Keeling noted, identifying which genes are selected for in viruses can tell you a lot about what process makes the virus more successful, and by extension how it uses its host cell.

This study of HGT between eukaryotes and viruses is the first of its kind to reveal how viruses may have helped multiple eukaryotic species diverge and evolve.

Read more from the original source:

Evolution: Revealing the influence of viruses - Medical News Today

UC San Diego Receives $14M to Drive Precision Nutrition with Gut Microbiome Data – Center for Microbiome Innovation

A student processes microbiome samples in the UC San Diego School of Medicine lab of Rob Knight, PhD. Photo credit: Erik Jepsen/UC San Diego

The National Institutes of Health (NIH) All of Us Research Program is a national effort to build a large, diverse health database of 1 million or more people that researchers can use to study health and disease.

The NIH is now awarding $170 million in grant funding to centers across the country to create a new consortium known as Nutrition for Precision Health, powered by the All of Us Research Program. The consortium will recruit a diverse pool of 10,000 All of Us Research Program participants to develop algorithms to predict individual responses to food and inform more personalized nutrition recommendations.

The Nutrition for Precision Health consortium includes $14.55 million to launch a new Microbiome and Metagenomics Center at UC San Diego. The center will analyze the microbiomes (the communities of microbes and their genetic material) found in the stool samples of nutrition study participants.

A current challenge in precision nutrition is the inability to combine the many factors that affect how individuals respond to diet into a personalized nutrition regimen. These potential factors include the microbiome, metabolism, nutritional status, genetics and the environment. The way these factors interact to affect health is still poorly understood.
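As a purely hypothetical illustration of what such prediction algorithms might look like computationally, the sketch below trains a standard regression model on synthetic microbiome, genetic, and lifestyle features; every value is a random placeholder, not consortium data, and the consortium's actual models will certainly differ.

# Hypothetical sketch: predict an individual's dietary response from
# combined microbiome, genetic, and lifestyle features. All data below are
# synthetic placeholders generated at random.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.random((n, 20)),            # 20 microbial taxa relative abundances
    rng.integers(0, 3, (n, 5)),     # 5 genetic variants (0/1/2 allele counts)
    rng.random((n, 3)),             # sleep, activity, baseline-diet scores
])
# Synthetic "response to a test meal", driven by one taxon and one variant.
y = 30 * X[:, 0] + 5 * X[:, 20] + rng.normal(0, 2, n)

model = GradientBoostingRegressor()
print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())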

The Microbiome and Metagenomics Center at UC San Diego will help address some of these gaps.

"Our new center will deploy more than a decade of research and development for the NIH's most exciting exploration yet, combining our understanding of the microbiome and human genetics with our groundbreaking technical and informatics advances to rapidly explore next-generation disease treatments based on precision nutrition," said Microbiome and Metagenomics Center co-leader Jack Gilbert, PhD, professor at UC San Diego School of Medicine and Scripps Institution of Oceanography.

The Microbiome and Metagenomics Center will be led by Gilbert and Rob Knight, PhD, along with co-investigators Andrew Bartko, PhD; Rebecca Fielding-Miller, PhD, MSPH; Kathleen Fisch, PhD; Maryam Gholami, PhD; David Gonzalez, PhD; Kristen Jepsen, PhD; Daniel McDonald, PhD; Camille Nebeker, EdD, MS; Pavel Pevzner, PhD; and Karsten Zengler, PhD, all at UC San Diego. The team will also collaborate closely with researchers at Duke University.

"The center will build on what we have learned in other large-scale activities, including the Human Microbiome Project, the Earth Microbiome Project and the American Gut Project. It leverages many of the faculty and strengths brought together in the Center for Microbiome Innovation, as well as the cross-disciplinary microbiome community we have built here at UC San Diego," said Knight, professor and director of the Center for Microbiome Innovation at UC San Diego School of Medicine and Jacobs School of Engineering.

"Bringing this expertise and technology to bear on the incredibly challenging problem of nutrition and health will enable a whole new level of precision in answering the age-old question of 'what should I eat today?' We are just starting to understand how the microbiome can answer this with a surprising level of individual detail, not just broad-strokes generalizations for the whole population."

Nutrition for Precision Health will collect new microbiome and metagenomics data, along with other potentially predictive factors, and combine it with existing data in the All of Us database to develop a more complete picture of how individuals respond to different foods or dietary routines. The data will be integrated into the All of Us Researcher Workbench and made widely available, providing greater opportunities for researchers to make discoveries that could improve health and prevent or treat diseases and conditions affected by nutrition.

"We know that nutrition, just like medicine, isn't one-size-fits-all," said Holly Nicastro, PhD, MPH, a coordinator of Nutrition for Precision Health at NIH. "Nutrition for Precision Health will take into account an individual's genetics, gut microbes and other lifestyle, biological, environmental or social factors to help each individual develop eating recommendations that improve overall health."

All of Us opened for enrollment in 2018, and UC San Diego Health co-leads the program's implementation in California, where more than 37,000 people have already signed up to participate. To learn more about the All of Us Research Program and how to join, please visit JoinAllofUs.org.

The Microbiome and Metagenomics Center at UC San Diego is supported by the NIH Common Fund's Nutrition for Precision Health, powered by the All of Us Research Program, grant 1 U24 DK131617-01. Nutrition for Precision Health, powered by the All of Us Research Program, and All of Us are service marks of the U.S. Department of Health and Human Services (HHS).

Read more:

UC San Diego Receives $14M to Drive Precision Nutrition with Gut Microbiome Data - Center for Microbiome Innovation

KU, KU Medical Center faculty named recipients of Higuchi-KU Endowment Research Achievement Awards | The University of Kansas – KU Today

LAWRENCE – Four University of Kansas faculty members on the Lawrence and Medical Center campuses are this year's recipients of the Higuchi-KU Endowment Research Achievement Awards, the state higher education system's most prestigious recognition for scholarly excellence.

The annual awards are given in four categories of scholarly and creative achievement; this year's honorees are profiled below.

The four will be recognized at a ceremony this spring along with recipients of other major KU research awards.

This is the 40th annual presentation of the Higuchi awards, established in 1981 by Takeru Higuchi, a distinguished professor at KU from 1967 to 1983, and his wife, Aya. The awards recognize exceptional long-term research accomplishments by faculty at Kansas Board of Regents universities. Each honoree receives $10,000 for their ongoing research.

The awards are named for former leaders of KU Endowment who helped recruit Higuchi to KU.

More about this year's winners:

Olin Petefish Award in Basic Sciences

John Kelly is a professor of ecology & evolutionary biology who has made contributions to the fields of evolutionary biology, genetics and botany. He is considered an international leader in evolutionary genetics research, exploring how organisms adapt to their environment. The impact of his research extends to agricultural selective breeding, understanding organismal adaptation to climate change and human genetics. He also has been at the forefront of developing computational genome sequencing methods to address biological questions.

Kelly and his collaborators have received more than $6 million in external funding from the National Institutes of Health, the National Science Foundation and other institutions. He has published more than 100 peer-reviewed articles and served as secretary for the Society for the Study of Evolution. He earned his doctorate in ecology and evolution from the University of Chicago.

Balfour Jeffrey Award in Humanities & Social Sciences

Beth Bailey, Foundation Distinguished Professor and member of the Department of History, is an internationally renowned historian of the United States military, war and society, and the history of gender and sexuality. She is the founding director of KU's Center for Military, War, and Society Studies, which brings together scholars, military leaders, government officials and students to discuss issues relevant to the military, war and more.

In the past year, she has received an Andrew Carnegie Fellowship and was named one of 24 National Endowment for the Humanities Public Scholars for her research on race and the U.S. Army. She was elected to the Society of American Historians in 2017, and the secretary of the Army appointed her to the Department of the Army's Historical Advisory Committee.

Bailey's vast publication record includes journal articles, book chapters and books on a variety of subjects, including the history of gender and sexuality, U.S. military history and social history. She holds a doctorate and master's degree in American history from the University of Chicago.

Irvin Youngberg Award in Applied Sciences

Steven Soper is a Foundation Distinguished Professor of chemistry, mechanical engineering and bioengineering, as well as an adjunct professor of cancer biology and member of The University of Kansas Cancer Center. A world leader in bioanalytical chemistry, he researches biological macromolecules including DNA, RNA and proteins to develop new tools for medical diagnostics and discovery.

Soper directs the NIH-funded and multi-institutional Center of BioModular Multi-Scale Systems for Precision Medicine based at KU. The center coalesces scientists, clinicians and biomedical engineers to design, manufacture and deliver biomedical tools for detecting and managing disease. For example, the center developed an at-home rapid COVID-19 test that is now going to market.

Soper has founded two companies, BioFluidica and Sunflower Genomics, to translate his research into commercial products. He received a doctorate in bioanalytical chemistry from KU.

Dolph Simons Award in Biomedical Sciences

Dr. Russell Swerdlow is a professor in the Department of Neurology at KU Medical Center, with secondary appointments in molecular & integrative physiology and biochemistry & molecular biology. Swerdlow directs KU's Alzheimer's Disease Research Center, and his contributions have helped make KU a world leader in Alzheimer's care and research.

His work has defined a role for mitochondrial dysfunction in late-onset neurodegenerative diseases, including Alzheimer's. He proposed a hypothesis for the cause of the disease, the sporadic Alzheimer's disease mitochondrial cascade hypothesis, which has steadily gained traction for over a decade. His research also has identified potential therapeutics for the disease.

Swerdlow received his doctor of medicine from New York University.

The award funds are managed by KU Endowment, the independent, nonprofit organization serving as the official fundraising and fund-management organization for KU. Founded in 1891, KU Endowment was the first foundation of its kind at a U.S. public university.

Read more:

KU, KU Medical Center faculty named recipients of Higuchi-KU Endowment Research Achievement Awards | The University of Kansas - KU Today

SwabSeq: Scalable, Sensitive and Fast COVID-19 Testing – UCLA Newsroom

After much of Los Angeles went dark in the spring of 2020 amid the growing SARS-CoV-2 threat, two UCLA scientists and their small team began working late nights on the fifth floor of the Gonda (Goldschmied) Neuroscience and Genetics Research Center, developing technology that would pave the way for the UCLA community to safely return to campus.

The safer-at-home orders had shut down all but the few core campus activities and services deemed essential. While that meant the suspension of most laboratory research, it didn't apply to a new project led by Valerie Arboleda M.D. '14, Ph.D. '14, assistant professor of pathology and human genetics, and Joshua Bloom '06, a research scientist in human genetics and an adjunct professor in computational biology. Through their collaboration with Octant Bio, a biotech company founded and incubated at UCLA; faculty in UCLA's departments of human genetics and computational medicine; UCLA Health; and other academic institutions across the country, their research ultimately found its way from the high-tech lab Arboleda and Bloom named SwabSeq to vending machines across campus.

UCLA faculty, staff and students returning last fall were able to easily access the free COVID-19 test kits, with picking up a test as simple as grabbing a snack: Users simply register for the SwabSeq test by scanning a QR code with their smartphone, retrieve the kit and collect their saliva sample, then deposit the kit in a drop box next to the machine. An email or text notifies them when they can access a secure website for their result.

Diagnosing COVID-19 typically involves polymerase chain reaction (PCR) testing, but as a tool for mass screening of asymptomatic individuals, the approach is limited in its capacity. To run tens of thousands of tests simultaneously, SwabSeq harnesses the power of next-generation DNA sequencing, a revolutionary technology that's come of age in the last 15 years and enables the processing of millions of DNA fragments at a time. The testing platform also bypasses a step typically required in the PCR method: extracting RNA from samples, which can take days to process.

"I'm thrilled that SwabSeq helped put us back on campus and that my students and I are able to come into the lab."

Valerie Arboleda

SwabSeq attaches a piece of DNA that acts like a molecular barcode to each person's sample, enabling the lab's scientists to combine large batches of samples in a genomic sequencing machine. Viewing the barcodes in the resulting sequence, the technology can quickly identify the samples that have the coronavirus that causes COVID-19. SwabSeq can return individual test results in about 24 hours, with highly accurate results; the false-positive rate is just 0.2%.
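The barcode scheme can be pictured with a short demultiplexing sketch: pooled reads are assigned back to their samples by barcode, and each sample is then called from its viral read count. The barcodes, reads, and threshold below are invented for illustration; the published SwabSeq assay reportedly normalizes viral reads against a synthetic spike-in standard in each sample rather than applying a raw count cutoff.

# Illustrative demultiplexing sketch (not the SwabSeq codebase): each read
# begins with a sample barcode; we tally viral-marker reads per sample and
# call a sample positive if its count clears a threshold.
from collections import Counter

BARCODE_TO_SAMPLE = {"ACGT": "sample_1", "TTAG": "sample_2", "GGCA": "sample_3"}
VIRAL_MARKER = "ATGTTTGTTTTTCTTGTT"  # invented stand-in for a viral amplicon

def demultiplex(reads, barcode_len=4):
    """Count viral-marker reads per sample, keyed by each read's barcode."""
    counts = Counter()
    for read in reads:
        barcode, insert = read[:barcode_len], read[barcode_len:]
        sample = BARCODE_TO_SAMPLE.get(barcode)
        if sample and VIRAL_MARKER in insert:
            counts[sample] += 1
    return counts

def call_results(counts, threshold=5):
    """Classify every known sample; samples with no viral reads count as 0."""
    return {s: ("positive" if counts[s] >= threshold else "negative")
            for s in BARCODE_TO_SAMPLE.values()}

reads = ["ACGT" + VIRAL_MARKER] * 8 + ["TTAG" + VIRAL_MARKER] * 2 \
        + ["GGCA" + "A" * 18]
print(call_results(demultiplex(reads)))
# {'sample_1': 'positive', 'sample_2': 'negative', 'sample_3': 'negative'}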

Rachel Young, laboratory supervisor and clinical laboratory scientist for the COVID-19 SwabSeq lab. Photo credit: Michal Czerwonka

SwabSeq has now tested more than half a million specimens from UCLA, as well as from a handful of other universities in Southern California and from the Los Angeles Unified School District. A $13.3 million contract recently awarded by the National Institutes of Health sets the stage for an expansion of SwabSeqs efforts.

"This is an innovative use of genomic sequencing for COVID-19 testing that is uniquely scalable to thousands of samples per day, [and that is] sensitive and fast, a combination that is challenging to find in diagnostic testing," Arboleda says. "It's not cost-effective as a test for a few people, or if you have someone in the hospital who needs an immediate result, but it's very effective as a screening tool for large asymptomatic populations."

Neither Arboleda nor Bloom could have predicted they would one day find themselves leading a major element of UCLAs research response to a once-in-a-century pandemic.

Arboleda entered the David Geffen School of Medicine at UCLA intending to become a full-time clinician, but when she took a year off from her medical school studies to work in a lab, she found her true calling. She enrolled in the UCLA Medical Scientist Training Program, graduating in 2014 with both an M.D. and a Ph.D. in human genetics. As a faculty member, she now devotes about 80% of her time to research, with much of the focus on rare genetic syndromes.

Bloom, trained as a geneticist and a computational biologist, has used model systems such as yeast to develop experimental and computational methods for identifying the heritable genetic factors underlying gene expression differences and other complex traits in large populations. "I've worked on some really abstract problems. Diagnostic testing in a pandemic is definitely not something I thought I'd ever be involved in," he says, smiling.

A machine in the SwabSeq laboratory. Photo credit: Michal Czerwonka

Like most of their UCLA colleagues and much of the rest of the world, Bloom and Arboleda saw their work routines upended by the pandemic. Bloom was grappling with the new reality when he received a call from Sri Kosuri, a UCLA assistant professor of chemistry and biochemistry and co-founder/CEO of Emeryville, California-based Octant Bio, the startup where Bloom was a consultant and where early pilot studies for SwabSeq were conducted.

"He suggested we could turn the drug-screening technology Octant was using into a COVID test, and asked if I could help with the computational work," Bloom recalls. "There were other people at UCLA who were also thinking that with all these smart people here, we should be able to develop a test. From there we began to have large group meetings involving multiple universities sharing information."

When Arboleda heard about the nascent project from a faculty colleague, she knew she could be helpful. In addition to the expertise in molecular biology she could apply to setting up the experiments, her training in pathology gave her the experience with regulatory matters that would need to be addressed once the test was developed. She agreed to collaborate with Bloom, who used his expertise in informatics to optimize the automated DNA sequencing process toward the goal of producing accurate diagnostic readouts.

The two spent a good part of April and May 2020 in the lab. "We would do the assay and put it on the sequencer, then Josh would analyze it as soon as it came off the machine," Arboleda says. "Based on that, the next day we would adjust a couple of parameters and rerun the experiment."

Pre-COVID-19, she had become accustomed to a supervisory role as a principal investigator overseeing a team of scientists. "I hadn't gone back to the lab in a while," she says. "It was a wild two months, where I felt like a grad student again!"

The number and pace of the iteration cycles (a new one every 24 hours) made this research project unlike any other Bloom had seen. "The sequencing technology enables that, because you can tweak a bunch of things and get readouts for them all at once," he says.

But more than that, he credits the speed with which SwabSeq moved from concept to reality to an all-hands-on-deck approach befitting the urgency of the need. "We had senior faculty, including department heads, engaged and excited to help," Bloom says.

One of those department heads is Eleazar Eskin, chair of the Department of Computational Medicine, a department affiliated with both UCLA Samueli School of Engineering and the medical school. He has coordinated logistics and business operations to ensure that the lab operates efficiently and remains flexible enough to adapt to changing circumstances, such as the appearance of the omicron variant of the virus. Eskin also built the custom software for SwabSeq's lab-information management system.

Adds Arboleda: "Everyone knew it was important and contributed in whatever way would support the mission, whether it was getting space, funding or institutional review board approvals. And since only people who were doing COVID work could come to campus, I had people on my team who said, 'OK, I'll put on a mask and do what's needed.'"

Hard at work in the SwabSeq lab. Photo credit: Michal Czerwonka

The SwabSeq lab now occupies an entire floor in the Center for Health Sciences South Tower. The space is divided into three rooms, each dedicated to a portion of the test. One room is for handling samples; a second is used as a clean room and storage area; and a third, its walls lined with high-level sequencers, is for post-PCR sequencing. All over, freezers and refrigerators store enough reagents for millions of tests. The lab isn't necessarily a one-off; Arboleda notes that the technology can be applied to general infectious disease testing and surveillance. "Its flexible protocol can rapidly scale up testing and provide a solution to the need for population-wide testing to stem future pandemics," she says.

For now, aside from regular meetings to discuss SwabSeq development and high-level technical issues, the scientists have returned to the work they were doing before everything changed in March 2020. "I'm thrilled that SwabSeq helped put us back on campus and that my students and I are able to come into the lab," Arboleda says. "Now if someone tests positive, no one worries because that person can stay home, and we know we can all easily get tested."

Continued here:

SwabSeq: Scalable, Sensitive and Fast COVID-19 Testing - UCLA Newsroom

Balancing openness with Indigenous data sovereignty: An opportunity to leave no one behind in the journey to sequence all of life – pnas.org

Abstract

The field of genomics has benefited greatly from its open approach to data sharing. However, with the increasing volume of sequence information being created and stored and the growing number of international genomics efforts, the equity of openness is under question. The United Nations Convention on Biological Diversity aims to develop and adopt a standard policy on access and benefit-sharing for sequence information across signatory parties. This standardization will have profound implications for genomics research, requiring a new definition of open data sharing. The redefinition of openness is not unwarranted, as its limitations have unintentionally introduced barriers of engagement to some, including Indigenous Peoples. This commentary provides insight into the key challenges of openness faced by researchers who aspire to protect and conserve global biodiversity, including Indigenous flora and fauna, and presents immediate, practical solutions that, if implemented, will equip the genomics community with both the diversity and inclusivity required to respectfully protect global biodiversity.

Since the early days of the Bermuda Accord (1), Human Genome Project (2), and the Fort Lauderdale Agreement (3), the field of genomics has been strongly committed to open data sharing, and the calls for improved data-sharing approaches have only become even louder in the recent response to the COVID-19 outbreak (4). Rapid sequencing and open release of SARS-CoV-2 viral genome sequences throughout the outbreak have aided vaccine development, efficacy assessments, and continual monitoring of the virus's evolution in ways unimaginable a few decades ago (5). Similarly, the open release of the human reference genome and follow-up studies, such as the 1000 Genomes Project and the gnomAD data resource, have transformed our understanding of human genomic variation and disease and are exemplars of successful community resource-building projects. Now, new projects, such as the Earth BioGenome Project (6), aim to sequence the genomes of all living eukaryotic species to further understand molecular evolution, catalog the world's biodiversity, and inform future conservation efforts. Such projects have the potential to bring the benefits of genomics to all people and species, but the past model of large consortia generating vast troves of data, favoring the inclusion of some over the exclusion of others, is both damaging and inequitable, requiring movement beyond the principles defined in Bermuda and updated in Toronto (7). These ambitious projects will require contributions from community and academic partners around the globe, and so the genomics community must develop and implement inclusive data-sharing policies and infrastructure that respect the rights and interests of all people.

Unfettered openness of genomic data, and the ways in which open-science norms are enforced, impinge on the rights of Indigenous Peoples. As one example, the Navajo Nation became rightfully wary of freely contributing samples and genomic data and, in 2002, placed a tribal-wide Banishment Order on genetics research (8). In Canada, the three councils that fund research have formally adopted policies that were developed by Indigenous Peoples and scholars, which include that data and samples from Indigenous communities must be collected, analyzed, and disseminated under the terms of a mutually determined research agreement that respects community preferences to maintain control over, and access to, data and human biological materials collected for research (9). Only by reconsidering the definition of openness and who it benefits within the context of the current inequitable infrastructures can a more inclusive genomics community be built to responsibly sequence all of life for the future of life (6).

The prospect of cataloging the genome reference sequences for a huge number of representative species is only possible thanks to the exponential technological advances of the genomics community over the past 40 y. Whereas the initial Human Genome Project cost several billion in today's dollars (USD), the sequencing and assembly of high-quality vertebrate reference genomes now costs under $10,000 and continues to drop rapidly. Leveraging these new sequencing technologies, the Vertebrate Genomes Project has now generated over 100 new vertebrate reference genomes (10), and in the coming year, the Human Pangenome Reference Consortium (https://humanpangenome.org/) aims to create hundreds of new reference genomes that will better represent human genetic diversity. Along with reductions in sequencing costs, the underlying technologies are also becoming increasingly portable, with nanopore-based technologies now enabling on-site sequencing in the most remote corners of the world (11).

This genomics revolution is timely, coming in the midst of the Earth's sixth mass extinction, with 35,500 species on the International Union for Conservation of Nature Red (threatened) List (https://www.iucnredlist.org/en). Unlike the mass extinctions of the past, the sixth is the result of the actions of just one species, humans, and as a species we must act swiftly to halt the dangerous loss of biodiversity and extensively catalog what remains. Providing a catalog of genomic sequences for all life will be important for informing decisions about the effects of climate change on species diversity (12), the development of conservation strategies for threatened and endangered flora and fauna (13), assessing the success of ongoing conservation efforts, and the preservation of genomic biodiversity before it is lost forever to extinction (6).

The importance of conserving biodiversity is universally recognized, but Earth's biodiversity is not uniformly distributed. The Critical Ecosystem Partnership Fund currently recognizes 36 biodiversity hotspots, defined as regions with over 1,500 endemic vascular plant species. These hotspots have suffered a 70% loss of their native vegetation (14). Hotspots will be a top priority for any genomic conservation project, but many of these hotspots overlap Indigenous lands. Indigenous Peoples and lands have historically been exploited and excluded, rather than engaged, by the genomics community (15). Thus, it is imperative for the genomics community to work as equal partners with Indigenous Peoples going forward. To move forward, however, new infrastructure and policies are required to facilitate alternative modes of data sharing that can coexist with the current open-sharing policies of international genomics consortia. Current blanket open data-sharing policies override the rights of Indigenous Peoples, specifically the right to determine the use and mode of sharing Indigenous resources, which includes data. This contravenes the United Nations (UN) Convention on Biological Diversity (CBD) as a matter of international law (16), violates several rights stipulated in the UN Declaration on the Rights of Indigenous Peoples (17), and perpetuates the marginalization of these Indigenous Peoples (18).

Open genomic data are defined here as genomic sequence information that is made freely available without restrictions on use, copying, or distribution. The world's most popular molecular sequence databases, such as the National Center for Biotechnology Information's GenBank, the European Nucleotide Archive, and the DNA Data Bank of Japan, strictly adhere to this model. Furthermore, in 2011 a Joint Data Archive Policy was drafted and adopted by many leading journals that reinforced open data sharing (19). Open data sharing in genomics has fostered a productive and collaborative international research community; it aspires to reduce systematic wealth and power inequalities by extending research opportunities from partners with a large investment in genomics capacity and capability to those partners with lower investment. In addition, open data sharing has provided knowledge that is more transparent, accessible, and verifiable, which has improved the efficiency and reliability of genomic research (20). However, despite its success, by negating local and regional representation and participation in governance, it has also resulted in the development of data-sharing policies that do not maximize opportunities for all participants in an equitable manner (21).

Moreover, when strictly mandated, open data policies can have the unintended consequence of excluding many minority communities, including those Indigenous Peoples who wish, for a variety of legitimate reasons, to retain control over the resources and data derived from their lands, species, and waters. The lack of clear, respectful, and operational policy that respects Indigenous rights breeds mistrust among Indigenous partners and not only hinders the inclusion of Indigenous science in international biodiversity and conservation efforts, but can also build opposition that results in the stagnation and reversal of Indigenous genomics projects (22). By demanding rigid policies on data sharing, the genomics community has forged rules premised on a single worldview. It undermines the rights and interests associated with traditional knowledge, a phenomenon scholars of Indigenous communities call epistemicide (23). Despite international consortia recognizing the rights of Indigenous Peoples, a lack of accountability and clarity for implementation of appropriate policies has exacerbated tensions between Indigenous communities and international genomic efforts (21).

In the past, the worlds of genomic science and Indigenous communities intersected mainly through Indigenous Peoples being used as subjects of research conducted by non-Indigenous researchers. Research was done on Indigenous Peoples, not by them and very rarely for them. The mistrust of the scientific community among Indigenous communities is well-earned: it has been caused by years of exploitation, mistrust, power imbalances, and inequality (24). It has included decades of taking and using Indigenous samples and data without adequate consent and consultation (24, 25); Indigenous data and samples not being properly attributed or acknowledged as coming from Indigenous lands and waters; Indigenous data being misused through bioprospecting and biopiracy (26–28); Indigenous data being scientifically interpreted without cultural or contextual knowledge (29); and researchers who have claimed authority over the Indigenous world by relying on quantitative data rather than traditional knowledge and lived experience (30). Furthermore, the failure of researchers to disseminate research outcomes respectfully through mechanisms that are meaningful and applicable to Indigenous partners, such as asset-based approaches (31), has fomented a sense of a lack of control, lack of access, lack of opportunities to derive benefits from the use of traditional knowledge and genetic resources, and a lack of opportunity to integrate traditional ways of knowing into research plans (32). Through asset-based approaches, results can be communicated more meaningfully and ameliorate the five Ds of statistical data on Indigenous Peoples: disparity, deprivation, disadvantage, dysfunction, and difference (33).

Indigenous Peoples are the guardians and sovereign authorities of their lands and have been since time immemorial. Indigenous Peoples have their own unique beliefs, values, and worldviews. They are highly diverse; however, a commonality shared among many is a deep interconnectedness, interdependence, and intimate connection to their lands and waters (34). In regions of Africa, for example, life is not perceived through an individualistic lens but is experienced as relational and collective; this worldview is known as Ubuntu (35), an example of Indigenous or traditional knowledge that is based upon lived experience extending as far back as the Pleistocene era (36). It has been developed over time, informed by an extensive system of principles, beliefs, and traditions. In New Zealand, a governmental inquiry into the Māori knowledge system, or Mātauranga Māori, concluded that this system of knowledge is fundamentally different from Western science. The Māori knowledge framework has evolved through its own cultural context and evolutionary pathway (37). These epistemological differences in knowledge sharing and individual possession are largely incommensurate with existing intellectual property rights, which privilege and support Eurocentric notions of knowledge commons with no or limited rules around access to knowledge and property. However, rather than being treated as outdated or inferior (attitudes that embody cognitive imperialism and epistemic violence), traditional knowledge systems should be acknowledged, integrated, treated as coequal, and considered when interpreting findings. One system of knowledge should not eclipse the other. When recognized in this way, traditional knowledge is integral to knowledge production, contributing both technically and scientifically to the protection and sustainable development of Indigenous lands, resources, and data through an intrinsic understanding of the interdependence of land and its inhabitants (38).

Any complete catalog of Earths biodiversity must necessarily include species on the lands of Indigenous Peoples. Thus, for global genomic conservation efforts to succeed, the genomics community will need to adapt its open data policies to Indigenous data sovereignty and knowledge systems. To achieve this, policies must be operationalized that embrace multiparadigmatic research approaches (39, 40) that recognize the inherent sovereignty of Indigenous Peoples and that remove barriers to those Indigenous communities who wish to contribute to bioconservation as equal partners.

Over the past two decades there has been an international call for the recognition and protection of Indigenous data rights. Indigenous data sovereignty (IDSov) refers to the individual and collective rights of Indigenous Peoples to control data from and about their communities, land, species, and waters (30).

In 2010, the Nagoya Protocol was established and adopted by the UN CBD (41) to protect, promote, and fulfill this right. It has been fundamental in providing guidance on access and benefit-sharing of Indigenous resources and data. Article 12 states that parties shall, in accordance with domestic law, take into consideration Indigenous and local communities' customary laws, community protocols, and procedures. The Nagoya Protocol now has 2,000 internationally recognized certificates of compliance, but notably does not include some nations that have both Indigenous Peoples and a large genomic research program (e.g., the United States, Canada, New Zealand, and Australia). Despite this, domestic legislation over a sample/genetic resource from a signatory nation extends to where that sample/genetic resource is housed or used. Thus, nonsignatory countries are expected to implement Nagoya legislation if resources have been obtained from a country where the Nagoya Protocol is enforced.

In 2007, the UN's General Assembly adopted the United Nations Declaration on the Rights of Indigenous Peoples (17), which affirms the right of Indigenous Peoples to control, protect, and develop manifestations of their sciences, technologies, and cultures, including human and genetic resources (Article 31), the right to the conservation and protection of the environment and the productive capacity of their lands (Article 29), as well as the right to participate in decision-making in matters which would affect their rights (Article 18). Furthermore, the UN has also developed 17 Sustainable Development Goals (SDGs) to be achieved by 2030. In 2015, these were agreed upon and adopted by 193 countries worldwide, including the United States, Canada, New Zealand, and Australia (42). SDG 15 aims to "Protect, restore and promote sustainable use of terrestrial ecosystems, sustainably manage forests, combat desertification, and halt and reverse land degradation and halt biodiversity loss" (42). Its associated Target 15.6 aims to "ensure fair and equitable sharing of the benefits arising from the utilization of genetic resources, and promote appropriate access to genetic resources" (42), a provision that has particular importance for marginalized communities, including Indigenous Peoples. Additionally, many individual nations have binding legislation covering their own Indigenous populations. For example, in New Zealand, the founding charter, subsequent legislation, and other policies covering Indigenous species require that all data and intellectual property be retained by the government within New Zealand (43, 44). Indigenous claims to cultural and intellectual property are also being addressed in New Zealand, where a work program to address the issues identified in the WAI262 report Ko Aotearoa Tēnei has just been developed and some projects have been initiated (45, 46).

Rights secured through IDSov can be at odds with the "open by default" culture of the genomics field, leaving Indigenous genomic data unsupported by the decades of open infrastructure that has been built by the genomics community. In an effort to close the gap, higher-income countries, such as Australia, Canada, and New Zealand, have established national Indigenous-driven human genomic efforts, including the work of the National Centre for Indigenous Genomics (https://ncig.anu.edu.au/), the Silent Genomes project, and the Aotearoa Variome, respectively (47). These national efforts are examples of Indigenous-driven human genomics research programs intended to directly benefit Indigenous Peoples. In Canada, protocols have also been established for the protection of nonhuman data, specifically through the Tri-Council Policy Statement (48) on research ethics that provides protection over Indigenous samples. Furthermore, research licensing in the three territories of Canada protects samples and data collected on Indigenous lands (49–51).

To date, three national-level IDSov networks provide processes and protocols to enable Indigenous data governance (SI Appendix, Table S1): the Te Mana Raraunga Māori Data Sovereignty Network, the United States Indigenous Data Sovereignty Network, and the Maiam nayri Wingara Aboriginal and Torres Strait Islander Data Sovereignty Group in Australia. However, blanket adoption of national efforts is not feasible in countries that lack substantial genomics investment or in which Indigenous governance structures are less established or respected.

Alongside the national efforts, IDSov is also gaining recognition on an international level through a variety of initiatives. For example, in 2019 the Global Indigenous Data Alliance (GIDA) (https://www.gida-global.org) was established to build a global community for the development of data-sharing infrastructure, data-driven research, and data use policies. In 2020, ENRICH (Equity in Indigenous Research and Innovation Co-ordinating Hub) was established in a collaboration between New York University and the University of Waikato. ENRICH supports IDSov-based protocols, Indigenous-centered standard-setting mechanisms, and machine-focused technology that informs policy and transforms institutional and research practices (https://www.enrich-hub.org/bc-labels). Platforms such as the International IDSov Interest Group have also been set up under the Research Data Alliance (https://www.rd-alliance.org/groups/international-indigenous-data-sovereignty-ig). These initiatives include the development of specific tools and practical mechanisms alongside education and training to provide a foundation for further development of ethical research guidelines that address Indigenous rights and interests.

The FAIR principles are a common refrain of open data efforts that encourage data to be Findable, Accessible, Interoperable, and Reusable (52). In 2019, GIDA released a set of complementary "CARE" Principles (53) that highlight the core values and expectations of Indigenous Peoples when engaging with the scientific community. These principles encourage the consideration of Collective benefit, Authority to control, Responsibility, and Ethics in Indigenous data governance. Such efforts toward developing new policies to respect and promote IDSov are essential; however, there is now the difficult challenge of informing and implementing IDSov principles, policy, and mechanisms within the global field of genomics (54).

A brief inspection of the publicly available data access and governance policies of international genomics-based consortia showcases where progress has been made and where it is needed the most. Notable exceptions include the H3Africa Consortium (55), which has led the way in the adoption of Indigenous policies for human genomics, providing clarity to researchers through an in-depth set of principles and guidelines that hold participating researchers accountable for their implementation. At present, many nonhuman-focused consortia lack governance and data policy information. Some claim to recognize the rights of Indigenous Peoples but provide no pragmatic implementation plan or accountability measures. Exceptions in the nonhuman space include Genomics Aotearoa (56), which has actively developed engagement and biobanking frameworks in partnership with Māori to guide all consortium members while engaging with Indigenous data. However, for many other efforts, the lack of clear and transparent adoption of IDSov policy is problematic for a successful engagement between genomic researchers and Indigenous partners, given the incompatibility of unfettered open data and IDSov. Moreover, there remain ongoing practical challenges in keeping provenance and cultural connections between Indigenous communities and the data generated from their lands and waters transparent and clear within the databases themselves. Open data have successfully encouraged transparency and inclusion among international genomic research collaborations, but it is now time to ensure such success extends to including Indigenous partners and IDSov in these collaborative infrastructures.

The conflicts between IDSov and open data in genomics research are not new and have been extensively discussed (18). Progress, although slow, is being made to identify and provide solutions to these incompatibilities. Local Contexts is a key international initiative that recognizes and advances the rights of Indigenous Peoples in museum collections and their data through a unique set of traditional knowledge and biocultural labels and notices (with licenses under development) (57). Inspired by the Creative Commons licensing structure (https://creativecommons.org/), Local Contexts initiated this work in 2010, producing a suite of practical mechanisms designed to enhance the protection of Indigenous communities and hold researchers accountable. That process entailed community partnership and collaboration, as will scientific projects that follow its precepts. As durable digital tags with unique IDs, the labels (for communities) and the notices (58) (for researchers and institutions) provide an opportunity to include Indigenous protocols and expectations around the sharing of knowledge as metadata within the data infrastructures. As a result, this information, such as the origin of samples and data, travels with the data across platforms. Through this mechanism, Indigenous partners are given a voice, and future research engagement is encouraged; its aspiration is to leave no one behind.
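To make the idea of protocols traveling with the data concrete, here is a hypothetical sketch of a sequence record carrying label information as machine-readable metadata; the field names and identifiers are invented for illustration and are not the actual Local Contexts schema.

# Hypothetical sketch of label metadata embedded in a sequence record.
# Field names and label IDs are invented for illustration only.
import json

record = {
    "accession": "EXAMPLE-000001",            # placeholder repository ID
    "organism": "Example species",
    "origin": "community-identified Indigenous lands",
    "local_contexts": {
        "project_id": "lc-project-1234",      # invented project reference
        "labels": [
            {"id": "tk-attribution",
             "text": "Attribute this community as the source of the sample."},
            {"id": "bc-provenance",
             "text": "Provenance must remain attached to this material."},
        ],
    },
}

# Because the labels live inside the record itself, any platform that
# ingests this JSON carries the community's protocols along with the data.
print(json.dumps(record, indent=2))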

The field of genomics is operating under data-sharing practices established decades ago. The status quo began with the Bermuda Principles, which defined the best mode of data sharing with respect to human data; these principles were then extended by the Fort Lauderdale Agreement to include nonhuman data and further updated in Toronto (59). Since Toronto, community-based efforts such as the Global Alliance for Genomics and Health (https://www.ga4gh.org) have reconsidered these data-sharing frameworks, developing responsible and inclusive human data-sharing policies and toolkits for genomics researchers.

An equal effort is now needed for nonhuman data, as nonhuman genomics continues to embed inherent biases and inequality, doing little to address existing disparities. Indigenous Peoples are part of contemporary life; they are not outside of modernity. Indigenous voices need to be heard. It is both a moral responsibility and a legal obligation to share benefits of research fairly and to respect traditional knowledge derived from Indigenous lands and waters. Genomics research needs to implement a future that has hitherto been mainly aspirational, a future that builds intellectual bridges between different ways of knowing and being. The appropriate acknowledgment, understanding, and implementation of Indigenous Peoples' rights while conducting genomic research provide a foundation to reach this goal.

Change must happen both at the individual and institutional level to ensure that Earths genomic biodiversity can be ethically cataloged. Several suggestions, references, and resources are provided to aid this transformation.

Operationalizing clear policies that respect Indigenous rights will communicate to potential Indigenous research partners what principles guide the research activity, the manner in which the researchers will conduct themselves, and the standards enforced and upheld. Providing clarity and increasing transparency builds trust and removes potential impediments to building relationships with Indigenous partners. When implementing these policies, inclusion does not equal assimilation. Respecting and cultivating divergent practices and beliefs is important to avoid monoculturalization. Indigenous Peoples' wishes regarding data access and benefit-sharing must be honored, making one-size-fits-all open data licenses inappropriate. International consortia seeking to perform Indigenous research must implement IDSov policies and engage with Indigenous communities in a manner that allows them to contribute on mutually agreed terms.

To change the culture from research that is done to Indigenous Peoples rather than by or for them, researchers, institutes, scientific journals, repositories, and funding bodies must change the status quo. Researchers must reflect upon their personal assumptions and biases and listen attentively to alternative frameworks. This can be done by questioning scientific orthodoxies and recognizing that research, even when intended for the good of all humanity, can create power and benefit imbalances. In beginning a new project, researchers must question the expectations of each research partner: the genomics community, the institutions, the funding bodies, the ethics review boards, the Indigenous partners, and the Indigenous communities who have provenance over the data and organisms in question. Rather than pushing the boundaries, researchers should attempt to foresee the consequences and deeply consider, at the outset of each research project, its social license and duty to diverse societies.

Although significant progress toward policy development has been made, further clarity is particularly needed for nonhuman Indigenous data. As species do not respect country or land borders, policy is required to provide clarity to researchers regarding species that straddle the borders of Indigenous and non-Indigenous lands, and those species that are of special importance to Indigenous Peoples but are found also on non-Indigenous lands.

To ensure an even distribution of power, financial resourcing, and benefit, researchers who wish to partner with Indigenous communities must first ensure their own cultural competency while also prioritizing engagement with Indigenous communities at the onset of the study. This allows the necessary time for a partner relationship to be built from mutual agreement as to the roles and responsibilities of both groups (the community and the researchers). Early engagement also provides Indigenous communities with relevant details pertaining to all aspects of the project, from sample collection to potential research publications and intellectual property development and benefit-sharing, in a clear, transparent, and accessible fashion, including the background, the scope of the research, potential outcomes of the project, and any foreseen risks associated with the research. By doing so, both researchers and Indigenous partners have all of the necessary information and education to conceptualize and design the research project in a concerted fashion that acknowledges the communities' long-standing relationship with local species and greater breadth of knowledge of the ecological systems and how they are changing (60, 61). This equips all parties with a fair and equal voice in setting research goals, understanding and contextualizing data, and planning the time and budgetary requirements needed to achieve research goals ethically. Early engagement also allows project outcomes to be jointly interpreted, drafted, and disseminated by multiple parties, rather than the typical one-sided reporting driven by research institutions. Furthermore, the dissemination of outcomes in the Indigenous local languages will enhance accessibility for Indigenous community partners so that the community can relay the outcomes to others, without the process depending on an external scientist. This joint dissemination of research outcomes is extremely important for maintaining trust, communicating mutual benefits, and ensuring that Indigenous knowledge is not misappropriated. Indigenous partners should also be included in the evaluation phases of a project to include Indigenous best practice and better understand research impacts in an Indigenous context.

Projects that have been conceptualized and funded prior to engagement already fall outside the best practices for engagement with Indigenous Peoples. Here, other considerations are crucial for a successful partnership, such as minimizing power inequalities throughout the remaining research period. Indigenous Peoples, such as the African San people and Māori in New Zealand, along with the Australian Institute of Aboriginal and Torres Strait Islander Studies in Australia, have considered and documented the best practices and expectations for engagement in these circumstances (60, 62, 63). These best practices include understanding and incorporating the expectations of Indigenous communities into the research plan; clearly communicating the scope of research, timelines, funding, methods of consent as informed by the Indigenous research partners, and all potential research outcomes; identifying short- and long-term risks and benefits and how they will be shared; building sustainable long-term governance and communication frameworks; discussing potential barriers to project completion and the impacts of project incompletion on partners; and evaluating the cultural competency of the research team. A focus on the process rather than the product is also helpful in assuring that the project has an adequate timeframe and budget to achieve its stated outcomes in a respectful manner, keeping in mind that fast-paced, product-oriented, and extractive strategies are not compatible with Indigenous cultures and may lead to irrevocable harm (24).

The fully open model of sharing must be challenged; the inclusion of some should not be valued over the exclusion of others. Policies need to be cognizant of the history, needs, and worldviews distinct to each Indigenous community (64). To operationalize "situated openness," a pragmatic implementation of IDSov policies and licenses is necessary. As it stands, IDSov policies are being actively developed and adopted; however, progress depends on the genomics research community implementing and enforcing these policies. Ambitious international goals, such as the push to catalog all genomic information on Earth, sit at the interface of genomic science and Indigenous ways of knowing. Effective implementation of IDSov policies and power sharing between communities is necessary to ethically realize such visions. This will require multiparadigm research methodologies built upon commonalities, but also accepting of divergent beliefs and practices, to move away from the extractive and exploitative strategies of past research on Indigenous Peoples. The task is hard, but eminently achievable, as recently demonstrated by more inclusive, diverse, and political research paradigms developed by researchers in New Zealand, Australia, North America, Africa, Central and South America, and the Pacific (40). These stand as positive examples of how best to champion polycultural expression and establish a new status quo for the genomics community.

Open data sharing in genomics has fueled progress and brought benefits to a field that continues to grow, even as it ramifies into many different fields of research and application. However, it is evident that those doing the sharing, to date, have taken on very little risk (and in many cases stand to benefit) from the act of openly sharing. To impose the same open data requirements on those with the most to lose by relinquishing control over use of resources and data is unfair, and when openness is stated as a prerequisite for participation, it can have the unintended effect of excluding marginalized communities. An infrastructure that allows for multiple modes of data sharing is needed, particularly modes that allow for materials and data over which Indigenous communities exert stewardship to remain under their control, and with respectful communication of findings and sharing of benefits with Indigenous communities. The Native BioData Consortium (NBDC; https://nativebio.org/) is the first tribal-driven biobank in the United States and provides a model of how to facilitate the flexibility needed to share data in a manner respectful of all parties and worldviews. In an Aboriginal and Torres Strait Islander context, the idea of kinship speaks toward the interconnectedness and interdependence of all life (65), as well as water and geographical features. This relationship to land is shared among Māori (66), and First Nations and Inuit Peoples (67). Adequate time and resources must be assigned to directly coordinate conservation efforts with Indigenous partners, who are the experts on implementing systems-thinking approaches within their own lands.

To sequence everything requires the help and participation of everyone, on equal and mutually agreed terms. Ultimately, genomic technologies can be advanced to the point of becoming commonplace, and initiatives are already under way to bring DNA sequencing into classrooms (68). As the field of genomics progresses, all research partners have the responsibility and opportunity to build a trustworthy and inclusive research community. Investing in outreach programs that pass on the latest technologies and methods, such as the SING Consortium (https://www.singconsortium.org/) and IndigiData (https://indigidata.nativebio.org/) workshops, will build the capacity to facilitate local research, fueled by local priorities and guided by local best practice. Graduate and undergraduate genomics courses should also include training in ethics and engagement best practices to improve the cultural competency of non-Indigenous researchers who may enter this space. This provides cultural safety but also alleviates expectations and responsibilities resting solely on Indigenous researchers' shoulders (47). Infrastructure and opportunities for media producers local to the study should also be developed for the dissemination of genomic research findings in multiple languages, regions, and formats. These efforts will enable all partners, including Indigenous and other marginalized communities, to contribute directly to ongoing international genomics efforts. By fostering diversity within the field, they can help ensure that genomics infrastructure will be accessible and beneficial for all, with practices put in place to foster trust over the long haul.

Parties to the UN CBD and its Nagoya Protocol are currently reviewing the meaning of digital sequence information (DSI) and the requirement for a change to access and benefit-sharing policies under the convention that pertain to such DSI (41). As it stands, the term DSI is a placeholder used to facilitate discussions surrounding three data types: 1) DNA and RNA; 2) DNA, RNA nucleotide sequences, and protein-peptide amino acid sequences; and 3) DNA, RNA, and protein sequences as well as digital information pertaining to metabolites and macromolecules. All three of these definitions would include data contributing to reference genome sequences for nonhuman organisms. Prior to these discussions, there had been a fourth option for associated information, including traditional knowledge (69), but this was removed during the revision.

Despite the Nagoya Protocol calling for access and benefit-sharing, to date only 16 signatory countries have domestic legislation regarding DSI. Eighteen additional signatories are planning to draft, or are in the process of drafting, such legislation (70). The United States is not a signatory to the Convention, but US representatives attended the November 2021 review conference in China and will attend further discussions in 2022. Many nations involved in the Earth BioGenome Project, the European Reference Genome Atlas (https://vertebrategenomesproject.org/erga), the Human Pangenome Reference Consortium, and other international genomic collaborations are signatories. The ongoing CBD review has the goal of standardizing terms for access and benefit-sharing among all signatories, and discussions continue to include DSI. The international committee overseeing the CBD has expressed discontent with the status quo: disparate policies among signatories and other major nations have led, in some cases, to the interpretation that open access to DSI is sufficient to fulfill access and benefit-sharing requirements, while in other cases formal agreements are required to share samples or sequence data. The review considers 13 recent publications relevant to access, benefit-sharing, and sequence data, categorized into five policy archetypes, some of which are mutually exclusive while others can be combined (Table 1). Each archetype will be assessed for cost-effectiveness, feasibility, and practicality, as well as for its treatment of traditional knowledge. Access and benefit-sharing standards will be addressed again before a standardized policy is agreed upon and incorporated into the convention framework.

Table 1. Potential policy options under review by the Convention on Biological Diversity, with respect to access and benefit-sharing and digital sequence information

The lack of infrastructure to trace the geographic origin of samples and DSI is readily apparent: only 12% of the sequence data in publicly available databases specifies a country of origin. The lack of proper infrastructure to monitor compliance with access, benefit-sharing, and sharing of DSI at each point in the value chain has also been flagged as a potential barrier to agreement, with blockchain smart contracts highlighted as a potential solution (71).
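
To make the mechanism concrete, here is a minimal sketch, in Python, of the kind of system ref. 71 gestures at: an append-only, hash-chained ledger in which every deposit or downstream use of a sequence record carries its country of origin, its steward, and the benefit-sharing terms agreed for that use. Everything here is a hypothetical illustration, not the cited proposal's actual design; the record fields, the steward and accession names, and the ProvenanceLedger class are all assumptions made for the example.

import hashlib
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class ProvenanceEvent:
    accession: str           # the sequence record being used (hypothetical ID)
    country_of_origin: str   # missing from ~88% of public records today
    steward: str             # e.g., an Indigenous community or biobank
    action: str              # "deposit", "access", "derivative", ...
    terms: str               # benefit-sharing terms agreed for this use
    timestamp: float = field(default_factory=time.time)

class ProvenanceLedger:
    """Append-only ledger: each entry commits to the previous entry's
    hash, mimicking the tamper evidence a blockchain would provide."""

    def __init__(self):
        self.entries = []

    def append(self, event: ProvenanceEvent) -> str:
        # Chain each entry to its predecessor by hashing (prev_hash + payload).
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps(asdict(event), sort_keys=True)
        entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        self.entries.append({"event": asdict(event), "prev": prev_hash, "hash": entry_hash})
        return entry_hash

    def verify(self) -> bool:
        # Recompute every hash; any edit to an earlier event breaks the chain.
        prev = "0" * 64
        for entry in self.entries:
            payload = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True

# Hypothetical usage: a deposit and a downstream access, both carrying
# origin and terms, with the chain verifiable end to end.
ledger = ProvenanceLedger()
ledger.append(ProvenanceEvent("SEQ-0001", "Aotearoa New Zealand",
                              "example iwi steward", "deposit",
                              "non-commercial use; findings reported back"))
ledger.append(ProvenanceEvent("SEQ-0001", "Aotearoa New Zealand",
                              "example iwi steward", "access",
                              "same terms; downstream laboratory"))
assert ledger.verify()

The hash chain is what supplies tamper evidence: altering or deleting any earlier event invalidates every later hash, which is the property a blockchain-backed compliance system would rely on to audit each point in the value chain.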

Policies about access and benefit-sharing, and about sharing of DSI, are in flux, but it is clear that unfettered open access to data and materials, including sharing of sequence data, is being questioned where it comes into conflict with Indigenous rights. National and international law are likely to evolve, and the scientific community would be wise both to engage directly in helping set the resulting standards and practices and to comply with the emerging laws, norms, and practices governed by national and international law.

Following basic principles in a transparent manner, with all parties having access to and an equal understanding of the research project, will help remove the barriers between the genomics community and Indigenous partners, and will facilitate a long-term partnership founded on trust, safety, honesty, and accountability. The genomics community must engage with each Indigenous partner in accordance with that community's specific traditional beliefs, practices, and connections to the organisms being studied, and with the appropriate way to engage with other people in discussions of those organisms. As Chip Colwell, former senior curator of anthropology at the Denver Museum of Nature and Science, stated during SING Aotearoa (https://www.singaotearoa.nz), Indigenous People are "not anti-science [but] they demand a science that restores the dignity of Indigenous Peoples and is carried out with fundamental respect" (72). This is now the responsibility of each researcher, consortium, journal, data repository, and funding body that seeks engagement with data or resources derived from Indigenous lands. Practical mechanisms like the traditional knowledge and biocultural labels and notices, and Indigenous-driven biobanks such as the Native BioData Consortium, provide proven models. The field has come a long way in working toward diversity, and the wind is at our back. Indigenous researchers have already put great effort into developing guidelines, best practices, legal and extralegal tools, and new research paradigms (SI Appendix, Table S1). Equipped with this knowledge, the community must now capitalize on the opportunity to build an inclusive, respectful, and mutually beneficial future for genomics.

There are no data underlying this work.

We thank Carla Easter (Education and Outreach Department of the National Human Genome Research Institute, NIH), Jenny Reardon (University of California, Santa Cruz), Harris Lewin (University of California, Davis), and Jacob S. Sherkow (University of Illinois) for their time in reviewing and consulting in preparation of this manuscript; and IndigiData and SING USA, Canada, and Aotearoa for their support and guidance throughout the manuscript-drafting process. This work was supported, in part, by the Intramural Research Program of the National Human Genome Research Institute, NIH (A.M.M.C. and A.M.P.). J.G. is funded by NIH Grant 5R01CA237118-02 and a Canadian Institutes of Health Research Fellowship (202012MFE-459170-174211). Development of the Biocultural Label Initiative has been supported by Catalyst Seeding funds for the project Te Tukiri o te Tonga: Recognizing Indigenous Interests in Genetic Resources, provided by the New Zealand Ministry of Business, Innovation and Employment and administered by the Royal Society Te Apārangi (19UOW008CSG to M.L.H. and J.A.), leveraging the existing Local Contexts (https://localcontexts.org/) platform supported by the National Endowment for the Humanities (PR 234372-16 and PE 263553-19 to J.A.) and the Institute of Museum and Library Services in the United States (RE-246475-OLS-20 to J.A.), New York University Graduate School of Arts and Science, and the University of Waikato. Continuing infrastructure development is supported through the Equity for Indigenous Research and Innovation Co-ordinating Hub based at New York University and the University of Waikato (https://www.enrich-hub.org/). The Biocultural Label Initiative is extended through use cases, supported and refined by the Aotearoa Biocultural Label Working Group, Federation of Māori Authorities Innovation (https://www.foma.org.nz/), Te Mana Raraunga (https://www.temanararaunga.maori.nz/), Genomics Aotearoa (https://www.genomics-aotearoa.org.nz/), Indigenous Design and Innovation Aotearoa (https://www.idia.nz/), the Genomics Observatories Metadatabase (https://geome-db.org/), and the Ira Moana (Genes of the Sea) Project (https://sites.massey.ac.nz/iramoana/; supported by Catalyst Seeding funds provided by the New Zealand Ministry of Business, Innovation and Employment and administered by the Royal Society Te Apārangi, 17MAU309CSG to L.L.), and by a Massey University Research Fund to L.L. L.L. is supported by a Rutherford Foundation Discovery Fellowship. J.G. and R.C.-D. are funded by the US National Cancer Institute through Grant R01 CA227118 (sulstonproject.org). M.Z.A. is funded by NIH Grant R01AI148788 and NSF CAREER 2046863.

Author contributions: A.M.M.C., J.A., L.L., M.L.H., M.Z.A., B.T., J.G., R.C.-D., and H.R.P. designed research; A.M.M.C. and A.M.P. wrote the paper; and J.A., L.L., M.L.H., M.Z.A., B.T., J.G., R.C.-D., and H.R.P. contributed to drafting text.

The authors declare no competing interest.

This article is a PNAS Direct Submission.

This article contains supporting information online at https://www.pnas.org/lookup/suppl/doi:10.1073/pnas.2115860119/-/DCSupplemental.

View original post here:

Balancing openness with Indigenous data sovereignty: An opportunity to leave no one behind in the journey to sequence all of life - pnas.org

For the War on Drugs' Adam Granduciel, a return to a place he once called home – The Boston Globe

"I wasn't running around making zines or anything like that," says Granduciel, who graduated in 1997 from the Roxbury Latin School and then headed off to Dickinson College in Pennsylvania. "I was basically maybe some kind of social introvert. I had friends from school, obviously, but my life outside of that wasn't very big."

It's gotten a lot bigger since then. "I Don't Live Here Anymore" is the band's fifth full-length album since the War on Drugs formed in Philadelphia in 2005. (Kurt Vile was an early member of the group, but left after their first album, 2008's "Wagonwheel Blues," to focus on his solo career.) Along the way, the band has grown, moving from the independent label Secretly Canadian to the venerable Atlantic Records, while album sales and concert crowds have expanded.

Even as the group has become bigger and more successful, Granduciel still seems content to hide himself away and work on songs, most recently in a warehouse space in Burbank, Calif., and before that in what he describes as a tiny room under his house in Los Angeles. He pays close attention to detail as a songwriter, and he can talk with great specificity about why he changed the key of a certain song, or how he wrote and rewrote a particular section of a song until he felt he had nailed it.

"I Don't Live Here Anymore" took shape gradually. Granduciel started writing songs for the album fairly soon after the band released 2017's "A Deeper Understanding," which won a Grammy for best rock album. The singer spent several years honing the new material, often in conjunction with bassist David Hartley and multi-instrumentalist Anthony LaMarca.

"I really trust their musical opinion," Granduciel says. "If you're around people long enough, your trust and your friendship grows, and so where we were collaboratively in 2016 and '17, we were significantly past that a couple years later."

The group's albums have become grander and more spacious over the years. "I Don't Live Here Anymore" has a big, warm sound that straddles the line between indie cool and arena-ready heartland rock, full of guitars, keyboard textures, and hooky melodies, augmented on the title track by vocals from Jess Wolfe and Holly Laessig of Lucius.

"They create such amazing sonic landscapes," says Wolfe, who recalls first meeting the War on Drugs in 2014 when both bands were playing a music festival in Vermont. "I just remember sitting on the side of the stage and watching in awe."

Even at the time, Granduciel had a distinctive lyrical sensibility that has since become more defined. The narrators in his songs are often on a quest for meaning or belonging as they wrestle with uncertainty. Though the pandemic has probably amplified a general sense of restlessness, Granduciel says, those feelings didn't originate in March 2020.

"Everyone feels a little lost, right? I mean, no one really knows what they're doing," he says.

He traces those themes in his lyrics back to his own nomadic existence in his 20s, when his music career was just starting to take shape.

"You try to write from this place that makes a lot of sense to you," he says. The period when he first got serious about music coincided with "a time where I was without roots, you know what I mean? I was living in California. I was traveling around all the time. I wasn't homeless or anything, but I was kind of just moving around. I had no real sense of purpose or direction, which was fine with me at the time."

Granduciel has become more settled in recent years. He lives in Los Angeles full time now, and he became a father in 2019. Yet that sense of looking toward the horizon hasn't fully dissipated.

"No one is 100 percent confident in every choice they've made," he says. "I wouldn't consider myself fully confident in any sort of adulthood. I think I'm still writing from a kind of displacement."

THE WAR ON DRUGS

At House of Blues, 15 Lansdowne St., Jan. 31 and Feb. 1 at 8 p.m. Tickets $46-$66. 888-693-2583, http://www.houseofblues.com/boston

Follow Eric R. Danton on Twitter @erdanton.

Read more from the original source:

For the War on Drugs' Adam Granduciel, a return to a place he once called home - The Boston Globe

The War on Drugs Were Effortlessly Transcendent at Their Irving Show on Friday – Dallas Observer

In an evening full of casual grandeur, the simplest sentiment made the biggest impression.

Adam Granduciel (the stage name of singer-songwriter Adam Granofsky) and his War on Drugs bandmates had amply demonstrated they were capable of conjuring a mesmerizing swirl of guitars, percussion, brass and keys by the time they tucked into "Living Proof," roughly a quarter of the way through the band's two-hour set Friday night at Irving's Pavilion at Toyota Music Factory.

The track is the opening song on the rock group's fifth and latest LP, last year's "I Don't Live Here Anymore," the follow-up to 2017's gripping, Grammy-winning "A Deeper Understanding." "Living Proof" is deceptively stripped down (an insistent acoustic guitar riff, which blossoms into a beautiful, climactic electric guitar figure, laid against gentle piano and drums), but its lyrics land with brutal force: "I'm always changing/Love overflowing/But I'm rising/And I'm damaged/Oh, rising," the 42-year-old Granduciel sang Friday, lights swirling around him.

It's a striking opener, but situated as it was on Friday between the brooding "Victim" and a sprawling "Harmonia's Dream," the song felt like a subtle restatement of what Granduciel had said nearly as soon as he took the stage in front of the comfortably full venue: "This place is sweet," he said, "but every venue is sweet right now."

COVID-19 protocols were in place Friday; proof of vaccination was required for entry, but despite the band's request for attendees to mask up, there was a pronounced indifference to face coverings among those gathered. (In a concession to the current reality, the War on Drugs is forgoing opening acts on this leg of its tour, and took the stage promptly at 8:30 p.m.)

Friday's stop was the band's first local appearance since a September 2017 gig at what was then known as the Bomb Factory. Granduciel made plain the band's affinity for Dallas: "We've always had a good time playing Dallas. This is close enough to Dallas, right?" he said midway through the set, and later dedicated "Occasional Rain" to Dallas drummer Jeff Ryan (it's unclear whether Ryan was in attendance Friday).

In an era of hyper TikTok montages and sample-drunk pop music, the music the War on Drugs makes is a deliberate throwback to an analog era: men and women making rock music with their hands, embracing the occasional flaw and reveling in the alchemy of live performance.

While it's tempting to slap a neo-Springsteen label on what Granduciel and his collaborators are doing, reducing their work to such a narrow definition minimizes the expansive, woolly brilliance packed into even the smallest moments.

The set list heavily favored "Anymore" and 2014's mesmeric "Lost in the Dream," largely bypassing the rest of the band's catalog. Highlights, augmented by the spare yet dazzling array of lighting on an otherwise spartan stage, abounded: "Pain" was exquisitely bruised, while "Red Eyes" set the room ablaze, "I Don't Live Here Anymore" electrified and "Under the Pressure" culminated in an extended instrumental freak-out, only reinforcing how effortless the War on Drugs made musical transcendence look and feel.

By mingling visceral nostalgia and lacerating dispatches from the front lines of life, the War on Drugs manages a potent magic trick, crafting expansive rock songs that feel familiar, even as the nuances tucked away behind elegant, gorgeous guitar lines and sky-scraping bombast pop out like spring-loaded surprises, as capable of lifting you up as they are bringing you to your knees.

See the rest here:

The War on Drugs Were Effortlessly Transcendent at Their Irving Show on Friday - Dallas Observer

The War on Drugs postpone shows because of Covid case in touring party – Brooklyn Vegan

The War on Drugs began their North American tour last week, but they've now been forced to postpone a pair of shows, tonight in Nashville (1/24) and Tuesday night in Atlanta (1/25), after a member of their touring party tested positive for Covid. "With our long-awaited tour finally underway," they write, "we are heartbroken to share a member of our touring party has tested positive for COVID-19. With so much of the tour on the horizon, we've made the difficult decision to postpone the shows in Nashville and Atlanta, in order to take the safest approach for everyone. If everyone remains negative and healthy, we will continue the tour in Philly on Jan 27th. Ticketholders: keep an eye out for an email from your local promoter for more information. We are working with the venues in order to announce new dates as soon as possible."

Their NYC show, scheduled for Saturday, January 29 at Madison Square Garden, is currently still on. Tickets are on sale, and we're giving away a pair.

See The War on Drugs' updated tour dates below.

THE WAR ON DRUGS: 2022 TOUR
Mon, JAN 24 – Ryman Auditorium – Nashville, TN – POSTPONED
Tue, JAN 25 – Tabernacle – Atlanta, GA – POSTPONED
Thu, JAN 27 – The Met Philadelphia – Philadelphia, PA
Fri, JAN 28 – The Met Philadelphia – Philadelphia, PA
Sat, JAN 29 – Madison Square Garden – New York, NY
Mon, JAN 31 – House Of Blues Boston – Boston, MA
Tue, FEB 1 – House Of Blues Boston – Boston, MA
Wed, FEB 2 – The Anthem – Washington, DC
Fri, FEB 4 – KEMBA Live! – Columbus, OH
Sat, FEB 5 – Stage AE – Pittsburgh, PA
Sun, FEB 6 – PromoWest Pavilion at Ovation – Newport, KY
Tue, FEB 8 – The Fillmore Detroit – Detroit, MI
Thu, FEB 10 – The Chicago Theatre – Chicago, IL
Fri, FEB 11 – Chicago Theater – Chicago, IL
Sat, FEB 12 – Riverside Theatre – Milwaukee, WI
Sun, FEB 13 – Riverside Theatre – Milwaukee, WI
Tue, FEB 15 – Palace Theatre – Saint Paul, MN
Wed, FEB 16 – Palace Theatre – Saint Paul, MN
Fri, FEB 18 – Mission Ballroom – Denver, CO
Sat, FEB 19 – The Union Event Center – Salt Lake City, UT
Mon, FEB 21 – Paramount Theatre – Seattle, WA
Tue, FEB 22 – Paramount Theatre – Seattle, WA
Wed, FEB 23 – Theater Of The Clouds – Portland, OR
Fri, FEB 25 – Bill Graham Civic Auditorium – San Francisco, CA
Sat, FEB 26 – Shrine Auditorium and Expo Hall – Los Angeles, CA
Sun, FEB 27 – Innings Festival 2022 – Tempe, AZ
Tue, MAR 22 – Helsinki Ice Hall – Helsinki, Finland
Thu, MAR 24 – Annexet – Stockholm, Sweden
Fri, MAR 25 – Annexet – Stockholm, Sweden
Sun, MAR 27 – Sentrum Scene – Oslo, Norway
Mon, MAR 28 – Sentrum Scene – Oslo, Norway
Tue, MAR 29 – Sentrum Scene – Oslo, Norway
Wed, MAR 30 – KB Hallen – Copenhagen, Denmark
Thu, MAR 31 – KB Hallen – Copenhagen, Denmark
Sat, APR 2 – Verti Music Hall – Berlin, Germany
Mon, APR 4 – Halle 622 – Zürich, Switzerland
Tue, APR 5 – Alcatraz – Milan, Italy
Thu, APR 7 – Zenith – Munich, Germany
Sat, APR 9 – L'Olympia – Paris, France
Mon, APR 11 – O2 Academy Birmingham – Birmingham, United Kingdom
Tue, APR 12 – The O2 – London, United Kingdom
Thu, APR 14 – 3Arena – Dublin, Ireland
Sat, APR 16 – First Direct Arena – Leeds, United Kingdom
Sun, APR 17 – Edinburgh Corn Exchange – Edinburgh, United Kingdom
Mon, APR 18 – Edinburgh Corn Exchange – Edinburgh, United Kingdom
Wed, APR 20 – Palladium Cologne – Cologne, Germany
Thu, APR 21 – Kulturzentrum Schlachthof – Wiesbaden, Germany
Fri, APR 22 – Ziggo Dome – Amsterdam, Netherlands
Sat, APR 23 – Sportpaleis – Antwerpen, Belgium
Fri, JUN 17 – Bonnaroo Music and Arts Festival 2022 – Manchester, TN
Thu, JUN 30 – Rock Werchter 2022 – Werchter, Belgium
Fri, JUL 1 – Stadtpark-Open-Air-Bühne – Hamburg, Germany
Fri, JUL 1 – Down The Rabbit Hole 2022 – Ewijk, Netherlands
Wed, JUL 6 – NOS Alive 2022 – Lisbon, Portugal
Fri, JUL 8 – Mad Cool Festival 2022 – Madrid, Spain

Originally posted here:

The War on Drugs postpone shows because of Covid case in touring party - Brooklyn Vegan

Ka Leody: Shift focus of war on drugs to health – manilastandard.net

Partido Lakas ng Masa presidential candidate Leody De Guzman said the campaign against illegal drugs should focus on treating the problem as a health issue.

"We must continue the war on drugs but not in a way where people involved are killed or treated as criminals. Let us treat it as a health problem," he said in a Facebook livestream over the weekend.

De Guzman, who lamented that he was not invited to a television interview of presidential candidates that was aired Saturday, turned to social media instead to discuss his platform of government.

"I am not in favor of the killings committed in the implementation of the war on drugs. This shows that our position is correct, that killings cannot resolve it. The killings are continuous but the drug problem also still persists," he said.

Official data show more than 6,200 drug suspects have died in anti-narcotics operations since President Rodrigo Duterte assumed office in June 2016.

The bloody war on drugs has prompted judges at the International Criminal Court to approve a formal investigation into the killings.

The ICC, however, suspended the probe in November following a request by the Philippine government, which said it was conducting its own investigation.

Meanwhile, De Guzman addressed accusations that he is living a comfortable life while representing the working class.

"The members of my family are all workers. My wife works in a bank; she is a bank officer. My eldest child is in a call center. My youngest works on a cruise ship. And the other one also works in a call center," he said.

De Guzman earlier drew flak after posting a Christmas photo of his family.

His running mate, Walden Bello, defended the post, saying workers deserve to have a decent life.

"A Christmas photo in a comfortable setting subjects Leody De Guzman's family to online abuse by those who think they should be living in a hovel. What an ugly display of middle-class prejudice. Working people deserve respect," Bello said.

"The middle class hates it when poor people get up in the world and begin to enjoy things that they feel only they and the rich deserve. That's the kind of hypocrisy that is fueling the resentment of the masses," Bello added.

See the original post:

Ka Leody: Shift focus of war on drugs to health - manilastandard.net