Cancer risk, wine preference, and your genes – Harvard Gazette

Molly Przeworski launched into a lecture on genomic trait prediction with disappointing news: Using your genes to read the future is a murky practice.

The Columbia University systems biologist, who visited Harvard last week as featured speaker in the annual John M. Prather Lecture Series, explained how current approaches to genomic trait prediction in humans are imperfect. In a sample of 150,000 people, she said, more than 600 million positions in the genome differ among individuals. Over the past decade, it has become routine to survey such variation in large samples and try to associate variation in traits, such as height, with these genetic differences. Companies now aim to use DNA profiling to make personal predictions: height, cancer risk, educational attainment, which wine would best suit your palate, and even the right romantic partner.

"There are areas, notably for medical prognosis, where genomic trait prediction may turn out to be useful," said Przeworski, whose lab studies how evolution, natural selection, recombination, and mutation operate in humans and animals. "But by and large, genomic trait prediction is much less informative than these ads and headlines would suggest."

At the moment, she said, the most useful application is not for humans, but rather for studying other species' ecological responses to climate change. Her team has used genomic trait prediction among coral species in the Great Barrier Reef to shed light on which are most susceptible to coral bleaching.

In human genetics, the typical approach for associating some trait of interest (height, cancer risk) with specific genes is called a genome-wide association study. The test relates trait variation to genotypes (base pairs AA, AG, etc.) at certain positions on the genome, and fits them to a line.
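To make the line-fitting concrete, here is a minimal, hypothetical sketch of the per-variant regression behind a genome-wide association study. Real studies add covariates, relatedness corrections and careful multiple-testing control; the effect size below is exaggerated so the toy example reaches significance.

```python
# Toy per-variant GWAS regression (illustrative only; names are hypothetical).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_variants = 5000, 100

# 0, 1 or 2 copies of the alternate allele at each position
genotypes = rng.integers(0, 3, size=(n_people, n_variants))
# variant 0 carries an (exaggerated) true effect of 1 cm per allele
height = 170 + 1.0 * genotypes[:, 0] + rng.normal(0, 7, n_people)

for j in range(n_variants):
    fit = stats.linregress(genotypes[:, j], height)   # fit trait vs genotype to a line
    if fit.pvalue < 5e-8:  # conventional genome-wide significance threshold
        print(f"variant {j}: {fit.slope:.2f} cm per allele, p = {fit.pvalue:.1e}")
```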

However, many traits are associated with a large number of genetic variants. For example, one study Przeworski cited found 12,000 unique positions on the genome at which changing one base pair would have a small effect on one's height. What's more, environmental factors, such as nutrition, also affect height.

"I think a lot of us have this implicit model of what genomic trait prediction should mean: that we understand something about how that genetic variant affects the protein, affects the cellular phenotype, affects development, and therefore affects height," she said. "In practice, for almost all complex traits, we are very, very far from that. All we really have is this massive correlational study."

So if genomic prediction is murky, why bother? Przeworski admitted to asking herself the same question years ago, and to investigating contexts in which confounding genetic clues wouldn't matter as much as simply making a helpful, reliable prediction. "It occurred to me we could make predictions about ecologically important traits in the response to climate change," she said.

She spent part of her talk describing how her lab followed up, partnering with Australian scientists who study how ocean warming affects coral reefs. Due to temperature-related disruptions in the symbiotic relationship between certain coral species and the algae they farm, some colonies lose their pigment and become bleached, which stunts growth and leads to colony death. Przeworski's team has used its expertise in genomic trait prediction to build models that determine which corals are most vulnerable to bleaching.

"As it becomes more straightforward to collect genomic information, I think its greatest promise may be in applications outside humans," she said.

The lecture was co-sponsored by the Department of Organismic and Evolutionary Biology, the Harvard Museum of Natural History, and the Harvard Museums of Science and Culture.

TMS student performs in A Wrinkle in Time – hometownweekly

By Madison Butkus

Hometown Weekly Reporter

Westwood resident and Thurston Middle School (TMS) student Eviva Hertz is currently performing in her third professional play, A Wrinkle in Time, at Wheelock Family Theatre (WFT) in Boston, MA. The play is adapted from Madeleine L'Engle's much-loved classic tale; Hertz plays the character of Charles Wallace.

Marketing Specialist at WFT, Jenna Corcoran, went on to write, "In A Wrinkle in Time, directed by Regine Vitale, one of literature's most enduring young heroines, Meg Murry, is back, stubbornness and all. Joining forces with her baby brother Charles Wallace, friend and neighbor Calvin O'Keefe, and the celestial beings Mrs. Whatsit, Mrs. Who, and Mrs. Which, she must battle the forces of evil in order to rescue her father, save humanity, and discover herself. Traveling through time and space, Meg must save both her father and the world from IT, an evil force that is rapidly casting its shadow over the universe. But what does Meg have that IT does not? Love. For in the end, love is enough to overcome evil and bring IT's dark world crashing down. One dark and stormy night, the eccentric Mrs. Whatsit arrives at the home of Meg Murry, a young teen who doesn't fit in at her New England high school. Meg's scientist Father vanished over two years ago, under mysterious circumstances. Aided by Mrs. Whatsit and her friends, Meg, her gifted brother Charles Wallace, and her friend Calvin are transported through time and space on a mission to rescue their father from the evil forces that hold him prisoner on another planet."

In conversation, Hertz explained that her love for the theatre began with her older sister, who started taking classes at WFT. Upon taking a class there herself, she quickly fell in love with WFT and all it had to offer. "Wheelock is really amazing and fantastic," Hertz stated, "because it is a family theatre and they really care about the kids. We get our own full dressing rooms and changing/bathroom areas, which is really nice. Everyone there is really kind and just based towards a family experience and atmosphere. It is also very much an intergenerational theatre, which is really comforting."

While Hertz is playing a boy character in this production, she mentioned that this is rather usual for her. "I am almost always the character of a boy," she explained, "in a majority of the productions that I do. For Charles Wallace specifically, I really try to tap into that younger part of myself to really get into the boy character. I also get into the 'I'm so smart' know-it-all mindset, just as is seen within his character. As I have said to our amazing director Regine Vital multiple times, Charles Wallace is magic but he thinks about himself in a scientific way, where he knows the answers to the questions but he doesn't understand them. So I try to channel all that when becoming his character."

Hertz's love for the theatre is abundantly clear, but she is realistic about it when it comes to her future endeavors. "I don't want it to become a full-time job," she mentioned, "because I know it is pretty hard to sustain yourself on just acting. I actually want to become a therapist when I grow up, but I do want to continue acting on the side for as long as I can. I just love the feel of being on stage, the rehearsal process, and meeting new friends."

She further described how much it means to her to live out her dream of acting on a professional stage. "This continual experience means a lot to me," she stated, "every time I go on stage to perform. I love letting other people experience the magic of the theater and, you know, the audience really does a lot of the work. They are so important to the show. And it really means a lot to me that I get to be another person and experiment on different ways I can portray different characters. I get to share with people the tiny parts of myself that I didn't know I had but that I now love to show off to them. Through the theatre, I am finding out things about myself that I had never realized before, and through performing, I feel a sense of comfort that I can show them."

A Wrinkle in Time opened at WFT on April 13th and will run until May 11th. While this is Hertz's third professional play at WFT, she will return there in the summer to perform the role of Mamillius in The Winter's Tale.

For more information about WFT and/or to get tickets for A Wrinkle in Time, please visit their website at http://www.wheelockfamilytheatre.org. Here at Hometown Weekly Newspaper, we would like to congratulate Hertz on her role and wish her good luck in her upcoming performances!

Deep learning-based classification of anti-personnel mines and sub-gram metal content in mineralized soil (DL-MMD … – Nature.com

The experimental arrangement of the MMD is a prime factor in the integrity of the dataset. The dataset was obtained in a lab environment with a PI sensitive coil made of multi-stranded wire with a coil diameter of 170 mm. It is mounted on a transparent acrylic sheet with a miniaturized Tx/Rx (also mounted) at a distance of 100 mm. The electromagnetic field (EMF) simulation of the search head in close proximity to a mine is shown in Fig. 7. The received signal is digitized, and synchronized data is obtained for both the transmitted positive and negative pulses. The dataset is then populated with this synchronized pulse data. The pulse repetition frequency, including both pulses, is 880 Hz. The number of pulses M (refer to Eq. (1)) obtained per class is 1330, representing concatenated positive and negative pulses. This is done to simplify the model, with the total number of concatenated samples being N = 244, consisting of 122 samples from each received pulse. This amounts to approximately 3 s of pulsed data per class.

Electromagnetic field simulation of the search head (a) and of the search head in proximity to a mine (b).

The samples/targets used to represent the nine classes (previously discussed) include minrl/brick (mineralized soil), sand (non-mineralized soil), APM (standard, 0.2 g) and vertical paper pins (0.2 g). Mineralization indicates the magnetic permeability (or susceptibility) of surface soils; soils exposed to high temperatures and heavy rainfall or water for extended periods often exhibit high mineralization due to residual iron components. For an in-depth exploration of magnetic susceptibility across a wide range of soil types, see reference 18. The choice of brick, a clay-based material, as a representative sample for mineralized soil is grounded in its composition: it contains iron oxides, such as magnetite or hematite, and exhibits relatively low electrical conductivity 19. These characteristics significantly enhance its detectable response when subjected to an MMD. In fact, this response is typically more robust than that of conventional mineralized soil (from which it originates) or even an APM. For simplicity and consistency, we refer to this material as "minrl" throughout this paper.

All of the targets mentioned pose their own challenges, but they are placed in close proximity to the MMD, within a distance of no more than 20 mm parallel to the surface of the coil. The targets are positioned at the center of the coil. The received signals from different target samples for positive and negative transmitted pulses can be observed in Figs. 8 and 9, respectively. The figures display a magnified section of the received signal, focusing on the initial samples, which are more strongly influenced by the secondary magnetic field than later samples. It can also be seen that the signals vary in opposite directions according to the polarity of the transmitted pulses.

Received signals of a positive transmitted pulse picked up at the sensor coil from the secondary magnetic field produced by the eddy currents induced within the targets. The x-axis shows the first few samples of each pulse (the initial part of the signal) and the y-axis shows the amplitude of the signal in volts. Signals from nine targets are shown: air, APM, pins, minrl, minrl+APM, minrl+pins, sand, sand+APM and sand+pins.

Received signals of a negative transmitted pulse picked up at the sensor coil from the secondary magnetic field produced by the eddy currents induced within the targets. The x-axis shows the first few samples of each pulse (the initial part of the signal) and the y-axis shows the amplitude of the signal in volts. Signals from nine targets are shown: air, APM, pins, minrl, minrl+APM, minrl+pins, sand, sand+APM and sand+pins.

The overall dataset comprises a total of 11,970 pulses, representing nine different classes. The dataset is sufficiently diverse, as illustrated in Fig. 10 by examining inter-class distances. For this analysis, two distances are employed: the Euclidean distance, which measures point-to-point distance, and the Bhattacharyya distance, a metric indicating dissimilarity between two probability distributions (a toy implementation of both metrics follows the figure caption below). Two cases are briefly discussed here. The first involves the Euclidean distance between air and pins, where the maximum distance is observed (Fig. 10); this is also evident in the received signals shown in Figs. 8 and 9. The second pertains to the Bhattacharyya distance between air and sand, illustrating minimal dissimilarity; the impact of this dissimilarity will become evident in the overall results. To prepare the dataset for modelling, the pulses are randomly shuffled and subsequently split into two separate sets: a training dataset containing 10,773 pulses and a validation dataset comprising 1197 pulses.

Inter-class similarity measured by Euclidean and Bhattacharyya distances.
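The paper does not spell out its distance estimator, so the following hedged sketch summarizes each class as a Gaussian with a diagonal covariance, for which the Bhattacharyya distance has a closed form; the arrays are random stand-ins for the real pulse data.

```python
# Toy versions of the two inter-class distances reported in Fig. 10.
import numpy as np

def euclidean_distance(mu_a, mu_b):
    # point-to-point distance between class means
    return np.linalg.norm(mu_a - mu_b)

def bhattacharyya_distance(mu_a, var_a, mu_b, var_b):
    # Gaussian Bhattacharyya distance with diagonal covariances
    var_avg = 0.5 * (var_a + var_b)
    term_mean = 0.125 * np.sum((mu_a - mu_b) ** 2 / var_avg)
    term_cov = 0.5 * np.sum(np.log(var_avg / np.sqrt(var_a * var_b)))
    return term_mean + term_cov

rng = np.random.default_rng(0)
air = rng.standard_normal((1330, 244))            # stand-in for the "air" class
sand = 1.05 * rng.standard_normal((1330, 244))    # stand-in for the "sand" class
print(euclidean_distance(air.mean(0), sand.mean(0)))
print(bhattacharyya_distance(air.mean(0), air.var(0) + 1e-9,
                             sand.mean(0), sand.var(0) + 1e-9))
```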

During the model training phase, input data is structured as a matrix with dimensions [10,773 × 244], and the output, following a supervised learning approach, is provided as a one-hot encoded label matrix with dimensions [10,773 × 9] (a toy training sketch follows the figure captions below). The accuracy of the trained model on the provided data is tracked across multiple epochs, including both training and validation accuracy. In this context, one epoch signifies a complete pass over the entire training dataset of size [10,773 × 244], with all training samples processed by the model. Figure 11 depicts the trend: as training repeats over multiple epochs, the model steadily improves its performance and optimizes its parameters. After 4000 epochs, the training accuracy reaches approximately 98%, while the validation accuracy hovers above 93%. This also shows that the DL-MMD model has more or less converged at 4000 epochs, achieving its optimum training performance. Likewise, the model's error loss diminishes as the epochs progress, as illustrated in Fig. 12.

Accuracy and validation accuracy of the novel DL-MMD model versus epochs. For comparison, the validation accuracies of the KNN and SVM classifiers are also shown, for k = 8 and C = 100 respectively.

Loss and validation loss of the novel DL-MMD model versus epochs.
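This excerpt does not reproduce the DL-MMD architecture itself, so the following TensorFlow/Keras sketch only mirrors the input/output shapes and training loop described above; the dense layers and synthetic data are placeholders, not the paper's network.

```python
# Minimal Keras sketch of the described setup: 244-sample pulses in, 9 one-hot
# classes out, with a 90:10 train/validation arrangement.
import numpy as np
import tensorflow as tf

# synthetic stand-ins for the real pulse matrices
X_train = np.random.randn(10773, 244).astype("float32")
Y_train = tf.keras.utils.to_categorical(np.random.randint(0, 9, 10773), 9)
X_val = np.random.randn(1197, 244).astype("float32")
Y_val = tf.keras.utils.to_categorical(np.random.randint(0, 9, 1197), 9)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(244,)),
    tf.keras.layers.Dense(128, activation="relu"),   # placeholder layer sizes
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(9, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(X_train, Y_train, epochs=10,   # the paper trains for 4000
                    validation_data=(X_val, Y_val), verbose=0)
```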

Figure 11 also shows that the presented model performs substantially better than support vector machine (SVM) and K-Nearest Neighbors (KNN) classifiers. The working principle of the SVM is to separate the classes in the training set with a surface that maximizes the margin (decision boundary) between them. It uses the Structural Risk Minimization (SRM) principle, which allows the minimization of a bound on the generalization error 20. The SVM model used in this research achieved a training accuracy of 93.6% and a validation accuracy of 86.5%, far lower than the performance achieved by the presented model. The kernel function used is the most popular one, the radial basis function (RBF), and the optimally selected value of the regularization parameter C is 100. The regularization parameter controls the trade-off between classifying the training data correctly and the smoothness of the decision boundary. Figure 13 shows the influence of the regularization parameter C on the performance of the classifier. The gamma parameter is automatically calculated as the inverse of the number of features, which ensures that each feature contributes equally to the decision boundary. Hyperparameter optimization is achieved through a manual grid search: the code iterates through a predefined list of C values [0.1, 1, 10, 100, 1000, 10000] and, for each value of C, trains an SVM classifier with an RBF kernel and evaluates its performance on the training and test sets (sketched after the figure caption below). The accuracy and C values are then plotted to visually check for the best performance. It can be seen that the generalization error increases when the value of C is greater than 100: the SVM starts to overfit the training data, resulting in a decrease in validation accuracy.

Accuracy of the SVM classifier versus the regularization parameter C.
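A sketch of the manual grid search over C described above, with an RBF kernel and gamma set to 1/n_features (scikit-learn's "auto"); synthetic data stands in for the pulse dataset, so the printed scores are not the paper's results.

```python
# Manual grid search over the regularization parameter C for an RBF-kernel SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train, y_train = rng.standard_normal((1000, 244)), rng.integers(0, 9, 1000)
X_val, y_val = rng.standard_normal((200, 244)), rng.integers(0, 9, 200)

for C in [0.1, 1, 10, 100, 1000, 10000]:
    clf = SVC(kernel="rbf", C=C, gamma="auto")   # gamma = 1 / n_features
    clf.fit(X_train, y_train)
    print(f"C={C}: train {clf.score(X_train, y_train):.3f}, "
          f"val {clf.score(X_val, y_val):.3f}")
```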

Meanwhile, a K-Nearest Neighbors (KNN) model with 8 neighbors (k) achieved a training accuracy of 92.6% and a validation accuracy of 90.7% (see Fig. 11), which is also lower than the performance achieved by the presented model. To enable comparative analysis, it is essential to showcase the performance of this non-parametric machine learning algorithm. In this context, the algorithm predicts the class of a new data point by considering the majority vote (or average) of its k nearest neighbors within the feature space 21. Figure 14 illustrates the influence of the hyperparameter k, the number of neighbors, on the performance of the algorithm (a sweep is sketched after the figure caption below). The graph demonstrates that the validation accuracy reaches a maximum of 90.7% when 8 neighbors are considered.

Accuracy of the KNN classifier versus the number of neighbors k.
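A companion sketch for the KNN baseline: sweep the number of neighbors k and report validation accuracy, reusing the synthetic arrays defined in the SVM sketch above.

```python
# Sweep k for the KNN baseline and report validation accuracy.
from sklearn.neighbors import KNeighborsClassifier

for k in range(1, 16):
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print(f"k={k}: val {knn.score(X_val, y_val):.3f}")
```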

To further analyze the DL-MMD model against the experimental data, one more graph has been plotted, shown in Fig. 15. This graph illustrates the comparative performance of the presented model using a different data split ratio (70:30), with 70% for training and 30% for validation. The graph shows slightly degraded performance compared to the 90:10 split of 90% for training and 10% for validation. However, it still shows a validation accuracy above 88% at 4000 epochs. This degradation is attributed to epistemic (model) uncertainty due to slightly less effective learning on the reduced training data; as the training data increases, this uncertainty reduces.

Accuracy and validation accuracy of the novel DL-MMD model versus epochs at two different data split ratios, 90:10 and 70:30.

The performance of the model can also be inferred from the confusion matrix shown in Fig. 16. It provides a tabular comparison of the predicted and actual class labels, giving a very important analysis of the models in terms of true positives, true negatives, false positives, and false negatives. From the application perspective of an MMD, the safety of the user is of utmost importance, so false negatives matter greatly: a mine must not be missed. The overall prediction accuracy is above 93.5%; however, for the air and sand cases it is approximately 85% and 86.5% respectively, as inferred from the confusion matrix. These two cases of relatively low prediction accuracy can be neglected, since sand is wrongly classified only as air and vice versa. These two classes (air & sand) do not trigger a detection alarm in an MMD, so their misclassification does not impact the efficiency of the DL-MMD classifier. It also highlights the fact that river sand has minimal mineralized content and is generally designated as non-mineralized soil. It is therefore difficult to separate the boundary between these two classes in the presence of noise and interference.

Confusion matrix of the proposed DL-MMD classification on 9 classes.

In addition, two further cases need to be examined: one involves mineralized soil (minrl) being wrongly classified as APM, and the other involves an APM in sand (sand+APM) being wrongly classified as minrl. The first case is a false positive: it will generate a false alarm and waste the user's time by requiring unnecessary further investigation. The second case is more important, a false negative in which an APM is detected but wrongly classified by the DL-MMD; it will be discussed in the next section. Apart from these, there are minor cases, e.g. an APM misclassified as an APM in sand (sand+APM); this has no impact, since the target of concern (the APM) remains the same but is now shown as buried in sand. The occurrence of all these misclassification cases (apart from the air/sand case and vice versa) is less than approximately 5%.

These results were obtained from a substantial dataset based on actual data acquired in two sets of 665 pulses per class, each obtained at a different time through the experimental setup explained previously and then combined. Comprehensive simulations were carried out in the TensorFlow environment to evaluate the proposed method. In addition, the algorithm was extensively tested with an increased number of layers and channels, resulting in overfitting. Furthermore, the proposed model was tested with different optimizers, such as Adagrad, Adamax, and Adam. A comparative analysis of Adam and Adamax can be seen in Fig. 17; both show equivalent performance after 2000 epochs.

Accuracy and validation accuracy of the novel DL-MMD model versus epochs using two different optimizers, Adamax and Adam.

In addition to the aforementioned analysis, the dataset was evaluated using other prevalent classification algorithms 22 that rely on ensemble learning. Upon comparison, the proposed deep learning architecture exhibited superior performance, achieving an accuracy exceeding 90%. The confusion matrices of these classification algorithms, AdaBoost and Bagged Trees, are depicted in Figs. 18, 19, and 20, with the dataset partitioned into an 80/20 ratio, resulting in accuracies of 75.4%, 80%, and 83.3%, respectively. AdaBoost was employed without PCA, with the maximum number of splits and the number of learners set to 30 and a learning rate of 0.1. For the Bagged Trees, only Model 2 underwent preprocessing with PCA at 95% variance. Both used the same number of learners as AdaBoost and a maximum of 11,969 splits.

Confusion matrix of model 1 (AdaBoost).

Confusion matrix of model 2 (Bagged Trees).

Confusion matrix of model 3 (Bagged Trees).

It is pertinent to mention that there is always redundant information within the received signal that creates background bias, especially in sensitive areas with low metal content. Information regarding the detection of APMs buried at different depths is available (in the decay-rate parameter), but it is not utilized. Therefore, for an APM buried at a depth (relative to the search head) different from the one it was trained on, there is a chance of misclassification. The information exists, but it needs to be pre-processed before feeding the signal to the model. One approach could be to use focused AI models, similar to those shown in ref. 23, that inject synthetic bias into the signal to generalize the model, in our case across different depths. Another approach could be to localize the area with different decay rates, similar to the one shown in ref. 24 for a 2D image application. One item of future work will be to utilize this information and integrate it into the DL-MMD architecture.

Enhancing cervical cancer detection and robust classification through a fusion of deep learning models | Scientific … – Nature.com

Dataset description

The dataset we used for this study is accessible through this link: https://www.cs.uoi.gr/~marina/sipakmed.html. It contains five different cell types, as detailed in ref. 24. In our research, we transformed this dataset into a two-class system with two categories: normal and abnormal. Specifically, the normal category includes superficial-intermediate cells and parabasal cells, while the abnormal category covers koilocytotic, dyskeratotic, and metaplastic cell types 25. The essential dataset characteristics are summarized in Table 2. The SIPaKMeD dataset comprises a total of 4068 images, with 3254 allocated for training (80% of the total) and 813 set aside for testing (20% of the total). The dataset consists of two distinct classes: normal images, totalling 1618, and abnormal images, totalling 2450. Figure 2 provides visual examples of images from these two categories. The existing literature extensively covers different screening methods for cervical cancer, such as the Pap smear, colposcopy, and HPV testing, emphasizing the importance of early detection. However, a significant gap exists in automated screening systems using Pap smear images. Traditional methods rely on expert interpretation, but integrating deep learning (DL) and machine learning (ML) offers potential for intelligent automation. Despite this potential, few studies focus on developing and evaluating such systems specifically for cervical cancer prediction using Pap smear images. This research addresses that gap by proposing a methodology that utilizes pre-trained deep neural network models for feature extraction and applies various ML algorithms for prediction. The study aims to advance automated screening systems for cervical cancer and thereby improve early detection and patient outcomes.

Proposed model for cervical cancer classification.

The schematic representation of our proposed system can be observed in Fig. 2. To facilitate the cervical cancer classification task, we employ the SIPaKMeD dataset, which comprises images of Pap smears. The dataset is categorized into two groups, abnormal and normal, with a distribution of 60% for training and 40% for testing. We initiate feature extraction from well-established pretrained CNN architectures such as AlexNet, ResNet-101, ResNet-152, and InceptionV3, gathering valuable information from the final-layer activation values. For the task of classifying images into normal and abnormal categories, we leverage a variety of machine learning techniques, including simple logistic regression, decision trees, random forests, naive Bayes, and principal component analysis. Our approach is designed as a hybrid strategy, merging DL and ML methodologies: DL enables the model to capture the intricate, complex features inherent in the data, while ML provides the flexibility to handle diverse scenarios. By harnessing the last layer of pretrained models for feature extraction, we enable different machine learning algorithms to classify data based on the extracted attributes (a sketch follows below). This combination of DL and ML enhances our system's ability to categorize cervical cancer cases effectively.
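A hedged sketch of this hybrid pipeline: a pretrained CNN supplies last-layer activations as features, and a classical ML model makes the final normal/abnormal call. The image tensors below are random stand-ins for the preprocessed Pap-smear images, and the paper's exact preprocessing and classifier settings may differ.

```python
# Pretrained-CNN feature extraction feeding a classical classifier.
import torch
import torchvision.models as models
from sklearn.linear_model import LogisticRegression

backbone = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # expose the 2048-d penultimate features
backbone.eval()

@torch.no_grad()
def extract_features(batch):        # batch: (N, 3, 224, 224), normalized
    return backbone(batch).numpy()

# random stand-ins for preprocessed pap-smear images and labels
images_train = torch.randn(32, 3, 224, 224)
labels_train = torch.randint(0, 2, (32,))
images_test = torch.randn(8, 3, 224, 224)

clf = LogisticRegression(max_iter=1000)
clf.fit(extract_features(images_train), labels_train.numpy())
pred = clf.predict(extract_features(images_test))   # 0 = normal, 1 = abnormal
```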

A pre-trained model has undergone training on a larger dataset, acquiring specific weights and biases that encapsulate that dataset's distinctive characteristics. Such models have been effectively employed for making predictions on new data. The transferability of learned features to other datasets is possible because certain fundamental abstract properties remain consistent across various types of images. By utilizing pre-trained models, significant time and effort savings are achieved, as a substantial portion of the feature extraction process has already been completed. Noteworthy examples of pre-trained models include ResNet-152, ResNet-101, InceptionV3, and AlexNet, which are summarized in Table 3 for reference.

The image classification framework based on ResNet-101 consists of two main parts: feature extraction and feature classification. Figure 3 shows how the feature extractor is built, comprising five main convolution modules with a total of one hundred convolution layers, an average pooling layer, and a fully connected layer 26. Once the features are extracted, they are used to train a classifier with a Softmax structure. Table 4 lists the convolution layers and their configurations in the ResNet-101 backbone. Using shortcut connections to increase data dimensions, the ResNet-101 model significantly improves performance by increasing convolutional depth. These shortcut connections also address the degradation problem caused by network depth by enabling identity mapping. For most binary classification tasks, the loss is the binary (logistic) cross-entropy function, as shown in Eq. (1).

$$k^{b}_{(h_l,\,q_l)} = -f_l \log(q_l) - (1 - f_l)\log(1 - q_l)$$

(1)

where $f_l$ and $q_l$ denote the ground-truth and predicted values for the $l$th training example, respectively. The loss value, $k^{b}_{(h_l,\,q_l)}$, is then backpropagated through the CNN model. At the same time, the CNN model parameters (weights and biases) are gradually optimised during each epoch. This process continues until the loss is minimised and the CNN model converges to a solution.
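A quick numeric check of Eq. (1), assuming $f_l$ is the binary label and $q_l$ the predicted probability:

```python
# Binary cross-entropy for a single training example, per Eq. (1).
import numpy as np

def bce(f, q, eps=1e-12):
    q = np.clip(q, eps, 1.0 - eps)    # guard the logarithms
    return -f * np.log(q) - (1.0 - f) * np.log(1.0 - q)

print(bce(1.0, 0.9))   # confident and correct -> small loss (~0.105)
print(bce(1.0, 0.1))   # confident and wrong  -> large loss (~2.303)
```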

The ResNet architecture is efficient, enabling the training of very deep neural networks (DNNs) and enhancing accuracy. It addresses the accuracy degradation associated with increasing network depth: when depth is increased, accuracy often drops. Deeper networks can nevertheless improve accuracy by avoiding the saturation of shallow networks, where errors remain minimal 27. The key idea is that information from one layer should flow easily to the next with the help of identity mapping. ResNet tackles the degradation problem, along with the vanishing gradient issue, using residual blocks. These blocks handle the residual computation while considering the input and output of the block. Figure 4 illustrates the architecture of ResNet-152, and Table 5 its configuration.

This advanced model was trained with over 20 million distinct parameters. Its architecture blends symmetrical and asymmetrical building blocks, each with its own configuration of convolutional, average pooling, and max pooling layers, concatenation operations, and fully connected layers. The design also incorporates batch normalization in the activation layers, a widely adopted technique that helps stabilize and accelerate training, making the model more robust and efficient 28. For the critical task of classification, the model employs the Softmax method, a popular and well-established approach in machine learning that produces probability distributions over multiple classes, enabling informed and precise predictions. Figure 5 provides a diagrammatic representation of the Inception-V3 model, offering insights into its underlying architecture and the components that make it a powerhouse in machine learning and artificial intelligence.

InceptionV3 architecture.

The field of machine learning, particularly image processing, has been profoundly influenced by the advent of AlexNet. As noted in ref. 29, this influential model is a preconfigured convolutional neural network (CNN) with a total of eight distinct layers. Its remarkable performance in the 2012 ImageNet Large Scale Visual Recognition Challenge (ILSVRC-2012) marked a watershed moment, as it clinched victory with a substantial lead over its competitors. The architecture of AlexNet bears some resemblance to Yann LeCun's pioneering LeNet, highlighting its historical lineage and the evolutionary progress of convolutional neural networks.

Figure 6 provides a visual representation of the overall design of the AlexNet system. Within AlexNet, input data traverse an intricate sequence comprising five convolution layers and three max-pooling layers. These layers play a pivotal role in feature extraction and hierarchical representation, vital aspects of image analysis and understanding. The network culminates in the SoftMax activation function in the final layer, enabling it to produce probabilistic class predictions. Along the way, the Rectified Linear Unit (ReLU) activation function is employed across all of the network's convolution layers, providing a nonlinear transformation that enhances the network's capacity to learn and extract features effectively. This combination of architectural elements and activation functions has helped solidify AlexNet's position as a groundbreaking model in image processing and machine learning.

Logistic regression serves as a powerful method for modelling the probability of a discrete outcome based on input variables, making the choice of input variables a pivotal aspect of this modelling process. The most common application of logistic regression involves modelling a binary outcome, which pertains to scenarios where the result can exclusively assume one of two possible values, such as true or false, yes or no, and the like. In situations where there are more than two discrete potential outcomes, multinomial logistic regression proves invaluable in capturing the complexity of the scenario. Logistic regression finds its primary utility in classification problems 30. It becomes particularly valuable when the task at hand involves determining which category a new sample best aligns with. This is especially pertinent when dealing with substantial datasets, where the need to classify or categorize data efficiently and accurately is paramount. One noteworthy domain where logistic regression finds widespread application is cybersecurity, where classification challenges are ubiquitous. A pertinent example is the detection of cyberattacks: logistic regression plays a crucial role in identifying and categorizing potential threats, contributing significantly to the security of digital systems and networks.

In the realm of supervised learning algorithms, decision trees emerge as a highly versatile and powerful tool for both classification and regression tasks. They operate by constructing a tree-like structure, wherein internal nodes serve as decision points, branches represent the outcomes of attribute tests, and terminal nodes store class labels. The construction of a decision tree is an iterative process, continually dividing the training data into subsets based on attribute values until certain stopping conditions, such as reaching the maximum tree depth or the minimum sample size required for further division, are met. To guide this division process, the decision tree algorithm relies on metrics like entropy or Gini impurity, which gauge the level of impurity or unpredictability within the data subsets 31. These metrics inform the algorithm's choice of the most suitable attribute for data splitting during training, aiming to maximize information gain or minimize impurity. In essence, the central nodes of a decision tree represent the features, the branches encapsulate the decision rules, and the leaf nodes encapsulate the algorithm's outcomes. This design accommodates both classification and regression challenges, making decision trees a flexible tool in supervised machine learning. One notable advantage of decision trees is their effectiveness in handling a wide range of problems. Moreover, their ability to be leveraged in ensembles, such as the Random Forest algorithm, enables simultaneous training on multiple subsets of data, elevating their efficacy and robustness in real-world applications.

A Random Forest is a powerful machine learning tool that handles both regression and classification tasks effectively. It works by combining the predictions of multiple decision trees to solve complex problems. The Random Forest algorithm builds a forest of decision trees using a technique called bagging, which improves the precision and reliability of machine learning ensembles 32. The algorithm then makes predictions by averaging the results from these trees to determine the final outcome. What makes the Random Forest special is its scalability: unlike a single decision tree, it can adapt to complex data, and its accuracy improves as more trees are added to the forest. The Random Forest also helps prevent overfitting, making it a valuable tool for real-world applications with noisy and complex datasets. Moreover, it reduces the need for extensive fine-tuning, making it an appealing choice for practitioners seeking effective and dependable machine learning models.
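An illustrative scikit-learn sketch of the bagging behaviour described above, on synthetic data rather than the Pap-smear features:

```python
# Random Forest via bagging: each tree is fit on a bootstrap sample, and the
# forest aggregates their votes.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_tr, y_tr)
print("accuracy:", forest.score(X_te, y_te))
```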

Bayes' theorem forms the fundamental principle underlying the Naive Bayes algorithm. In this method, a key assumption is that there is no interdependence among the feature pairs, resulting in two pivotal presumptions: feature independence and attribute equality. Naive Bayes classifiers are versatile, existing in three primary variants: Gaussian Naive Bayes, Bernoulli Naive Bayes, and Multinomial Naive Bayes 33. The choice of variant depends on the nature of the data being analyzed. For binary data, Bernoulli Naive Bayes is employed, count data finds its match in Multinomial Naive Bayes, and continuous data is aptly handled by Gaussian Naive Bayes. Equation (2) states Bayes' theorem, underpinning the mathematical foundations of this approach.

$$Z(b \mid a) = \frac{Z(a \mid b)\, Z(b)}{Z(a)}$$

(2)

Principal Component Analysis (PCA) serves as a powerful technique designed to mitigate the impact of correlations among variables through an orthogonal transformation. PCA finds widespread use in both exploratory data analysis and machine learning for predictive modelling. In addition, PCA stands out as an unsupervised learning algorithm that offers a valuable approach for delving into the intricate relationships between variables. This method, also referred to as generic factor analysis, enables the discovery of the optimal line of fit through regression analysis 34. What sets PCA apart is its ability to reduce the dimensionality of a dataset without prior knowledge of the target variables while preserving the most critical patterns and interdependencies among the variables. By doing so, PCA simplifies complex data, making it more amenable to various tasks, such as regression and classification. The result is a more streamlined subset of variables that encapsulates the essential essence of the data.
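A sketch of PCA feeding a downstream classifier, retaining the components that explain 95% of the variance and reusing the synthetic data from the Random Forest sketch above:

```python
# PCA dimensionality reduction ahead of a classifier.
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression

pipe = make_pipeline(PCA(n_components=0.95),   # keep 95% of the variance
                     LogisticRegression(max_iter=1000))
pipe.fit(X_tr, y_tr)
print("accuracy:", pipe.score(X_te, y_te))
print("components kept:", pipe.named_steps["pca"].n_components_)
```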

Predicting equilibrium distributions for molecular systems with deep learning – Nature.com

Deep neural networks have been demonstrated to predict accurate molecular structures from descriptors $\mathcal{D}$ for many molecular systems 1,5,6,9,10,11,12. Here, DiG aims to take one step further, predicting not only the most probable structure but also diverse structures with their probabilities under the equilibrium distribution. To tackle this challenge, inspired by the heating-annealing paradigm, we break this difficult problem down into a series of simpler problems. The heating-annealing paradigm can be viewed as a pair of reciprocal stochastic processes on the structure space that simulate the transformation between the system-specific equilibrium distribution and a system-independent simple distribution $p_\text{simple}$. Following this idea, we use an explicit diffusion process (forward process; Fig. 1b, orange arrows) that gradually transforms the target distribution of the molecule, $q_{\mathcal{D},0}$, as the initial distribution, towards $p_\text{simple}$ over a time period $\tau$. The corresponding reverse diffusion process then transforms $p_\text{simple}$ back to the target distribution $q_{\mathcal{D},0}$. This is the generation process of DiG (Fig. 1b, blue arrows). The reverse process is performed by incorporating updates predicted by deep neural networks from the given $\mathcal{D}$, which are trained to match the forward process. The descriptor $\mathcal{D}$ is processed into node representations $\mathcal{V}$ describing the features of each system-specific individual element and a pair representation $\mathcal{P}$ describing inter-node features. The $\{\mathcal{V}, \mathcal{P}\}$ representation is the direct input from the descriptor part to the Graphormer model 10, together with the geometric structure input $\mathbf{R}$, to produce a physically finer structure (Supplementary Information sections B.1 and B.3). Specifically, we choose $p_\text{simple} := \mathcal{N}(\mathbf{0}, \mathbf{I})$, the standard Gaussian distribution in the state space, and the forward diffusion process as the Langevin diffusion process targeting this $p_\text{simple}$ (an Ornstein-Uhlenbeck process) 40,41,42. A time dilation scheme $\beta_t$ (ref. 43) is introduced for approximate convergence to $p_\text{simple}$ after a finite time $\tau$. The result is written as the following stochastic differential equation (SDE):

$$\mathrm{d}\mathbf{R}_t = -\frac{\beta_t}{2}\,\mathbf{R}_t\,\mathrm{d}t + \sqrt{\beta_t}\,\mathrm{d}\mathbf{B}_t$$

(1)

where $\mathbf{B}_t$ is the standard Brownian motion (a.k.a. the Wiener process). Choosing this forward process leads to a $p_\text{simple}$ that is more concentrated than a heated distribution, so it is easier to draw high-density samples, and the form of the process enables efficient training and sampling.

Following stochastic process theory (see, for example, ref. 44), the reverse process is also a stochastic process, written as the following SDE:

$$\mathrm{d}\mathbf{R}_{\bar{t}} = \frac{\beta_{\bar{t}}}{2}\,\mathbf{R}_{\bar{t}}\,\mathrm{d}\bar{t} + \beta_{\bar{t}}\,\nabla \log q_{\mathcal{D},\bar{t}}(\mathbf{R}_{\bar{t}})\,\mathrm{d}\bar{t} + \sqrt{\beta_{\bar{t}}}\,\mathrm{d}\mathbf{B}_{\bar{t}}$$

(2)

where $\bar{t} := \tau - t$ is the reversed time, $q_{\mathcal{D},\bar{t}} := q_{\mathcal{D},t=\tau-\bar{t}}$ is the forward-process distribution at the corresponding time, and $\mathbf{B}_{\bar{t}}$ is the Brownian motion in reversed time. Note that the forward and corresponding reverse processes, equations (1) and (2), are inspired by, but are not exactly, the heating and annealing processes. In particular, there is no concept of temperature in the two processes. The temperature $T$ mentioned in the PIDP loss below is the temperature of the real target system and is not related to the diffusion processes.

From equation (2), the only obstacle impeding the simulation of the reverse process, which recovers $q_{\mathcal{D},0}$ from $p_\text{simple}$, is the unknown score $\nabla \log q_{\mathcal{D},\bar{t}}(\mathbf{R}_{\bar{t}})$. Deep neural networks are therefore used to construct a score model $\mathbf{s}^{\theta}_{\mathcal{D},t}(\mathbf{R})$, which is trained to predict the true score function $\nabla \log q_{\mathcal{D},t}(\mathbf{R})$ of each instantaneous distribution $q_{\mathcal{D},t}$ from the forward process. This formulation is called a diffusion-based generative model and has been demonstrated to generate high-quality samples of images and other content 27,28,45,46,47. As our score model is defined in molecular conformational space, we use our previously developed Graphormer model 10 as the neural network backbone of DiG, to leverage its capabilities in modelling molecular structures and to generalize to a range of molecular systems. Note that the score model aims to approximate a gradient, which is a set of vectors. As these are equivariant with respect to the input coordinates, we designed an equivariant vector output head for the Graphormer model (Supplementary Information section B.4).

With the $\mathbf{s}^{\theta}_{\mathcal{D},t}(\mathbf{R})$ model, a sample $\mathbf{R}_0$ from the equilibrium distribution of a system $\mathcal{D}$ can be drawn by simulating the reverse process in equation (2) over $N+1$ steps that uniformly discretize $[0, \tau]$ with step size $h = \tau/N$ (Fig. 1b, blue arrows), thus

$$\mathbf{R}_N \sim p_\text{simple}, \qquad \mathbf{R}_{i-1} = \frac{1}{\sqrt{1-\beta_i}}\left(\mathbf{R}_i + \beta_i\,\mathbf{s}^{\theta}_{\mathcal{D},i}(\mathbf{R}_i)\right) + \mathcal{N}(\mathbf{0}, \beta_i \mathbf{I}), \quad i = N, \dots, 1,$$

where the discrete step index $i$ corresponds to time $t = ih$, and $\beta_i := h\,\beta_{t=ih}$. Supplementary Information section A.1 provides the derivation. Note that the reverse process does not need to be ergodic. DiG models the equilibrium distribution using the instantaneous distribution at the instant $t = 0$ (or $\bar{t} = \tau$) of the reverse process, not a time average. As the $\mathbf{R}_N$ samples can be drawn independently, DiG can generate statistically independent $\mathbf{R}_0$ samples from the equilibrium distribution. In contrast to MD or MCMC simulations, the generation of DiG samples does not suffer from rare events that link different states and can thus be far more computationally efficient.
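A toy numpy sketch of this discretized reverse process follows. The analytic score of a standard Gaussian stands in for the trained Graphormer-based score model, so the loop is runnable but chemically meaningless; the noise schedule is a placeholder.

```python
# Discretized reverse diffusion: R_{i-1} = (R_i + beta_i * s(R_i)) / sqrt(1 - beta_i) + noise.
import numpy as np

N, dim = 1000, 3
beta = np.linspace(1e-4, 0.02, N + 1)        # placeholder noise schedule

def score_model(R, i):                        # stand-in for s_theta_{D,i}(R)
    return -R                                 # exact score of N(0, I)

rng = np.random.default_rng(0)
R = rng.standard_normal(dim)                  # R_N ~ p_simple = N(0, I)
for i in range(N, 0, -1):
    mean = (R + beta[i] * score_model(R, i)) / np.sqrt(1.0 - beta[i])
    R = mean + np.sqrt(beta[i]) * rng.standard_normal(dim)
# R now plays the role of an independent R_0 sample
```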

DiG can be trained using conformation data sampled over a range of molecular systems. However, collecting sufficient experimental or simulation data to characterize the equilibrium distribution for various systems is extremely costly. To address this data-scarcity issue, we propose a pre-training algorithm, called PIDP, which effectively optimizes DiG on an initial set of candidate structures that need not be sampled from the equilibrium distribution. The supervision comes from the energy function $E_{\mathcal{D}}$ of each system $\mathcal{D}$, which defines the equilibrium distribution $q_{\mathcal{D},0}(\mathbf{R}) \propto \exp\!\left(-\frac{E_{\mathcal{D}}(\mathbf{R})}{k_\mathrm{B} T}\right)$ at the target temperature $T$.

The key idea is that the true score function $\nabla \log q_{\mathcal{D},t}$ of the forward process in equation (1) obeys a partial differential equation, known as the Fokker-Planck equation (see, for example, ref. 48). We then pre-train the score model $\mathbf{s}^{\theta}_{\mathcal{D},t}$ by minimizing the following loss function, which enforces the equation to hold:

$$\sum_{i=1}^{N}\frac{1}{M}\sum_{m=1}^{M}\left\Vert \frac{\beta_i}{2}\left(\nabla\!\left(\mathbf{R}^{(m)}_{\mathcal{D},i}\cdot \mathbf{s}^{\theta}_{\mathcal{D},i}(\mathbf{R}^{(m)}_{\mathcal{D},i})\right) + \nabla\left\Vert \mathbf{s}^{\theta}_{\mathcal{D},i}(\mathbf{R}^{(m)}_{\mathcal{D},i})\right\Vert^2 + \nabla\!\left(\nabla\cdot\mathbf{s}^{\theta}_{\mathcal{D},i}(\mathbf{R}^{(m)}_{\mathcal{D},i})\right)\right) - \frac{\partial}{\partial t}\mathbf{s}^{\theta}_{\mathcal{D},i}\!\left(\mathbf{R}^{(m)}_{\mathcal{D},i}\right)\right\Vert^2 + \frac{\lambda_1}{M}\sum_{m=1}^{M}\left\Vert \frac{1}{k_\mathrm{B} T}\nabla E_{\mathcal{D}}\!\left(\mathbf{R}^{(m)}_{\mathcal{D},1}\right) + \mathbf{s}^{\theta}_{\mathcal{D},1}\!\left(\mathbf{R}^{(m)}_{\mathcal{D},1}\right)\right\Vert^2$$

Here, the second term, weighted by $\lambda_1$, matches the score model at the final generation step to the score from the energy function, and the first term implicitly propagates the energy-function supervision to intermediate time steps (Fig. 1b, upper row). The structures $\{\mathbf{R}^{(m)}_{\mathcal{D},i}\}_{m=1}^{M}$ are points on a grid spanning the structure space. Since these structures are only used to evaluate the loss function on discretized points, they do not have to obey the equilibrium distribution (as is required of structures in the training dataset), so the cost of preparing them can be much lower. As the structure spaces of molecular systems are often very high dimensional (for example, thousands of dimensions for proteins), a regular grid would have intractably many points. Fortunately, the space of actual interest is only a low-dimensional manifold of physically reasonable structures (structures with low energy) relevant to the problem. This allows us to effectively train the model only on these relevant structures as $\mathbf{R}_0$ samples; $\mathbf{R}_i$ samples are produced by passing $\mathbf{R}_0$ samples through the forward process. See Supplementary Information section C.1 for an example of acquiring relevant structures for protein systems.

We also leverage stochastic estimators, including Hutchinson's estimator 49,50, to reduce the complexity of calculating high-order derivatives of high-dimensional vector-valued functions. Note that, for each step $i$, the corresponding model $\mathbf{s}^{\theta}_{\mathcal{D},i}$ receives a training loss independent of the other steps and can be directly back-propagated. In this way, the supervision on each step improves the optimization efficiency.

In addition to using the energy function for information on the probability distribution of the molecular system, DiG can also be trained with molecular structure samples obtained from experiments, MD or other simulation methods. See Supplementary Information section C for data-collection details. Even when the simulation data are limited, they still provide information about the regions of interest and about the local shape of the distribution in those regions; hence, they are helpful for improving a pre-trained DiG. To train DiG on data, the score model $\mathbf{s}^{\theta}_{\mathcal{D},i}(\mathbf{R}_i)$ is matched to the corresponding score function $\nabla \log q_{\mathcal{D},i}$ demonstrated by the data samples. This can be done by minimizing $\mathbb{E}_{q_{\mathcal{D},i}(\mathbf{R}_i)}\left\Vert \mathbf{s}^{\theta}_{\mathcal{D},i}(\mathbf{R}_i) - \nabla \log q_{\mathcal{D},i}(\mathbf{R}_i)\right\Vert^2$ for each diffusion time step $i$. Although a precise calculation of $\nabla \log q_{\mathcal{D},i}$ is impractical, the loss function can be equivalently reformulated into a denoising score-matching form 51,52

$$\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}_{q_{\mathcal{D},0}(\mathbf{R}_0)}\,\mathbb{E}_{p(\boldsymbol{\epsilon}_i)}\left\Vert \sigma_i\,\mathbf{s}^{\theta}_{\mathcal{D},i}(\alpha_i \mathbf{R}_0 + \sigma_i \boldsymbol{\epsilon}_i) + \boldsymbol{\epsilon}_i\right\Vert^2$$

where $\alpha_i := \prod_{j=1}^{i}\sqrt{1-\beta_j}$, $\sigma_i := \sqrt{1-\alpha_i^2}$ and $p(\boldsymbol{\epsilon}_i)$ is the standard Gaussian distribution. The expectation under $q_{\mathcal{D},0}$ can be estimated using the simulation dataset.
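A toy sketch of this denoising score-matching objective, in plain numpy for readability; a real implementation would compute the loss on minibatches inside an autograd framework and backpropagate through the score model.

```python
# Denoising score matching: || sigma_i * s(alpha_i R0 + sigma_i eps) + eps ||^2.
import numpy as np

rng = np.random.default_rng(0)
beta = np.linspace(1e-4, 0.02, 1000)          # beta_1 .. beta_N (placeholder)
alpha = np.cumprod(np.sqrt(1.0 - beta))       # alpha_i
sigma = np.sqrt(1.0 - alpha ** 2)             # sigma_i

def dsm_loss(score_model, R0_batch, i):
    eps = rng.standard_normal(R0_batch.shape)
    R_i = alpha[i] * R0_batch + sigma[i] * eps    # sample the forward process
    residual = sigma[i] * score_model(R_i, i) + eps
    return np.mean(np.sum(residual ** 2, axis=-1))

# toy score model and batch of "structures"
print(dsm_loss(lambda R, i: -R, rng.standard_normal((64, 3)), i=500))
```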

We remark that this score-predicting formulation is equivalent (Supplementary Information section A.1.2) to the noise-predicting formulation 28 in the diffusion model literature. Note that this loss allows direct estimation and back-propagation for each $i$ at constant (with respect to $i$) cost, recovering the efficient step-specific supervision again (Fig. 1b, bottom).

The computation of many thermodynamic properties of a molecular system (for example, free energy or entropy) also requires the density function of the equilibrium distribution, which is another aspect of the distribution beyond a sampling method. DiG allows for this by tracking the distribution change along the diffusion process 45:

$$\log p^{\theta}_{\mathcal{D},0}(\mathbf{R}_0) = \log p_\text{simple}\!\left(\mathbf{R}^{\theta}_{\mathcal{D},\tau}(\mathbf{R}_0)\right) - \int_0^{\tau}\frac{\beta_t}{2}\,\nabla\cdot\mathbf{s}^{\theta}_{\mathcal{D},t}\!\left(\mathbf{R}^{\theta}_{\mathcal{D},t}(\mathbf{R}_0)\right)\mathrm{d}t - \frac{D}{2}\int_0^{\tau}\beta_t\,\mathrm{d}t$$

where $D$ is the dimension of the state space and $\mathbf{R}^{\theta}_{\mathcal{D},t}(\mathbf{R}_0)$ is the solution to the ordinary differential equation (ODE)

$$\mathrm{d}\mathbf{R}_t = -\frac{\beta_t}{2}\left(\mathbf{R}_t + \mathbf{s}^{\theta}_{\mathcal{D},t}(\mathbf{R}_t)\right)\mathrm{d}t$$

(3)

with initial condition $\mathbf{R}_0$, which can be solved using standard ODE solvers or more efficient specialized solvers (Supplementary Information section A.6).
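A toy Euler-integration sketch of the log-density formula above. The score here has a known divergence; for a neural score model the divergence term would instead be estimated, for example with Hutchinson's estimator.

```python
# log p(R0) via the probability-flow ODE: integrate the divergence and volume
# terms while evolving R, then add log p_simple at the endpoint.
import numpy as np

def log_density(R0, beta, score, div_score, tau=1.0, n_steps=1000):
    dt = tau / n_steps
    D = R0.size
    R, log_p = R0.astype(float).copy(), 0.0
    for k in range(n_steps):
        b = beta(k * dt)
        log_p -= 0.5 * b * div_score(R) * dt   # divergence term
        log_p -= 0.5 * D * b * dt              # (D/2) * integral of beta
        R += -0.5 * b * (R + score(R)) * dt    # ODE step, eq. (3)
    log_p += -0.5 * (R @ R) - 0.5 * D * np.log(2 * np.pi)  # log p_simple(R_tau)
    return log_p

# toy score with analytic divergence: s(R) = -R, div s = -D
print(log_density(np.zeros(3), beta=lambda t: 0.02,
                  score=lambda R: -R, div_score=lambda R: -R.size))
```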

There is a growing demand for the design of materials and molecules that possess desired properties, such as intrinsic electronic band gaps, elastic moduli and ionic conductivities, without going through a forward searching process. DiG enables such property-guided structure generation by directly predicting the conditional structural distribution given a value $c$ of a microscopic property.

To achieve this goal, regarding the data-generating process in equation (2), we only need to adapt the score function from $\nabla \log q_{\mathcal{D},t}(\mathbf{R})$ to $\nabla_{\mathbf{R}} \log q_{\mathcal{D},t}(\mathbf{R} \mid c)$. Using Bayes' rule, the latter can be reformulated as $\nabla_{\mathbf{R}} \log q_{\mathcal{D},t}(\mathbf{R} \mid c) = \nabla \log q_{\mathcal{D},t}(\mathbf{R}) + \nabla_{\mathbf{R}} \log q_{\mathcal{D}}(c \mid \mathbf{R})$, where the first term can be approximated by the learned (unconditioned) score model; that is, the new score model is

$$\mathbf{s}^{\theta}_{\mathcal{D},i}(\mathbf{R}_i \mid c) = \mathbf{s}^{\theta}_{\mathcal{D},i}(\mathbf{R}_i) + \nabla_{\mathbf{R}_i} \log q_{\mathcal{D}}(c \mid \mathbf{R}_i)$$

Hence, only a $q_{\mathcal{D}}(c \mid \mathbf{R})$ model is additionally needed 45,46, which is a property predictor or classifier that is much easier to train than a generative model.
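A toy sketch of the guided score above: the unconditional score plus the gradient of a property predictor's log-likelihood. A Gaussian observation model stands in for the predictor, and the gradient is taken by finite differences purely for simplicity.

```python
# Guided score: s(R | c) = s(R) + grad_R log q(c | R).
import numpy as np

def guided_score(R, i, c, score_model, predict_c, sigma_c=0.1, h=1e-5):
    def log_q_c(Rx):                       # log q(c | R), up to a constant
        return -0.5 * ((c - predict_c(Rx)) / sigma_c) ** 2
    grad = np.zeros_like(R)
    for d in range(R.size):                # central finite differences
        Rp, Rm = R.copy(), R.copy()
        Rp[d] += h
        Rm[d] -= h
        grad[d] = (log_q_c(Rp) - log_q_c(Rm)) / (2 * h)
    return score_model(R, i) + grad

s = guided_score(np.ones(3), i=0, c=2.0,
                 score_model=lambda R, i: -R, predict_c=lambda R: R.sum())
print(s)
```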

In a normal workflow for ML inverse design, a dataset must first be generated to match the conditional distribution, and an ML model is then trained on this dataset to predict structure distributions. The ability to generate structures from the conditional distribution without requiring a conditional dataset places DiG at an advantage over normal workflows in terms of both efficiency and computational cost.

Given two states, DiG can approximate a reaction path that corresponds to reaction coordinates or collective variables, and find intermediate states along the path. This exploits the fact that the distribution-transformation process described in equation (1) is equivalent to the process in equation (3) if $\mathbf{s}^{\theta}_{\mathcal{D},i}$ is well learned; the latter process is deterministic and invertible, hence establishing a correspondence between the structure and latent spaces. We can therefore uniquely map the two given states in the structure space to the latent space, approximate the path in the latent space by linear interpolation, and then map the path back to the structure space. Since the distribution in the latent space is Gaussian, which has a convex contour, the linearly interpolated path goes through high-probability (low-energy) regions, so it provides an intuitive guess of the real reaction path (sketched below).
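A toy sketch of this latent-space interpolation: push both end states through the deterministic ODE of eq. (3) to the latent space, interpolate linearly, and map each interpolant back. The constant noise schedule and the arbitrary toy score keep the example self-contained; neither reflects the trained model.

```python
# Latent interpolation for an approximate reaction path.
import numpy as np

def ode_map(R, beta, score, n_steps=1000, reverse=False):
    # Euler integration of eq. (3); with time-independent drift, the reverse
    # map is obtained by negating the drift.
    dt = 1.0 / n_steps
    R = R.astype(float).copy()
    for _ in range(n_steps):
        drift = -0.5 * beta * (R + score(R))
        R += (-drift if reverse else drift) * dt
    return R

score = lambda R: -2.0 * R                           # arbitrary toy score model
A, B = np.array([1.0, 0.0]), np.array([0.0, 1.0])    # two given structures
zA, zB = ode_map(A, 0.02, score), ode_map(B, 0.02, score)   # to latent space
path = [ode_map((1 - w) * zA + w * zB, 0.02, score, reverse=True)
        for w in np.linspace(0.0, 1.0, 11)]          # intermediate states
```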

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Cedars-Sinai research shows deep learning model could improve AFib detection – Healthcare IT News

A new artificial intelligence approach developed by investigators in Cedars-Sinai's Los Angeles-based Smidt Heart Institute has been shown to detect abnormal heart rhythms associated with atrial fibrillation that might otherwise be unnoticed by physicians.

WHY IT MATTERS Researchers at Smidt Heart Institute say the findings point to the potential for artificial intelligence to be used more widely in cardiac care.

In a recent study, published in npj Digital Medicine, Cedars-Sinai clinicians show how the deep learning model was developed to analyze images from echocardiogram imaging, in which sound waves show the heart's rhythm.

Researchers trained a program to study more than 100,000 echocardiogram videos from patients with atrial fibrillation, they explain. The model distinguished between echocardiograms showing a heart in sinus rhythm (normal heartbeats) and those showing a heart in an irregular heart rhythm.

The program was able to predict which patients in sinus rhythm had experienced or would develop atrial fibrillation within 90 days, they said, noting that the AI model evaluating the images performed better than estimating risk based on known risk factors.

"We were able to show that a deep learning algorithm we developed could be applied to echocardiograms to identify patients with a hidden abnormal heart rhythm disorder called atrial fibrillation," explained Dr. Neal Yuan, a staff scientist with the Smidt Heart Institute.

"Atrial fibrillation can come and go," he added, "so it might not be present at a doctor's appointment. This AI algorithm identifies patients who might have atrial fibrillation even when it is not present during their echocardiogram study."

THE LARGER TREND

The Smidt Heart Institute is the biggest cardiothoracic transplant center in California and the third-largest in the United States.

An estimated 12.1 million people in the United States will have atrial fibrillation in 2030, according to the CDC. During AFib, the heart's upper chambers sometimes beat in sync with the lower chambers and sometimes they do not, making the arrhythmia often difficult for clinicians to detect. In some patients, the condition causes no symptoms at all.

Researchers say a machine learning model trained to analyze echo imaging could help clinicians detect early and subtle changes in the hearts of patients with undiagnosed arrhythmias.

Indeed, AI has long shown big promise for early detection of AFib, as evidenced by similar studies at health systems such as Geisinger and Mayo Clinic.

ON THE RECORD

"We're encouraged that this technology might pick up a dangerous condition that the human eye would not while looking at echocardiograms," said Dr. David Ouyang, a cardiologist and AI researcher in the Smidt Heart Institute. "It might be used for patients at risk for atrial fibrillation or who are experiencing symptoms associated with the condition."

"The fact that this program predicted which patients had active or hidden atrial fibrillation could have immense clinical applications," added Dr. Christine M. Albert, chair of the Department of Cardiology at the Smidt Heart Institute. "Being able to identify patients with hidden atrial fibrillation could allow us to treat them before they experience a serious cardiovascular event."



Justice Alito Warns of Threats to Freedom of Speech and Religion – The New York Times

Justice Samuel A. Alito Jr. warned on Saturday that freedom of speech was under threat at universities and that freedom of religion was in peril in society at large.

"Troubled waters are slamming against some of our most fundamental principles," he said.

He made his remarks at a commencement ceremony at the Franciscan University of Steubenville in Ohio, a Catholic institution.

"Support for freedom of speech is declining dangerously, especially where it should find deepest acceptance," he said.

A university, he said, should be a place for reasoned debate. But he added that today, very few colleges live up to that ideal.

The same is true, he said, for tolerance of religious views in society generally.

"Freedom of religion is also imperiled," he said. "When you venture out into the world, you may well find yourself in a job or a community or a social setting when you will be pressured to endorse ideas you don't believe or to abandon core beliefs. It will be up to you to stand firm."

In other settings, Justice Alito has given a specific example, complaining that people opposed to same-sex marriage on religious grounds are sometimes treated as bigots.


Original post:

Justice Alito Warns of Threats to Freedom of Speech and Religion - The New York Times

Search warrant executed on home in East Freedom, investigation ongoing – WTAJ

BLAIR COUNTY, Pa. (WTAJ) – Multiple police officials were seen at a house in East Freedom on Friday.

Heavy police presence could be seen outside of a home near Rt. 36 on May 10. A WTAJ member at the scene confirmed seeing officers carrying guns out of the home.

Freedom Township Police confirmed that it is an ongoing investigation, but said they had received a tip that a fugitive was staying at the home. They said they executed a search warrant, which led to another warrant for guns that they found inside the home.

This is a developing story, please check back for updates and download the WTAJ app to receive breaking news notifications.

We will continue to keep you updated online and on-air as we learn more.


Freedom of the Press as an Element of Freedom of Belief – Bitter Winter

by Karolina Maria Kotkowska*

*A paper presented at the international webinar "Media as Friends and Foes of FoRB and the Tai Ji Men Case," co-organized by CESNUR and Human Rights Without Frontiers on May 8, 2024, after World Press Freedom Day (May 3).

The topic of the relationships between new religious movements and the press, and more broadly the media, is a difficult one. On the one hand, it is obvious to everyone that the media should be free, and this is one of the aspects of the necessary freedom of speech in democratic countries. It is hard to imagine a free and pluralistic society without the possibility of expressing opinions, free circulation of thoughts, and also unrestricted access to reliable information.

On the other hand, as researchers of new religious movements, we continually encounter stories where the media have been used to strip vulnerable people of their freedom. Such actions often led to violence, including physical violence, and served as a tool to generate societal phobia towards new religions. The media were employed to construct negative narratives, resulting in police raids, sometimes even involving military personnel or special forces, against individuals without any weapons or criminal background, which were nonetheless presented as an appropriate means of dealing with cults.

Violation of basic human rights using the media is unfortunately a history we know all too well, not only in the case of individuals and groups, but through entire years of obsession engulfing some countries or regions. And we are not talking about witch hunts, although undoubtedly the mechanism of creating a sense of threat is similar and the comparison is not entirely inadequate.

If we delve even deeper into the history of religions, the period of the formation of monotheisms, the coexistence of numerous competing heterodox groups at various stages of formation, then it will turn out that similar mechanisms were used centuries ago against various religious opponents. Times have changed, social contexts have changed, access to information has changed. One thing has not changed: it is still easy to play on fears. It is one thing to provide accurate information about real threats, and another thing to deliberately arouse unjustified fear of threats from an imagined enemy.

Of course, one of the fears that is easiest to exploit nowadays is terrorism, which operates in such a way that a small, inconspicuous group of people can cause a great deal of harm to the entire society. It is not difficult to portray a group of individuals as radicals supposedly ready for anything. Stories like these have been happening for years. One can mention, for example, media attacks and tax police intervention against the Damanhur group operating in the Italian Alps, police raids against MISA, the Movement for Spiritual Integration into the Absolute, established in Romania and operating in various countries, or recently also the attacks on an Argentine-based movement, the Buenos Aires Yoga School.

In one of my earlier Tai Ji Men webinars, at the beginning of the Russian invasion of Ukraine, I mentioned how various images of refugees were portrayed in the Polish media. On one hand, there was no doubt that immediate assistance should be provided to Ukrainian refugees, of whom about a million were eventually accepted, while at the same time, in the forests on the Polish-Belarusian border, victims of the war in Afghanistan, including small children, were dying. These individuals became part of a political game and media frenzy, frightening the public with the threat of Islamic terrorists invading the country. Many documentaries were created in the wake of those events, as well as a feature film, Green Border, directed by Agnieszka Holland.

This spring, with a group of researchers of new religiosity, we had an opportunity to visit the headquarters of one of the Islam-based new religious movements advocating for freedom and inclusivity: the Ahmadi Religion of Peace and Light, located in England. The example of experiences within this movement shows how media can play a positive role. During an attempt to legally cross the Turkish-Bulgarian border, the group was attacked for unclear reasons on the Turkish side. It was only thanks to technology enabling satellite data transmission that the documentation of this event and the entire violence during the incident could be recorded and transmitted: documentation that would otherwise have been stopped or destroyed by the authorities committing these acts. Such materials, evidencing the violation of human rights of followers, managed to reach other media outlets and serve as evidence to initiate legal proceedings to assert their rights and protect members.

And today we will hear the testimonies of the dizi (disciples) of Tai Ji Men. The oldest of them were there when the Tai Ji Men case started in 1996. They suffered because of a politically motivated persecution and the unjust arrest of their leaders, but perhaps they suffered even more because of media slander. Hundreds of articles depicting Tai Ji Men as a cult defrauding its victims, evading taxes, and even raising goblins were published. All these accusations were eventually declared false by courts of law, but in the meantime Tai Ji Men dizi were discriminated against in their workplaces, bullied in schools, and even insulted in the streets just for wearing their distinctive uniform.

History teaches us that the media can either be allies in the fight for freedom or pose a deadly threat to religious freedom. It all depends on whether they are independent or whether they transmit manipulated data, succumbing to political and other pressures. This means, however, that they themselves are not always free. Let's fight for media freedom, because only true independence of the media gives hope for communication that builds and strengthens freedom, rather than taking it away.


Srinivasan on Open Letters, Protests, Free Speech, and Academic Freedom – Daily Nous

Amia Srinivasan's specialty, it seems to me, is making sense of moral ambivalence: detecting, dissecting, and sometimes defending its reasonability, even in the face of unavoidable and urgent decisions.

[Knot by Anni Albers]

It begins with the matter of signing open letters:

An open letter is an unloved thing. Written by committee and in haste, it is a monument to compromise: a minimal statement to which all signatories can agree, or, worse, a maximal statement that no signatory fully believes. Some academics have a general policy against signing them. I discovered that was true of some of my Oxford colleagues last year, when I drafted and circulated an open letter condemning Israel's attack on Gaza and calling for a ceasefire. Some, like those who are in precarious employment or whose immigration status isn't settled, have good reasons for adopting such a policy. Others understandably don't want to put their name to something that doesn't perfectly represent their views, especially when it might be read as a declaration of faith. I always cringe at the self-importance of the genre: though open letters can sometimes exert influence, stiffly worded exhortations hardly suffice to stop states, militaries, bombs. And yet, a "no open letters" policy can serve as a convenient excuse when one is hesitant to stand up for one's political principles.

Srinivasan has signed several open letters about Gaza, and recently signed an open letter committing her to an academic and cultural boycott of Columbia University, owing to how it handled student protestors. Then:

In April I was asked to sign a letter opposing the University of Cambridge's investigation into Nathan Cofnas, a Leverhulme early career fellow in philosophy. A self-described "race realist," Cofnas has written widely in defence of abhorrently racist, particularly anti-Black, views, invoking what he claims are the findings of the science of heredity.

She shares her many reservations about signing the open letter, but also her reason for ultimately signing it:

Do we think that students should be able to trigger investigations into academics on the grounds that their extramural speech makes them feel unsafe? Do we want to fuel the right's sense of grievance towards the university, when their minority presence within it is owed to the robust correlation between education and political liberalism, not some Marxist plot? Do we want to empower university administrators to fire academics on the grounds that they are attracting negative publicity? Do we think there is any guarantee that a further strengthened institutional power will only be wielded against those whose views and politics we abhor? If we say yes, what picture of power, theirs and ours, does that presume?

But that's not the end of the discussion, for there's the question of whether her taking a principled stand also makes her a sucker for her political opponents:

free speech and academic freedom are, for many on the right, ideological notions, weapons to be wielded against the left and the institutions it is (falsely) believed to control, the university most of all [and] the free-speech brigade has found justifications for the draconian repression of student protest.

There's also the question of the extent to which the free speech brigade understands how academic freedom and freedom of speech come apart, or how even different considerations in favor of free speech might be in tension with each other:

After signing the letter criticising the investigation into Cofnas, I was written to by someone from the Committee for Academic Freedom, which bills itself as a non-partisan group of academics from across the political spectrum. He asked me whether I might consider signing up to the CAF's three principles. I looked them up: I. Staff and students at UK universities should be free, within the limits of the law, to express any opinion without fear of reprisal. II. Staff and students at UK universities should not be compelled to express any opinion against their belief or conscience. III. UK universities should not promote as a matter of official policy any political agenda or affiliate themselves with organisations promoting such agendas. I thought about it for a bit. I'm on board with Principle II, so long as we don't think that asking staff and students to use someone's correct pronouns is akin to demanding they swear a loyalty oath. Principle I is problematic, because it doesn't register that academic freedom essentially involves viewpoint-based discrimination; that indeed the whole point of academic freedom is to protect academics' rights to exercise their expert judgment in hiring, peer review, promotion, examining, conferring degrees and so on. And Principle III would prevent universities from condemning, say, Israel's systematic destruction of universities and schools in Gaza, which I think as educational institutions they are entitled to do.

Discussion welcome, but read the whole thing first.


The Price of Freedom: America's Unjust Cash Bail System – Brown Political Review

This piece was produced in part with the financial support of the Stone Inequality Initiative. The Brown Political Review maintains editorial independence over all columns and stories published.

Richard Griffin spent two days in Michigan's Wayne County Jail as his family scrambled to find the funds to cover his $850 bail. Arrested for having a handgun in his car and an outstanding warrant due to an unpaid traffic ticket, Griffin quickly found himself embroiled in a troubling situation. While in jail, he missed his first day of work and was unable to warn his employer that he would be absent, causing him to lose his job. On top of this, he had arranged an appointment with a social service agency to seek emergency rental assistance, but his 48 hours of incarceration prevented him from attending it. Without the appointment, he was unable to secure aid and was subsequently evicted. Although Griffin endured a far shorter pretrial detention with a lower bail than most people accused of a crime, the cash bail system still acutely damaged his life. His situation is not unique. Hundreds of thousands of individuals across America are currently awaiting trial behind bars.

It is easy to imagine that justice is a given: an impartial, unyielding concept that a liberal, democratic society will always uphold. For millions like Griffin, however, justice is an unattained ideal. In the United States, those without money are incarcerated while they await trial, whereas those who can post bail await trial freely in the community; Lady Justice's scales tip when the wealthy tip her. The structures forged to prevent crime have created an inherently unjust system in which freedom can be bought, if you can afford it. The cash bail system criminalizes poverty, corrupting the fundamental notion of being innocent until proven guilty and necessitating nationwide reform.

Between 1970 and 2015, the number of people incarcerated before being tried increased by 433 percent, largely due to judges relying more heavily on cash bail. When put into context, this figure is even more shocking: Two-thirds of those locked up in America's local jails have not even been convicted of a crime. In 2015, courts typically set bail at $10,000 for felonies, a staggering number considering the fact that the median annual income for individuals in pretrial detention was $15,109. In 2022, 37 percent of Americans surveyed by the Federal Reserve said they could not afford to fully cover a $400 emergency expense immediately, meaning they would have to borrow money or sell possessions to do so. Some reported they would not be able to afford it at all. Because it is so often imposed on people who cannot pay, bail has become an insurmountable financial burden for countless Americans, threatening to irreparably disrupt their lives.

While the profound impact of spending months or years in pretrial detention is evident, even a brief period of incarceration can wreak havoc on individuals and their families. Spending just one day in jail can diminish a person's employment prospects and heighten the risk that they will lose their job. Research also indicates that spending more than 23 hours in jail increases a person's chances of rearrest. When faced with these troubling prospects, individuals unable to post bail find themselves caught in a dilemma with no favorable options: borrow money from the predatory bail bonds industry, languish behind bars, or plead guilty. Unfortunately, many choose the last option: defendants who are incarcerated pretrial are significantly more likely to enter into plea deals. Compared to those who are not detained pretrial, defendants in jail submit guilty pleas almost three times quicker. Poor defendants thus face an uphill battle within a system that is supposed to be impartial and just.

"In the United States, those without money are incarcerated while they await trial, whereas those who can post bail await trial freely in the community; Lady Justices scales tip when the wealthy tip her."

Despite the clear moral impetus, reforming the cash bail system is no politically easy task. Republicans and Democrats alike are wary of being perceived as pro-crime because of the public's heightened fears about rising crime rates; a November Gallup poll revealed that a majority of American adults felt that the criminal justice system was not tough enough. In the 2022 midterm elections, many of the most hotly contested races involved politicians who debated crime policy, with candidates from each party slamming their opponents with soft-on-crime accusations. Republicans have targeted a slate of anti-cash-bail candidates, including Senator John Fetterman (D-PA), accusing them of being soft on crime due to their support for criminal justice reform. On the flip side, Democratic candidates like Oklahoma's Joy Hofmeister have criticized Republicans for being ineffective at addressing crime, citing their record of supporting bipartisan clemency initiatives intended to benefit those sitting in prisons.

What's often overlooked in the political rhetoric against cash bail reform is the nature of the crimes being committed in the first place. Over 95 percent of crime in the United States is nonviolent, indicating that most people who are arrested can safely await trial in their communities rather than in holding cells. Moreover, cash bail reform is not a novel idea. It has been implemented to varying degrees in New York State, Washington, DC, and Illinois. In all of these cases, cash bail reform has led to a decrease in the likelihood of rearrest, proving that public safety concerns are unfounded. In Harris County, Texas, dropping cash bail for those charged with nonviolent offenses led to a 6 percent drop, not increase, in recidivism. Moreover, cash bail reform does not, in reality, decrease the rate at which defendants show up to their trials, nullifying the logical underpinning of cash bail programs. For politicians, resisting cash bail reform is merely a convenient way to appear tough on crime without actually presenting substantive solutions to underlying criminogenic issues.

However, reform doesn't have to be uniform. Governments threatened by opponents who stir up fear of societal disorder can start with milder reforms, including reducing cash bail for nonviolent cases or ensuring that defendants have access to counsel before their bail hearings, rather than debating more controversial policies like eliminating bail entirely. States can also opt to try out reforms in specific counties before enacting statewide reforms; Illinois, for instance, analyzed cash bail reforms in Cook County before eliminating cash bail statewide. Regardless of the approach, reform is necessary nationwide to ensure that we no longer allow bail to deprive people like Richard Griffin of their jobs, homes, and livelihoods. Your access to justice should never be determined by the thickness of your wallet.


Protecting journalists and promoting media freedom: New rules enter into force – European Union

Independent, fact-based journalism helps protect our democracies by exposing injustices, holding leaders to account and allowing citizens to make informed decisions. Journalists, who sometimes work at great personal risk, should be able to work freely and safely. This lies at the heart of EU values and democracies. This week, two pieces of EU legislation enter into force that will ensure greater protection of journalists and further support media freedom: the European Media Freedom Act and the EU Directive on strategic lawsuits against public participation (SLAPPs).

These initiatives are part of a European strategy for the media, building on the European Democracy Action Plan and the Media and Audiovisual Action Plan. A recent study also shows that EU countries are making progress in implementing the Commission's Recommendation on the protection, safety and empowerment of journalists. The new rules will help ensure that journalists can carry out their work in a healthy media landscape.

For more information

European Media Freedom Act

Regulation establishing a common framework for media services in the internal market and amending Directive 2010/13/EU (European Media Freedom Act)

EU Directive on protecting persons who engage in public participation from manifestly unfounded claims or abusive court proceedings (Strategic lawsuits against public participation)

Media and digital culture

Media and pluralism

European Democracy Action Plan

Media and Audiovisual Action Plan

Study on measures to improve journalists' safety

Video on strategic lawsuits against public participation (SLAPPs)


The state of global press freedom in 10 numbers – Columbia Journalism Review

This past Friday, May 3, was World Press Freedom Day. The date marks the anniversary of the Windhoek Declaration, a 1991 statement, named for the capital of Namibia, that asserted the need for an independent and pluralistic African press. As the UN puts it, the annual event is a reminder to governments of the need to respect their commitment to press freedom, but also a day of reflection among media professionals about issues of press freedom and professional ethics, as well as a chance to pay tribute to journalists who have lost their lives in the line of duty.

Each year, World Press Freedom Day brings with it a welter of statistics on the state of press freedom around the world, no few of them offered up by Reporters Without Borders (RSF) alone, in its influential World Press Freedom Index. (The index ranks 180 countries and territories worldwide from best to worst on press freedom, according to five indicators spanning political, economic, legislative, social, and security considerations.) Journalists, of course, do not live or work by statistics alone, and, as I've written before in this newsletter, press-freedom statistics are often contested, sometimes bitterly so, with the picture they paint depending, among other factors, on who we consider to be a journalist, what aspects of their experience we measure, and what aspects are even measurable in the first place.

Still, this picture can be revealing, and on this year's World Press Freedom Day, it showed a global crisis for the press that, on numerous metrics, is only getting worse. Below are ten figures from this year's World Press Freedom Day, what they show, and, sometimes, what they don't.

At least 1 journalist was killed on World Press Freedom Day. According to Voice of America, Muhammad Siddique Mengal, the president of a local press club, was traveling in a car in Pakistan's Balochistan Province when an assailant on a motorcycle attached a magnetic bomb to the vehicle, which blew up seconds later. The perpetrator has not been identified, but VOA notes that Balochistan has lately experienced almost daily attacks, mostly claimed by ethnic Baluch insurgents, and that the region is home to other militant groups; Pakistan's security services have also been accused of attacking critics there. The killing came one day after the Committee to Protect Journalists raised the alarm about a series of recent death threats targeting Hamid Mir, a prominent Pakistani TV journalist (who has been attacked before, as I wrote in 2022). On Friday, Mir described Mengal's killing as "a message to all independent journalists in Pakistan."

3 journalists were called out by name in a statement that President Joe Biden issued to mark World Press Freedom Day: Austin Tice, an American journalist who was abducted in Syria in 2012; Evan Gershkovich, the Wall Street Journal reporter jailed in Russia since last year; and Alsu Kurmasheva, a journalist with the US-funded broadcaster Radio Free Europe/Radio Liberty who is also in jail in Russia. (She is a dual US-Russian citizen.) Biden has repeatedly spoken the names of Tice and Gershkovich. By my count, this was only the third time that he has publicly mentioned Kurmasheva's name, and the second time in less than a week, after he said, during remarks at the White House Correspondents' Dinner, that Russian president Vladimir Putin should release "Evan and Alsu" immediately. This recent uptick is notable: as I reported recently, critics have argued that Biden's administration could be doing more to highlight Kurmasheva's case. Her husband told me that he would like to hear Biden say her name more often.

10 journalists worldwide are worthy of particularly urgent attention, according to the One Free Press Coalition, a collective of international news organizations that aims to highlight the cases of threatened media workers. The coalition launched its "10 Most Urgent" list in 2019 and updated it monthly; it apparently stopped doing so in 2022, but has just relaunched the list as an annual project pegged to World Press Freedom Day, according to its website. Gershkovich and Kurmasheva lead the latest list, which also draws attention to the plight of jailed reporters in Ethiopia, Hong Kong, Rwanda, and Myanmar. Also on the list are three journalists I've written about in this newsletter: José Rubén Zamora and Gustavo Gorriti, veteran muckrakers in Guatemala and Peru, respectively, as well as Shireen Abu Akleh, a Palestinian American reporter for Al Jazeera who was shot and killed while covering an Israeli raid in the occupied West Bank in 2022.

26 journalists' deaths in the line of work have been condemned by UNESCO since Hamas attacked Israel on October 7 and Israel responded by bombarding Gaza. UNESCO cited this figure in a press release announcing that Palestinian journalists covering Gaza would collectively receive this year's World Press Freedom Prize, an award given in honor of Guillermo Cano, a Colombian journalist who was assassinated outside his newspaper's offices in 1986. In the same release, UNESCO attributed its Gaza figure to information from partner NGOs and said that it is reviewing dozens of other cases. Indeed, its figure is significantly lower than similar data maintained by various other groups; CPJ's tally of media workers killed in the conflict currently stands at 97, while the International Federation of Journalists (IFJ) tally stands at 109 and regional groups peg the total higher still. As I wrote recently, how this figure is calculated has been a source of controversy. As of last month, RSF's tally stood at 105, but the group had to that point only determined that 22 of those journalists were killed in the course of their work, a distinction that a Palestinian press group has blasted as tantamount to whitewashing Israeli crimes.

42 percent is the rate of increase in attacks on journalists and news outlets covering the environment in the past five years (compared with the prior five-year period), according to a new report produced by UNESCO. (The theme of this year's World Press Freedom Day was journalism and freedom of expression in the context of the current global environmental crisis.) Earlier this year, UNESCO and the IFJ surveyed 905 environmental journalists in 129 countries, over 70 percent of whom said they had suffered attacks, threats, or pressure linked to their work. The report notes that such attacks have taken place in every region of the world, including Europe, where police have arrested reporters covering climate protests in the UK, France, Spain, Poland, and Sweden.

More than 50 percent of the world's population now lives in countries colored red in RSF's World Press Freedom Index, the group's lowest classification, reflecting poor scores on its indicators and a very serious situation for press freedom. Only 36 countries out of 180 worldwide are in RSF's red zone, but this figure is an increase on 31 last year and includes half of the world's most populous countries (China, Russia, Bangladesh, India, and Pakistan), all of which (bar China) held or are holding elections this year. According to RSF, less than 8 percent of the world's population now lives in places with good or satisfactory press freedom.

55 is the new ranking of the US on RSF's index, a 10-place drop from last year and a lower ebb than it recorded at any point when Donald Trump was president. The US has not placed higher than 40th since 2013, and comparing placements on the index from year to year is not an exact science anyway. But the recent drop, which puts the US below various countries with notably hostile recent press-freedom climates, including Slovakia and Poland, nonetheless reflects what RSF describes as major structural barriers to press freedom, including economic struggles and declining public trust. Not that the US was the biggest dropper in the index this year: Slovakia, for example, is down 12 places, Niger 19, Argentina 26, and Burkina Faso 28. All four countries have seen recent changes of government, be they the result of elections or coups.

177 is the new ranking on the index of North Korea, that country's highest placement in at least a decade, but still the world's fourth worst country for press freedom overall. For five of the past ten years, including the past two, North Korea, which has a notoriously totalitarian approach toward independent journalism (and a more favorable one toward propagandistic cinema, as I wrote last year), has been rock bottom of the index, with Eritrea occupying that rank most of the rest of the time. Eritrea is back at the bottom this year. But Syria has now also fallen below North Korea, as has Afghanistan, where "the repression of journalists has steadily intensified" since the Taliban seized power in 2021, as RSF puts it. Prior to that, the country had hovered around the 120 mark for the better part of a decade.

310 BBC World Service journalists are now working in exile, according to a figure that the broadcaster released to mark World Press Freedom Day. The figure has nearly doubled since 2020, a reflection of events since then in Afghanistan and Russia, as well as in Ethiopia and Myanmar. The BBC pulled most of its staff out of Afghanistan after the Taliban took power, and moved its Moscow team to neighboring Latvia after Russia invaded Ukraine in 2022 and simultaneously intensified its crackdown on the press. (Last month, Russian officials labeled a BBC reporter as a "foreign agent," a designation intended to confer stigma and onerous bureaucratic requirements that is also at issue in Kurmasheva's case.) Some BBC journalists who were already working from exile, meanwhile, have recently been on the receiving end of an uptick in threats, not least journalists working for BBC Persian, 10 of whom learned recently that they had secretly been convicted in absentia in their home country. Exiled Iranian journalists' families have also been harassed, as I wrote recently.

2.5 billion is the amount (in US dollars) that tax authorities in Turkey fined a media company that had been critical of Recep Tayyip Erdoğan, ostensibly on fraud charges but actually, many critics suspected, as a political punishment. This happened in 2009, but on World Press Freedom Day last week, Jan-Werner Müller, a professor at Princeton, returned to the story to highlight the anti-press tactics to which repressive leaders (including Erdoğan, who was prime minister then and is now the president) have resorted in order to maintain at least a veneer of plausible deniability. "As another World Press Freedom Day arrives, news media organizations will dutifully display lists of journalists imprisoned or killed around the world," Müller wrote in Foreign Policy. "It is important to acknowledge these victims. But it's also time to recognize that analysts and policymakers need a new framework to understand how a new generation of authoritarian leaders disables critical coverage without putting journalists in jail or physically harming them."

Other notable stories:

ICYMI: New York just committed $90 million to help save local journalism. Will it work?


10 countries in Africa with the best press freedom in 2024 – Business Insider Africa

It is impossible to overstate the value of press freedom in African nations. To defend democracy, encourage accountability and openness, advance socioeconomic development, and amplify the voices of various people, a free and independent media is essential.

Governments, civic society, and the international community must cooperate as stewards of democracy to safeguard press freedom and enable journalists to carry out their essential duty as defenders of democracy and the truth.

While this is not the case in several African countries, there are some on the continent where freedom of the press is hardly an issue.

These countries enjoy transparent coverage of current affairs, and those who have taken up the profession of journalism have been made to feel safe.

One of the most important functions of a free press is its facilitation of openness and accountability in government institutions and public organizations.

Investigative journalism exposes corruption, inefficiency, and resource misuse, resulting in remedial action and the diffusion of good governance principles.

A free press also serves as the voice of the voiceless, often amplifying the plight of disenfranchised communities and peoples and fostering socioeconomic development and prosperity.

With that said, here are the 10 countries in Africa with the best freedom of the press in 2024, according to the latest annual World Press Freedom Index produced by Reporters Without Borders (RSF).


"I was taking absolute freedom": Denis Villeneuve Sacrificed Authenticity for One of the Most Mysterious Scenes in ... – FandomWire

Denis Villeneuve successfully adapted the first Dune novel with his two Dune films, which chronicled the evolution of Paul Atreides into a messianic figure and the rivalry between House Harkonnen and House Atreides. Villeneuve managed to bring a lot of authenticity and grandeur to the franchise, making it an unforgettable experience.

One of the most interesting aspects of both films was the opening quotes, delivered by a haunting voice speaking in an unknown language. In a recent interview, the director revealed that those lines were uttered by an anonymous Sardaukar, and he explained why a member of the imperial army was given such meaningful lines at the beginning of each film.

Adapting a magnum opus like Frank Herbert's Dune is a herculean task, and maverick directors like David Lynch have tried and not fully succeeded in realizing the complex world of the books. Denis Villeneuve knew that a 100% faithful adaptation of the books would never work, so he picked and chose elements from the books that were necessary and aligned them with new elements that he introduced.

One of them was switching the person who utters the introductory quotes in the films compared to the books. In the books, Princess Irulan mostly starts each one with her insights and thoughts. However, in Villeneuve's adaptation, a haunting voice speaks in a mysterious language, and the film explains to the audience what it says.

In both Dune 1 and 2, the voice opens the films with the quotes "Dreams are messages from the deep" and "Power over spice is power over all." Audiences believed at first that this might be a vision in which the God-Emperor speaks to Paul from the future (via X). However, Villeneuve confirmed that a priest of the Sardaukar army is the speaker of those lines.

In an interview with The New York Times, the director explained that this was done to add layers to the Sardaukar army who were mostly known for their powerful ways and their determination in battle. Villeneuve wanted to show a rich character-driven moment with these quotes, revealing their thoughtful and philosophical side. He deliberately took this creative liberty from the book to provide this depth to this group of people. Villeneuve said,

"I thought it would be interesting to have a tiny bit of insight that they are not just tremendous warriors, but they have spirituality, philosophical thought. They have substance. Also, their sound was designed by Hans Zimmer. I absolutely loved how it feels like it's coming from the deep, from the ancient world.

"Frank Herbert said beginnings are very delicate times. By starting with a Sardaukar priest, I was indicating to the fans that I was taking absolute freedom with this adaptation, that I was hijacking the book."

Villeneuve clearly did not want the Sardaukar to be reduced to just an army that the Emperor and Harkonnens use in their fight against the Atreides. The Sardaukars are also insightful and can provide thought-provoking statements like any other character in the film. Thus, Villeneuve elevates the characters in such ways even though he takes narrative liberties.

Anya Taylor-Joy made a surprise cameo in Dune: Part Two as Paul's sister Alia Atreides (from the future, he sees her as he ingests the Water of Life). Denis Villeneuve had always had her in mind for the role, but it seemed initially that she wouldn't be able to do it due to her busy schedule for Furiosa.

In an interview with Variety, Taylor-Joy stated that she kept telling Villeneuve that everything would work out and she would be able to shoot her scene amid shooting for Furiosa (by making adjustments). Neither gave up, and finally the director was able to cut a deal with the studio to make it happen. The actress said,

"Before I even sat down, he was like, 'I want you to be in Dune, but you can't do it!'" Taylor-Joy recalls. "I was like, 'Please?' I skipped all the stages of grief and went straight to begging. I was like, 'I can do this. I can be in Australia and Abu Dhabi at the same time.' He wanted me to be part of the universe. We kept in touch. I just had this feeling that it wasn't over."

Villeneuve has started writing the third Dune film, which is expected to finish off his Dune narrative. Taylor-Joy is currently gearing up for the release of Furiosa, hitting theatres on May 24, 2024. Fans can watch the first Dune on Max and rent Dune: Part Two on Prime Video.


Key no vote on Alabama gambling bill suggests looking to next year – Alabama Reflector

A stalled gambling package in the Senate could be dead unless a member flips their vote or there's "some real fancy jumping through hoops," Sen. Greg Albritton, R-Atmore, said Monday.

Albritton, who handled the package in the Senate but ultimately ended up voting against it, said that unless the Senate can get unanimous consent to suspend the rules for a new conference report, the upper chamber is stuck procedurally.

"I don't think [unanimous consent] would happen. We're stuck. We either have to vote the [constitutional amendment] up or down or just leave it in the basket," Albritton said.

But he said the bill could still come up for a vote in the last days of the session, and any one of the nay votes could change.

"That sounds easy enough, but the other problem that comes in is how many of the yes votes have already turned back to no. Every time we bring this up, we lose votes," Albritton said.


And Albritton said his no vote won't change this year. He said he might change it next year if he gets a "more palatable bill that [he] can vote for."

Albritton said the compromise would constrict the Poarch Band of Creek Indians' (PCI) involvement in the industry while allowing other entities to grow. The Poarch Band, a federally recognized tribe that operates casinos in Atmore, Montgomery and Wetumpka, had sought to submit a final bid on any casino licenses issued.

In the compromise, Albritton said, there is no mechanism for PCI, based in Atmore in the senator's district, to enter into a compact with the state because the state isn't offering something of value, such as another site off tribal land.

He said that while PCI's opposition was part of the reason for his nay vote, he had voted in support of the bill before it went to the conference committee, which is when he said PCI had been lobbying against it.


Two other aspects of the bill kept him from voting for the compromise from the conference committee: there was no authorization for sports betting, which he said is a growing industry, and there was no regulation of online gaming, whether it be slots, poker or roulette.

"We have not done anything to control, restrict, oversee or tax that. Those are two growing portions of the industry that we just ignore with this bill," he said.

Sen. Garlan Gudger, R-Cullman, one of the lawmakers assigned to the conference committee, said in a phone interview Monday that the Senate thought expanding casino gaming, especially with an open bid process, and legalizing sports betting was too much for right now. Gudger said addiction is a concern when it comes to electronic sports betting, especially for young people.

"It changes the whole course of, really, their life, just based on something they really didn't know that much about except 'we're having fun' or 'we're trying to get out of some other debt or just make some easy money,'" Gudger said.

But he said that from their estimates, over $1.2 billion is being spent on sports betting in-state currently. He sees the need for lawmakers to have that discussion, but he said that it came on too quickly.

"We felt like in the Senate, from the House version, that we weren't ready for that, but I do think there is an appetite to look into that as time goes on," Gudger said.

House Speaker Nathaniel Ledbetter, R-Rainsville, said last week that he's not in the mood to work on another piece of legislation, saying it's "one of those things you can't win."

Rep. Chris Blackshear, R-Smiths Station, sponsored the original gambling proposal in the House. When asked about the chances of working on another piece of gambling legislation in next year's session, he had a definite, one-word response.

"ZERO!" Blackshear responded in a text.


Casino Hubs: 5 Airports That Serve Major Centers Of The Gambling Sector – Simple Flying


Among all leisure destinations worldwide, few are as unique in their scope as gambling hubs, to which millions flock annually in search of luck at some of the world's largest casinos. As a result, passenger demand for these major centers for the gaming industry is extremely strong, and airlines are quick to compete on some of these lucrative routes.

The airlines serving gambling hubs often include a mix of low-cost leisure-focused carriers in addition to the long-haul legacy carriers one would expect. Equivalently, the large airports that serve as major gateways for these gaming areas are similarly massive and often service traffic from multiple continents.


Across the board, however, gambling hub airports have a uniquely large number of flights from leisure-oriented airlines and are less oriented toward business travelers. In this article, we will examine five of the world's largest gambling hub airports and what makes these facilities so special.

LAS: 4 runways

Unarguably the world's largest gambling hub, Las Vegas attracts tens of millions of passengers annually from all corners of the world. The city is not only notable for the many casinos located across its famous strip, but is also host to a number of events and professional sporting teams. In 2024, the Super Bowl was held at the city's Allegiant Stadium, named for the popular hometown leisure airline.


The airport is among the busiest that serve any of the world's major gambling hubs and is, as one might expect, heavily bolstered by leisure-focused low-cost carriers. Nonetheless, there are a number of major international legacy carriers that also serve Las Vegas, such as KLM from Amsterdam Schiphol International Airport (AMS) and Virgin Atlantic from London Heathrow Airport (LHR).

In fact, passenger demand has been so overwhelming for Las Vegas that the city has begun the process of developing another airport. Nonetheless, this facility will not be operational until at least the mid-2030s.

ACY: 2 runways

Atlantic City International Airport is a unique, small airport located in Southern New Jersey that serves not just the nearby gambling center of Atlantic City, but also the greater South Jersey region as a whole. The facility, a joint operating base for the New Jersey Air National Guard, is also home to the 177th Fighter Wing's F-16 fighter jets.


The only commercial operator at the airport is, unsurprisingly, leisure-focused Spirit Airlines. Interestingly, however, the carrier serves a number of leisure destinations in the American Southeast from the facility, demonstrating a lack of interest in Atlantic City from major population centers.

MFM: 1 runway

Located in Eastern Macau, this facility serves as the primary gateway to the largest gambling hub in Asia. The relatively small facility sees traffic from across the continent and serves as the primary hub for flag carrier Air Macau.

While a mid-size airline by Asian flag carrier standards, Air Macau does operate flights from the facility to over a dozen destinations across China and Southeast Asia. Most major Asian airlines operate nonstop flights from their hubs to this popular gambling destination, but the airport does lack service from major European, American, or Middle Eastern airlines.

NCE: 2 runways

Arguably, the largest gambling hub in Europe is in the city-state of Monaco, where the legendary casino of Monte Carlo is located. The small nation, however, lacks a proper airport due to its small size and, as a result, relies heavily on nearby Nice Airport to serve as its primary gateway.

Due to its proximity, a number of airlines operate nonstop services to Nice and market the nearby gambling destination. Many carriers also offer helicopter transfers to Monaco, notably including Emirates, which recently announced new connections via helicopter operator Blade.

RNO: 3 runways

The second-largest airport in Nevada, RNO serves the greater Lake Tahoe area and offers flights to dozens of destinations across the United States from all major airlines. The city, which is home to casinos, is a popular gambling destination near a picturesque alpine lake.


The airport offers services to most major destinations in the Western United States, with a few nonstop flights to East Coast transportation gateways. For example, JetBlue operates seasonal service to the airport from its primary hub at John F. Kennedy International Airport.

It is also important to note that not all traffic to RNO is driven by the gambling industry. The airport, like many others in the alpine west of the United States, serves a number of world-class ski resorts in the area.


Sports gambling creates a windfall, but raises questions of integrity: here are three lessons from historic sports-betting ... – The Conversation

Sports betting is having a big moment across the United States. While gambling on sports has been legal for decades in countries such as the U.K., it wasn't until 2018 that the U.S. Supreme Court ruled that states could legalize sports betting. Before then, sports betting had been permitted only in Nevada.

After the Supreme Court decision, the floodgates opened. Many states were happy to legalize sports gambling, enticed by the opportunity for more tax revenue. As of May 2024, sports gambling is legal in 38 states and Washington, D.C. Americans wagered nearly US$120 billion on sports in 2023 alone.

Until about 10 years ago, sports leagues in North America were apprehensive about, if not totally against, legalizing sports betting. The long history of sports gambling scandals in the U.S. led many to worry that legalizing sports betting would tarnish their sports' credibility and image. The NCAA was one of many governing bodies that objected to legalizing sports gambling nationwide.

But now that the Supreme Court has blessed it, sports leagues have embraced gambling, forming partnerships with brands like Caesars Entertainment. The sportsbooks and platforms have integrity monitors to track potential inconsistencies. Still, a number of scandals involving athletes and the people around them have emerged since the Supreme Court ruling.

As a professor of critical sports studies, I teach students about the history of sports betting scandals. And I think they offer lessons for the present day.

The Black Sox Scandal of 1919 helped to further organize baseball, leading to the creation of the position of commissioner of baseball, which was first assumed by former judge and known racist Kenesaw Mountain Landis. Along with maintaining the color line, arguably his most notable action was banning, for life, the players on the Chicago White Sox involved in the fixing of the 1919 World Series.

Early professional baseball regulations explicitly banned gambling, but the money was too tempting for many players to ignore, and that included members of the 1919 White Sox. The players hated the team's owner, Charles Comiskey, and felt that they were underpaid. But they were unable to change teams due to the reserve clause in their contracts, which gave owners exclusive rights to their players in perpetuity.

A faction of the team agreed to throw the World Series. Those players were ultimately indicted by a grand jury and went to trial. They were acquitted of criminal charges, but Landis suspended all of the players connected to the fix, including superstar Shoeless Joe Jackson, who admitted taking money from a teammate but maintained he was innocent of game fixing.

This was the most notable of several attempts to fix baseball games early in the 20th century, as the game grew in popularity and a number of people associated with baseball, including players, managers and even umpires, looked to cash in.

Athlete salaries have soared in recent decades. However, this money hasn't shielded players and others involved in sports from the grips of gambling addiction.

There are no rules banning athletes from sitting at a blackjack table or even gambling on other sports. Numerous players have wagered millions of dollars, with some athletes building up massive debts due to addiction.

These debts can lead to such desperation that athletes decide to risk their careers. Baseball legend and admitted compulsive gambler Pete Rose continues to sit outside the Hall of Fame because he bet on baseball games.

The most substantial gambling scandal in modern sports came in the NBA during the 2000s, involving referee Tim Donaghy. He admitted to providing information on NBA games, including those he officiated, which allegedly influenced his calls. Donaghy served time in prison as a result. So it isn't just players who get in trouble.

There have been several major point-shaving scandals in college basketball history, most famously at the City College of New York in the 1950s and at Boston College in the late 1970s, the latter of which involved Henry Hill, the subject of the blockbuster film Goodfellas.

The increasing use of prop, or proposition, bets, which focus on a specific outcome within a game rather than the overall result, has created a new point of vulnerability for student-athletes. While influencing an entire team is hard, history shows that individual players are more susceptible to pressure. A point guard or quarterback can slow down the game and reduce the margin of victory.

And while today's unpaid student-athletes have the same financial incentives to cheat as earlier generations did, they face a new pressure: They're often surrounded by gamblers on campus and on social media. Betting is pervasive not only at large universities but at smaller schools, too. According to NCAA surveys, 1 in 3 student-athletes has faced harassment from gamblers, ranging from derogatory comments to death threats.

The sportsbooks have very little incentive to address potential violations, so it's up to the organizations that oversee sports to ensure the integrity of their games.

NCAA President Charlie Baker's suggestion to ban prop bets is a good first step: The more that bets isolate individual players and moments of gameplay, the easier it is for improprieties to occur.

Providing more guidance for players, along with different types of punishments for different transgressions, could also be useful. Gambling violations that don't affect competition outcomes should be treated differently from ones that do. The NCAA already does this by meting out lighter penalties to student-athletes who wager on other teams and sports as opposed to their own.

Providing treatment for players and others suffering from gambling addiction would help as well, and there's some evidence that open discussions of gambling addiction in European soccer have had a positive impact.

NBA Commissioner Adam Silver has suggested implementing federal oversight to eliminate the uncertainty of state-by-state regulations. Although scandals are still likely to occur, gambling commissions like the one in the U.K. can provide a framework for federal licensing and oversight.

The sudden state-by-state adoption of sports betting has produced a windfall of profits for gambling companies and tax revenue for states. But it may also endanger the integrity of sports. As policymakers mull how to address the issue, they would be wise to learn from history.

Sports gambling creates a windfall, but raises questions of integrity: here are three lessons from historic sports-betting ... - The Conversation

How did gambling develop into a major industry in Minnesota? – redlakenationnews.com

Minnesotans wanting to gamble have many options these days. They can visit a Native American casino, buy scratch-off games at a gas station, yank open pull-tabs at a bar or even play bingo at a church fish fry.

But this is a fairly recent phenomenon. Minnesota's founders took a hard line against gambling, and the activity remained largely illegal in the state until a half century ago.

What happened? That's what reader Rob Kloehn asked Curious Minnesota, the Star Tribune's reader-powered reporting project. He got interested partly because of recent talk of legalizing sports betting in the state.

https://www.startribune.com/how-did-gambling-develop-into-a-major-industry-in-minnesota/600364927/

Shohei Ohtani’s ex-interpreter agrees to plead guilty in $17M sports gambling scandal – NBC Los Angeles

L.L. Bean has just added a third shift at its factory in Brunswick, Maine, in an attempt to keep up with demand for its iconic boot.

Orders have quadrupled in the past few years as the boots have become more popular among a younger, more urban crowd.

The company says it saw the trend coming and tried to prepare, but orders outpaced projections. It expects to sell 450,000 pairs of boots in 2014.

People hoping to have the boots in time for Christmas are likely to be disappointed. The boots are back-ordered through February and even March.

"I've been told it's a good problem to have but I"m disappointed that customers not getting what they want as quickly as they want," said Senior Manufacturing Manager Royce Haines.

Customers like Mary Clifford tried to order boots online, but they were back-ordered until January.

"I was very surprised this is what they are known for and at Christmas time you can't get them when you need them," said Clifford.

People who do have boots are trying to capitalize on the shortage by selling them on eBay at much higher prices.

L.L. Bean says it has hired dozens of new boot makers, but it takes up to six months to train someone to make a boot.

The company has also spent a million dollars on new equipment to try to keep pace with demand.

Some customers are having luck at the retail stores, which keep a separate inventory; sizes are limited, but those stores have boots on the shelves.
