3 Cheap Machine Learning Stocks That Smart Investors Will Snap Up Now – InvestorPlace


Machine learning stocks represent publicly traded firms specializing in a subfield of artificial intelligence (AI). The terms AI and machine learning have become synonymous, but machine learning is really about making machines imitate intelligent human behavior. Semantics aside, machine learning and AI have come to the forefront in 2023.

Generative AI has boomed this year, and the race is on to identify the next must-buy shares in the sector. The firms identified in this article aren't cheap in an absolute sense; their share prices can be quite high. However, they are expected to provide strong returns, making them bargains for investors and cheap in a relative sense.


Let's begin our discussion of machine learning stocks with ServiceNow (NYSE:NOW). The firm offers a cloud computing platform that uses machine learning to help companies manage their workflows. Enterprise AI is a burgeoning field that will only continue to grow as firms integrate machine learning into their operations.

As mentioned in the introduction, ServiceNow is not cheap in an absolute sense. At $563 a share, there are many other equities investors could buy for much less. However, Wall Street expects ServiceNow to move past $600 and perhaps $700. The metrics-oriented website GuruFocus believes ServiceNow's potential returns are even higher, pegging its value at $790.

The firm's Q2 earnings report, released July 26, gives investors ample reason to believe that share prices should continue to rise. The firm exceeded its revenue growth and profitability guidance during the period, which gave management the confidence to raise subscription revenue and margin guidance for the year.

Q2 subscription revenue reached $2.075 billion, up 25% year-over-year (YOY). Total revenues reached $2.150 billion in the quarter.


AMD (NASDAQ:AMD) and its stock continue to be overshadowed by the company's main rival, Nvidia (NASDAQ:NVDA). The former has almost doubled in 2023, while the latter has more than tripled. It's basically become accepted that AMD is far behind its competition in all things AI and machine learning. However, the news is mixed, making AMD particularly interesting as Nvidia shares are continually scrutinized for their price levels.

An article from early 2023 noted that the comparison between AMD and Nvidia isn't unfair. It concluded that Nvidia is better all around. However, that article also touched on the notion that AMD could potentially optimize its cards through the firm's in-house software capabilities.

That was the same conclusion MosaicML came to when testing the two firms head-to-head several months later. AMD isn't very far behind Nvidia after all, and it has a chance to make up ground via its software prowess. That's exactly why investors should consider AMD now, given its relatively cheap price.


CrowdStrike (NASDAQ:CRWD) operates at the intersection of growing fields: cybersecurity and machine learning directed toward identifying IT threats. It provides endpoint security and was recently honored as the best in its category at the SC Awards Europe 2023 for the second consecutive year. The company is well-regarded in its industry and is growing very quickly.

The company also has strong fundamentals. In Q1, revenues increased by 61% YOY, reaching $487.8 million. CrowdStrike's net loss narrowed YOY from $85 million to $31.5 million during the period. The firm generated $215 million in cash flow, leaving a lot of room to maneuver overall.

Furthermore, CrowdStrike announced a partnership with Amazon (NASDAQ:AMZN) to work with AWS on generative AI applications to increase security. CrowdStrike is arguably the best endpoint security stock available, and its strong inroads into AI and machine learning have set it up for even greater growth moving forward.

On the date of publication, Alex Sirois did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Alex Sirois is a freelance contributor to InvestorPlace whose personal stock investing style is focused on long-term, buy-and-hold, wealth-building stock picks. Having worked in several industries from e-commerce to translation to education and utilizing his MBA from George Washington University, he brings a diverse set of skills through which he filters his writing.


Tim Cook says AI, machine learning are part of virtually every product Apple is building – CryptoSlate


AI GNNs: Transforming the Landscape of Machine Learning – Fagen wasanni

Unveiling the Power of AI GNNs: Transforming the Landscape of Machine Learning

Artificial Intelligence (AI) continues to redefine the boundaries of what is possible in the realm of technology, and its latest offering, Graph Neural Networks (GNNs), is set to transform the landscape of machine learning. GNNs are a novel and powerful tool that allows AI to understand and interpret data in ways that were previously unimaginable, opening up a world of possibilities for machine learning applications.

GNNs are a type of neural network designed to work specifically with graph data structures, which are mathematical models that represent relationships between objects. Traditional neural networks struggle to handle this type of data, as they are primarily designed to work with grid-like data structures. However, GNNs are uniquely equipped to handle graph data, enabling them to capture complex relationships and patterns that would otherwise go unnoticed.
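To make the contrast with grid-based networks concrete, below is a minimal sketch of a single graph-convolution layer in the style popularized by Kipf and Welling; the toy graph, feature sizes, and function names are illustrative assumptions rather than any specific GNN library's API.

```python
# A minimal sketch of one graph-convolution layer: each node's new
# embedding is a normalized mix of its own and its neighbors' features.
import numpy as np

def gcn_layer(adj: np.ndarray, features: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """One propagation step over a graph given by adjacency matrix `adj`."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1)                       # node degrees
    d_inv_sqrt = np.diag(deg ** -0.5)             # D^{-1/2}
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
    return np.maximum(a_norm @ features @ weights, 0.0)  # ReLU activation

# Toy graph: 3 nodes, edges 0-1 and 1-2; 2 input features, 4 hidden units.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
x = np.random.rand(3, 2)
w = np.random.rand(2, 4)
print(gcn_layer(adj, x, w).shape)  # (3, 4): one embedding per node
```

Because the aggregation follows the edges of the graph rather than a fixed neighborhood of pixels, the same layer works for social networks, molecules, or road maps.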

The transformative power of GNNs lies in their ability to process and interpret complex, non-Euclidean data. This means they can handle data that does not fit neatly into a grid, such as social networks, molecular structures, or transportation networks. This capability opens up a new frontier in machine learning, allowing AI to tackle problems and analyze data in ways that were previously out of reach.

For instance, in the field of social network analysis, GNNs can identify influential individuals within a network, detect communities, and predict future interactions. In the realm of bioinformatics, GNNs can be used to predict the properties of molecules based on their structure, a task that could have significant implications for drug discovery. In transportation, GNNs can optimize routes and schedules, leading to more efficient and sustainable systems.

The application of GNNs extends beyond these examples. In fact, any field that deals with complex, interconnected data can potentially benefit from the power of GNNs. This versatility is one of the reasons why GNNs are being hailed as a game-changer in the world of machine learning.

However, as with any new technology, there are challenges to overcome. Training GNNs requires a significant amount of computational power and can be time-consuming. There are also questions about how to best design and optimize GNNs for specific tasks. Despite these challenges, the potential benefits of GNNs are immense, and researchers are actively working to address these issues.

The introduction of GNNs represents a significant step forward in the field of AI. By enabling machines to understand and interpret complex, interconnected data, GNNs are opening up new possibilities for machine learning applications. As researchers continue to refine and develop this technology, we can expect to see GNNs playing an increasingly important role in a wide range of fields, from social network analysis to bioinformatics, transportation, and beyond.

In conclusion, the advent of AI GNNs is transforming the landscape of machine learning. Their ability to handle complex, non-Euclidean data is unlocking new possibilities and applications, making them a powerful tool in the AI toolkit. As we continue to explore and harness the potential of GNNs, the future of machine learning looks more promising than ever.


Machine-learning for the prediction of one-year seizure recurrence … – Nature.com


Automated Machine Learning: Revolutionizing Predictive Analytics … – Fagen wasanni

AutoML, also known as Automated Machine Learning, is rapidly changing the landscape of predictive analytics and forecasting. It is a game-changing technology that is making data analysis more accessible, efficient, and accurate.

Traditionally, data scientists had to manually perform tasks such as data preprocessing, feature selection, algorithm choice, and model fine-tuning. This process required specialized knowledge and a significant amount of time. AutoML automates these tasks, reducing the time and expertise needed.

One of the revolutionary aspects of AutoML is its ability to automatically select the best algorithm for a given dataset. By evaluating multiple algorithms, it eliminates human bias and error, leading to more accurate predictions. Additionally, AutoML can optimize the parameters of the chosen algorithm, further improving predictive performance.
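As an illustration of what this algorithm-selection step can look like under the hood, the sketch below scores a few candidate scikit-learn models with cross-validation and keeps the best one; real AutoML systems also search over preprocessing and hyperparameters, so this is only the core loop, run on an assumed synthetic dataset.

```python
# A minimal sketch of automated algorithm selection: evaluate several
# candidate models with 5-fold cross-validation and keep the winner.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```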

AutoML's automation capabilities extend beyond model development to deployment and maintenance. It simplifies the complex and error-prone process of deploying models into production environments. It can also monitor deployed models, identify performance issues, and automatically retrain them if necessary. This end-to-end automation streamlines the predictive analytics process and ensures the models remain effective over time.

The democratization of predictive analytics is another significant benefit of AutoML. It makes predictive analytics accessible to non-data scientists, allowing them to develop and deploy models without a deep understanding of machine learning. This is particularly beneficial for small and medium-sized businesses that may not have the resources to hire a team of data scientists.

AutoML has a profound impact on predictive analytics and forecasting. It makes these processes faster, more accurate, and more accessible, enabling businesses and researchers to leverage the power of data like never before. However, challenges exist, such as the need for high-quality data and the complexity of the models it develops. Despite these challenges, the benefits of AutoML outweigh its drawbacks, making it a game-changing technology in the field of predictive analytics and forecasting.

As AutoML continues to evolve and mature, its impact on predictive analytics and forecasting is expected to grow further. It is revolutionizing the way data is analyzed and empowering businesses and researchers to make better-informed decisions based on data-driven insights.


Machine learning identifies physical signs of stroke – Open Access Government

Researchers at the UCLA David Geffen School of Medicine and several medical institutions in Bulgaria collaborated on a study titled "Smartphone-Enabled Machine Learning Algorithms for Autonomous Stroke Detection".

The study involved 240 stroke patients from four metropolitan stroke centers.

Within 72 hours of the onset of symptoms, the researchers recorded videos of the patients. They tested their arm strength to identify facial asymmetry, arm weakness, and speech changeswell-known physical signs of stroke.

To evaluate facial asymmetry, the researchers employed machine learning techniques to analyse 68 facial landmark points. They utilised a smartphone's built-in 3D accelerometer, gyroscope, and magnetometer data to test arm weakness.

Mel-frequency cepstral coefficients were employed to detect speech changes, converting sound waves into images to compare standard and slurred speech patterns.
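As a rough illustration of that speech step (the study's exact pipeline is not described here), the sketch below extracts MFCCs from a recording using librosa's standard routine and summarizes them as a single vector; the file names and the simple distance comparison are hypothetical.

```python
# A hedged sketch: MFCCs summarize the spectral envelope of speech,
# allowing normal and slurred recordings to be compared numerically.
import librosa
import numpy as np

def mfcc_features(path: str, n_mfcc: int = 13) -> np.ndarray:
    """Return the per-coefficient mean MFCC vector for one recording."""
    y, sr = librosa.load(path, sr=16000)               # mono, 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                           # one summary vector

# Hypothetical usage: compare a baseline and a test utterance.
# baseline = mfcc_features("baseline_speech.wav")
# test = mfcc_features("patient_speech.wav")
# distance = np.linalg.norm(baseline - test)
```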

The app was then evaluated using neurologists' reports and brain scan data, demonstrating high sensitivity and specificity in diagnosing stroke accurately in nearly all cases.

Dr Radoslav Raychev, a vascular and interventional neurologist from UCLA's David Geffen School of Medicine, expressed excitement about the potential impact of this app and machine learning technology on stroke care.

Identifying stroke symptoms swiftly and accurately is critical to ensure patient survival and facilitate regaining independence. With this app's deployment, the researchers hope to transform lives and improve the landscape of stroke care.

The revolutionary stroke detection app utilising machine learning shows promise in aiding the early identification of stroke symptoms, potentially saving lives and improving care.

This innovative application can play a pivotal role in transforming stroke care outcomes. Early detection is paramount in the treatment of strokes, as it allows for timely intervention and medical attention, which can make the difference between life and death for affected individuals.


AI and the Heart: How Machine Learning is Changing the Face of … – Fagen wasanni

Exploring the Intersection of AI and Cardiology: How Machine Learning is Revolutionizing Heart Care

The intersection of artificial intelligence (AI) and cardiology is proving to be a game-changer in the field of medicine. Machine learning, a subset of AI, is now at the forefront of revolutionizing heart care, making strides in the diagnosis, treatment, and management of heart diseases.

Machine learning algorithms are designed to learn from data and make predictions or decisions without being explicitly programmed. In the context of cardiology, these algorithms can analyze vast amounts of data, such as medical records, imaging data, and genetic profiles, to predict patient outcomes, identify disease patterns, and suggest optimal treatment strategies. This has the potential to significantly improve patient care and outcomes, while also reducing healthcare costs.

One of the key areas where machine learning is making a significant impact is in the early detection of heart diseases. Traditionally, heart diseases are diagnosed based on symptoms, physical examination, and various tests. However, these methods can sometimes be inaccurate or inconclusive. Machine learning algorithms, on the other hand, can analyze a wide range of data to identify subtle patterns and indicators of heart disease that may be missed by traditional methods. This can lead to earlier and more accurate diagnosis, allowing for timely intervention and treatment.

In addition to diagnosis, machine learning is also transforming the treatment of heart diseases. For instance, machine learning algorithms can analyze data from thousands of patients to identify the most effective treatment strategies for different types of heart diseases. This can help doctors make more informed decisions about treatment, leading to better patient outcomes.

Moreover, machine learning is playing a crucial role in the management of heart diseases. For example, wearable devices equipped with machine learning algorithms can continuously monitor a patient's heart rate, blood pressure, and other vital signs. These devices can alert doctors to any abnormalities, allowing for immediate intervention. This can be particularly beneficial for patients with chronic heart conditions, as it can help prevent complications and hospitalizations.

Despite the promising potential of machine learning in cardiology, there are also challenges that need to be addressed. One of the main challenges is the need for large amounts of high-quality data to train the algorithms. This can be difficult to obtain due to privacy concerns and logistical issues. Additionally, there is a need for rigorous testing and validation of the algorithms to ensure their accuracy and reliability.

Furthermore, the integration of machine learning into clinical practice also requires changes in the healthcare system. This includes training healthcare professionals to use these technologies, updating regulations to accommodate these new technologies, and addressing ethical issues related to the use of AI in healthcare.

In conclusion, the intersection of AI and cardiology is ushering in a new era of heart care. Machine learning is revolutionizing the diagnosis, treatment, and management of heart diseases, offering the potential for improved patient care and outcomes. However, realizing this potential requires addressing the challenges associated with the use of AI in healthcare. As we continue to navigate this exciting frontier, it is clear that the future of cardiology will be shaped by the innovative application of machine learning.


The Hidden Impact of AI in Photography and How Machine Learning … – Cryptopolitan


Artificial Intelligence (AI) and machine learning have been quietly transforming photography, altering how we shoot and edit images. While more attention-grabbing technologies like Adobe's Generative Fill feature in Photoshop have caught the eye, AI's subtler integrations in the photography field are playing a significant role. Here are five ways AI is invisibly enhancing your photography experience.

Modern mirrorless cameras utilize machine learning algorithms to improve autofocus capabilities. While traditional autofocus systems rely on contrast detection and perspective analysis, a parallel process fueled by machine learning models is now at play. This AI-driven processor interprets the scene in real time, identifying subjects such as faces, objects, animals, and more. Cameras equipped with face and eye detection can lock focus on recognized subjects, providing improved precision and ease of use.

Smartphone cameras produce surprisingly high-quality images despite their small sensors and lenses. This is made possible by dedicated image processors enhanced with machine learning. Before the shutter button is even tapped, the camera system evaluates the scene and makes decisions based on detected elements, such as portraits or landscapes. After capturing multiple images with varying exposures and ISO settings, the processor blends them together, making adjustments based on scene recognition. The result is photos that rival those from larger-sensor cameras, achieved through the seamless integration of AI-driven image processing.

Image editing software has been utilizing machine learning-based people recognition for some time. Applications like Google Photos, Lightroom, and Apple Photos can easily identify specific individuals in photos, enabling users to locate images containing certain people quickly. This technology extends beyond photography to video editing, where programs like DaVinci Resolve can also recognize people in video footage. Additionally, facial feature recognition allows for more accurate selections and targeted adjustments in editing processes.

Auto-editing controls in photo software have evolved with the help of machine-learning models. For example, in Lightroom, clicking the Auto button in the Edit or Basic panels triggers Adobe Sensei's cloud-based processing technology. The AI analyzes similar images in its database and applies relevant edit settings to improve the image. Other applications, such as Pixelmator Pro and Luminar Neo, offer similar AI-driven automatic editing features, giving users a starting point that can be further customized.

Machine learning technologies also assist photographers in quickly finding images without the need for extensive keywording. Many photo apps now employ object and scene recognition to scan images in the background or in the cloud. This allows users to perform searches based on recognized elements, such as landscapes, buildings, or animals. While not as precise as manually applied keywords, this AI-powered search feature saves time and streamlines the image retrieval process.

As AI-driven features become more integrated into photography tools, photographers are benefiting from improved precision, automatic adjustments, and simplified image searches. From camera autofocus to smartphone image processing, machine learning plays a crucial role in enhancing the visual experience for both professional and amateur photographers. Embracing these AI-powered capabilities allows photographers to focus on their craft, knowing that the technology is working seamlessly to enhance their creative vision.


Machine learning-based technique for gain and resonance … – Nature.com


Machine learning for the development of diagnostic models of … – Nature.com

Design

This was a prospective multicenter observational study. Unlike studies on prognostic models, in the present study, diagnostic models were developed, that is, models designed to determine whether a patient was in the compensated or decompensated phase of their disease (exacerbation of COPD and/or HF decompensation).

The criteria for admission to this study and the recruitment process have been previously reported [19]. Patients older than 55 years who were able to walk at least 30 m, with a main diagnosis of decompensated HF and/or exacerbation of COPD and hospitalized in the Department of Internal Medicine, Cardiology or Pneumology, were included. Participants with a pacemaker or intracardiac device, users of domiciliary oxygen therapy prior to admission and patients with HF of New York Heart Association (NYHA) functional class IV were excluded [29].

Four hospitals participated: two tertiary university hospitals (600–900 hospital beds) and two regional secondary care hospitals (150–400 hospital beds) in the provinces of Barcelona and Madrid.

Each center had a trained interviewer, and each department had a referring physician who was accessible to the interviewer. Each day, the interviewer contacted the referring physician to review the hospitalization census and identify patients with the diagnosis of interest. Next, the interviewer confirmed the main diagnosis (decompensated HF and/or exacerbation of COPD) with the physician responsible for the patient and then contacted the participant (the same day or the next day) to obtain informed consent and verify compliance with all admission criteria of this study. The sample was obtained through convenience sampling, and all patients were enrolled consecutively as they were identified.

The recruitment and follow-up periods lasted 18 months, starting in November 2010.

Each patient underwent three identical evaluations: the first in the hospitalization unit (V1) and the other two consecutively, at least 24 h apart, in the participant's home 30 days after hospital discharge (V2 and V3). Thus, each participant underwent one evaluation in the decompensated phase (V1) and two in the compensated phase (V2, V3) of their disease.

The evaluation protocol [19] included documentation of symptoms (dyspnea according to the NYHA [29] and Modified Medical Research Council (mMRC) [30] scales) and physiological parameters (heart rate [HR] and oxygen saturation [Ox]) in two consecutive periods: effort (walking at a normal pace on flat terrain for a maximum of 6 min) and recovery (seated for 4 min after the end of the effort period).

HR and Ox were considered time series with a sample frequency of 1 Hz and were collected throughout the evaluation with a pulse oximeter (Model 3100, Nonin Medical, Inc., Plymouth, MN, USA) placed on the left index finger.

Given the absence of a single standard diagnostic test to verify whether a patient was in the compensated or decompensated phase of their disease, the clinical judgment of the participant's responsible physician was considered the standard diagnostic test. Thus, in the decompensated phase, the diagnosis of decompensated HF and/or COPD exacerbation corresponded to the confirmed diagnosis from the participant's attending physician (in cases of diagnostic doubt, the patient was excluded). For the compensated phase, a standard diagnosis of compensated HF and/or stable COPD was confirmed by a study physician through telephone contact with the participant 30 days after hospital discharge. During this telephone interaction, the patient was considered to be in the compensated phase if none of the following events had occurred since hospital discharge: increased cough, sputum or dyspnea; initiation of or an increase in corticosteroid use; and initiation of antibiotic treatment or medical consultation for worsening of the clinical situation from any cause. In cases of doubt, or if the compensated phase could not be confirmed, successive telephone contacts were made until the phase could be confirmed. The interviewer scheduled home visits for the respective evaluations (V2, V3) only after confirmation and within 24–48 h of receiving confirmation.

Given the objective of this study (the development of an online algorithm capable of detecting the onset of an exacerbation from HR and Ox data), various characteristics were extracted from each of the evaluations (V1, V2, V3). For this purpose, the effort phase (walking) and recovery phase of each evaluation were separated by verifying the times recorded manually in the data collection records at the beginning and end of each phase of the test and by visually reviewing the signals to confirm the manual records. Once the signals were separated according to the evaluation phase, the corresponding characteristics of the available measures were extracted.

Numerous characteristics were extracted from the signals. During each of the tests, two different phases were considered, effort and recovery, which were treated separately. From each of the phases, three signals were considered: HR, Ox and the normalized difference between these variables. From each of these three temporal signals, characteristics of the temporal domain (the mean, standard deviation, and range) and the frequency domain (the characteristics of the first and second harmonics, the distribution of the harmonics [kurtosis and skewness], the sum of all harmonics and the first six indices of the principal component analysis [PCA] of the normalized fast Fourier transform [FFT] of the signal) were extracted. Accordingly, 16 characteristics were obtained from each phase (effort and recovery) of each signal (HR, Ox, and the normalized difference between these), resulting in a total of 96 characteristics for each evaluation. The normalized difference between Ox and HR was defined using the sklearn StandardScaler function (the mathematical formula is available at https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html), and PCA was applied to the HR and Ox time series using the sklearn.decomposition.PCA function (formula available at https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html). The first six PCA components were selected based on the researchers' criteria, as the first three to six components are typically considered in this type of analysis.
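A minimal sketch of this per-signal extraction is given below, under the assumption that "characteristics of the harmonics" refers to FFT magnitudes; the exact definitions used in the study may differ.

```python
# A hedged sketch of per-segment feature extraction: time-domain
# statistics plus FFT-based harmonic descriptors for one 1 Hz signal.
import numpy as np
from scipy.stats import kurtosis, skew

def signal_features(x: np.ndarray) -> dict:
    x = np.asarray(x, dtype=float)
    spectrum = np.abs(np.fft.rfft(x - x.mean()))       # magnitude spectrum
    spectrum_norm = spectrum / (spectrum.sum() + 1e-12)
    return {
        "mean": x.mean(),
        "std": x.std(),
        "range": x.max() - x.min(),
        "harmonic_1": spectrum[1] if len(spectrum) > 1 else 0.0,
        "harmonic_2": spectrum[2] if len(spectrum) > 2 else 0.0,
        "harmonics_sum": spectrum.sum(),
        "harmonics_kurtosis": kurtosis(spectrum_norm),
        "harmonics_skewness": skew(spectrum_norm),
    }

# The six PCA indices of the normalized FFT are fit across evaluations, e.g.:
# from sklearn.decomposition import PCA
# pca = PCA(n_components=6).fit(all_normalized_spectra)  # rows = evaluations
```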

Given that the main objective of this study was the detection of a transition from a state considered normal or stable (HF or COPD in the compensated phase [V2, V3]) to a state of decompensation or exacerbation (decompensated phase [V1]), a methodological scheme was applied based on the calculation of differences between the evaluations for each available characteristic. Thus, if a patient had three evaluations (V1, V2 and V3), six differences or useful comparative signals were obtained from these evaluations (V1−V2, V1−V3, V2−V1, V2−V3, V3−V1, V3−V2). The label of each of these comparative signals is illustrated in Fig. 1.

Labeling and interpretation of comparative signals.

Although the differences V1−V2 and V1−V3 might be more appropriately considered "decompensation recovery" rather than "no decompensation", we decided to discard a third label category (decompensation recovery) due to the small sample size and because the main objective of the trial was the detection of decompensation.
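The sketch below illustrates one plausible implementation of this pairwise-difference scheme; the sign convention of the difference and the helper names are assumptions, not the study's code.

```python
# A minimal sketch of the comparative-signal construction: each ordered
# pair of visits yields one input (a feature-vector difference), labeled
# 1 only when the pair ends in the decompensated visit (V2-V1, V3-V1).
import itertools
import numpy as np

DECOMPENSATED = "V1"  # hospital visit; V2, V3 are compensated home visits

def comparative_samples(visits: dict) -> list:
    """visits maps a visit name ('V1', 'V2', 'V3') to its feature vector."""
    samples = []
    for start, end in itertools.permutations(visits, 2):
        diff = visits[end] - visits[start]          # one plausible convention
        label = 1 if end == DECOMPENSATED else 0    # change to decompensation
        samples.append((f"{start}-{end}", diff, label))
    return samples

# Toy usage with three visits and five features each:
rng = np.random.default_rng(0)
visits = {v: rng.random(5) for v in ("V1", "V2", "V3")}
for name, _, label in comparative_samples(visits):
    print(name, label)  # six pairs, two labeled 1
```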

In a first approximation, potential predictive characteristics were selected using the random forest [31], gradient boosting classifier [31] and light gradient-boosting machine (LGBM) [32] classification algorithms, which integrate importance-based feature selection within their decision structure. We selected the top 10 features based on their importance ranking within the structure of each classifier model.
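As a sketch of this screening step, with random forest standing in for all three classifiers and synthetic data replacing the study's 96 extracted characteristics:

```python
# A minimal sketch of importance-based feature screening: fit a tree
# ensemble and keep the ten highest-ranked features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=96, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top10 = np.argsort(forest.feature_importances_)[::-1][:10]
print("selected feature indices:", top10)
```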

Figure 2 shows an outline of the process for the preparation and selection of the characteristics of the signals.

Process for preparation and selection of the characteristics of the evaluations.

During the process of selecting characteristics, all those that were redundant or had very low variability were discarded. In this study, by definition, we did not have variables with perfect separation that could cause overestimation of the diagnostic capacity of the models (overfitting) [26].

In addition to the characteristics selected from the HR and Ox signals, the age, sex and baseline disease (HF or COPD) of the patients were considered potential predictors.

For the development of the algorithms, the ML techniques most used in the studies of classification models were considered: (i) decision trees, (ii) random forest, (iii) k-nearest neighbor (KNN), (iv) support vector machine (SVM), (v) logistic regression, (vi) naive Bayes classifier, (vii) gradient-boosting classifier and (viii) LGBM.

For each of these techniques, hyperparameters were selected based on a brute-force scheme using all available data through a cross-validation scheme (K-fold cross-validation, k = 5). A normalization process based on the medians and interquartile ranges (IQRs) was applied to all characteristics [31].
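A minimal sketch of this tuning scheme is shown below, using scikit-learn's RobustScaler for the median/IQR normalization and an assumed k-nearest-neighbor grid standing in for one of the eight techniques; the grid values are illustrative.

```python
# A minimal sketch of brute-force hyperparameter tuning: median/IQR
# scaling in a pipeline, exhaustive grid search, 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler

X, y = make_classification(n_samples=200, n_features=13, random_state=0)
pipe = Pipeline([("scale", RobustScaler()), ("knn", KNeighborsClassifier())])
grid = {"knn__n_neighbors": [3, 5, 7, 9], "knn__weights": ["uniform", "distance"]}
search = GridSearchCV(pipe, grid, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)
```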

Once the best parameters of each technique were identified, internal validation was performed with a leave-one-patient-out method: for each patient, a new model was trained with that patient's data removed from the training and validation sets, and the held-out data were then used to test the model. Figure 3 shows an outline of the training and validation process.

Scheme of the training and validation of the study algorithms.
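A minimal sketch of leave-one-patient-out validation, assuming each comparative input carries a patient identifier; this corresponds to scikit-learn's LeaveOneGroupOut splitter:

```python
import numpy as np
from sklearn.base import clone
from sklearn.model_selection import LeaveOneGroupOut

def leave_one_patient_out(model, X, y, patient_ids):
    """Refit the chosen model once per patient, holding that patient's
    comparisons out of training and predicting them with the refitted model."""
    preds = np.empty_like(y)
    for train, test in LeaveOneGroupOut().split(X, y, groups=patient_ids):
        fitted = clone(model).fit(X[train], y[train])
        preds[test] = fitted.predict(X[test])
    return preds
```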

The observation units (inputs) on which the algorithms were applied were the differences between two different evaluations, as illustrated in Fig. 1. Thus, the algorithms classified the evaluated difference as a state of no decompensation (label = 0) or a change to decompensation (label = 1). Therefore, the following parameters were defined:

True positive (TP): a change to decompensation as the classification result for a V3V1 or V2V1 comparison.

True negative (TN): no decompensation as the classification result for a V1V2, V1V3, V2V3 or V3V2 comparison.

False positive (FP): a change to decompensation as the classification result for a V1V2, V1V3, V2V3 or V3V2 comparison.

False negative (FN): no decompensation as the classification result for a V3V1 or V2V1 comparison.

The parameters used to evaluate the diagnostic performance of the algorithms were sensitivity (S), specificity (E) and accuracy (A). Each patient could have up to six observation units or inputs; therefore, up to six classification results were obtained, each defined as TP, TN, FP or FN. S, E and A were then obtained for each patient, and the final S, E and A of the entire sample were calculated as the mean of the per-patient values.
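A sketch of this per-patient aggregation (hypothetical helper, averaging S and E only over patients that have the relevant comparisons):

```python
import numpy as np

def per_patient_metrics(y_true, y_pred, patient_ids):
    """Sensitivity (S), specificity (E) and accuracy (A) per patient,
    then averaged over patients, as described above."""
    S, E, A = [], [], []
    for pid in np.unique(patient_ids):
        m = patient_ids == pid
        t, p = y_true[m], y_pred[m]
        tp = np.sum((t == 1) & (p == 1)); fn = np.sum((t == 1) & (p == 0))
        tn = np.sum((t == 0) & (p == 0)); fp = np.sum((t == 0) & (p == 1))
        if tp + fn: S.append(tp / (tp + fn))   # only if positives exist
        if tn + fp: E.append(tn / (tn + fp))   # only if negatives exist
        A.append((tp + tn) / len(t))
    return np.mean(S), np.mean(E), np.mean(A)
```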

The predictive values were not considered because the proportions of evaluations in the decompensated phase (33% [V1]) and compensated phase (66% [V2, V3]) did not correspond to the usual proportion found in clinical practice (the vast majority of patients in the community are usually in the compensated phase).
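For context, the predictive values depend directly on prevalence, which is why they would be misleading here. With sensitivity S, specificity E and prevalence p:

```latex
\mathrm{PPV} = \frac{S\,p}{S\,p + (1 - E)(1 - p)}, \qquad
\mathrm{NPV} = \frac{E\,(1 - p)}{E\,(1 - p) + (1 - S)\,p}
```

Evaluated at the study's proportion of roughly p = 0.33, the PPV would be far higher than at the much lower prevalence of decompensation in the community, so reporting it would overstate real-world performance.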

Missing data were not included in the analysis, but patients with missing data were not excluded (all available patient data were included in the analysis). No imputation of the missing data was performed.

During the review of the signals and the verification of the start and end times of each evaluation against the manual records, missing sections of HR and/or Ox data due to poor contact between the skin and the sensor were observed. This led to the introduction of filters to exclude these missing sections from the analysis. Thus, an evaluation was excluded if it had a loss rate (missing measures divided by the total number of measures) greater than 10% in any phase. In addition, evaluations performed at home (V2, V3) that did not reveal an improvement in the patient's sensation of dyspnea (of at least one point on the mMRC scale [30]) with respect to the decompensated-phase evaluation (V1) were also excluded, to ensure that home assessments were performed in the compensated phase.
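A one-function sketch of the 10% loss-rate filter, assuming missing samples are marked as NaN (an assumption; the paper does not describe the encoding):

```python
import numpy as np

def usable(phase_signals, max_loss=0.10):
    """Keep an evaluation only if every phase of every signal lost at most
    10% of its samples (missing samples assumed to be encoded as NaN)."""
    for s in phase_signals:                   # one array per phase/signal
        if np.isnan(s).mean() > max_loss:     # loss rate = missing / total
            return False
    return True
```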

No indeterminate results were noted in the index test (algorithms); in all cases, the model produced a no decompensation or a change to decompensation result. On the other hand, all evaluations were always performed after a definitive result of the standard diagnostic reference test: clinical diagnosis of the decompensated phase by the doctor responsible for the patient in the hospital evaluation (V1) and clinical diagnosis of the compensated phase by the doctor who contacted the patients by phone before home evaluations (V2, V3). Thus, the algorithms were developed and applied on evaluations clearly labeled as the compensated or decompensated phase by the reference diagnostic test.

All methods and procedures were performed in accordance with the relevant guidelines and regulations. The study followed the principles of the Declaration of Helsinki and was approved by the Ethics and Research Committee (ERC) of the center promoting the study (ERC of the Mataró Hospital, approval number 1851806). Informed consent was obtained from all participants and/or their legal guardians.

See the rest here:

Machine learning for the development of diagnostic models of ... - Nature.com

Q & A: How A.I. and machine learning are transforming the lending … – Digital Journal

Image: AFP Jessica YANG

AI and machine learning are transforming every industry as we know it, and the lending industry is no exception. In conversation with Experian's Scott Brown, President of Consumer Information Services, we explore the trends and challenges in the lending business and how technology is making a positive impact in driving financial inclusion.

Digital Journal: What are the biggest challenges lenders are facing when it comes to driving financial inclusion, as well as speed and accuracy, in making credit decisions?

Scott Brown: A big challenge lenders face is reaching underserved consumers who need access to credit in order to attain their financial goals. Our research shows 106 million Americans, or 42 percent of the adult population, lack access to mainstream credit because they are credit invisible, unscoreable or have a subprime credit score. Communities of color are more likely to lack access to mainstream credit, with 28 percent of Black and 26 percent of Hispanic consumers unscoreable or invisible, which is perpetuating historic disadvantage.

The industry has made a lot of progress in recent years by incorporating new data into lending decisions. There continues to be opportunity with data assets, scores and models to ensure all consumers have access to fair and affordable credit. When advanced analytics and machine learning are combined with expanded data sets, lenders have the opportunity to bring more consumers into the credit ecosystem without taking on additional risk.

DJ: What are some of the notable trends you are seeing in the lending industry today?

Scott Brown: The push to make the credit and payments industry better, faster and smarter is more apparent than ever before. Consumer expectations are changing, and it's critical for our industry to adapt and meet consumers where they are. In today's rapidly changing environment, 95 percent of lenders are using advanced analytics and expanded data to stay ahead and best serve consumers. This is good news.

Consumers deserve models built on current and predictive behaviours. However, as models become more sophisticated, deployment timelines and costs can increase. In fact, our research shows it takes 15 months on average to build and deploy a model for credit decisioning, and 55 percent of lenders have built models that have not made it to production, which is one of the biggest challenges for lenders. Because of this, many lenders rely on old models that leave consumers behind. By leveraging data and technology, lenders can streamline model deployment, cut costs and bring more consumers into the credit ecosystem.

DJ: How does A.I. and machine learning help address the challenges lenders are facing?

Scott Brown: The key to more predictive models is more data and technology that can deliver more meaningful insights. This is where machine learning and advanced analytics come into play. When advanced analytics and expanded data are used in credit decisions, more consumers can be scored and gain access to the financial services they need. There are platform solutions that exist today that can make leveraging this information in a compliant, explainable, and transparent way easier and more cost effective for lenders.

DJ: You recently launched Ascend Ops, can you tell me what that is and how it helps your clients?

Scott Brown: At Experian, we are continually innovating and using technology to find solutions to global issues. Our goal is to modernize the financial services industry and increase financial access for all. Ascend Ops is our most recent example of this in action. This first-of-its-kind solution empowers lenders to deploy new features and models in days or weeks instead of months. It is a game changer in operational efficiency and, most importantly, in helping our clients protect and better serve consumers without making significant investments in their infrastructure.

Ascend Ops is part of the Experian Ascend Technology Platform and helps lenders implement and manage models for key use cases across the customer lifecycle, including marketing, account management and more. This gives lenders the ability to deliver more relevant marketing offers to consumers and provide better insights to make lending decisions quickly and accurately. We're helping remove the challenges lenders face in deploying new models to market. The quicker more inclusive credit models can be deployed, the sooner businesses and consumers can benefit from their use.

DJ: How is technology like A.I. and machine learning poised to transform the landscape of the credit economy and expand the lending universe over the next 5-10 years?

Scott Brown: If you think back to the landscape 5-10 years ago, technology has made our current environment look nearly unrecognizable. Over the last decade, technology has fundamentally changed the financial services industry, and we've played a large role in this disruption. As we look ahead, we will continue to see the ways technology and advancements in data can transform the financial services ecosystem, including consumer-permissioned data. These advancements will enable us to include more consumers in mainstream lending and allow us to bring better, faster and smarter solutions to market.

See the rest here:

Q & A: How A.I. and machine learning are transforming the lending ... - Digital Journal

The Rise of AI and Machine Learning in Global E-Commerce … – Fagen wasanni

Exploring the Impact of AI and Machine Learning on Global E-Commerce Analytics: Opportunities and Challenges

The rise of artificial intelligence (AI) and machine learning in global e-commerce analytics is a phenomenon that is reshaping the landscape of online business. These technologies are not only transforming the way businesses operate but also creating a myriad of opportunities and challenges that are worth exploring.

AI and machine learning are increasingly being integrated into e-commerce platforms to enhance customer experience, streamline operations, and improve decision-making processes. These technologies are capable of analyzing vast amounts of data, identifying patterns, and making predictions, thereby enabling businesses to gain valuable insights into customer behavior and market trends.

One of the significant opportunities presented by AI and machine learning in e-commerce analytics is personalized marketing. By analyzing customer data, these technologies can predict individual preferences and buying habits, allowing businesses to tailor their marketing strategies accordingly. This level of personalization can significantly enhance customer engagement and loyalty, leading to increased sales and revenue.

Moreover, AI and machine learning can help businesses optimize their inventory management. These technologies can predict demand for different products based on historical sales data and current market trends, enabling businesses to maintain optimal stock levels and avoid overstocking or understocking. This can result in significant cost savings and improved customer satisfaction.

However, the integration of AI and machine learning into e-commerce analytics also presents several challenges. One of the main challenges is data privacy and security. As these technologies rely on analyzing customer data, businesses must ensure that they comply with data protection regulations and safeguard customer information from cyber threats. This requires significant investment in cybersecurity measures and ongoing monitoring to detect and respond to any potential breaches.

Another challenge is the lack of skilled professionals who can effectively implement and manage these technologies. While AI and machine learning can automate many tasks, they still require human oversight to ensure they are functioning correctly and delivering accurate results. Therefore, businesses need to invest in training and development to equip their staff with the necessary skills and knowledge.

Furthermore, the rapid pace of technological advancement means that businesses must continually update their systems and strategies to keep up with the latest developments in AI and machine learning. This can be a daunting task, especially for small and medium-sized enterprises that may lack the resources and expertise to do so.

In conclusion, the rise of AI and machine learning in global e-commerce analytics offers exciting opportunities for businesses to enhance their operations and customer experience. However, it also presents significant challenges that need to be addressed. Businesses must strike a balance between leveraging these technologies to gain a competitive edge and ensuring they comply with data protection regulations and maintain the trust of their customers. As the world of e-commerce continues to evolve, it will be fascinating to see how businesses navigate these opportunities and challenges.

See the rest here:

The Rise of AI and Machine Learning in Global E-Commerce ... - Fagen wasanni

86-year old Hammett equation gets a machine learning update – Chemistry World

The Hammett equation, a chemical theory that is over 80 years old, is being expanded upon and improved with the help of machine learning. The equation, which helps to explain the electron-donating or -withdrawing nature of aromatic substituents via calculation of Hammett constants, has been analysed computationally by a team of Brazilian researchers who want to make it even more precise and unlock unknown values for practical experiments.

There are some experimental Hammett's constants which, although widely used in many applications, were not measured or have inconsistent values, says Itamar Borges Jr from the Institute of Military Engineering in Brazil, who worked on the study alongside Julio Cesar Duarte and Gabriel Monteiro-de-Castro. He adds that the work employ[s] machine learning algorithms and available experimental values to produce a consistent set of the different types of Hammett's constants.

In 1937, Louis Hammett published work that led to the eponymous Hammett equation. He was working at the time in a new field that he had named physical organic chemistry. Hammett recognised the relationship between the rate of hydrolysis of a series of ethyl esters and the subsequent equilibrium position of the ionisation of the corresponding acids in water. It was some of the first work of its kind to try to provide a quantitative theory to rationalise the relationships between chemical structures and reactivity in chemistry.

Applying his focus to meta- and para-substituted benzoic acids and their respective esters, Hammett found a direct relationship. Each substituent on the aromatic ring could be given a σ value representing its electron-donating or -withdrawing effect. These values were determined experimentally by Hammett, helping chemists to gauge the impact of these groups on reactivity. Hammett then went even further, deriving ρ values. These values meant chemists could predict the number of electrons involved in the transition state, allowing an understanding of the type of mechanistic pathway a reaction could take.
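For reference, the Hammett equation relates the equilibrium (or rate) constant of a substituted compound, K_X, to that of the unsubstituted parent, K_H, through the substituent constant σ_X and the reaction constant ρ:

```latex
\log_{10}\frac{K_X}{K_H} = \sigma_X\,\rho
```

For the ionization of benzoic acids in water, ρ is defined as 1, which is how the σ values themselves were originally anchored.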

Borges Jr's team's work uses a combination of density functional theory (DFT) methods and machine learning algorithms to calculate new Hammett constants. Whereas previous work used semi-empirical methods, Borges Jr states that the DFT methods used in this work are more accurate for calculating atomic charges. Using a variety of meta and para substituents on benzene and benzoic acid derivatives, DFT models calculated the atomic charges for the carbon atoms bonded to the groups being analysed. Processing these results with machine learning techniques produced 219 Hammett constant values, of which 92 were previously unknown.
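The article does not name the algorithm used, but the general setup it describes, regressing known Hammett constants on DFT-derived atomic charges and then predicting missing ones, can be sketched as follows; the choice of regressor and feature layout here is purely illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fill_missing_constants(charges_known, sigma_known, charges_unknown):
    """Fit a regressor from DFT atomic charges to experimentally known
    Hammett constants, then predict constants for substituents that lack
    experimental values. Illustrative only, not the authors' method."""
    model = RandomForestRegressor(n_estimators=500, random_state=0)
    model.fit(np.asarray(charges_known), np.asarray(sigma_known))
    return model.predict(np.asarray(charges_unknown))
```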

Alongside this work, the Brazilian researchers included a set of simplified equations for obtaining constants for new substituents that hadn't previously been calculated. They hope that, with knowledge of atomic charges obtained from other DFT calculations, the simple equations can help to obtain new constants.

Using this machine learning approach, values that had previously only been determined experimentally were calculated computationally for the first time for three substituents (CCl3, NHCHO and NHCONH2). DFT calculations were used to work out the atomic charges, which served as inputs for the machine learning algorithm. The resulting Hammett constants corresponded to literature values from experimental results for the three substituents.

Kristaps Ermanis from the University of Nottingham, an expert in computational organic chemistry, says that the work can fill in values where the data hasn't previously been found, but that the study relies on limited amounts of DFT data, which limits the number of parameters in the machine learning method, and therefore potentially also limits its accuracy. He believes the accuracy could be easily improved in future work by acquiring more DFT data.

Matthew Grayson, whose group works on computational chemistry at the University of Bath, describes the work as a valuable idea that allows experimentalists to access previously unknown Hammett constants using simple and readily available atomic charge features.

See the original post here:

86-year old Hammett equation gets a machine learning update - Chemistry World

Machine Learning-Trained Autonomy Tested By XQ-58 For Skyborg – Aviation Week


An XQ-58 flies while controlled by machine-learning algorithms, as an F-15E observes.

Credit: U.S. Air Force

A Kratos XQ-58 Valkyrie has been flown for the first time with a certain tactical problem solved during the flight by machine learning (ML)-trained algorithms developed by the Air Force Research Laboratory (AFRL). The July 25 flight at the Eglin Test and Training Complex off the Florida coast builds...


Go here to see the original:

Machine Learning-Trained Autonomy Tested By XQ-58 For Skyborg - Aviation Week

Artificial Intelligence and Machine Learning in Packaging Robotics … – Fagen wasanni

At the forefront of the fourth industrial revolution, Artificial Intelligence (AI) and Machine Learning (ML) have become significant trends in packaging robotics and automation. AI refers to software that is trained to perform specific tasks, while ML uses algorithms to learn insights and patterns from data. According to a report from PMMI, AI-based applications in the packaging industry are expected to grow at a CAGR of over 50% in the next five years.

One company making strides in this field is Deep Learning Robotics (DLRob). DLRob has developed groundbreaking robot control software that allows users to teach robots tasks by demonstrating them. Through advanced ML algorithms, robots can learn by observing and mimicking human actions. DLRob's software is user-friendly and adaptable to various robots and applications.

In April, DLRob announced a software update that enables its customers to connect and control a wider range of robotics devices, including Universal Robots' UR series of cobots. This expansion increases the capabilities and versatility of DLRob's software.

In another collaboration, Intrinsic, an Alphabet company, and Siemens have partnered to explore integrations and interfaces between Intrinsic's AI-based robotics software and the Siemens Digital Industries portfolio for industrial production automation. The goal is to bridge the gaps between robotics, automation engineering, and IT development, making industrial robotics more accessible and usable for businesses, entrepreneurs, and developers. This collaboration aims to bring joint solutions to the market that can benefit more enterprises.

Both Intrinsic and Siemens emphasize the importance of combining robotics with the production environment to maximize value. By accelerating the development process and facilitating seamless operation, they aim to democratize access to robotics and automation technology.

This collaboration highlights the growing significance of AI and ML in the packaging industry, paving the way for innovative advancements in robotics and automation for various sectors.

Read the original:

Artificial Intelligence and Machine Learning in Packaging Robotics ... - Fagen wasanni

How machine learning can expand the Landscape of Edge AI. | TDK – TDK Corporation

Edge AI and the evolution of edge devices

In the context of edge computing, an edge device simply refers to a device that operates at the edge of a network, collecting, processing, and analyzing data. Examples include smartphones, security cameras, smart speakers, and a variety of other devices. In recent years, with the rise of edge AI, these devices have become even smarter thanks to machine learning functions.

Edge AI is a collective term for technologies related to on-device collection, processing, and analysis of data for artificial intelligence purposes. Commonly, implementing AI requires vast amounts of data and computing power, which is why AI workloads are typically run on cloud-based servers. With edge AI, however, data is processed on the devices themselves, reducing delays and costs related to data transmission, as well as improving privacy.

Cloud Computing and Edge Computing Compared

The coupling of edge devices with edge AI is broadening the realm of IoT (Internet of Things). Self-driving vehicles, factory automation, and medical device management are examples of edge devices already playing vital roles where real-time data processing and decision-making are required.

Edge AI has traditionally been implemented on devices with robust processing power, such as smartphones and tablets. With the proliferation of IoT, however, interest is growing in a technology known as TinyML (Tiny Machine Learning), which enables small devices with only modest capabilities to execute machine learning functions onboard.

Generally, machine learning is performed on high-performance computers or cloud servers, requiring large amounts of memory and fast processors, with commensurate electrical power consumption. This permits the execution of large-scale machine learning models based on vast datasets, resulting in highly accurate image recognition, natural language processing, and more. However, every step of the workflow, including data collection, model development, and validation, usually requires handling by seasoned engineers specialized in each area.

TinyML is a machine learning technology designed for small devices, enabling edge AI to be implemented even on microcontrollers (MCUs), which only possess limited processing muscle. This, in turn, is expected to engender smaller IoT devices with low power consumption. It is now possible to run machine learning inference on almost any device with a sensor and marginal computing power, endowing it with intelligence.

Qeexo, a Silicon Valley startup that joined the TDK Group in 2023, specializes in machine learning solutions for edge devices, with a particular focus on TinyML. Qeexo AutoML is an end-to-end, no-code (i.e., not requiring code to be hand-written in a programming language) platform that empowers non-engineers to implement machine learning on lightweight edge devices. Working in an intuitive, web-based interface, users can easily perform all the steps necessary to build a machine learning system: beginning with collecting and pre-processing raw data, followed by training and refining recognition models, then finally creating and installing the finished package onto edge devices, where the machine learning-based intelligence comes to life.

TDK is currently developing the i3 Micro Module, an ultracompact sensor module with onboard edge AI designed for predictive maintenance: the practice of foreseeing and preempting anomalies in machinery and equipment at factories and similar facilities. Sensors, including those for vibration, temperature, and barometric pressure, as well as edge AI and mesh networking capabilities, are all integrated into a compact package, allowing equipment conditions to be monitored without having to rely on manpower, thereby helping minimize downtime and improve productivity. (Photo: Ultracompact sensor module i3 Micro Module)


Michael A. Gamble, Director of Product Management for Qeexo, explained the significance of Qeexo AutoML: Conventionally, machine learning for embedded devices is a lengthy, complex process requiring highly specialized engineering skills. Qeexo AutoML enables almost anyone, including those not technically inclined, to accomplish the same, using an end-to-end, streamlined web interface. Similar to the way digital design tools and audio workstation software opened up graphic arts and music production to just about anyone with a creative spark, AutoML levels the playing field for machine learning. Put simply, we think of Qeexo AutoML as democratizing machine learning.

Advances in edge device technologies have spurred the development of numerous IoT devices and microcontrollers featuring sophisticated machine learning capabilities. With the advent of tools like Qeexo AutoML, it is now possible to create complex machine learning models that run on edge devices in short order.

Letting edge AI process data collected from sensors in edge devices substantially expands the range of possible solutions. Gamble continued, Pairing Qeexo's machine learning solutions with TDK's sensor devices will allow us to provide customers with integrated, one-stop solutions. We look forward to a synergistic partnership in developing and delivering smart edge solutions that leverage each other's strengths.

Today, edge devices are evolving into intelligent systems that learn by themselves, going well beyond merely gathering and transmitting data. Advanced manufacturing facilities, sometimes referred to as smart factories, will begin equipping almost every piece of machinery and equipment with edge devices. Edge devices are also becoming prevalent among consumers in the form of mobility products and smartphones. Propelled by tools like AutoML, TinyML and edge AI are expected to become increasingly familiar and commonplace. This will all have a significant positive impact on our daily lives, businesses, and industry as a whole.

Read more from the original source:

How machine learning can expand the Landscape of Edge AI. | TDK - TDK Corporation

Use cases of Stereo Matching part9(Machine Learning + AI) – Medium

Author: Xuelian Cheng, Yiran Zhong, Mehrtash Harandi, Tom Drummond, Zhiyong Wang, Zongyuan Ge

Abstract: The self-attention mechanism, successfully employed with the transformer structure, has shown promise in many computer vision tasks, including image recognition and object detection. Despite the surge, the use of the transformer for the problem of stereo matching remains relatively unexplored. In this paper, we comprehensively investigate the use of the transformer for the problem of stereo matching, especially for laparoscopic videos, and propose a new hybrid deep stereo matching framework (HybridStereoNet) that combines the best of the CNN and the transformer in a unified design. To be specific, we investigate several ways to introduce transformers to volumetric stereo matching pipelines by analyzing the loss landscape of the designs and in-domain/cross-domain accuracy. Our analysis suggests that employing transformers for feature representation learning, while using CNNs for cost aggregation, will lead to faster convergence, higher accuracy and better generalization than other options. Our extensive experiments on the Sceneflow, SCARED2019 and dVPN datasets demonstrate the superior performance of our HybridStereoNet.

2. EASNet: Searching Elastic and Accurate Network Architecture for Stereo Matching (arXiv)

Author: Qiang Wang, Shaohuai Shi, Kaiyong Zhao, Xiaowen Chu

Abstract: Recent advanced studies have spent considerable human effort on optimizing network architectures for stereo matching but have hardly achieved both high accuracy and fast inference speed. To ease the workload in network design, neural architecture search (NAS) has been applied with great success to various sparse prediction tasks, such as image classification and object detection. However, existing NAS studies on the dense prediction task, especially stereo matching, still cannot be efficiently and effectively deployed on devices of different computing capabilities. To this end, we propose to train an elastic and accurate network for stereo matching (EASNet) that supports various 3D architectural settings on devices with different computing capabilities. Given the deployment latency constraint on the target device, we can quickly extract a sub-network from the full EASNet without additional training, while the accuracy of the sub-network can still be maintained. Extensive experiments show that our EASNet outperforms both state-of-the-art human-designed and NAS-based architectures on the Scene Flow and MPI Sintel datasets in terms of model accuracy and inference speed. Particularly, deployed on an inference GPU, EASNet achieves a new SOTA 0.73 EPE on the Scene Flow dataset with 100 ms inference time, which is 4.5× faster than LEAStereo with a better-quality model.

View original post here:

Use cases of Stereo Matching part9(Machine Learning + AI) - Medium

Harnessing the Power of AI and Machine Learning for Enhanced … – Fagen wasanni

Harnessing the Power of AI and Machine Learning for Enhanced Security Screening and Detection: A Comprehensive Guide

In the rapidly evolving world of technology, artificial intelligence (AI) and machine learning are increasingly being harnessed to enhance security screening and detection. These advanced technologies are revolutionizing the way security checks are conducted, offering unprecedented levels of accuracy and efficiency.

AI and machine learning are subsets of computer science that mimic human intelligence. They are capable of learning from experience, adjusting to new inputs, and performing tasks that normally require human intelligence. In the context of security screening and detection, these technologies can be trained to identify potential threats or anomalies with a high degree of precision.

One of the key areas where AI and machine learning are making a significant impact is in airport security. Traditional methods of security screening at airports, which rely heavily on human intervention, are often time-consuming and prone to errors. However, with the advent of AI and machine learning, the process has become more streamlined and effective. These technologies can analyze vast amounts of data in real-time, identify patterns, and flag potential security threats. This not only enhances the accuracy of security checks but also significantly reduces the time taken for screening.

Moreover, AI and machine learning are also being used to improve cybersecurity. With cyber threats becoming increasingly sophisticated, traditional methods of detection and prevention are often inadequate. AI and machine learning algorithms can analyze network traffic, detect unusual patterns, and identify potential cyber threats. They can also predict future attacks based on historical data, enabling organizations to take proactive measures to safeguard their systems.

In addition to airports and cybersecurity, AI and machine learning are also being utilized in other areas of security screening and detection. For instance, they are being used in facial recognition systems, biometric scanners, and surveillance cameras to enhance security in public places and prevent criminal activities. These technologies can accurately identify individuals, detect suspicious activities, and alert authorities in real-time, thereby enhancing public safety.

However, while the benefits of AI and machine learning in security screening and detection are immense, there are also challenges that need to be addressed. One of the key challenges is the risk of false positives, where innocent individuals or activities are flagged as potential threats. This can lead to unnecessary investigations and potential infringements on privacy. Therefore, it is crucial to ensure that these technologies are used responsibly and ethically.

Another challenge is the need for continuous learning and adaptation. AI and machine learning algorithms are only as good as the data they are trained on. Therefore, it is essential to continuously update these algorithms with new data to ensure their accuracy and effectiveness.

In conclusion, AI and machine learning hold great promise for enhancing security screening and detection. They offer the potential to significantly improve the accuracy and efficiency of security checks, detect potential threats in real-time, and predict future attacks. However, it is also important to address the challenges associated with their use to ensure that they are used responsibly and effectively. As these technologies continue to evolve, they are set to play an increasingly important role in ensuring our safety and security.

Go here to see the original:

Harnessing the Power of AI and Machine Learning for Enhanced ... - Fagen wasanni

Use cases of Stereo Matching part8(Machine Learning + AI) – Medium

Author: Andrea Pilzer, Yuxin Hou, Niki Loppi, Arno Solin, Juho Kannala

Abstract: We introduce visual hints expansion for guiding stereo matching to improve generalization. Our work is motivated by the robustness of Visual Inertial Odometry (VIO) in computer vision and robotics, where a sparse and unevenly distributed set of feature points characterizes a scene. To improve stereo matching, we propose to elevate 2D hints to 3D points. These sparse and unevenly distributed 3D visual hints are expanded using a 3D random geometric graph, which enhances the learning and inference process. We evaluate our proposal on multiple widely adopted benchmarks and show improved performance without access to additional sensors other than the image sequence. To highlight practical applicability and symbiosis with visual odometry, we demonstrate how our methods run on embedded hardware.

2. Comparison of Stereo Matching Algorithms for the Development of Disparity Map (arXiv)

Author: Hamid Fsian, Vahid Mohammadi, Pierre Gouton, Saeid Minaei

Abstract: Stereo matching is one of the classical problems in computer vision for the extraction of 3D information, but it remains controversial for its accuracy and processing costs. The use of matching techniques and cost functions is crucial in the development of the disparity map. This paper presents a comparative study of six different stereo matching algorithms, including Block Matching (BM), Block Matching with Dynamic Programming (BMDP), Belief Propagation (BP), Gradient Feature Matching (GF), Histogram of Oriented Gradient (HOG), and the proposed method. Three cost functions, namely Mean Squared Error (MSE), Sum of Absolute Differences (SAD), and Normalized Cross-Correlation (NCC), were also used and compared. The stereo images used in this study were from the Middlebury Stereo Datasets, provided with perfect and imperfect calibrations. Results show that the selection of the matching function is quite important and depends on the image's properties. The BP algorithm in most cases provided better results, achieving accuracies over 95%.
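To make the block matching idea concrete, here is a minimal, unoptimized Python sketch of SAD-based disparity estimation on rectified grayscale images; it is a textbook illustration of the technique, not code from the paper:

```python
import numpy as np

def sad_disparity(left, right, block=7, max_disp=64):
    """For each pixel in the left image, slide a window along the epipolar
    line of the right image and pick the disparity with the lowest sum of
    absolute differences (SAD). Inputs are rectified grayscale arrays."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(patch - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

Swapping the SAD cost for MSE or NCC only changes the per-window cost expression; the search structure stays the same.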

See more here:

Use cases of Stereo Matching part8(Machine Learning + AI) - Medium

Use cases of Stereo Matching part7(Machine Learning + AI) – Medium

Author: Philippe Weinzaepfel, Thomas Lucas, Vincent Leroy, Yohann Cabon, Vaibhav Arora, Romain Brégier, Gabriela Csurka, Leonid Antsfeld, Boris Chidlovskii, Jérôme Revaud

Abstract: Despite impressive performance on high-level downstream tasks, self-supervised pre-training methods have not yet fully delivered on dense geometric vision tasks such as stereo matching or optical flow. The application of self-supervised concepts, such as instance discrimination or masked image modeling, to geometric tasks is an active area of research. In this work, we build on the recent cross-view completion framework, a variation of masked image modeling that leverages a second view from the same scene, which makes it well suited for binocular downstream tasks. The applicability of this concept has so far been limited in at least two ways: (a) by the difficulty of collecting real-world image pairs (in practice only synthetic data have been used) and (b) by the lack of generalization of vanilla transformers to dense downstream tasks for which relative position is more meaningful than absolute position. We explore three avenues of improvement: first, we introduce a method to collect suitable real-world image pairs at large scale. Second, we experiment with relative positional embeddings and show that they enable vision transformers to perform substantially better. Third, we scale up vision transformer-based cross-completion architectures, which is made possible by the use of large amounts of data. With these improvements, we show for the first time that state-of-the-art results on stereo matching and optical flow can be reached without using any classical task-specific techniques like correlation volume, iterative estimation, image warping or multi-scale reasoning, thus paving the way towards universal vision models.

2. Self-Supervised Intensity-Event Stereo Matching (arXiv)

Author: Jinjin Gu, Jinan Zhou, Ringo Sai Wo Chu, Yan Chen, Jiawei Zhang, Xuanye Cheng, Song Zhang, Jimmy S. Ren

Abstract: Event cameras are novel bio-inspired vision sensors that output pixel-level intensity changes with microsecond accuracy, a high dynamic range and low power consumption. Despite these advantages, event cameras cannot be directly applied to computational imaging tasks due to the inability to obtain high-quality intensity images and events simultaneously. This paper aims to connect a standalone event camera and a modern intensity camera so that applications can take advantage of both sensors. We establish this connection through a multi-modal stereo matching task. We first convert events to a reconstructed image and extend the existing stereo networks to this multi-modality condition. We propose a self-supervised method to train the multi-modal stereo network without using ground-truth disparity data. The structure loss calculated on image gradients is used to enable self-supervised learning on such multi-modal data. Exploiting the internal stereo constraint between views with different modalities, we introduce general stereo loss functions, including a disparity cross-consistency loss and an internal disparity loss, leading to improved performance and robustness compared to existing approaches. The experiments demonstrate the effectiveness of the proposed method, especially the proposed general stereo loss functions, on both synthetic and real datasets. Finally, we shed light on employing the aligned events and intensity images in downstream tasks, e.g., video interpolation applications.

Read the original:

Use cases of Stereo Matching part7(Machine Learning + AI) - Medium