Machine learning for the development of diagnostic models of … – Nature.com

Design

This was a prospective multicenter observational study. Unlike studies of prognostic models, the present study developed diagnostic models, that is, models designed to determine whether a patient was in the compensated or decompensated phase of their disease (exacerbation of COPD and/or decompensation of HF).

The criteria for admission to this study and the recruitment process have been previously reported19. Patients older than 55 years who were able to walk at least 30 m, with a main diagnosis of decompensated HF and/or exacerbation of COPD, and hospitalized in the Department of Internal Medicine, Cardiology or Pneumology were included. Participants with a pacemaker or intracardiac device, users of domiciliary oxygen therapy prior to admission, and patients with HF of functional class IV of the New York Heart Association (NYHA) classification were excluded29.

Four hospitals participated: two tertiary university hospitals (600–900 hospital beds) and two regional secondary care hospitals (150–400 hospital beds) in the provinces of Barcelona and Madrid.

Each center had a trained interviewer, and each department had a referring physician who was accessible to the interviewer. Each day, the interviewer contacted the referring physician to review the hospitalization census and identify patients with the diagnosis of interest. Next, the interviewer confirmed the main diagnosis (decompensated HF and/or exacerbation of COPD) with the physician responsible for the patient and then contacted the participant (the same day or the next day) to obtain informed consent and verify compliance with all admission criteria of this study. The sample was obtained through convenience sampling, and all patients were enrolled consecutively as they were identified.

The recruitment and follow-up periods lasted 18 months, starting in November 2010.

Each patient underwent three identical evaluations: the first in the hospitalization unit (V1) and the other two consecutively, at least 24 h apart, in the participant's home 30 days after hospital discharge (V2 and V3). Thus, each participant underwent one evaluation in the decompensated phase (V1) and two in the compensated phase (V2, V3) of their disease.

The evaluation protocol19 included documentation of symptoms (dyspnea according to the NYHA29 and Modified Medical Research Council (mMRC)30 scales) and physiological parameters (HR and Ox) in two consecutive periods: effort (walking at a normal pace on flat terrain for a maximum of 6 min) and recovery (seated for 4 min after the end of the effort period).

HR and Ox were treated as time series with a sampling frequency of 1 Hz and were collected throughout the evaluation with a pulse oximeter (Model 3100, Nonin Medical, Inc., Plymouth, MN, USA) placed on the left index finger.

Given the absence of a single standard diagnostic test to verify whether a patient was in the compensated or decompensated phase of their disease, the clinical judgment of the participant's responsible physician was considered the standard diagnostic test. Thus, in the decompensated phase, the diagnosis of decompensated HF and/or COPD exacerbation corresponded to the confirmed diagnosis from the participant's attending physician (in cases of diagnostic doubt, the patient was excluded). For the compensated phase, a standard diagnosis of compensated HF and/or stable COPD was confirmed by a study physician through telephone contact with the participant 30 days after hospital discharge. During this telephone interaction, the patient was considered to be in the compensated phase if none of the following events had occurred since hospital discharge: increased cough, sputum or dyspnea; initiation of or an increase in corticosteroid use; or initiation of antibiotic treatment or medical consultation for worsening of the clinical situation from any cause. In cases of doubt, or if the compensated phase could not be confirmed, successive telephone contacts were made until the phase could be confirmed. The interviewer scheduled the home visits for the respective evaluations (V2, V3) only after confirmation and within 24–48 h of receiving it.

Given the objective of this study (the development of an online algorithm capable of detecting the onset of an exacerbation from HR and Ox data), various characteristics were extracted from each evaluation (V1, V2, V3). For this purpose, the effort (walking) and recovery phases of each evaluation were separated by verifying the start and end times recorded manually in the data collection records and by visually reviewing the signals to confirm the manual records. Once the signals were separated by evaluation phase, the corresponding characteristics of the available measures were extracted.

Numerous characteristics were extracted from the signals. During each test, two phases were considered, effort and recovery, which were treated separately. From each phase, three signals were considered: HR, Ox and the normalized difference between the two. From each of these three temporal signals, characteristics of the temporal domain (the mean, standard deviation, and range) and the frequency domain (characteristics of the first and second harmonics, the distribution of the harmonics [kurtosis and skewness], the sum of all harmonics, and the first six indices of the principal component analysis [PCA] of the normalized fast Fourier transform [FFT] of the signal) were extracted. Accordingly, 16 characteristics were obtained from each phase (effort and recovery) of each signal (HR, Ox, and their normalized difference), resulting in a total of 96 characteristics per evaluation. The normalized difference between Ox and HR was defined using the sklearn StandardScaler function (the mathematical formula is available at https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html), and PCA was applied to the HR and Ox time series using the sklearn.decomposition.PCA function (formula available at https://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html). The first six PCA components were retained based on the researchers' criteria, since the first three to six components are typically used in this type of analysis.
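For illustration, a minimal Python sketch of this per-phase extraction is shown below. It assumes only the ingredients named above (mean, SD and range; first and second harmonics of the FFT magnitude spectrum; kurtosis and skewness; harmonic sum; StandardScaler for the normalized difference); the exact feature definitions and helper names are assumptions, not the authors' code.

import numpy as np
from scipy.stats import kurtosis, skew
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def per_phase_features(x):
    """Temporal and spectral features of one 1 Hz signal in one phase."""
    spec = np.abs(np.fft.rfft(x - x.mean()))[1:]  # magnitude spectrum, DC removed
    order = np.argsort(spec)[::-1]                # harmonics ranked by magnitude
    f1, f2 = order[0], order[1]                   # first and second harmonics
    return np.array([
        x.mean(), x.std(), x.max() - x.min(),     # temporal: mean, SD, range
        spec[f1], float(f1 + 1),                  # 1st harmonic: amplitude, index
        spec[f2], float(f2 + 1),                  # 2nd harmonic: amplitude, index
        kurtosis(spec), skew(spec),               # distribution of the harmonics
        spec.sum(),                               # sum of all harmonics
    ])

def normalized_difference(hr, ox):
    """Difference of the z-scored signals (StandardScaler, as stated above)."""
    z = lambda v: StandardScaler().fit_transform(v.reshape(-1, 1)).ravel()
    return z(ox) - z(hr)

# The six PCA indices are component scores from a PCA fitted across
# evaluations on the stacked normalized spectra (spec_of is a placeholder):
# spectra = np.vstack([spec_of(e) for e in all_evaluations])
# pca_scores = PCA(n_components=6).fit_transform(spectra)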

Given that the main objective of this study was the detection of a transition from a state considered normal or stable (HF or COPD in the compensated phase [V2, V3]) to a state of decompensation or exacerbation (decompensated phase [V1]), a methodological scheme was applied based on calculating the differences between evaluations for each available characteristic. Thus, if a patient had three evaluations (V1, V2 and V3), six differences or useful comparative signals were obtained from these evaluations (V1-V2, V1-V3, V2-V1, V2-V3, V3-V1, V3-V2). The label of each of these comparative signals is illustrated in Fig. 1.
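A hedged sketch of this comparison scheme, assuming one feature vector per evaluation, is given below; the sign convention of the difference and the label rule (a comparison ending in the decompensated evaluation V1 is labeled 1) are inferred from the TP/FN definitions later in this section.

from itertools import permutations

def comparative_inputs(features):
    """features: dict such as {'V1': vec, 'V2': vec, 'V3': vec} of numpy vectors."""
    inputs, labels = [], []
    for a, b in permutations(features, 2):        # the six ordered pairs
        inputs.append(features[b] - features[a])  # change from evaluation a to b
        labels.append(1 if b == 'V1' else 0)      # 1 = change to decompensation
    return inputs, labels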

Labeling and interpretation of comparative signals.

Although the differences V1-V2 and V1-V3 might be more appropriately considered "decompensation recovery" rather than "no decompensation", we decided to discard a third label category (decompensation recovery) due to the small sample size and because the main objective of the trial was the detection of decompensation.

In a first approximation, potential predictive characteristics were selected using the random forest31, gradient boosting classifier31 and light gradient-boosting machine (LGBM)32 classification algorithms, which integrate feature selection by importance within their decision structure. We selected the top 10 features based on their importance ranking within each classifier model.
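The following sketch shows this kind of importance-based screening with the three classifiers named above, using library defaults; the hyperparameters and helper name are illustrative assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from lightgbm import LGBMClassifier

def top_features_by_importance(X, y, names, k=10):
    """Rank features by each classifier's built-in importance scores."""
    selected = {}
    for clf in (RandomForestClassifier(), GradientBoostingClassifier(), LGBMClassifier()):
        clf.fit(X, y)
        order = np.argsort(clf.feature_importances_)[::-1][:k]
        selected[type(clf).__name__] = [names[i] for i in order]
    return selected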

Figure 2 shows an outline of the process for preparing and selecting the characteristics of the signals.

Process for preparation and selection of the characteristics of the evaluations.

During the feature selection process, all characteristics that were redundant or had very low variability were discarded. In this study, by definition, there were no variables with perfect separation, which could cause overestimation of the diagnostic capacity of the models (overfitting)26.

In addition to the characteristics selected from the HR and Ox signals, the age, sex and baseline disease (HF or COPD) of the patients were considered potential predictors.

For the development of the algorithms, the ML techniques most commonly used in studies of classification models were considered: (i) decision trees, (ii) random forest, (iii) k-nearest neighbors (KNN), (iv) support vector machine (SVM), (v) logistic regression, (vi) naive Bayes classifier, (vii) gradient-boosting classifier and (viii) LGBM.

For each of these techniques, hyperparameters were selected through a brute-force search using all available data within a cross-validation scheme (K-fold cross-validation, k = 5). A normalization process based on the medians and interquartile ranges (IQRs) was applied to all characteristics31.
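In sklearn terms, median/IQR normalization corresponds to RobustScaler, and a brute-force search with 5-fold cross-validation corresponds to GridSearchCV; a sketch for one of the techniques (SVM, with an assumed parameter grid) follows.

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

pipe = Pipeline([("scale", RobustScaler()),  # centers on the median, scales by the IQR
                 ("clf", SVC())])
grid = {"clf__C": [0.1, 1, 10, 100], "clf__kernel": ["linear", "rbf"]}
search = GridSearchCV(pipe, grid, cv=5)      # K-fold cross-validation, k = 5
# search.fit(X, y); the selected hyperparameters are in search.best_params_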

Once the best parameters of each technique were identified, internal validation was performed with a leave-one-patient-out method: a new model was trained for each patient, with that patient's data removed from the training set and used as the validation set. Figure 3 shows an outline of the training and validation process.
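Because every comparative signal from a given patient must land in the same fold, leave-one-patient-out validation can be sketched as a grouped split (sklearn's LeaveOneGroupOut); patient_ids here is an assumed array mapping each input to its patient.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

logo = LeaveOneGroupOut()
# y_pred = np.empty_like(y)
# for train_idx, test_idx in logo.split(X, y, groups=patient_ids):
#     model.fit(X[train_idx], y[train_idx])          # train on all other patients
#     y_pred[test_idx] = model.predict(X[test_idx])  # score the held-out patient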

Scheme of the training and validation of the study algorithms.

The observation units (inputs) to which the algorithms were applied were the differences between two evaluations, as illustrated in Fig. 1. Thus, the algorithms classified each evaluated difference as no decompensation (label = 0) or a change to decompensation (label = 1). Accordingly, the following parameters were defined:

True positive (TP): a change to decompensation as the classification result for a V3-V1 or V2-V1 comparison.

True negative (TN): no decompensation as the classification result for a V1-V2, V1-V3, V2-V3 or V3-V2 comparison.

False positive (FP): a change to decompensation as the classification result for a V1-V2, V1-V3, V2-V3 or V3-V2 comparison.

False negative (FN): no decompensation as the classification result for a V3-V1 or V2-V1 comparison.

The parameters used to evaluate the diagnostic performance of the algorithms were sensitivity (S), specificity (E) and accuracy (A). Each patient could have up to six observation units or inputs; therefore, up to six classification results were obtained, each defined as TP, TN, FP or FN. S, E and A were then obtained for each patient, and the final S, E and A of the entire sample were calculated as the means of the per-patient values.
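A small sketch of this per-patient scoring rule, under the labeling above (1 = change to decompensation), is shown below; the helper name is illustrative.

import numpy as np

def per_patient_metrics(y_true, y_pred):
    """S, E and A over one patient's (up to six) classified comparisons."""
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    s = tp / (tp + fn) if tp + fn else np.nan  # sensitivity
    e = tn / (tn + fp) if tn + fp else np.nan  # specificity
    a = (tp + tn) / len(y_true)                # accuracy
    return s, e, a

# Final S, E and A: the average (e.g., np.nanmean) of the per-patient values.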

The predictive values were not considered because the proportions of evaluations in the decompensated phase (33% [V1]) and compensated phase (66% [V2, V3]) did not correspond to the usual proportion found in clinical practice (the vast majority of patients in the community are usually in the compensated phase).

Missing data were not included in the analysis, but patients with missing data were not excluded (all available patient data were included in the analysis). No imputation of the missing data was performed.

During the review of the signals and the verification of the start and end times of each evaluation from the manual records, missing sections of HR and/or Ox data due to poor contact between the skin and the sensor were observed. These losses led to the application of filters to exclude such sections from the analysis: an evaluation was excluded if its loss rate (missing measures divided by the total number of measures) exceeded 10% in any phase. In addition, home evaluations (V2, V3) that did not show an improvement in the patient's sensation of dyspnea (of at least one point on the mMRC scale30) with respect to the decompensated-phase evaluation (V1) were also excluded, to ensure that home assessments were performed in the compensated phase.
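The loss-rate filter reduces to a one-liner if missing samples are represented as NaNs, which is an assumption about the data layout:

import numpy as np

def keep_evaluation(phase_signals, max_loss=0.10):
    """True if no phase loses more than 10% of its 1 Hz samples."""
    return all(np.isnan(p).mean() <= max_loss for p in phase_signals)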

No indeterminate results were noted in the index test (the algorithms); in all cases, the model produced a "no decompensation" or a "change to decompensation" result. Moreover, all evaluations were performed only after a definitive result of the standard diagnostic reference test: clinical diagnosis of the decompensated phase by the doctor responsible for the patient for the hospital evaluation (V1), and clinical diagnosis of the compensated phase by the doctor who contacted the patients by phone before the home evaluations (V2, V3). Thus, the algorithms were developed and applied to evaluations clearly labeled as compensated or decompensated phase by the reference diagnostic test.

All methods and procedures were performed in accordance with the relevant guidelines and regulations. The study followed the principles of the Declaration of Helsinki and was approved by the Ethics and Research Committee (ERC) of the center promoting the study (ERC of the Mataró Hospital, approval number 1851806). Informed consent was obtained from all participants and/or their legal guardians.

See the rest here:

Machine learning for the development of diagnostic models of ... - Nature.com

Machine learning-based technique for gain and resonance … – Nature.com


See original here:

Machine learning-based technique for gain and resonance ... - Nature.com

86-year old Hammett equation gets a machine learning update – Chemistry World

The Hammett equation, a chemical theory that is over 80 years old, is being expanded upon and improved with the help of machine learning. The equation, which can help to explain the electron-donating or withdrawing nature of aromatic substituents via calculation of Hammett constants, has been analysed computationally by a team of Brazilian researchers who want to make it even more precise, and unlock unknown values for practical experiments.

"There are some experimental Hammett's constants which, although widely used in many applications, were not measured or have inconsistent values," says Itamar Borges Jr from the Institute of Military Engineering in Brazil, who worked on the study alongside Julio Cesar Duarte and Gabriel Monteiro-de-Castro. He adds that the work "employ[s] machine learning algorithms and available experimental values to produce a consistent set of the different types of Hammett's constants".

In 1937, Louis Hammett published work that led to the eponymous Hammett equation. He was working at the time in a new field that he had named physical organic chemistry. Hammett recognised the relationship between the rate of hydrolysis of a series of ethyl esters and the subsequent equilibrium position of the ionisation of the corresponding acids in water. It was some of the first work of its kind to try to provide a quantitative theory to rationalise the relationships between chemical structures and reactivity in chemistry.

Applying his focus to meta- and para-substituted benzoic acids and their respective esters, Hammett found a direct relationship. Each substituent on the aromatic ring could be given a σ value representing its electron-donating or withdrawing effect. These σ values were calculated experimentally by Hammett, helping chemists to determine the impact of these groups on reactivity. Hammett then went even further, deriving ρ values. These ρ values meant chemists could predict the number of electrons involved in the transition state, allowing an understanding of the type of mechanistic pathway a reaction could take.

The work of Borges Jr's team uses a combination of density functional theory (DFT) methods and machine learning algorithms to calculate new Hammett constants. Whereas previous work used semi-empirical methods, Borges Jr states that the DFT methods used in this work are more accurate for calculating atomic charges. Using a variety of meta and para substituents on benzene and benzoic acid derivatives, DFT models calculated the atomic charges for the carbon atoms bonded to the groups being analysed. Processing these results with machine learning techniques resulted in the production of 219 σ values, of which 92 were previously unknown.
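The article does not reproduce the team's code, but the general shape of the approach can be sketched as a regression from DFT-derived atomic charges to known experimental σ constants; the feature layout and the choice of gradient boosting here are assumptions for illustration only.

from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# X_known: one row per substituent, columns = DFT atomic charges of the ring
# carbons bonded to the group; y_known: experimental Hammett sigma values.
model = GradientBoostingRegressor()
# scores = cross_val_score(model, X_known, y_known, cv=5)  # sanity-check the fit
# model.fit(X_known, y_known)
# sigma_new = model.predict(X_unknown)  # estimate previously unknown constants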

Alongside this work, the Brazilian researchers included a set of simplified equations for obtaining σ constants for new substituents that hadn't previously been calculated. They hope that, with knowledge of atomic charges obtained from other DFT calculations, the simple equations can help to obtain new constants.

Using this machine learning approach, earlier values that had only been found experimentally were calculated computationally for the first time for three substituents (CCl3, NHCHO and NHCONH2). Using DFT calculations to work out the atomic charges, these values were calculated and used as inputs for the machine learning algorithm. The resulting Hammett constants the algorithm helped to produce corresponded to literature values from experimental results for the three substituents.

Kristaps Ermanis from the University of Nottingham, an expert in computational organic chemistry, says that the work can fill in values where the data hasn't previously been found, but that the study relies on limited amounts of DFT data, which limits the number of parameters in the machine learning method and therefore potentially also limits its accuracy. He believes the accuracy could be easily improved in future work by acquiring more DFT data.

Matthew Grayson and his group work on computational chemistry at the University of Bath and describe the work as "a valuable idea that allows experimentalists to access previously unknown Hammett constants using simple and readily available atomic charge features".

See the original post here:

86-year old Hammett equation gets a machine learning update - Chemistry World

The Rise of AI and Machine Learning in Global E-Commerce … – Fagen wasanni

Exploring the Impact of AI and Machine Learning on Global E-Commerce Analytics: Opportunities and Challenges

The rise of artificial intelligence (AI) and machine learning in global e-commerce analytics is a phenomenon that is reshaping the landscape of online business. These technologies are not only transforming the way businesses operate but also creating a myriad of opportunities and challenges that are worth exploring.

AI and machine learning are increasingly being integrated into e-commerce platforms to enhance customer experience, streamline operations, and improve decision-making processes. These technologies are capable of analyzing vast amounts of data, identifying patterns, and making predictions, thereby enabling businesses to gain valuable insights into customer behavior and market trends.

One of the significant opportunities presented by AI and machine learning in e-commerce analytics is personalized marketing. By analyzing customer data, these technologies can predict individual preferences and buying habits, allowing businesses to tailor their marketing strategies accordingly. This level of personalization can significantly enhance customer engagement and loyalty, leading to increased sales and revenue.

Moreover, AI and machine learning can help businesses optimize their inventory management. These technologies can predict demand for different products based on historical sales data and current market trends, enabling businesses to maintain optimal stock levels and avoid overstocking or understocking. This can result in significant cost savings and improved customer satisfaction.

However, the integration of AI and machine learning into e-commerce analytics also presents several challenges. One of the main challenges is data privacy and security. As these technologies rely on analyzing customer data, businesses must ensure that they comply with data protection regulations and safeguard customer information from cyber threats. This requires significant investment in cybersecurity measures and ongoing monitoring to detect and respond to any potential breaches.

Another challenge is the lack of skilled professionals who can effectively implement and manage these technologies. While AI and machine learning can automate many tasks, they still require human oversight to ensure they are functioning correctly and delivering accurate results. Therefore, businesses need to invest in training and development to equip their staff with the necessary skills and knowledge.

Furthermore, the rapid pace of technological advancement means that businesses must continually update their systems and strategies to keep up with the latest developments in AI and machine learning. This can be a daunting task, especially for small and medium-sized enterprises that may lack the resources and expertise to do so.

In conclusion, the rise of AI and machine learning in global e-commerce analytics offers exciting opportunities for businesses to enhance their operations and customer experience. However, it also presents significant challenges that need to be addressed. Businesses must strike a balance between leveraging these technologies to gain a competitive edge and ensuring they comply with data protection regulations and maintain the trust of their customers. As the world of e-commerce continues to evolve, it will be fascinating to see how businesses navigate these opportunities and challenges.

See the rest here:

The Rise of AI and Machine Learning in Global E-Commerce ... - Fagen wasanni

Q & A: How A.I. and machine learning are transforming the lending … – Digital Journal


AI and machine learning are transforming every industry as we know it, and the lending industry is no exception. In conversation with Experian's Scott Brown, President of Consumer Information Services, we explore the trends and challenges in the lending business and how technology is making a positive impact in driving financial inclusion.

Digital Journal: What are the biggest challenges lenders are facing when it comes to driving financial inclusion, as well as speed and accuracy, in making credit decisions?

Scott Brown: A big challenge lenders face is reaching underserved consumers who need access to credit in order to attain their financial goals. Our research shows 106 million Americans, or 42 percent of the adult population, lack access to mainstream credit because they are credit invisible, unscoreable or have a subprime credit score. Communities of color are more likely to lack access to mainstream credit, with 28 percent of Black and 26 percent of Hispanic consumers unscoreable or invisible, which is perpetuating historic disadvantage.

The industry has made a lot of progress in recent years by incorporating new data into lending decisions. There continues to be opportunity with data assets, scores and models to ensure all consumers have access to fair and affordable credit. When advanced analytics and machine learning are combined with expanded data sets, lenders have the opportunity to bring more consumers into the credit ecosystem without taking on additional risk.

DJ: What are some of the notable trends you are seeing in the lending industry today?

Scott Brown: The push to make the credit and payments industry better, faster and smarter is more apparent than ever before. Consumer expectations are changing, and it's critical for our industry to adapt and meet consumers where they are. In today's rapidly changing environment, 95 percent of lenders are using advanced analytics and expanded data to stay ahead and best serve consumers. This is good news.

Consumers deserve models built on current and predictive behaviours. However, as models become more sophisticated, deployment timelines and costs can increase. In fact, our research shows it takes 15 months on average to build and deploy a model for credit decisioning and 55 percent of lenders have built models that have not made it to production, which is one of the biggest challenges for lenders. Because of this, many lenders rely on old models that leave consumers behind. By leveraging data and technology, lenders can streamline model deployment, cut costs and bring more consumers into the credit ecosystem.

DJ: How does A.I. and machine learning help address the challenges lenders are facing?

Scott Brown: The key to more predictive models is more data and technology that can deliver more meaningful insights. This is where machine learning and advanced analytics come into play. When advanced analytics and expanded data are used in credit decisions, more consumers can be scored and gain access to the financial services they need. There are platform solutions that exist today that can make leveraging this information in a compliant, explainable, and transparent way easier and more cost effective for lenders.

DJ: You recently launched Ascend Ops, can you tell me what that is and how it helps your clients?

Scott Brown: At Experian, we are continually innovating and using technology to find solutions to global issues. Our goal is to modernize the financial services industry and increase financial access for all. Ascend Ops is our most recent example of this in action. This first-of-its-kind solution empowers lenders to deploy new features and models in days or weeks instead of months. It is a game changer in operational efficiency and, most importantly, in helping our clients protect and better serve consumers without making significant investments in their infrastructure.

Ascend Ops is part of the Experian Ascend Technology Platform and helps lenders implement and manage models for key use cases across the customer lifecycle, including marketing, account management and more. This gives lenders the ability to deliver more relevant marketing offers to consumers and provide better insights to make lending decisions quickly and accurately. We're helping remove the challenges lenders face in deploying new models to market. The quicker more inclusive credit models can be deployed, the sooner businesses and consumers can benefit from their use.

DJ: How is technology like A.I. and machine learning poised to transform the landscape of the credit economy and expand the lending universe over the next 5-10 years?

Scott Brown: If you think back to the landscape 5–10 years ago, technology has made our current environment look nearly unrecognizable. Over the last decade, technology has fundamentally changed the financial services industry, and we've played a large role in this disruption. As we look ahead, we will continue to see the ways technology and advancements in data can transform the financial services ecosystem, including consumer-permissioned data. These advancements will enable us to include more consumers in mainstream lending and allow us to bring better, faster and smarter solutions to market.

See the rest here:

Q & A: How A.I. and machine learning are transforming the lending ... - Digital Journal

Machine Learning-Trained Autonomy Tested By XQ-58 For Skyborg – Aviation Week


An XQ-58 flies while controlled by machine-learning algorithms, as an F-15E observes.

Credit: U.S. Air Force

A Kratos XQ-58 Valkyrie has been flown for the first time with a certain tactical problem solved during the flight by machine learning (ML)-trained algorithms developed by the Air Force Research Laboratory (AFRL). The July 25 flight at the Eglin Test and Training Complex off the Florida coast builds...



Go here to see the original:

Machine Learning-Trained Autonomy Tested By XQ-58 For Skyborg - Aviation Week

Artificial Intelligence and Machine Learning in Packaging Robotics … – Fagen wasanni

At the forefront of the fourth industrial revolution, Artificial Intelligence (AI) and Machine Learning (ML) have become significant trends in packaging robotics and automation. AI refers to software that is trained to perform specific tasks, while ML uses algorithms to learn insights and patterns from data. According to a report from PMMI, AI-based applications in the packaging industry are expected to grow at a CAGR of over 50% in the next five years.

One company making strides in this field is Deep Learning Robotics (DLRob). DLRob has developed groundbreaking robot control software that allows users to teach robots tasks by demonstrating them. Through advanced ML algorithms, robots can learn by observing and mimicking human actions. DLRob's software is user-friendly and adaptable to various robots and applications.

In April, DLRob announced a software update that enables its customers to connect and control a wider range of robotics devices, including Universal Robots' UR series of cobots. This expansion increases the capabilities and versatility of DLRob's software.

In another collaboration, Intrinsic, an Alphabet company, and Siemens have partnered to explore integrations and interfaces between Intrinsic's AI-based robotics software and Siemens' Digital Industries portfolio for industrial production automation. The goal is to bridge the gaps between robotics, automation engineering, and IT development, making industrial robotics more accessible and usable for businesses, entrepreneurs, and developers. This collaboration aims to bring joint solutions to the market that can benefit more enterprises.

Both Intrinsic and Siemens emphasize the importance of combining robotics with the production environment to maximize value. By accelerating the development process and facilitating seamless operation, they aim to democratize access to robotics and automation technology.

This collaboration highlights the growing significance of AI and ML in the packaging industry, paving the way for innovative advancements in robotics and automation for various sectors.

Read the original:

Artificial Intelligence and Machine Learning in Packaging Robotics ... - Fagen wasanni

How machine learning can expand the Landscape of Edge AI. | TDK – TDK Corporation

Edge AI and the evolution of edge devices

In the context of edge computing, an edge device simply refers to a device that operates at the edge of a network, collecting, processing, and analyzing data. Examples include smartphones, security cameras, smart speakers, and a variety of other devices. In recent years, with the rise of edge AI, these devices have grown even smarter thanks to onboard machine learning functions.

Edge AI*2 is a collective term for technologies related to on-device collection, processing, and analysis of data for artificial intelligence purposes. Commonly, implementing AI requires vast amounts of data and computing power, which is why AI workloads are typically run on cloud-based servers. With edge AI, however, data is processed on the devices themselves, reducing delays and costs related to data transmission, as well as improving privacy.

Cloud Computing and Edge Computing Compared

The coupling of edge devices with edge AI is broadening the realm of IoT (Internet of Things). Self-driving vehicles, factory automation, and medical device management are examples of edge devices already playing vital roles where real-time data processing and decision-making are required.

Edge AI has traditionally been implemented on devices with robust processing power, such as smartphones and tablets. With the proliferation of IoT, however, interest is growing in a technology known as TinyML (Tiny Machine Learning)*3, which enables small devices with only modest capabilities to execute machine learning functions onboard.

Generally, machine learning is performed on high-performance computers or cloud servers, requiring large amounts of memory and fast processors and incurring commensurate electrical power consumption. This permits the execution of large-scale machine learning models based on vast datasets, resulting in highly accurate image recognition, natural language processing, and more. However, every step of the workflow (including data collection, model development, and validation) usually requires handling by seasoned engineers specialized in each area.

TinyML is a machine learning technology designed for small devices, enabling edge AI to be implemented even on microcontrollers (MCUs), which only possess limited processing muscle. This, in turn, is expected to engender smaller IoT devices with low power consumption. It is now possible to run machine learning inference on almost any device with a sensor and marginal computing power, endowing it with intelligence.

Qeexo, a Silicon Valley startup that joined the TDK Group in 2023, specializes in machine learning solutions for edge devices, with a particular focus on TinyML. Qeexo AutoML is an end-to-end, no-code (i.e., not requiring code to be hand-written in a programming language) platform that empowers non-engineers to implement machine learning on lightweight edge devices. Working in an intuitive, web-based interface, users can easily perform all the steps necessary to build a machine learning system: beginning with collecting and pre-processing raw data, followed by training and refining recognition models, and finally creating and installing the finished package onto the edge devices where the machine learning-based intelligence comes to life.
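Qeexo AutoML itself is a no-code web tool, but the kind of model it targets can be illustrated with a generic TinyML-style sketch (not Qeexo's API): compute cheap statistics over sensor windows and fit a model small enough for a microcontroller.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def window_features(w):
    """Lightweight statistics over one window of sensor samples."""
    return [w.mean(), w.std(), np.abs(np.diff(w)).mean()]

# X = np.array([window_features(w) for w in windows]); y = window labels
clf = DecisionTreeClassifier(max_depth=4)  # shallow tree: tiny memory footprint
# clf.fit(X, y)  # a fitted tree can be exported as plain if/else rules in C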

TDK is currently developing the i3 Micro Module, an ultracompact sensor module with onboard edge AI designed for predictive maintenance, the practice of foreseeing and preempting anomalies in machinery and equipment at factories and similar facilities. Sensors, including those for vibration, temperature, and barometric pressure, as well as edge AI and mesh networking capabilities, are all integrated into a compact package, allowing equipment conditions to be monitored without relying on manpower, thereby helping minimize downtime and improve productivity. (Photo: Ultracompact sensor module i3 Micro Module)


Michael A. Gamble, Director of Product Management for Qeexo, explained the significance of Qeexo AutoML: "Conventionally, machine learning for embedded devices is a lengthy, complex process requiring highly specialized engineering skills. Qeexo AutoML enables almost anyone, including those not technically inclined, to accomplish the same, using an end-to-end, streamlined web interface. Similar to the way digital design tools and audio workstation software opened up graphic arts and music production to just about anyone with a creative spark, AutoML levels the playing field for machine learning. Put simply, we think of Qeexo AutoML as democratizing machine learning."

Advances in edge device technologies have spurred the development of numerous IoT devices and microcontrollers featuring sophisticated machine learning capabilities. With the advent of tools like Qeexo AutoML, it is now possible to create complex machine learning models that run on edge devices in short order.

Letting edge AI process data collected from sensors in edge devices substantially expands the range of possible solutions. Gamble continued, "Pairing Qeexo's machine learning solutions with TDK's sensor devices will allow us to provide customers with integrated, one-stop solutions. We look forward to a synergistic partnership in developing and delivering smart edge solutions that leverage each other's strengths."

Today, edge devices are evolving into intelligent systems that learn by themselves, going well beyond merely gathering and transmitting data. Advanced manufacturing facilities, sometimes referred to as smart factories, will begin equipping almost every piece of machinery and equipment with edge devices. Edge devices are also becoming prevalent among consumers in the form of mobility products and smartphones. Propelled by tools like AutoML, TinyML and edge AI are expected to become increasingly familiar and commonplace. This will all have a significant positive impact on our daily lives, businesses, and industry as a whole.

Read more from the original source:

How machine learning can expand the Landscape of Edge AI. | TDK - TDK Corporation

Use cases of Stereo Matching part9(Machine Learning + AI) – Medium

Author : Xuelian Cheng, Yiran Zhong, Mehrtash Harandi, Tom Drummond, Zhiyong Wang, Zongyuan Ge

Abstract : The self-attention mechanism, successfully employed with the transformer structure, has shown promise in many computer vision tasks, including image recognition and object detection. Despite this surge, the use of the transformer for the problem of stereo matching remains relatively unexplored. In this paper, we comprehensively investigate the use of the transformer for the problem of stereo matching, especially for laparoscopic videos, and propose a new hybrid deep stereo matching framework (HybridStereoNet) that combines the best of the CNN and the transformer in a unified design. To be specific, we investigate several ways to introduce transformers into volumetric stereo matching pipelines by analyzing the loss landscape of the designs and their in-domain/cross-domain accuracy. Our analysis suggests that employing transformers for feature representation learning, while using CNNs for cost aggregation, leads to faster convergence, higher accuracy and better generalization than other options. Our extensive experiments on the Sceneflow, SCARED2019 and dVPN datasets demonstrate the superior performance of our HybridStereoNet.

2. EASNet: Searching Elastic and Accurate Network Architecture for Stereo Matching (arXiv)

Author : Qiang Wang, Shaohuai Shi, Kaiyong Zhao, Xiaowen Chu

Abstract : Recent advanced studies have spent considerable human effort on optimizing network architectures for stereo matching but have hardly achieved both high accuracy and fast inference speed. To ease the workload in network design, neural architecture search (NAS) has been applied with great success to various sparse prediction tasks, such as image classification and object detection. However, existing NAS studies on the dense prediction task, especially stereo matching, still cannot be efficiently and effectively deployed on devices of different computing capabilities. To this end, we propose to train an elastic and accurate network for stereo matching (EASNet) that supports various 3D architectural settings on devices with different computing capabilities. Given the deployment latency constraint on the target device, we can quickly extract a sub-network from the full EASNet without additional training, while the accuracy of the sub-network can still be maintained. Extensive experiments show that our EASNet outperforms both state-of-the-art human-designed and NAS-based architectures on the Scene Flow and MPI Sintel datasets in terms of model accuracy and inference speed. Particularly, deployed on an inference GPU, EASNet achieves a new SOTA 0.73 EPE on the Scene Flow dataset with a runtime of 100 ms, which is 4.5× faster than LEAStereo with a better-quality model.

View original post here:

Use cases of Stereo Matching part9(Machine Learning + AI) - Medium

Use cases of Stereo Matching part7(Machine Learning + AI) – Medium

Author : Philippe Weinzaepfel, Thomas Lucas, Vincent Leroy, Yohann Cabon, Vaibhav Arora, Romain Brégier, Gabriela Csurka, Leonid Antsfeld, Boris Chidlovskii, Jérôme Revaud

Abstract : Despite impressive performance on high-level downstream tasks, self-supervised pre-training methods have not yet fully delivered on dense geometric vision tasks such as stereo matching or optical flow. The application of self-supervised concepts, such as instance discrimination or masked image modeling, to geometric tasks is an active area of research. In this work, we build on the recent cross-view completion framework, a variation of masked image modeling that leverages a second view from the same scene, which makes it well suited for binocular downstream tasks. The applicability of this concept has so far been limited in at least two ways: (a) by the difficulty of collecting real-world image pairs (in practice only synthetic data have been used) and (b) by the lack of generalization of vanilla transformers to dense downstream tasks for which relative position is more meaningful than absolute position. We explore three avenues of improvement: first, we introduce a method to collect suitable real-world image pairs at large scale. Second, we experiment with relative positional embeddings and show that they enable vision transformers to perform substantially better. Third, we scale up vision-transformer-based cross-completion architectures, which is made possible by the use of large amounts of data. With these improvements, we show for the first time that state-of-the-art results on stereo matching and optical flow can be reached without using any classical task-specific techniques like correlation volume, iterative estimation, image warping or multi-scale reasoning, thus paving the way towards universal vision models.

2. Self-Supervised Intensity-Event Stereo Matching (arXiv)

Author : Jinjin Gu, Jinan Zhou, Ringo Sai Wo Chu, Yan Chen, Jiawei Zhang, Xuanye Cheng, Song Zhang, Jimmy S. Ren

Abstract : Event cameras are novel bio-inspired vision sensors that output pixel-level intensity changes in microsecond accuracy with a high dynamic range and low power consumption. Despite these advantages, event cameras cannot be directly applied to computational imaging tasks due to the inability to obtain high-quality intensity and events simultaneously. This paper aims to connect a standalone event camera and a modern intensity camera so that the applications can take advantage of both two sensors. We establish this connection through a multi-modal stereo matching task. We first convert events to a reconstructed image and extend the existing stereo networks to this multi-modality condition. We propose a self-supervised method to train the multi-modal stereo network without using ground truth disparity data. The structure loss calculated on image gradients is used to enable self-supervised learning on such multi-modal data. Exploiting the internal stereo constraint between views with different modalities, we introduce general stereo loss functions, including disparity cross-consistency loss and internal disparity loss, leading to improved performance and robustness compared to existing approaches. The experiments demonstrate the effectiveness of the proposed method, especially the proposed general stereo loss functions, on both synthetic and real datasets. At last, we shed light on employing the aligned events and intensity images in downstream tasks, e.g., video interpolation application.

Read the original:

Use cases of Stereo Matching part7(Machine Learning + AI) - Medium

Harnessing the Power of AI and Machine Learning for Enhanced … – Fagen wasanni

Harnessing the Power of AI and Machine Learning for Enhanced Security Screening and Detection: A Comprehensive Guide

In the rapidly evolving world of technology, artificial intelligence (AI) and machine learning are increasingly being harnessed to enhance security screening and detection. These advanced technologies are revolutionizing the way security checks are conducted, offering unprecedented levels of accuracy and efficiency.

AI is a branch of computer science concerned with systems that mimic human intelligence, and machine learning is a subset of AI in which systems learn from data. These systems are capable of learning from experience, adjusting to new inputs, and performing tasks that normally require human intelligence. In the context of security screening and detection, these technologies can be trained to identify potential threats or anomalies with a high degree of precision.

One of the key areas where AI and machine learning are making a significant impact is in airport security. Traditional methods of security screening at airports, which rely heavily on human intervention, are often time-consuming and prone to errors. However, with the advent of AI and machine learning, the process has become more streamlined and effective. These technologies can analyze vast amounts of data in real-time, identify patterns, and flag potential security threats. This not only enhances the accuracy of security checks but also significantly reduces the time taken for screening.

Moreover, AI and machine learning are also being used to improve cybersecurity. With cyber threats becoming increasingly sophisticated, traditional methods of detection and prevention are often inadequate. AI and machine learning algorithms can analyze network traffic, detect unusual patterns, and identify potential cyber threats. They can also predict future attacks based on historical data, enabling organizations to take proactive measures to safeguard their systems.
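To make this concrete, here is a minimal sketch of unsupervised anomaly detection on network-traffic features using scikit-learn's IsolationForest; the feature names, magnitudes and contamination rate are illustrative assumptions, not a production design.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical traffic features: [bytes sent, connection duration (s), failed logins]
normal = rng.normal(loc=[5_000, 30, 0.1], scale=[1_500, 10, 0.3], size=(1_000, 3))
suspicious = rng.normal(loc=[90_000, 2, 8.0], scale=[10_000, 1, 2.0], size=(10, 3))
traffic = np.vstack([normal, suspicious])

# IsolationForest isolates outliers via short random-partition paths.
model = IsolationForest(contamination=0.01, random_state=0).fit(traffic)
flags = model.predict(traffic)              # -1 = anomaly, 1 = normal
print(f"{(flags == -1).sum()} connections flagged for review")
```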

In addition to airports and cybersecurity, AI and machine learning are also being utilized in other areas of security screening and detection. For instance, they are being used in facial recognition systems, biometric scanners, and surveillance cameras to enhance security in public places and prevent criminal activities. These technologies can accurately identify individuals, detect suspicious activities, and alert authorities in real-time, thereby enhancing public safety.

However, while the benefits of AI and machine learning in security screening and detection are immense, there are also challenges that need to be addressed. One of the key challenges is the risk of false positives, where innocent individuals or activities are flagged as potential threats. This can lead to unnecessary investigations and potential infringements on privacy. Therefore, it is crucial to ensure that these technologies are used responsibly and ethically.

Another challenge is the need for continuous learning and adaptation. AI and machine learning algorithms are only as good as the data they are trained on. Therefore, it is essential to continuously update these algorithms with new data to ensure their accuracy and effectiveness.

In conclusion, AI and machine learning hold great promise for enhancing security screening and detection. They offer the potential to significantly improve the accuracy and efficiency of security checks, detect potential threats in real-time, and predict future attacks. However, it is also important to address the challenges associated with their use to ensure that they are used responsibly and effectively. As these technologies continue to evolve, they are set to play an increasingly important role in ensuring our safety and security.

Go here to see the original:

Harnessing the Power of AI and Machine Learning for Enhanced ... - Fagen wasanni

Use cases of Stereo Matching part8(Machine Learning + AI) – Medium

Author : Andrea Pilzer, Yuxin Hou, Niki Loppi, Arno Solin, Juho Kannala

Abstract : We introduce visual hints expansion for guiding stereo matching to improve generalization. Our work is motivated by the robustness of Visual Inertial Odometry (VIO) in computer vision and robotics, where a sparse and unevenly distributed set of feature points characterizes a scene. To improve stereo matching, we propose to elevate 2D hints to 3D points. These sparse and unevenly distributed 3D visual hints are expanded using a 3D random geometric graph, which enhances the learning and inference process. We evaluate our proposal on multiple widely adopted benchmarks and show improved performance without access to additional sensors other than the image sequence. To highlight practical applicability and symbiosis with visual odometry, we demonstrate how our methods run on embedded hardware.

2. Comparison of Stereo Matching Algorithms for the Development of Disparity Map (arXiv)

Author : Hamid Fsian, Vahid Mohammadi, Pierre Gouton, Saeid Minaei

Abstract : Stereo Matching is one of the classical problems in computer vision for the extraction of 3D information, but it remains controversial for its accuracy and processing costs. The use of matching techniques and cost functions is crucial in the development of the disparity map. This paper presents a comparative study of six different stereo matching algorithms including Block Matching (BM), Block Matching with Dynamic Programming (BMDP), Belief Propagation (BP), Gradient Feature Matching (GF), Histogram of Oriented Gradient (HOG), and the proposed method. Three cost functions, namely Mean Squared Error (MSE), Sum of Absolute Differences (SAD) and Normalized Cross-Correlation (NCC), were also used and compared. The stereo images used in this study were from the Middlebury Stereo Datasets, provided with perfect and imperfect calibrations. Results show that the selection of the matching function is quite important and depends on the images' properties. The BP algorithm in most cases provided better results, achieving accuracies over 95%.
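For readers unfamiliar with the methods compared above, here is a minimal sketch of classical Block Matching with the SAD cost on rectified grayscale images; the window size and disparity range are illustrative, and real implementations vectorize this heavily.

```python
import numpy as np

def block_matching_sad(left, right, max_disp=16, win=5):
    """Naive block matching: for each left-image window, scan candidate
    disparities and keep the one with the lowest SAD cost."""
    h, w = left.shape
    half = win // 2
    disparity = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()       # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disparity[y, x] = best_d
    return disparity
```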

See more here:

Use cases of Stereo Matching part8(Machine Learning + AI) - Medium

Research Analyst/ Associate/ Fellow in Machine Learning and … – Times Higher Education

The Role

The Sustainable and Green Finance Institute (SGFIN) is a new university-level research institute in the National University of Singapore (NUS), jointly supported by the Monetary Authority of Singapore (MAS) and NUS. SGFIN aspires to develop deep research capabilities in sustainable and green finance, provide thought leadership in the sustainability space, and shape sustainability outcomes across the financial sector and the economy at large.

This role is ideally suited for those wishing to work in academic or industry research in quantitative analysis, particularly in the area of machine learning and artificial intelligence. The responsibilities of the role will include designing and developing various analytical frameworks to analyze structured, unstructured and non-traditional data related to corporate financial, environmental, and social indicators.

There are no teaching obligations for this position, and the candidate will have the opportunity to develop their research portfolio.

Duties and Responsibilities

The successful candidate will be expected to assume the following responsibilities:

Requirements

View post:

Research Analyst/ Associate/ Fellow in Machine Learning and ... - Times Higher Education

AI and Machine Learning: The New Frontier in Global Anti-Money … – Fagen wasanni

The New Frontier in Global Anti-Money Laundering Efforts: AI and Machine Learning

The world of finance is no stranger to the nefarious activities of money laundering, a global menace that has proven to be a tough nut to crack for financial institutions and regulatory bodies. However, the advent of Artificial Intelligence (AI) and Machine Learning (ML) is heralding a new frontier in global anti-money laundering efforts, offering promising solutions to this age-old problem.

Money laundering, the process of making illegally gained proceeds appear legal, is a complex and sophisticated crime. It often involves multiple transactions designed to disguise the origin of financial assets so that they appear to have come from legitimate sources. Traditional methods of detecting and preventing money laundering have often fallen short, due to the sheer volume of financial transactions that occur daily and the clever tactics employed by money launderers.

Enter AI and ML, two technological advancements that are revolutionizing various sectors, including finance. These technologies are now being harnessed to combat money laundering, and early indications suggest they could be game-changers.

AI, with its ability to mimic human intelligence, and ML, a subset of AI that involves the science of getting computers to learn and act like humans, are being used to analyze vast amounts of financial data. They can sift through millions of transactions in a fraction of the time it would take a human, identifying patterns and anomalies that could indicate suspicious activity.

Moreover, these technologies are not just faster; they are also more accurate. Traditional anti-money laundering systems often generate a high number of false positives, leading to wasted time and resources. AI and ML, on the other hand, can learn from past data and improve their accuracy over time, reducing the number of false positives and allowing financial institutions to focus their resources on genuine threats.
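As a toy illustration of that false-positive trade-off, the sketch below trains a scoring model on synthetic transactions and shows how raising the alert threshold trades recall for precision; all features and numbers here are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Synthetic transaction features: [amount, transfers in prior 24h, destination risk score]
legit = rng.normal([200, 2, 0.1], [150, 1, 0.1], size=(5_000, 3))
laundering = rng.normal([9_000, 12, 0.8], [3_000, 4, 0.15], size=(50, 3))
X = np.vstack([legit, laundering])
y = np.r_[np.zeros(len(legit)), np.ones(len(laundering))]

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
scores = model.predict_proba(X)[:, 1]        # learned suspicion score per transaction

for threshold in (0.5, 0.9):                 # raising the threshold cuts false positives
    alerts = scores >= threshold
    print(f"threshold {threshold}: precision {precision_score(y, alerts):.2f}, "
          f"recall {recall_score(y, alerts):.2f}")
```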

The use of AI and ML in anti-money laundering efforts is not without its challenges. For one, these technologies require vast amounts of data to function effectively. This raises privacy concerns, as financial institutions must balance the need for effective anti-money laundering measures with the need to protect their customers' personal information. Additionally, the use of AI and ML requires significant investment in technology and skilled personnel, which may be beyond the reach of smaller financial institutions.

Despite these challenges, the potential benefits of AI and ML in combating money laundering cannot be overstated. Regulatory bodies around the world are recognizing this potential and are beginning to incorporate these technologies into their anti-money laundering frameworks. For instance, the Financial Action Task Force (FATF), an intergovernmental body that sets standards for combating money laundering, has acknowledged the role of digital innovation in enhancing the effectiveness of anti-money laundering measures.

In conclusion, AI and ML represent a new frontier in global anti-money laundering efforts. While there are challenges to overcome, the potential of these technologies to revolutionize the fight against money laundering is immense. As they continue to evolve and improve, they promise to be powerful tools in the hands of financial institutions and regulatory bodies, helping to make the world of finance a safer place for all.

See the original post:

AI and Machine Learning: The New Frontier in Global Anti-Money ... - Fagen wasanni

Harnessing the Power of AI and Machine Learning: Growth … – Fagen wasanni

Harnessing the Power of AI and Machine Learning: Growth Opportunities in Database Management and SaaS for the Telecommunications Industry

The telecommunications industry is on the cusp of a significant transformation, driven by the rapid advancements in Artificial Intelligence (AI) and Machine Learning (ML). These technologies are not only reshaping the way telecom companies operate but also creating unprecedented growth opportunities in database management and Software as a Service (SaaS) sectors.

AI and ML are proving to be game-changers in the telecom industry, enabling companies to streamline operations, enhance customer experience, and drive revenue growth. They are particularly instrumental in managing the vast amounts of data generated by telecom networks. With AI and ML, telecom companies can automate the process of collecting, storing, and analyzing data, thereby improving efficiency and reducing operational costs.

Database management, a critical aspect of telecom operations, is one area where AI and ML are making a significant impact. Traditional database management systems are often unable to handle the sheer volume of data generated by telecom networks. However, AI and ML-powered systems can not only manage large data sets but also provide valuable insights that can help telecom companies make informed business decisions. For instance, predictive analytics, powered by ML, can help telecom companies anticipate customer behavior and tailor their services accordingly, thereby enhancing customer satisfaction and loyalty.
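As a toy illustration of the predictive analytics described here, the sketch below fits a linear trend to one (synthetic) customer's monthly data usage and forecasts next month's demand; production systems would use far richer models and features.

```python
import numpy as np

# Hypothetical monthly data usage (GB) for one customer over a year
usage = np.array([18, 20, 19, 23, 25, 24, 28, 30, 29, 33, 36, 35], dtype=float)
months = np.arange(len(usage))

# Fit a simple linear trend; the slope is the average monthly growth in GB
slope, intercept = np.polyfit(months, usage, deg=1)
forecast = slope * len(usage) + intercept
print(f"growth ~{slope:.1f} GB/month, next month's forecast ~{forecast:.0f} GB")
```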

Moreover, AI and ML are also driving growth in the SaaS sector within the telecom industry. SaaS, which allows users to access software over the internet, is becoming increasingly popular among telecom companies due to its scalability and cost-effectiveness. AI and ML are enhancing the capabilities of SaaS solutions, enabling telecom companies to offer more personalized and efficient services. For example, AI-powered chatbots can provide instant customer support, while ML algorithms can optimize network performance in real-time.

The integration of AI and ML into database management and SaaS is also opening up new revenue streams for telecom companies. By offering AI and ML-powered solutions, telecom companies can not only improve their own operations but also provide valuable services to other industries. For instance, telecom companies can offer AI-powered data analytics services to businesses in sectors such as retail, healthcare, and finance, thereby creating additional revenue opportunities.

However, harnessing the power of AI and ML is not without challenges. Telecom companies need to invest in the necessary infrastructure and skills to implement these technologies effectively. Data privacy and security are also major concerns, as AI and ML systems often require access to sensitive information. Therefore, telecom companies need to ensure robust data protection measures are in place.

In conclusion, AI and ML are set to revolutionize the telecommunications industry, offering significant growth opportunities in database management and SaaS. By embracing these technologies, telecom companies can not only enhance their operations but also tap into new revenue streams. However, to fully harness the power of AI and ML, telecom companies need to overcome the associated challenges and invest in the necessary resources. As the telecom industry continues to evolve, the role of AI and ML will undoubtedly become increasingly important.

Link:

Harnessing the Power of AI and Machine Learning: Growth ... - Fagen wasanni

Bridging the Digital Divide: How Artificial Intelligence Services are … – Fagen wasanni

Bridging the Digital Divide: How Artificial Intelligence Services are Expanding Global Internet Access

The digital divide, a term coined to describe the gap between those who have access to the internet and digital technologies and those who do not, has been a persistent issue globally. However, recent advancements in artificial intelligence (AI) services are playing a pivotal role in bridging this divide, expanding global internet access, and fostering digital inclusivity.

AI, with its transformative potential, is revolutionizing various sectors, and the realm of internet connectivity is no exception. The technology is being harnessed to address the challenges of internet accessibility, particularly in remote and underprivileged regions. AI-powered predictive models are being used to identify areas with low internet penetration, enabling service providers to strategically expand their networks and reach.

One of the key ways AI is facilitating this expansion is through the optimization of network deployment. Traditional methods of network expansion are often time-consuming and expensive, involving extensive groundwork and physical infrastructure. AI, on the other hand, can analyze vast amounts of data to predict the optimal locations for network towers and satellites, significantly reducing costs and accelerating deployment.
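A minimal sketch of this data-driven siting idea, under the simplifying assumption that good tower locations are the centroids of user demand: cluster (hypothetical) user coordinates with k-means; a real planner would add terrain, cost and capacity constraints.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Hypothetical user coordinates (lon, lat) scattered around three settlements
users = np.vstack([
    rng.normal([36.80, -1.28], 0.02, size=(300, 2)),
    rng.normal([36.95, -1.10], 0.02, size=(200, 2)),
    rng.normal([37.10, -1.35], 0.02, size=(150, 2)),
])

# Each cluster centroid is a candidate tower site that minimizes
# the (squared) distance to the users it would serve.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(users)
for i, (lon, lat) in enumerate(kmeans.cluster_centers_):
    print(f"candidate tower {i}: lon={lon:.3f}, lat={lat:.3f}")
```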

Moreover, AI is also enhancing the quality of internet services. Machine learning algorithms can monitor network performance in real-time, identifying and rectifying issues before they impact users. This not only improves the user experience but also increases the efficiency of network maintenance, further contributing to the expansion of internet services.
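The real-time monitoring described here can be sketched with a simple rolling statistic: flag latency samples that sit several standard deviations away from a rolling baseline. The window size and threshold below are illustrative; real systems use far more robust detectors.

```python
import numpy as np

def rolling_zscore_alerts(latency_ms, window=60, threshold=4.0):
    """Flag samples far from the rolling mean of the previous `window` samples."""
    alerts = []
    for i in range(window, len(latency_ms)):
        baseline = latency_ms[i - window:i]
        mu, sigma = baseline.mean(), baseline.std() + 1e-9   # avoid divide-by-zero
        if abs(latency_ms[i] - mu) / sigma > threshold:
            alerts.append(i)                                 # index of anomalous sample
    return alerts

rng = np.random.default_rng(3)
latency = rng.normal(20, 2, size=500)    # healthy link: ~20 ms
latency[400:410] += 60                   # injected outage spike
print(rolling_zscore_alerts(latency))    # indices around 400 get flagged
```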

In addition to network optimization and maintenance, AI is also instrumental in developing innovative solutions for internet access. For instance, AI-powered drones and balloons are being deployed to provide internet connectivity in remote areas. These solutions are particularly beneficial in disaster-stricken regions where traditional network infrastructure may be damaged or non-existent.

Furthermore, AI is playing a crucial role in making the internet more accessible and user-friendly. AI-driven applications such as voice recognition and translation services are making digital platforms more inclusive, enabling individuals with varying levels of literacy and language proficiency to navigate the digital world with ease.

However, while AI is undoubtedly a powerful tool in bridging the digital divide, it is not without its challenges. Concerns around data privacy, security, and the ethical use of AI are paramount. As AI services expand, it is crucial to establish robust regulatory frameworks to ensure that these technologies are used responsibly and that the benefits of increased internet access are not overshadowed by potential risks.

In conclusion, AI services are playing a significant role in expanding global internet access and bridging the digital divide. By optimizing network deployment, enhancing service quality, and developing innovative connectivity solutions, AI is helping to bring the internet to remote and underprivileged regions. At the same time, AI-driven applications are making the digital world more accessible and inclusive. As we move forward, it is essential to address the challenges associated with AI to ensure that its potential is harnessed responsibly and effectively for the benefit of all.

Follow this link:

Bridging the Digital Divide: How Artificial Intelligence Services are ... - Fagen wasanni

Forget artificial intelligence, it's about robots in the Bronx – The Riverdale Press

By STACY DRIKS

A pair of robots from the Bronx High School of Science that weigh about 125 pounds and are controlled by a simple Xbox remote control showed off their abilities earlier this year during a New York City competition, and came away with some awards.

Behind the remotes were Bronx Science students, and the challenge was simple: pick up cones and cubes with the robots' arms and bring them to the other side of the arena.

The teams were competing to advance to the world championships in Houston. At the regional in Manhattan, teams from other states, India, Turkey and Azerbaijan competed with their industrial-size robots.

During the regional, two Bronx Science teams competed: the all-girls FeMaidens and the co-educational Sciborgs, whose students spent seven to eight weeks building, coding and testing. The FeMaidens finished third and took home the Team Spirit Award for their enthusiasm. The Sciborgs took home an honorable mention.

Each robot has a battery that looks similar to a car battery, but this one weighs between eight and 12 pounds.

"We will go through one of these in every match; we can drain this entire battery in three minutes," said Charlie Peskay, one of the Sciborgs' main student strategists, who also helps build the robots.

Their drive team consists of three people.

Operator: Responsible for movements such as the arms and spins.
Driver: Drives the robot.
Coach: Directs the operator and driver to work together and calls out what to pick up and where to place it.

Each game lasts three minutes, and playoff matches run at least five. More games follow in the semifinals and then the finals.

Although neither team made the regional finals, both were recognized and honored by Optimum and its parent company, Altice USA. The sponsor gave $2,500 to first-place winners, $1,500 to runners-up and $500 for honorable mentions.

"Optimum provides internet, phone services, and more in most households; they are built on innovation," said Rafaella Mazzella of Optimum. The company has long supported the competition and sponsored high school teams and regional competitions throughout its service area.

The money is often used for tools like a portable belt sander and a drill press, said chemistry teacher and robotics adviser Katherine Carr.

The FeMaidens took first place for the Excellence in Technology Award, while the Sciborgs received an honorable mention.

It was gracious professionalism: students wanted to win, but there was not much animosity between the teams.

During the games, opposing teams would need to join an alliance and work together. This year the FeMaidens were aligned with High Voltage Robotics from William Grady in Brooklyn and RoHawks from Hunter College High School.

"It's a very interesting dynamic," Carr said. "When I first thought of it, I was like, so we're friends, but we're also against each other sometimes."

In one match, the teams will be against each other, and in the next, they'll work together. But the students agree that it's more fun that way.

"One alliance had used all their timeouts, but they needed time to fix something. And then the other team, the other alliance, used one of their timeouts to help them fix it," Peskay said.

"Both alliances are not as competitive with each other as some might think. They just want every match to be a fair match," Peskay continued.

But this year, the students changed things up, and it sounds simple: new wheels.

"Our swerve modules are pretty new; in the past, we never did swerve because swerve is a newer version and costs a lot of money," said FeMaidens captain and head of engineering Melody Jiang.

Robots use many different types of drives to move and steer. The best part of this new module is that it allows a lot more mobility. However, it isn't straightforward to code and build.

For example, their previous wheels moved like a car's: the robot needed to come to a complete stop to make a turn. Now, it can turn and drive simultaneously.

Warren Yun, the Sciborgs' captain, said one of the drives is similar to a shopping cart's, going forward and backward.

"They're really large, and they're heavy, too," she said.

However, the module has its drawbacks, and the extra mobility pays off: if another team's robot pushes theirs to prevent it from scoring, the robot can still maneuver away.

"That's another part of robotics," Jiang said. "There's a lot of strategy involved because you can't really do everything; you kind of have to debate what you want to prioritize. For example, with the drive you sacrifice things, like how much you get pushed, for that mobility."

The teams always need to make trade-offs. That's why there is a strategy department. Shinyoung Kang is the head of engineering and strategy for the FeMaidens. She said she needs to be the salesperson of the match.

Not only does Kang's department need to convince other teams that they will work well together in an alliance; it also needs to show off what their robot does and promote the team.

Even during the competition, the strategy team meets to work out how to proceed in a game and whom to work with.

Both teams have five departments.

Engineering and construction: They make the robot.
Electronics: They work with the wires and motors.
Marketing: They communicate with sponsors like Optimum, which provides awards.
Programming: They program the robots.
Strategy: "They do the challenging part of it," Jiang said.

But getting onto the team can be quite challenging. The students say it has a lower acceptance rate than Harvard.

Approximately 350 people are interested across both clubs, but they only have 10 available spots each year.

"We lose a lot of great potential robotics people inspired to do engineering," Carr said.

The two current teams have been around since the early 2000s, and now they are about to start another team with a different type of robot. The new team will build robots like those of the two current teams, but on a smaller scale. Carr mentioned it should be starting in the fall.

Anthony, founder of the new Apiero team and its senior captain, did not have an opportunity to work with robotics because of Covid, when everything was remote. He hopes starting a new team will help more people learn about robotics.

Eventually, the school's goal is to have multiple smaller robotics teams. But they need to find more resources, space, and money. "I'm like (I told the assistant principal of physical science and math), we have 20-plus problems. Where do you want to start?" Anthony said.

However, Bronx Science is where most of these students started with robotics. Others started in elementary and middle school with Mindstorms, programmable robots made from Lego.

Last year, Peskay worked with an elementary school in Manhattan once a week to help their Lego team. His job was to help them with designs.

"A lot of this gets us into our career paths. Personally, I was really into biology before engineering, but now I'm going into engineering completely," Jiang said.

"This is what kind of led me into the path of engineering, and I'm planning on majoring in engineering (in college)."

"It's a completely student-led program. We make all the curriculums ourselves, we determine the kind of timing of everything. A lot of it is time management, how to communicate with others, communicate with our sponsors and even things such as forming lifelong friendships."

Read more here:

Forget artificial intelligence, it's about robots in the Bronx - The Riverdale Press

Protecting Passwords in the Age of Artificial Intelligence – Fagen wasanni

Passwords remain a critical tool for safeguarding personal information, despite the availability of new security measures. However, the rise of artificial intelligence (AI) poses new challenges and risks to password security. AI's ability to process vast amounts of data and employ advanced machine learning algorithms allows it to analyze patterns, detect correlations, and make countless attempts at cracking passwords within seconds. Unfortunately, cybercriminals are taking advantage of these capabilities.

AI applications designed for password guessing can evade detection and rapidly crack complex passwords. For example, the AI tool PassGAN can crack any 7-character password, even one with symbols, numbers and mixed cases, in less than six minutes. These developments highlight the weaknesses that exist in password security.

AI employs various methods to crack passwords. Enhanced brute force attacks leverage neural networks and machine learning algorithms to test numerous password combinations rapidly. Optimized dictionary attacks analyze leaked password data to create more effective keyword lists, increasing the chances of success. Automated social engineering uses AI to glean personal information from social media profiles and other public sources to facilitate password guessing. Additionally, AI can generate fake passwords and simulate login attempts to confuse intrusion detection systems and gain unauthorized access. Keystroke analysis, utilizing machine learning techniques, can infer passwords accurately by analyzing patterns in keystrokes.

To defend against AI-powered attacks, it is essential to use strong, complex passwords consisting of a combination of numbers, uppercase and lowercase letters, and symbols. Cybersecurity experts recommend passwords of at least 12 characters, if not 15. Implementing multi-factor authentication (MFA) provides an additional layer of security by requiring an additional form of authentication alongside the password. It is crucial to avoid reusing passwords across different accounts and instead use password managers to securely manage multiple passwords. Regularly updating passwords helps minimize the risk of discovery. Education and awareness about online security practices, as well as phishing attacks and social engineering tactics, are vital for both individuals and organizations.
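In that spirit, here is a minimal sketch of generating a strong random password with Python's standard secrets module, guaranteeing at least one character from each class and the recommended minimum length; the exact policy below is an assumption to adapt to local requirements.

```python
import secrets
import string

def generate_password(length=15):
    """Random password with at least one lowercase, uppercase, digit and symbol."""
    if length < 12:
        raise ValueError("Experts recommend at least 12 characters")
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    # One guaranteed character per class; the rest drawn from the full pool.
    chars = [secrets.choice(c) for c in classes]
    pool = "".join(classes)
    chars += [secrets.choice(pool) for _ in range(length - len(chars))]
    secrets.SystemRandom().shuffle(chars)   # avoid predictable class positions
    return "".join(chars)

print(generate_password())
```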

Companies and platforms should invest in advanced security measures, including anomalous behavior detection systems and other technologies, to detect and prevent AI attacks. Importantly, AI algorithms can also contribute to password security by generating strong and unique passwords that are difficult to crack, and by learning users' normal behavior to detect any anomalous activity.

While the advancements in AI pose challenges to password security, implementing strong security practices and utilizing advanced protection technologies can enhance defense against potential AI attacks and ensure the safety of personal information.

View original post here:

Protecting Passwords in the Age of Artificial Intelligence - Fagen wasanni

How Artificial Intelligence is Shaping the Future – Fagen wasanni

Artificial intelligence (AI) is rapidly transforming various aspects of our daily lives. It has revolutionized the way we shop, access news, and interact with the world around us. As AI continues to advance, its influence will only become more profound.

One major way that AI is expected to change the world is through automation. Already, AI is being used to automate tasks that were once carried out by humans, such as data entry, customer service, and even driving. As AI technology continues to progress, we can anticipate even more automation, which may result in job displacements. However, this evolving technology is also predicted to create new job opportunities in AI development and maintenance.

AI is also being harnessed for personalization purposes. Recommender systems powered by AI algorithms can suggest products tailored to our interests, while AI-driven newsfeeds deliver news articles personalized to our preferences. As AI becomes more sophisticated, it is likely that personalization will become even more prevalent in our lives.

In addition, AI is increasingly making decisions across various fields including healthcare, finance, and business. For instance, AI-powered medical devices aid doctors in accurate disease diagnosis, and AI-powered trading algorithms assist investors in making informed decisions. As AI progresses, we can expect it to play an even larger role in complex decision-making processes.

Another intriguing aspect of AI's advancement is its ability to foster creativity. AI is already being used to generate new forms of art, music, and literature. AI-powered music generators can create original songs, and AI-powered writers can generate poems and stories. As AI's creativity evolves, we can anticipate even more astonishing works of art produced by this technology.

While AI offers potential benefits including increased productivity, improved decision-making, personalized experiences, new forms of art, and solutions to complex problems, it also poses certain risks. Job displacement, bias and discrimination, privacy concerns, security threats, and ethical implications are some of the potential pitfalls associated with AI.

Therefore, it is crucial to carefully consider the potential benefits and risks of AI. Proper planning and management can ensure its positive impact on the world. However, without vigilance, AI could pose a significant threat to our society.

While the future of AI remains uncertain, one thing is clear: it will have a substantial impact on our lives and the world. It is our responsibility to ensure that AI is utilized for the greater good and that safeguards are in place to prevent any harm it may cause.

See the rest here:

How Artificial Intelligence is Shaping the Future - Fagen wasanni

DNV and KIRIA Extend Collaboration in Cybersecurity and Artificial … – Fagen wasanni

DNV and the Korea Institute for Robot Industry Advancement (KIRIA) have extended their Memorandum of Understanding (MoU) to collaborate in the fields of cybersecurity and artificial intelligence in the robotics industry. The purpose of this extension is to support the international development of Koreas growing robotics industry and facilitate its entry into the European Union (EU) market.

Under the extended MoU, DNV and KIRIA will share technical and regulatory information about robots and relevant components. They will also cooperate in exchanging technical visits to review safety standards and explore the option of jointly providing advisory services to the Korean robot industry regarding safety standards. Additionally, they will have the opportunity to participate in the standardization process for robots.

The European Commission has recently implemented new legislation, the Machinery Regulation and the Artificial Intelligence Act, to enhance the safety and performance of machinery, including robots. Manufacturers of machinery, including robots, will need to comply with stricter product safety and sustainability requirements to access the European market. They will also need to address emerging risks in areas such as cybersecurity, human-machine interaction, and traceability of safety components and software behavior.

DNV, as an independent assurance and risk management provider, brings its expertise in technical standards development, assessments, certifications, and training to support the Korean robotics industry. KIRIA's goal is to access regulated markets worldwide and ensure that appropriate standards are in place for manufacturers to meet.

By combining DNV's capabilities in artificial intelligence assurance, functional safety, and cybersecurity with KIRIA's ambition, this collaboration aims to drive the maturity and global growth of the Korean robotics industry.

See the article here:

DNV and KIRIA Extend Collaboration in Cybersecurity and Artificial ... - Fagen wasanni