Category Archives: Ai

Expert.ai and Fincons Group Extend the Use of Natural Language in Insurance and Financial Services via APIs – Herald-Mail Media

Posted: April 9, 2021 at 2:40 am

ROCKVILLE, Md., April 8, 2021 /PRNewswire/ -- Fincons Group is increasing its support of digital transformation within financial services, enhancing its offering for banks and insurance companies with the expert.ai Natural Language API and expert.ai Studio.

The cloud-based expert.ai NL API provides developers and data scientists with deep language understanding without requiring IT infrastructure, reducing development cost while optimizing natural language processing (NLP) applications. Smart from the start, expert.ai Studio is a downloadable, fully integrated development environment for building custom applications that leverage advanced expert.ai AI capabilities, such as categorization and extraction, to meet specific customer needs.
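
To make the workflow concrete, the sketch below posts a document to a cloud NLP endpoint and reads back extracted entities. The URL, authentication scheme, and response fields are illustrative assumptions, not the documented expert.ai NL API contract; the vendor's API reference defines the real schema.

```python
# Minimal sketch of calling a cloud NLP API for entity extraction.
# The endpoint, auth scheme, and response shape below are illustrative
# assumptions, not the documented expert.ai NL API contract.
import requests

API_URL = "https://nlapi.example.com/v2/analyze/entities"  # hypothetical endpoint
API_TOKEN = "YOUR_TOKEN"  # issued via the provider's developer portal

def extract_entities(text: str) -> list:
    """Send raw text to the NLP service and return extracted entities."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"document": {"text": text}, "language": "en"},
    )
    response.raise_for_status()
    # Assumed response shape: {"entities": [{"type": ..., "lemma": ...}, ...]}
    return response.json().get("entities", [])

if __name__ == "__main__":
    claim = "The policyholder reported water damage at 42 Main St on March 3."
    for entity in extract_entities(claim):
        print(entity.get("type"), "->", entity.get("lemma"))
```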

For banks and insurance companies, the need to accelerate innovation by leveraging artificial intelligence applied to text analysis via NLU is an essential factor for creating value and increasing competitiveness. However, legacy or fragmented systems and long implementation times increase costs and often create barriers to digital transformation. The use of APIs in a practical and fully integrated development environment for building and deploying a custom AI-based text model makes application development more agile and faster, and facilitates the integration of innovative functionalities into the existing infrastructure. Fincons has extensive experience creating tailored solutions for specific customer needs, which has led to the development of advanced AI NLU solutions driven by the expert.ai NL API and expert.ai Studio.

"By using AI applied to natural language, companies in the financial services and insurance sector can derive tangible value from their information. The potential offered by expert.ai in terms of its API and development environment is a critical success factor for initiating change in a way that is not only fast, but also predictable and easy,"said Giuliano Altamura, Global Financial Services and Insurance Business Unit General Manager at Fincons Group. "Our partnership with expert.ai will allow us to offer an enriched proposition of digital transformation solutions. In fact, we can provide even greater support to customers in extracting relevant information from complex and unstructured texts and automating tasks to take operational efficiency to the next level."

The integration of Fincons Group's technical and functional expertise in banking and insurance, combined with the practicality of the expert.ai NL API and the ease of the expert.ai Studio development environment, allows developers to streamline the design of new functionalities and the development of applications, including chatbots, virtual assistants, advanced recommendation engines, natural language mobile payment systems, smart claims, automatic email management, and the analysis and comparison of policies and contracts, among many others.

"We are excited to deepen the use of our NL API into the financial services vertical," said Gavin Sollinger, Head of Global Channel Development at expert.ai. "The combination of Fincons Group's industry experience with the skills of expert.ai-certified data scientists and developers using our solutions will support banks and insurance companies in their digital evolution journeys. By leveraging the ability to understand natural language, Fincons will optimize the development of new AI NLU apps that automate expertise in mission critical processes."

Webinar session: "A Practical AI Adoption for Financial Services: Natural Language Understanding for Improving Efficiency and Scaling Business"

Fincons Group and expert.ai will present the benefits of expert.ai NL API and expert.ai Studio for banks and insurance companies in a webinar session on 21 April 2021, at 11 am EDT/5 pm CEST: https://bit.ly/2PN2Psf

About Fincons Group

Fincons is a consulting and system integration company, providing a broad range of services and solutions in strategy, digital, technology and operations to a diverse range of industries. In the Financial Services & Insurance sector, Fincons has long-term and successful relationships with the main international software vendors, providing customized solutions for different business processes and supporting its clients during their digital transformation journeys. With over 2,000 employees worldwide and 38 years of experience in consulting and system integration, Fincons Group has offices in Switzerland (Küssnacht am Rigi, Bern, Zurich, Lugano), Italy (Milan, Verona, Rome, Bari, Catania), Germany (Munich), the UK (London), France (Paris) and the US (New York, Los Angeles).

For more information visit http://www.finconsgroup.com. Follow us on LinkedIn and Twitter.

About expert.ai

Expert.ai is the premier artificial intelligence platform for language understanding. Its unique hybrid approach to NL combines symbolic human-like comprehension and machine learning to transform language-intensive processes into practical knowledge, providing the insight required to improve decision making throughout organizations. By offering a full range of on-premise, private and public cloud offerings, expert.ai augments business operations, accelerates and scales data science capabilities and simplifies AI adoption across a vast range of industries including Insurance, Banking & Finance, Publishing & Media, Defense & Intelligence, Life Science & Pharma, Oil, Gas & Energy, and more. The expert.ai brand is owned by Expert System (EXSY:MIL), which has cemented itself at the forefront of natural language solutions and serves global businesses such as AXA XL, Zurich Insurance Group, Generali, The Associated Press, Bloomberg INDG, BNP Paribas, Rabobank, Dow Jones, Gannett, and EBSCO.

For more information visit http://www.expert.ai and follow us on Twitter and LinkedIn.

View original content to download multimedia: http://www.prnewswire.com/news-releases/expertai-and-fincons-group-extend-the-use-of-natural-language-in-insurance-and-financial-services-via-apis-301264829.html

SOURCE expert.ai

Army Revives 10x Platoon Experiment In Robotics, AI – Breaking Defense

Posted: at 2:40 am

An 82nd Airborne soldier trains with a Black Hornet mini-drone before deploying to Afghanistan.

WASHINGTON: How do you make a platoon of foot soldiers ten times more effective? If your company has a new technology that might help, Army Futures Command wants to know by May 5th.

That's the deadline to submit ideas in AI, networking, and robotics for the Army's 10x platoon experiment, which is trying out tech to upgrade the future infantry force. The near-term goal is a series of field demonstrations next year at Fort Benning's annual Maneuver Warfighter Conference and as part of Futures Command's Project Convergence 2022 wargames. The ultimate goal, around 2028-2035? An infantry platoon that can see further, shoot further, and make better decisions 10 times faster than before, thanks to unmanned sensors, robots, networks, and AI systems that help share intelligence and advise commanders.

The original plan had been to do a demonstration last year, but that had to be cancelled amidst concerns over COVID and unspecified funding issues, said Ted Maciuba, deputy director of Futures Command's Robotics Requirements Division, during Fort Benning's online industry day on Wednesday.

Submissions are limited to members of the National Advanced Mobility Consortium (NAMC), which issued the formal Request for Prototype Proposals (RPP) on Tuesday. Going through public-private consortia this way is an increasingly common expedient for military projects that want to bypass the traditional acquisition bureaucracy and tap non-government innovation quickly.

What kind of technologies does the 10x experiment want? "This is open-ended," Maciuba said. "Propose any technologies that you have that you feel are mature enough [and] will move us towards a 10x increase in effectiveness."

QinetiQ's Robotic Combat Vehicle Light (RCV-L)

The Robotics Requirements division will pick tech to fund with an eye to impressing Army brass that further work and further funding is warranted. After May 5, Maciuba continued, "our selection processes [will] determine which of those technologies will demonstrate to the best degree possible for senior leaders that this is something that they need to start reallocating resources towards."

Georgia Tech Research Institute will put together the chosen technologies into an integrated system of systems, intended to simplify their use and not overwhelm infantry soldiers with a welter of different controllers, interfaces, and manuals.

Today, soldiers have to learn a different set of controls for each unmanned system they operate. The Army aims to streamline the system by creating a common Universal Robotic Controller. URC will eventually operate all the unmanned vehicles used by a combat brigade, both aerial drones and ground robots, even armed Robotic Combat Vehicles now in development. (The URC won't handle high-end drones that require the specialized skills of an Army aviator to operate.)

The next step, Maciuba said, is to turn the URC from a physical gadget to an app. It will be one of many on a future open architecture system called AI for Small Unit Maneuver. AISUM will also have apps to collect intelligence data from all the platoon's drones, curate it, and present it to the platoon's leaders to help them make better decisions faster. The ultimate goal is an AI cloud, running off robot-carried mini-servers, that manages the robots' movements moment-to-moment so the humans can focus on the bigger picture. Just as the lieutenant can order a squad to seize a hill and not worry about telling them the exact route to follow or cover to take, the lieutenant will order the AI cloud to scout out an area and not have to micromanage the individual robots executing that order.

The Army plans to consolidate disparate robotic systems, first onto a single Universal Robotic Controller (URC), then into an AI cloud.

Basically, AI for Small Unit Maneuver would be a platoon-sized microcosm of the future Joint All-Domain Command & Control system networking the entire military. While JADC2 would link the Army, Navy, Marines, Air Force, and Space Force across all five domains of land, sea, air, space, and cyberspace, AISUM would link Army systems across air and ground, manned and unmanned.

Maciuba says this problem can be solved. That's in part because ever-shrinking computers let robots process sensor data onboard instead of having to live-stream every bit of data over limited bandwidth for a human to decipher. And it's in part due to the manageable scale of the infantry platoon.

"This is not a huge data problem; this is probably a medium-sized data problem," he said. "You only need to be communicating, from the center of the manned formation, out perhaps five kilometers to the robotic systems that are arrayed in a constellation around that manned formation."

TheStreet Crypto: Coindex Capital Launches 4 AI-Driven Crypto Funds – The Street Crypto: Bitcoin and cryptocurrency news, advice, analysis and more -…

Posted: at 2:40 am

While most people prepared for a long, holiday weekend, the team at Coindex Capital Management geared up to debut four artificial intelligence-backed crypto strategies.

The Coindex AI, which was trained to read and trade crypto markets, will be responsible for the minute-to-minute management of all the firm's strategies, leaving the management team to focus on operations.

Each strategy (Long/Short Crypto Series, Long/Short ETF Series, Market Neutral Series and Stable Yield Series) will have a minimum $100,000 buy-in for accredited investors when live trading begins in the next two weeks.

AI hedge funds have picked up a lot of velocity. In the three years ending May 2020, AI-led funds saw 34% gains compared to 12% overall for the global hedge fund industry, according to a report from Cerulli Associates.

The Coindex team has invested proprietary capital (although they declined to say how much) into each of the strategies and will cover the startup and operating costs for the first year, said Shareef Abdou, Coindex director of strategy and finance.

None of the Coindex strategies will be entirely in the AI's control, but by now, Abdou said, the algorithms are good enough to go long periods without any help.

"Human intervention has not been needed in approximately 9 months of live trading to date," he said Monday.

That's possible because the team runs many variations of the code against one another, choosing the most viable iterations to maximize historical returns while minimizing historical volatility.

Ryan DeMattia, Coindex founder and managing director, used Darwin's theory of evolution to explain how the Coindex team has refined its trading algorithm, where variations of the AI represent genomes competing over generations.

"What we're doing, specifically, is neuroevolutionary code," he said.
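
Coindex has not published its code, but the selection loop DeMattia describes, generating strategy variants, scoring each on historical return against volatility, and breeding from the fittest, can be sketched in a few lines. Everything below (the strategy encoding, the Sharpe-like fitness, the mutation scheme, the synthetic prices) is an illustrative assumption, not the firm's actual method.

```python
# Toy illustration of evolutionary selection of trading-strategy variants.
# Strategy encoding, fitness definition, and mutation scheme are all
# illustrative assumptions, not Coindex's method.
import numpy as np

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 1000)) + 100   # synthetic price series
returns = np.diff(prices) / prices[:-1]

def fitness(weights: np.ndarray) -> float:
    """Score a variant by historical return penalized by volatility."""
    window = len(weights)
    # Signal at time t uses only returns through t (no lookahead).
    signals = np.sign(np.convolve(returns, weights, mode="valid"))[:-1]
    strat_returns = signals * returns[window:]
    return strat_returns.mean() / (strat_returns.std() + 1e-9)  # Sharpe-like

population = [rng.normal(0, 1, 10) for _ in range(50)]
for generation in range(20):
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:10]                    # keep the fittest variants
    population = survivors + [
        parent + rng.normal(0, 0.1, 10)        # mutated offspring
        for parent in survivors for _ in range(4)
    ]
print("best fitness:", fitness(max(population, key=fitness)))
```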

DeMattia has spent over a decade trading and modeling markets, launching startups and honing quant skills that have now been codified in the team's AI.

"We can really select for the features we want, and that might not necessarily be the absolute highest return," DeMattia said. "If our very best, highest return comes with a really, really significant drawdown, that may be technically interesting. But that might not be a compelling investment opportunity."

Abdou and DeMattia say the adaptability of their AI is most evident when applied to their Long/Short ETF fund, an option they wanted to include for investors still unsure of crypto. They say the AI gained enough general financial intelligence trading in crypto markets to remain competitive when it traded in leveraged ETFs.

"As long as we see the indicators we need to see to validate [the AI]," DeMattia said, "we can step into new markets that it can thrive in."

Abdou and DeMattia met when they were both in New York City during the winter of 2019. It was the kind of chance meeting that could have only happened before COVID-19 lockdowns made travel nearly impossible. Abdou lives in Los Angeles and DeMattia in Atlanta, where Coindex's office will be located.

If Abdou's name sounds familiar, it's because he was one of four whistleblowers to receive part of a $16.65 billion federal penalty against Bank of America in 2014 for its mortgage practices in the run-up to the Great Recession.

He joined Countrywide Home Loans in 2006 and later became a senior vice president at Bank of America's Los Angeles office, where he oversaw residential mortgage-backed securities claims.

Now, he's putting that risk management and securities background to work, making sure Coindex partners with the right providers for cybersecurity, custody and valuation.

"We're expecting money to continue to flow into the [crypto] space," Abdou said. "Even novices are getting five to seven percent savings rates from retail [crypto] products."

AI in Cardiology: Where We Are Now and Where to Go Next – TCTMD

Posted: March 31, 2021 at 5:41 am

Artificial intelligence (AI) has become a buzzword in every cardiology subspecialty from imaging to electrophysiology, and researchers across the field's spectrum are increasingly using it to look at big data sets and mine electronic health records (EHRs).

But in concrete terms, what is the potential for AI, as well as its subcategories like machine learning, to treat heart disease or improve research? And what are its greatest challenges?

"There are so many unknowns that we capture in our existing data sources, so the exciting part of AI is that we have now the tools to leverage our current diagnostic modalities to their extreme, to the entirety," Rohan Khera, MBBS (Yale School of Medicine, New Haven, CT), told TCTMD. "We don't have to gather new data per se, but just the way to incorporate that data into our clinical decision-making process and the research process is exciting."

"Generally speaking, I'm excited about anything that allows doctors to be more of a doctor and that would make our job easier, so that we can focus our time on the hard part," Ann Marie Navar, MD, PhD (UT Southwestern Medical Center, Dallas, TX), commented to TCTMD. "If a computer-based algorithm can help me more quickly filter out what therapy somebody should or shouldn't be on and help me estimate the potential benefit of those therapies for an individual patient in a way that helps me explain it to a patient, that's great. That saves me a lot of cognitive load and frankly time that I can then spend in the fun part of being a doctor, which is the patient-physician interface."

Although a proponent of AI in medicine, Jai Nahar, MD (George Washington School of Medicine, Washington, DC), who co-chairs the American College of Cardiology's Advanced Healthcare and Analytics Work Group, said the technology faces hurdles related to clinical validation, implementation, regulation, and reimbursement before it can be readily adopted.

Cardiologists should be balanced in their approach to AI, he stressed to TCTMD. "There should not be too much hype, hope, or phobia. Like any advanced technology, we have to be cognizant about what the technology stands for and how it could be used in a constructive and equitable manner."

A Foothold in Imaging

Nico Bruining, PhD (Erasmus Medical Center, Rotterdam, the Netherlands), editor-in-chief of the European Heart Journal Digital Health, told TCTMD there are two main places in hospitals where he sees AI technologies flourishing in the near future: image analysis and real-time monitoring in conjunction with EHRs.

"What we also envision is that some of the measurements we now do in the hospital, like taking the EKG and making a decision about it, can move more to the general practitioner, so closer to the patient at home," he said. Since it's already possible to get a hospital-grade measurement onsite, AI will be applied more in primary and secondary prevention, Bruining predicted. "There we see a lot of possibilities, which we see sometimes deferred to a hospital."

On the imaging front, Manish Motwani, MBChB, PhD (Central Manchester University Hospital NHS Trust, England), who recently co-chaired a Society of Cardiovascular Computed Tomography webinar on the intersection of AI and cardiac imaging, observed to TCTMD that the field has "really exploded" over the past 5 years. "Across all the modalities, people are looking at how they can extract more information from the images we acquire faster with less human input," he noted.

Already, most cardiac MRI vendors have embedded AI-based technologies to aid in calculating LV mass and ejection fraction by automatically drawing contours on images and allowing for readers to merely make tweaks instead of starting from scratch, Motwani explained. Also, a field called radiomics is digging deeper into images and extracting data that would be impossible for humans to glean.

"When you're looking at a CT in black and white, your human eye can only resolve so much distinction between the pixels," Motwani said. "But it's actually acquired at a much higher spatial resolution than your monitor can display. This technique, radiomics, actually looks at the heterogeneity of all the pixels and the texture that is coming out with data and metrics that you can't even imagine. And for things like tumors, this can indicate mitoses and replication and how malignant something may be. So we're really starting to think about AI as finding things out that you didn't even know existed."
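
As a toy illustration of the radiomics idea, reducing a region of pixels to quantitative heterogeneity metrics, the sketch below computes a handful of first-order intensity statistics on a synthetic patch. Real radiomics software extracts hundreds of features (co-occurrence textures, shape descriptors, wavelet responses); this is only the simplest tier, shown under those simplifying assumptions.

```python
# Simplified sketch of first-order radiomic features on an image patch.
# Real radiomics toolkits compute hundreds of features; this toy version
# uses only basic intensity statistics on synthetic data.
import numpy as np

rng = np.random.default_rng(42)
patch = rng.integers(0, 256, size=(64, 64)).astype(float)  # stand-in for a CT ROI

def first_order_features(roi: np.ndarray) -> dict:
    """Summarize pixel-intensity heterogeneity within a region of interest."""
    hist, _ = np.histogram(roi, bins=32)
    p = hist[hist > 0] / hist.sum()                 # bin probabilities
    return {
        "mean": roi.mean(),
        "variance": roi.var(),                      # spread of intensities
        "skewness": ((roi - roi.mean()) ** 3).mean() / roi.std() ** 3,
        "entropy": -np.sum(p * np.log2(p)),         # texture randomness
    }

print(first_order_features(patch))
```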

Yet another burgeoning sector of AI use in cardiology is in ECG analysis, where recent studies have shown machine-learning models to be better than humans at identifying conditions such as long QT syndrome and atrial fibrillation, calculating coronary calcium, and estimating physiologic age.
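
Studies in this vein typically train deep 1D convolutional networks directly on raw ECG waveforms. The sketch below shows that model family in outline; the layer sizes, the 12-lead 5,000-sample input, and the two-class output are arbitrary assumptions for illustration, not any published architecture.

```python
# Rough sketch of a 1D convolutional classifier over raw ECG waveforms,
# the model family used in the studies cited above. Layer sizes, the
# 12-lead/5000-sample input, and the binary output are illustrative only.
import torch
import torch.nn as nn

class ECGClassifier(nn.Module):
    def __init__(self, leads: int = 12, classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(leads, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.head = nn.Linear(64, classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, leads, samples), e.g. a 10-second trace at 500 Hz
        return self.head(self.features(x).squeeze(-1))

model = ECGClassifier()
fake_batch = torch.randn(8, 12, 5000)   # synthetic stand-in for real ECGs
print(model(fake_batch).shape)          # torch.Size([8, 2])
```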

Peter Noseworthy, MD (Mayo Clinic, Rochester, MN), who studies AI-enabled ECG identification of hypertrophic cardiomyopathy, told TCTMD that although the term "low-hanging fruit" is overused, "[this ECG approach] is basically readily available data and an opportunity that we wanted to take advantage of."

Based on his findings and others, his institution now offers a research tool that allows any Mayo clinician to enable the AI-ECG to look for a variety of conditions. Noseworthy said the dashboard has been used thousands of times since it was made available a few months ago. It is not currently available to clinicians at other institutions, but "eventually, if it is approved by the US Food and Drug Administration, then we could probably build a widget in the Epic [EHR] that would allow other people to be able to run these kinds of dashboards."

"The exciting thing is that the AI-ECG allows us to truly do preventative medicine, because we can now start to anticipate the future development of diseases," said Michael Ackerman, MD, PhD (Mayo Clinic), a colleague of Noseworthy's who led the long QT study. "We can start to see diseases that nobody knew were there. . . . AI technology has the ability to sort of shake up all our standard excuses as to why we can't screen people for these conditions and change the whole landscape of the early detection of these very treatable, sudden-death-associated syndromes."

New Era of Research

Ackerman believes AI has also opened the door to a new era of research. "Every one of our papers combined from 2000 until this year [didn't include] 1.5 million ECGs," he said, noting that a single AI study can encompass that level of data. "It's kind of mind-boggling really."

Bruining said this added capability will allow researchers to collect data "from not only your own hospital but also over a country or perhaps even over multiple countries. It's a little bit different in Europe than it is in the United States. In the United States you are one big country, although different states, and one language. For us in Europe, we are smaller countries and that makes it a little more difficult to combine all the data from all the different countries. But that is the size you need to develop robust and trustworthy algorithms. Because the higher the number, the more details you will find."

Wei-Qi Wei, MD, PhD (Vanderbilt University Medical Center, Nashville, TN), who has used AI to mine EHR data, told TCTMD that while messy, the information found in EHRs provides a much more comprehensive look at patient cohorts compared to the cross-sectional data typically used by clinical trials. "We have huge data here," he said. "The important thing for us is to understand the data and to try to use the data to improve the health care quality. That's our goal."

In order to do that, algorithms must first be able to bridge information from various systems, as otherwise the end result will not be relevant, Wei continued. "A very common phrase in the machine-learning world [is] 'garbage in, garbage out,'" he said.

Siloes and Generalizability

The development of different AI platforms has mostly emerged within institutions, with researchers often validating algorithms on internal data.

That scenario has transpired because to develop these tools, researchers need two things often only found at large academic institutions, said Ackerman. "You need to have the AI scientists, the people who play in the sandbox with all their computational power, and you need a highly integrated, accessible electronic health record that you can then data mine, and marry the two."

Though productive, this system may build in inherent limits to AI algorithms in terms of generalizability.

Motwani pointed to the UK's National Health Service (NHS) to illustrate the point. The NHS, in theory, stores all patient data in a single database, Motwani said. "There's an opportunity for big data mining and you have the database ready to retrospectively test these programs. But on the other hand, in the US you have huge resources but a very fragmented healthcare system, and you've got one institution that might find it difficult to get the numbers and therefore need to interact with another institution, but there isn't a combined database."

This has resulted in a situation where much of the data used to develop AI tools is coming only from a handful of institutions and not a broad spectrum, he continued. "That makes it difficult, necessarily, to say, well, all of this applies to a population on the other end of the world."

Still, "the beauty of AI and machine learning is that it's constantly learning," Motwani said. "Most vendors, in whatever task they're doing, have a constant feedback loop so their algorithms are always updating in theory. As more sites come onboard, the populations the algorithms are being used on are more diverse. There's this positive feedback loop for it to constantly become more and more accurate."

It would be helpful, said Bruining, if there were more open access to algorithms so that they could be more easily reproduced and evolved. That would help grow interest in these kinds of developments, he suggested, because "the medical field is still very conservative and not everybody will trust a deep-learning algorithm which teaches itself. So we have to overcome that."

Navar acknowledged the siloes that many AI researchers are working in, but said she has no doubt that technology will be able to merge and become more generally disseminated with time. Once an institution learns something and figures something out, "[they will be able] to package that knowledge up and spread it in a way that everybody else can share in it," she predicted, adding that this will manifest in many ways. "There aren't that many different EHR companies in the US market. Although there's competition, there's not an infinite number of technology vendors for things like EKG reading software or echo reading software. So if we're talking about the cardiology space, this is where I think that there's probably going to be a role for a lot of the companies in the space to adopt and help spread this technology."

While no large consortium of AI researchers in cardiology yet exists, Bruining said he is petitioning for one within the European Society of Cardiology. For now, initiatives like the National Institutes of Health (NIH)-funded eMERGE network and BigData@Heart in Europe are enabling scientists to collaborate and work toward common goals.

Regulation Speculation

It's one thing to use technology to help increase efficiency or keep track of patient data, but regulation will be required once physicians start using AI to help in clinical decision-making.

Bruining said many of these AI tools should be validated and regulated in the same way as any other medical device. "If you have software that does make a diagnosis, then you have to go through a whole regulation and you have to inform notified bodies, who will then look. The same thing as with drugs," he said. "Software is a device."

Ethically, Khera referred to AI-based predictive models as a "digital assay" of sorts that is no different than any other lab test. "So I think the bar for some of these tools, especially if they alter the way we deliver care, will probably be the same as any other assay," he said.

Navar agreed. "We're probably going to need to categorize AI-based interventions into those that are direct-to-patient or impacting patient care immediately versus those that are sort of filtered through a human or physician interaction," she said. "For example, an AI-based prediction model that's used to immediately recommend something to a patient that doesn't go through a doctor, that's like a medical device and needs to be really highly scrutinized because we're depending on the accuracy of that to guide a treatment or a therapy for a patient. If it's something that is augmenting the workflow, maybe suggesting to the physician to consider something, then I think there's a bit more safeguard there because it's being filtered through the physician and so may not be quite as scrutinized."

Up until now, however, regulatory approval has not kept pace with these technologies, Motwani noted, and therein lies a challenge for this expanding field. "At the moment, a lot of these technologies are being put through as devices, when actually if you think about it, they're not really."

Additionally, he continued, many AI-based tools are not prospectively tested but rather developed on historical databases.

"It's very hard to actually prospectively test AI," Motwani explained. "Are we going to say, 'OK, all these X-rays are now going to be read by a computer,' and then compare that to humans? You'd have to get ethical approval for such a trial, and that's where it gets difficult. The way AI is transitioning at the moment is that it's more of an assistant to human reads. [This] is probably a natural kind of transition that's helpful rather than saying all of a sudden all of this is read by computers, which I think most people would feel uncomfortable with. In some ways, the lack of regulatory clarity is helpful because people are just dipping in and out, they're beginning to get comfortable with it and seeing whether it's useful or not."

Who Pays?

Who will bear the cost for development and application of these new technologies remains an open question.

Right now, Khera said most tools related to clinical decision support are institutionally supported, although the NIH has funded some research. However, implementation is more often paid for by health systems in a bid for efficiency in healthcare, he said. Still other algorithms are covered by the Centers for Medicare & Medicaid Services like any other assay.

In the imaging space, Motwani said most of the companies that are involved are developing fee-to-scan pricing. "If you're a large institution and you have a huge faculty, is that really attractive? Probably not. But if you're a smaller institution that maybe doesn't have cardiac MRI readers, for example, it might be cost-effective rather than employing staff to get some of this processing done."

Yet then questions arise, such as "Are insurers going to pay for this if a human can do it?" and "Is it then that those human staff can then be deployed to other stuff?" Motwani said. "I think the business model will be that companies will make the case that your throughput of cases will be higher using some of these technologies. Read times will be shortened. Second opinions can be reduced. So that's the financial incentive."

This will vary depending on geography and how a country's health system is set up, added Motwani. "For example, in the US, the more [scans] that you're doing, the more you're billing, the more money you're making, and therefore it makes sense. Other healthcare systems like the UK where it doesn't work necessarily in that way, . . . there the case would be that you just have to get all the cases done as quickly and effectively as possible, so we'll pay for these tools because the human resources may not be available."

There are also cost savings to be had, added Bruining. "If you can reduce the workload, that can save costs, and also other costs could be saved if you can use AI more in primary prevention and secondary prevention, because the patients are then coming less to the hospitals."

Overall, Nahar observed, "once we have robust evidence that this works, that it decreases the healthcare spending and increases the quality of care, then I think we'll have more payers buy into this and they'll buy into paying for the cost."

Fear and Doctoring

The overarching worry about AI replacing human jobs is present in the cardiovascular medicine sector as well, but everyone who spoke with TCTMD said cardiologists should have no anxiety about adopting new technology.

"There's an opportunity to replace an old model," Wei said. "Yeah, it might replace some human jobs, but from another way, it creates some jobs as well. . . . Machine learning improves the efficiency or efficacy of learning from data, but at the same time more and more jobs will be available for people who understand artificial intelligence."

Bruining said that technologies come and go, citing the stethoscope as one example of something that has disappeared more or less within cardiology. "Physicians and also the nurses and other allied health professionals very much want a reduction in the workload and a reduction in the amount of data they have to type over from one system to another, and that can be handled by AI," he said.

"Is there scope for automation in some of our fields? Definitely so," said Khera, who recently led a study looking at machine learning and post-MI risk prediction. "But would it supersede our ability to provide care, especially in healthcare? I don't think that will actually happen. . . . I think [AI is] just expanding our horizons rather than replacing our existing framework, because our framework needs a lot of work."

Navar agreed. "I have yet to see a technology come along that is so transformative that it takes away so much of our work that we have nothing to do and now we're all going to be unemployed," she said. "I don't worry that computers are going to replace physicians at all. I'm actually excited that a lot of these AI-based tools can help us reduce the time that we spend doing things that we don't need to and spend more time doing the fun or harder part of being a doctor. . . . I think that there's always going to be limitations with any of these technologies and I think that as humans, we're always going to need the human part of doctoring."

That notion, that AI might allow for deeper connection with patients rather than taking doctors out of the equation, has been dubbed "deep medicine" after a book of the same name by cardiologist Eric Topol, MD.

COVID-19, Prevention, and Opportunity

In light of the COVID-19 pandemic, Bruining said AI research has been one of the fields to get a bump this past year. "It has torn down a lot of walls, and it also shows how important it is to share your scientific data," he said.

Motwani agreed. "On a quick Google search, you'll find hundreds and hundreds of AI-based algorithms for COVID," he said. "I think the people who had the infrastructure and databases and algorithms rapidly pivoted toward the COVID. Now whether they've made a difference in it prospectively is probably uncertain, but the tool set is there. It shows how they can actually use that data for a new disease and potentially if there were something to happen again."

Looking forward, Nahar sees the largest opportunity for AI in prediction of disease and precision medicine. "Just before the disease or the adverse events manifest, we have a big window for opportunity, and that's where we could use digital biomarkers, computational biomarkers for prediction of which patients are at risk, and target early intervention based on the risk intervention to the patient so they would not manifest their disease or the disease progression would be halted," he said. "They would not go to the ER as frequently, they would not get hospitalized, and the disease would not advance."

Navar said for now cardiologists should think of AI more as "augmented" intelligence rather than artificial, given that humans are still an integral part of its application. In the future, she predicted, this technology will be viewed through a different lens. "We don't talk about statistics as a field. We don't say, 'What are the challenges of statistics?' because there's like millions of applications of statistics and it depends on both where you're using it, which statistical technique you're using, and how good you are at that particular technique," she said. "We're going to look back on how we're talking about AI now and laugh a little bit at trying to capture basically an entirely huge set of different technologies with a single paintbrush, which is pretty impossible."

Allscripts CEO talks EHR innovation, AI and the cloud – Healthcare IT News

Posted: at 5:41 am

Electronic health records have come a long way, but for many users, they have a long way yet to go.

Physicians and nurses who are tasked with using the complex tools day in and day out have usability issues that stand in the way of spending quality time with their patients. They often ask, 'Am I taking care of the patient or the computer?'

But in recent years, EHR vendors have been innovating with their technologies and making usability strides with tools such as artificial intelligence and the cloud. Innovation with EHRs is key to making the health IT experience and the overall healthcare experience better for patients and caregivers.

Healthcare IT News interviewed Paul Black, CEO of Allscripts, an EHR giant, to discuss EHR innovation and the state of electronic health records today.

Q: On the subject of innovation in EHRs, technology should make the human experience easier for providers and health systems. Where is health IT failing healthcare organizations today with EHRs?

A: It isn't that health IT is necessarily failing, but priorities have been directed toward less innovative endeavors, such as meaningful use requirements. For the last several years, and since the advent of meaningful use, some health IT development has been more about checking the box with technology, due to tight timelines and the changing scope of the rules. This has had a profound effect on innovation and only over the past few years has the effect of physician burnout been made clear.

Q: On the flipside of this coin, how is health IT today successfully innovating and making the EHR experience for providers better?

A: Health IT companies that are embracing technology to deliver human-centered design features and products will be the software suppliers that are most sought after.

These software suppliers will need to look within, create design expertise and hierarchies that support human-centric design thinking, support industry-leading training, and enable anyone who has a product role to lead by design and revamp clinical interactions with the products by doing practice-in-motion studies to inform the design.

Health IT companies need to look beyond what is typically in use today, looking to the consumer market and pulling in those technologies to make the EHRs more holistic in addressing the healthcare industry's challenges.

For example, socioeconomic barriers to healthcare are a large challenge for organizations today. Let's take one specific issue: inadequate transportation for patients to get to their follow-up appointments, which creates gaps in care and potentially adds up to 22 million missed appointments per year.

Integrating ride-share capabilities into the EHR makes it very easy to ensure patients have reliable transportation and incorporates a consumer-familiar product that already is trusted by patients, outside of healthcare.

Allscripts has successfully done this with our Lyft integration into Sunrise. Software suppliers, like Allscripts, who make the decision to prioritize creating solutions to these types of challenges, are effectively delivering a human-centric technology experience.
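
In outline, such an integration is a scheduling hook that books a ride once an appointment is confirmed. The sketch below is hypothetical end to end: the ride-service endpoint, payload, and response field are invented for illustration and are not Lyft's API or the Sunrise integration.

```python
# Hypothetical sketch of a ride-share hook in an EHR scheduling workflow.
# The ride-service endpoint and payload below are invented for illustration;
# they are not Lyft's API or Allscripts' Sunrise integration.
import requests

RIDE_API = "https://rides.example.com/v1/requests"  # placeholder endpoint

def schedule_ride_for_appointment(patient_address: str, clinic_address: str,
                                  appointment_time_iso: str) -> str:
    """Book a ride so the patient arrives before the appointment."""
    resp = requests.post(RIDE_API, json={
        "pickup": patient_address,
        "dropoff": clinic_address,
        "arrive_by": appointment_time_iso,   # e.g. "2021-04-12T09:30:00-05:00"
    })
    resp.raise_for_status()
    return resp.json()["ride_id"]            # assumed response field
```

The design point is simply that the booking happens inside the clinician's existing workflow, so transportation is arranged at the moment the care gap would otherwise open.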

Q: What role does artificial intelligence play in innovation with EHRs today?

A: Currently, artificial intelligence is playing a small role in EHRs. However, over the next two to three years, this role will increase exponentially. On the clinical side, AI is still getting its "sea legs" and has slower adoption, only due to the refining and training of the AI models.

One area where it is being heavily adopted is with patient summarization. This is the concept of organizing the patient's clinical data in a way that makes it easy for a provider to consume it. They don't have to manually gather the pertinent information, as it's fed to the provider right in their workflow.

AI will be used to provide curated clinical decision insights at the point of care, serve up critical information that will help clinicians make faster decisions, and automate clinical tasks that have bogged down clinicians today and have led to clinician burnout.

On the revenue cycle side, AI and RPA (robotic process automation) are being used today to ensure accurate and timely claims, reducing the workloads on the back-end processes and driving down revenue-cycle administrative costs.

The use of these tools, as well as AI bots, will increase significantly and eventually automate the revenue-cycle process, further driving down costs and providing increased revenue to power hospital growth initiatives.

Allscripts is excited to be moving along a development road map that includes these AI innovations, as well as additional machine learning and cognitive support, through our healthcare IT.

Q: What role does the cloud play in innovation with EHRs today?

A: The cloud plays a significant role with innovation. The ability to do complex computing in the cloud will enable healthcare IT advances that have not been achieved on local computation stacks.

Healthcare IT software suppliers that can take advantage of these cloud innovations will be poised to deliver point-of-care cognitive support, quickly, to their provider organizations.

Allscripts recognized early on that the future was in the cloud and invested heavily in an extended strategic partnership with Microsoft and the use of the Azure cloud-based capabilities to enable the best possible healthcare IT cloud experience.

Among other things, these types of partnerships can bring healthcare organizations innovations like Microsoft Text Analytics, part of Azure Cognitive Services, which automates the clinician's workflows in areas like problem-list management, actionable orders and progress-note creation, reducing cognitive burden and driving better outcomes for patients.
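
The exact Allscripts-Microsoft integration is not spelled out here, but as a minimal sketch of that kind of pipeline, the snippet below runs a clinical note through Azure's Text Analytics for health (via the azure-ai-textanalytics Python package) and prints the medical entities it finds. The endpoint and key are placeholders, and mapping the entities into a problem list is our illustration, not a documented Allscripts feature.

```python
# Minimal sketch: extract medical entities from a clinical note with
# Azure Text Analytics for health. Endpoint and key are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient

client = TextAnalyticsClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

note = ["Patient reports chest pain. Started metoprolol 25 mg daily; follow up in 2 weeks."]
poller = client.begin_analyze_healthcare_entities(note)
for doc in poller.result():
    for entity in doc.entities:
        # e.g. "chest pain" -> SymptomOrSign, "metoprolol" -> MedicationName;
        # entities like these could be surfaced as candidate problem-list items
        print(entity.text, "->", entity.category)
```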

With the ability to amass large amounts of federated data and the ability to perform advanced analytics, cloud computing can provide faster evidence-based treatment protocols that have typically taken up to 10 years to get from bench to bedside.

This will now take a fraction of that time, providing personalized care plans that will increase the speed to wellness and reduce the cost of healthcare compared with the sometimes trial-and-error approach to medication and treatment management.

Twitter: @SiwickiHealthIT. Email the writer: bsiwicki@himss.org. Healthcare IT News is a HIMSS Media publication.

Silicon Valley leaders think A.I. will one day fund free cash handouts. But experts aren't convinced – CNBC

Posted: at 5:41 am

Sam Altman, president of Y Combinator

Patrick T. Fallon | Bloomberg | Getty Images

Artificial intelligence companies could become so powerful and so wealthy that they're able to provide a universal basic income to every man, woman and child on Earth.

That's how some in the AI community have interpreted a lengthy blog post from Sam Altman, the CEO of research lab OpenAI, that was published earlier this month.

In as little as 10 years, AI could generate enough wealth to pay every adult in the U.S. $13,500 a year, Altman said in his 2,933-word piece called "Moore's Law for Everything."

"My work at OpenAI reminds me every day about the magnitude of the socioeconomic change that is coming sooner than most people believe," said Altman, the former president of renowned start-up accelerator Y-Combinator earlier this month. "Software that can think and learn will do more and more of the work that people now do."

But critics are concerned that Altman's views could cause more harm than good, and that he's misleading the public on where AI is headed.

Glen Weyl, an economist and a principal researcher at Microsoft Research, wrote on Twitter: "This beautifully epitomizes the AI ideology that I believe is the most dangerous force in the world today."

One industry source, who asked to remain anonymous due to the nature of the discussion, told CNBC that Altman "envisions a world wherein he and his AI-CEO peers become so immensely powerful that they run every non-AI company (employing people) out of business and every American worker to unemployment. So powerful that a percentage of OpenAI's (and its peers') income could bankroll UBI for every citizen of America."

Altman will be able to "get away with it," the source said, because "politicians will be enticed by his immense tax revenue and by the popularity that paying their voter's salaries (UBI) will give them. But this is an illusion. Sam is no different from any other capitalist trying to persuade the government to allow an oligarchy."

One of the main thrusts of the essay is a call to tax capital (companies and land) instead of labor. That's where the UBI money would come from.

"We could do something called the American Equity Fund," wrote Altman. "The American Equity Fund would be capitalized by taxing companies above a certain valuation 2.5% of their market value each year, payable in shares transferred to the fund, and by taxing 2.5% of the value of all privately-held land, payable in dollars."

He added: "All citizens over 18 would get an annual distribution, in dollars and company shares, into their accounts. People would be entrusted to use the money however they needed or wanted for better education, healthcare, housing, starting a company, whatever."

Altman said every citizen would get more money from the fund each year, providing the country keeps doing better.

"Every citizen would therefore increasingly partake of the freedoms, powers, autonomies, and opportunities that come with economic self-determination," he said. "Poverty would be greatly reduced and many more people would have a shot at the life they want."

Matt Clifford, the co-founder of start-up builder Entrepreneur First, wrote in his "Thoughts in Between" newsletter: "I don't think there is anything intellectually radical here ... these ideas have been around for a long time but it's fascinating as a showcase of how mainstream these previously fringe ideas have become among tech elites."

Meanwhile, Matt Prewitt, president of non-profit RadicalxChange, which describes itself as a global movement for next-generation political economies, told CNBC: "The piece sells a vision of the future that lets our future overlords off way too easy, and would likely create a sort of peasant class encompassing most of society."

He added: "I can imagine even worse futures but this the wrong direction in which to point our imaginations. By focusing instead on guaranteeing and enabling deeper, broader participation inpolitical and economic life, I think we can do far better."

Richard Miller, founder of tech consultancy firm Miller-Klein Associates, told CNBC that Altman's post feels "muddled," adding that "the model is unfettered capitalism."

Michael Jordan, an academic at the University of California, Berkeley, told CNBC the blog post is so far from anything intellectually reasonable, either from a technology point of view or an economic point of view, that he'd prefer not to comment.

In Altman's defense, he wrote in his blog that the idea is designed to be little more than a "conversation starter." Altman did not immediately reply to a CNBC request for an interview.

An OpenAI spokesperson encouraged people to read the essay for themselves.

Not everyone disagreed with Altman. "I like the suggested wealth taxation strategies," wrote Deloitte worker Janine Moir on Twitter.

Founded in San Francisco in 2015 by a group of entrepreneurs including Elon Musk, OpenAI is widely regarded as one of the top AI labs in the world, along with Facebook AI Research and DeepMind, which was acquired by Google in 2014.

The research lab, backed by Microsoft with $1 billion in July 2019, is best known for creating an AI image generator, called Dall-E, and an AI text generator, known as GPT-3. It has also developed agents that can beat the best humans at games like Dota 2. But it's nowhere near creating the AI technology that Altman describes, experts told CNBC.

Daron Acemoglu, an economist at MIT, told CNBC: "There is an incredible mistaken optimism of what AI is capable of doing."

Acemoglu said algorithms are good at performing some "very, very narrow tasks" and that they can sometimes help businesses to cut costs or improve a product.

"But they're not that revolutionary, and there's no evidence that any of this is going to be revolutionary," he said, adding that AI leaders are "waxing lyrical about what AI is doing already and how it's revolutionizing things."

In terms of the measures that are standard for economic success, like total factor productivity growth, or output per worker, many sectors are having the worst time they've had in about 100 years, Acemoglu said. "It's not comparable to previous periods of rapid technological progress," he said.

"If you look at the 1950s and the 1960s, the rate of TFP (total factor productivity) growth was about 3% a year," said Acemoglu. "Today it's about 0.5%. What that means is you're losing about a point and a half percentage growth of GDP (gross domestic product) every year so it's a really huge, huge, huge productivity slowdown. It's completely inconsistent with this view that we're just getting an enormous amount of benefits (from AI)."

Technology evangelists have been saying AI will change the world for years, with some speculating that "artificial general intelligence" and "superintelligence" aren't far away.

AGI is the hypothetical ability of an AI to understand or learn any intellectual task that a human being can, while superintelligence is defined by Oxford professor Nick Bostrom as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest."

But some argue that we're no closer to AGI or superintelligence than we were at the start of the century.

"One can say, and some do, 'oh it's just around the corner.' But the premise of that doesn't seem to be very well articulated. It was just around the corner 10 years ago and it hasn't come," said Acemoglu.

Arm's first new architecture in a decade is designed for security and AI – The Verge

Posted: at 5:41 am

Chip designer Arm has announced Armv9, its first new chip architecture in a decade, following Armv8 way back in 2011. According to Arm, Armv9 offers three major improvements over the previous architecture: security, better AI performance, and faster performance in general. These benefits should eventually trickle down to devices with processors based on Arm's designs.

It's an important milestone for the company, whose designs power almost every smartphone sold today, as well as increasing numbers of laptops and even servers. Apple announced its Mac computers' transition to its own Arm-based processors last year, and its first Apple Silicon Macs released later in the year. Other manufacturers like Microsoft have also released Arm-based laptops in recent years.

First of the big three improvements coming with Armv9 is security. The new Arm Confidential Compute Architecture (CCA) attempts to protect sensitive data with a secure, hardware-based environment. These so-called Realms can be dynamically created to protect important data and code from the rest of the system.

Next up is AI processing. Armv9 will include Scalable Vector Extension 2 (SVE2), a technology that is designed to help with machine learning and digital signal processing tasks. This should benefit everything from 5G systems to virtual and augmented reality and machine learning workloads like image processing and voice recognition. AI applications like these are said to be a key reason why Nvidia is currently in the process of buying Arm for $40 billion.

But away from these more specific improvements, Arm also promises more general performance increases from Armv9. It expects CPU performance to increase by over 30 percent across the next two generations, with further performance boosts coming from software and hardware optimizations. Arm says all existing software will run on Armv9-based processors without any problems.

With the architecture announced, the big question is when the processors using the architecture might release and find their way into consumer products. Arm says it expects the first Armv9-based silicon to ship before the end of the year.

Regulators Want to Know How Financial Institutions Use AI and How They’re Mitigating Risks – Nextgov

Posted: at 5:41 am

A group of federal financial regulators says they know U.S. financial institutions are using artificial intelligence but wants more information on where the technology is being deployed and how those organizations are accounting for the risks involved.

The financial sector is using forms of AIincluding machine learning and natural language processingto automate rote tasks and spot trends humans might miss. But new technologies always carry inherent risks, and AI has those same issues, as well as a host of its own.

On Wednesday, the Board of Governors of the Federal Reserve System, the Bureau of Consumer Financial Protection, the Federal Deposit Insurance Corporation, the National Credit Union Administration and the Office of the Comptroller of the Currency will publish a request for information in the Federal Register seeking feedback on AI uses and risk management in the financial sector.

All of these agencies have some regulatory oversight responsibility, including the use of new technologies and techniques, and the associated risks involved with any kind of innovation.

With appropriate governance, risk management, and compliance management, financial institutions use of innovative technologies and techniques, such as those involving AI, has the potential to augment business decision-making, and enhance services available to consumers and businesses, the request states.

Financial organizations are already using some AI technologies to identify fraud and unusual transactions, personalize customer service, help make decisions on creditworthiness, apply natural language processing to text documents, and bolster cybersecurity and general risk management.
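
The fraud and unusual-transaction use case is typically framed as anomaly detection. The sketch below shows one common approach, an isolation forest over simple transaction features; the features, synthetic data, and contamination rate are illustrative assumptions, not any institution's model.

```python
# Minimal sketch of unusual-transaction flagging via anomaly detection,
# using scikit-learn's IsolationForest on synthetic transaction features.
# Feature choice and contamination rate are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Columns: amount (dollars), hour of day, distance from home merchant (km)
normal = np.column_stack([rng.gamma(2, 40, 980),
                          rng.integers(8, 22, 980),
                          rng.gamma(2, 5, 980)])
fraud = np.column_stack([rng.gamma(2, 900, 20),
                         rng.integers(0, 5, 20),
                         rng.gamma(2, 400, 20)])
transactions = np.vstack([normal, fraud])

model = IsolationForest(contamination=0.02, random_state=0).fit(transactions)
flags = model.predict(transactions)          # -1 = anomalous, 1 = normal
print(f"flagged {np.sum(flags == -1)} of {len(transactions)} transactions")
```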

AI, like most other technologies, has the potential to automate some tasks and can help identify trends human analysts might have missed.

AI can identify relationships among variables that are not intuitive or not revealed by more traditional techniques, the RFI states. AI can better process certain forms of information, such as text, that may be impractical or difficult to process using traditional techniques. AI also facilitates processing significantly large and detailed datasets, both structured and unstructured, by identifying patterns or correlations that would be impracticable to ascertain otherwise.

That said, there are risks to deploying the new technology, as there are with any innovation disrupting a sector, such as automating discriminatory processes and policies, creating data leakage and sharing problems, and opening new cybersecurity weaknesses.

But AI also carries its own specific challenges. The financial agencies cite explainability, broader or more intensive data usage and dynamic updating as examples.

The request for information seeks to understand respondents' views on the use of AI by financial institutions in their provision of services to customers and for other business or operational purposes; appropriate governance, risk management, and controls over AI; and any challenges in developing, adopting, and managing AI.

The request also turns the tables on the agencies themselves, asking institutions about what assistance, regulations, laws and the like would help the sector better manage the promise and risk of AI.

The RFI includes 17 detailed questions. Responses are due 60 days after the publication date, on June 30.

Follow this link:

Regulators Want to Know How Financial Institutions Use AI and How They're Mitigating Risks - Nextgov

Posted in Ai | Comments Off on Regulators Want to Know How Financial Institutions Use AI and How They’re Mitigating Risks – Nextgov

A.I. researchers urge regulators not to slam the brakes on its development – CNBC

Posted: at 5:41 am

LONDON: Artificial intelligence researchers argue that there is little point in imposing strict regulations on AI development at this stage, as the technology is still in its infancy and red tape will only slow down progress in the field.

AI systems are currently capable of performing relatively "narrow" tasks such as playing games, translating languages, and recommending content.

But they're far from being "general" in any way, and some argue that experts are no closer to the holy grail of AGI (artificial general intelligence), the hypothetical ability of an AI to understand or learn any intellectual task that a human being can, than they were in the 1960s, when the so-called "godfathers of AI" had some early breakthroughs.

Computer scientists in the field have told CNBC that AI's abilities have been significantly overhyped by some. Neil Lawrence, a professor at the University of Cambridge, told CNBC that the term AI "has been turned into something that it isn't."

"No one has created anything that's anything like the capabilities of human intelligence," said Lawrence, who used to be Amazon's director of machine learning in Cambridge. "These are simple algorithmic decision-making things."

Lawrence said there's no need for regulators to impose strict new rules on AI development at this stage.

People say "what if we create a conscious AI and it's sort of a freewill" said Lawrence. "I think we're a long way from that even being a relevant discussion."

The question is, how far away are we? A few years? A few decades? A few centuries? No one really knows, but some governments are keen to ensure they're ready.

In 2014, Elon Musk warned that AI could "potentially be more dangerous than nukes" and the late physicist Stephen Hawking said in the same year that AI could end mankind. In 2017, Musk again stressed AI's dangers, saying that it could lead to a third world war and he called for AI development to be regulated.

"AI is a fundamental existential risk for human civilization, and I don't think people fully appreciate that," Musk said. However, many AI researchers take issue with Musk's views on AI.

In 2017, Demis Hassabis, the polymath founder and CEO of DeepMind, agreed with AI researchers and business leaders (including Musk) at a conference that "superintelligence" will exist one day.

Superintelligence is defined by Oxford professor Nick Bostrom as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest." He and others have speculated that superintelligent machines could one day turn against humans.

A number of research institutions around the world are focusing on AI safety, including the Future of Humanity Institute in Oxford and the Centre for the Study of Existential Risk in Cambridge.

Bostrom, the founding director of the Future of Humanity Institute, told CNBC last year that there are three main ways in which AI could end up causing harm if it somehow became much more powerful. They are:

"Each of these categories is a plausible place where things could go wrong," said the Swedish philosopher.

Skype co-founder Jaan Tallinn sees AI as one of the most likely existential threats to humanity. He's spending millions of dollars to try to ensure the technology is developed safely. That includes making early investments in AI labs like DeepMind (partly so that he can keep tabs on what they're doing) and funding AI safety research at universities.

Tallinn told CNBC last November that it's important to look at how strongly and how significantly AI development will feed back into AI development.

"If one day humans are developing AI and the next day humans are out of the loop then I think it's very justified to be concerned about what happens," he said.

But Joshua Feast, an MIT graduate and the founder of Boston-based AI software firm Cogito, told CNBC: "There is nothing in the (AI) technology today that implies we will ever get to AGI with it."

Feast added that it's not a linear path and the world isn't steadily progressing toward AGI.

He conceded that there could be a "giant leap" at some point that puts us on the path to AGI, but he doesn't view us as being on that path today.

Feast said policymakers would be better off focusing on AI bias, which is a major issue with many of today's algorithms. That's because, in some instances, algorithms have learned to do things like identify someone in a photo from human-generated datasets that have racist or sexist views baked into them.

The regulation of AI is an emerging issue worldwide and policymakers have the difficult task of finding the right balance between encouraging its development and managing the associated risks.

They also need to decide whether to try to regulate "AI as a whole" or whether to try to introduce AI legislation for specific areas, such as facial recognition and self-driving cars.

Tesla's self-driving technology is perceived as being some of the most advanced in the world, but the company's vehicles still crash into things; earlier this month, for example, a Tesla collided with a police car in the U.S.

"For it (legislation) to be practically useful, you have to talk about it in context," said Lawrence, adding that policymakers should identify what "new thing" AI can do that wasn't possible before and then consider whether regulation is necessary.

Politicians in Europe are arguably doing more to try to regulate AI than anyone else.

In February 2020, the EU published its draft strategy paper for promoting and regulating AI, while the European Parliament put forward recommendations in October on what AI rules should address with regard to ethics, liability and intellectual property rights.

The European Parliament said "high-risk AI technologies, such as those with self-learning capacities, should be designed to allow for human oversight at any time." It added that ensuring AI's self-learning capacities can be "disabled" if it turns out to be dangerous is also a top priority.

Regulation efforts in the U.S. have largely focused on how to make self-driving cars safe and whether or not AI should be used in warfare. In a 2016 report, the National Science and Technology Council set a precedent to allow researchers to continue to develop new AI software with few restrictions.

The National Security Commission on AI, led by ex-Google CEO Eric Schmidt, issued a 756-page report this month saying the U.S. is not prepared to defend or compete in the AI era. The report warns that AI systems will be used in the "pursuit of power" and that "AI will not stay in the domain of superpowers or the realm of science fiction."

The commission urged President Joe Biden to reject calls for a global ban on autonomous weapons, saying that China and Russia are unlikely to keep to any treaty they sign. "We will not be able to defend against AI-enabled threats without ubiquitous AI capabilities and new warfighting paradigms," wrote Schmidt.

Meanwhile, there are also global AI regulation initiatives underway.

In 2018, Canada and France announced plans for a G-7-backed international panel to study the global effects of AI on people and economies while also directing AI development. The panel would be similar to the international panel on climate change. It was renamed the Global Partnership on AI in 2019. The U.S. is yet to endorse it.

Follow this link:

A.I. researchers urge regulators not to slam the brakes on its development - CNBC

Posted in Ai | Comments Off on A.I. researchers urge regulators not to slam the brakes on its development – CNBC

Project Force: AI and the military a friend or foe? – Al Jazeera English

Posted: at 5:41 am

The accuracy and precision of today's weapons are steadily forcing contemporary battlefields to empty of human combatants.

As more and more sensors fill the battlespace, sending vast amounts of data back to analysts, humans struggle to make sense of the mountain of information gathered.

This is where artificial intelligence (AI) comes in: learning algorithms that thrive on big data; in fact, the more data these systems analyse, the more accurate they can be.

In short, AI is the ability of a system to think in a limited way, working specifically on problems normally associated with human intelligence, such as pattern and speech recognition, translation and decision-making.
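A minimal sketch of what a "learning algorithm" means in practice is a perceptron: a classifier whose decision boundary improves as it sees more labelled examples. The C below is illustrative only, under the assumption of linearly separable data; all names are ours.

    /* Illustrative perceptron: weights are nudged on every misclassified
       example, so the decision boundary improves as more data is seen. */
    #include <stdio.h>

    #define FEATURES 2

    void perceptron_train(double x[][FEATURES], const int *y, int n,
                          double *w, double *bias, int epochs, double lr) {
        for (int e = 0; e < epochs; e++) {
            for (int i = 0; i < n; i++) {
                double s = *bias;
                for (int j = 0; j < FEATURES; j++) s += w[j] * x[i][j];
                int pred = (s >= 0.0) ? 1 : -1;
                if (pred != y[i]) {                 /* mistake: adjust the boundary */
                    for (int j = 0; j < FEATURES; j++) w[j] += lr * y[i] * x[i][j];
                    *bias += lr * y[i];
                }
            }
        }
    }

    int main(void) {
        /* Four toy points with labels in {-1, +1} (an AND-like pattern). */
        double x[4][FEATURES] = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
        int y[4] = {-1, -1, -1, 1};
        double w[FEATURES] = {0, 0}, bias = 0;
        perceptron_train(x, y, 4, w, &bias, 20, 0.1);
        printf("w = (%.2f, %.2f), bias = %.2f\n", w[0], w[1], bias);
        return 0;
    }

The point of the toy is the feedback loop: each pass over the data corrects mistakes, which is the sense in which more data makes such systems more accurate.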

AI and machine learning have been a part of civilian life for years. Megacorporations like Amazon and Google have used these tools to build vast commercial empires based in part on predicting the wants and needs of the people that use them.

The United States military has also long invested in civilian AI, with the Pentagon's Defense Advanced Research Projects Agency (DARPA) funnelling money into key areas of AI research.

However, to tackle specific military concerns, the defence establishment soon realised its AI needs were not being met. So it approached Silicon Valley, asking for help in giving the Pentagon the tools it would need to process an ever-growing mountain of information.

Employees at several corporations were extremely uncomfortable with their research being used by the military and persuaded the companies, Google being one of them, to opt out of, or at least dial down, their cooperation with the defence establishment.

While the much-hyped idea of "killer robots", remorseless machines hunting down humans and terminating them for reasons known only to themselves, has caught the public's imagination, the current focus of AI could not be further from that.

As a recent report on the military applications of AI points out, the technology is central to providing robotic assistance on the battlefield, which will enable forces to maintain or expand warfighting capacity without increasing manpower.

What does this mean? In effect, robotic systems will do tasks considered too menial or too dangerous for human beings, such as unmanned supply convoys, mine clearance or the air-to-air refuelling of aircraft. AI is also a force multiplier, which means it allows the same number of people to do and achieve more.

An idea that illustrates this is the concept of the robotic Loyal Wingman being developed for the US Air Force. Designed to fly alongside a jet flown by a human pilot, this unmanned jet would fight off the enemy, be able to complete its mission, or help the human pilot do so. It would act as an AI bodyguard, defending the manned aircraft, and is also designed to sacrifice itself if there is a need to do so to save the human pilot.

A Navy X-47B drone, an unmanned combat aerial vehicle [File: AP]

As AI power develops, the push towards systems becoming autonomous will only increase. Currently, militaries are keen to have a human involved in the decision-making loop. But in wartime, these communication links are potential targets: cut off the head and the body would not be able to think. The majority of drones currently deployed around the world would lose their core functions if the data link connecting them to their human operator were severed.

This is not the case with the high-end, intelligence-gathering, unarmed Global Hawk drone, which, once given its orders, is able to carry them out independently without the need for a vulnerable data link, allowing it to be sent into highly contested airspace to gather vital information. This makes it far more survivable in a future conflict, and money is now pouring into new systems that can fly themselves, like France's Dassault Neuron or Russia's Sukhoi S70, both semi-stealthy autonomous combat drone designs.

AI programmes and systems are constantly improving, as their quick reactions and data processing allow them to finely hone the tasks they are designed to perform.

Robotic air-to-air refuelling aircraft have a better flight record and are able to keep themselves steady in weather that would leave a human pilot struggling. In war games and dogfight simulations, AI pilots are already starting to score significant victories over their human counterparts.

While AI algorithms are great at data-crunching, they have also started to surprise observers in the choices they make.

In 2016, when an AI programme, AlphaGo, took on a human grandmaster and world champion of the famously complex game of Go, it was expected to act methodically, like a machine. What surprised everyone watching was the unexpectedly bold moves it sometimes made, catching its opponent Lee Se-dol off-guard. The algorithm went on to win, to the shock of the tournament's observers. This kind of breakthrough in AI development had not been expected for years, yet here it was.

Machine intelligence is and will be increasingly incorporated into manned platforms. Ships will now have fewer crew members as the AI programmes will be able to do more. Single pilots will be able to control squadrons of unmanned aircraft that will fly themselves but obey that human's orders.

Facial recognition security cameras monitor a pedestrian shopping street in Beijing [File: AP]

AI's main strength is in the arena of surveillance and counterinsurgency: being able to scan images made available from millions of CCTV cameras; being able to follow multiple potential targets; using big data to finesse predictions of a target's behaviour with ever-greater accuracy. All this is already within the grasp of AI systems that have been set up for this purpose: unblinking eyes that watch, record, and monitor 24 hours a day.

The sheer volume of material that can be gathered is staggering and would be beyond the scope of human analysts to watch, absorb and fold into any conclusions they made.

AI is perfect for this, and one of the testbeds for this kind of analytical detection software is special operations, where it has seen significant success. The tempo of special forces operations in counterinsurgency and counterterrorism has increased dramatically, as information from a raid can now be quickly analysed and acted upon, leading to other raids that same night, which in turn yield more information.

This speed has the ability to knock any armed group off balance as the raids are so frequent and relentless that the only option left is for them to move and hide, suppressing their organisation and rendering it ineffective.

A man uses a PlayStation-style console to manoeuvre the aircraft, as he demonstrates a control system for unmanned drones [File: AP]

As AI military systems mature, their record of success will improve, and this will help overcome another key challenge in the acceptance of informationalised systems by human operators: trust.

Human soldiers will learn to increasingly rely on smart systems that can think at a faster rate than they can, spotting threats before they do. An AI system is only as good as the information it receives and processes about its environment, in other words, what it perceives. The more information it has, the more accurate it will be in its perception, assessment and subsequent actions.

The least complicated environment for a machine to understand is flight. Simple rules, a slim chance of collision, and relatively direct routes to and from its area of operations mean that this is where the first inroads into AI and relatively smart systems have been made. Loitering munitions, designed to search and destroy radar installations, are already operational and have been used in conflicts such as the war between Armenia and Azerbaijan.

Investment and research have also poured into maritime platforms. Operating in a more complex environment with sea life and surface traffic potentially obscuring sensor readings, a major development is in unmanned underwater vehicles (UUVs). Stealthy, near-silent systems, they are virtually undetectable and can stay submerged almost indefinitely.

Alongside the advances, there is a growing concern with how deadly these imagined AI systems could be.

Human beings have proven themselves extremely proficient in the ways of slaughter, but there is increased worry that these mythical robots would run amok and that humans would lose control. This is the central concern among commentators, researchers and potential manufacturers.

But an AI system would not get enraged, feel hatred for its enemy, or decide to take it out on the local population if its AI comrades were destroyed. It could have the Laws of Armed Conflict built into its software.
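What "built into its software" could mean is, at its simplest, a hard gate that every engagement decision must pass, as in the sketch below. The struct and field names are entirely hypothetical, and real compliance with the Laws of Armed Conflict is a far harder, largely unsolved problem than any checklist suggests.

    /* Hypothetical sketch: a conjunction of checks that must all hold before
       any engagement is released. This illustrates the idea of hard-coded
       constraints, not an actual or adequate encoding of the law. */
    #include <stdbool.h>

    typedef struct {
        bool target_positively_identified;   /* distinction */
        bool target_is_combatant;
        double expected_collateral;          /* proportionality estimate */
        double collateral_limit;
        bool human_authorisation;            /* human in the loop */
    } engagement_check_t;

    bool engagement_permitted(const engagement_check_t *c) {
        return c->target_positively_identified
            && c->target_is_combatant
            && c->expected_collateral <= c->collateral_limit
            && c->human_authorisation;
    }

The hard part, of course, is not the gate itself but producing trustworthy values for its inputs, which is exactly where the doubts raised below come in.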

The most complex and demanding environment is urban combat, where the wars of the near future will increasingly be fought. Conflicts in cities can overwhelm most human beings and it is highly doubtful a machine with a very narrow view of the world would be able to navigate it, let alone fight and prevail without making serious errors of judgement.

A man looks at a demonstration of human motion analysis software at the stall of an artificial intelligence solutions maker at an exhibition in China [File: Reuters]

While they do not exist now, killer robots continue to appear as a worry for many, and codes of ethics are already being worked on. Could a robot combatant indeed understand and be able to apply the Laws of Armed Conflict? Could it tell friend from foe, and if so, what would its reaction be? This applies especially to militias, soldiers from opposing sides using similar equipment, fighters who do not usually wear a defining uniform, and non-combatants.

The concern is so high that Human Rights Watch has called for the prohibition of fully autonomous AI units capable of making lethal decisions, urging a ban very much like those in place for mines and chemical and biological weapons.

Another main concern is that a machine can be hacked in ways a human cannot. It might be fighting alongside you one minute but then turn on you the next. Human units have mutinied and changed allegiances before, but to have one's entire army or fleet turned against one with a keystroke is a terrifying possibility for military planners. And software can go wrong. A pervasive phrase in modern civilian life is "sorry, the system is down"; imagine this applied to armed machines engaged in battle.

Perhaps the most concerning of all is the offensive use of AI malware. More than 10 years ago, the world's most famous cyber-weapon, Stuxnet, sought to insinuate itself into the software controlling the spinning of centrifuges refining uranium in Iran. Able to hide itself, it covered up its tracks, searching for a particular piece of code to attack that would cause the centrifuges to spin out of control and be destroyed. Although highly sophisticated for its time, it is nothing compared with what is available now and what could be deployed during a conflict.

The desire to design and build these new weapons that are expected to tip the balance in future conflicts has triggered an arms race between the US and its near-peer competitors Russia and China.

AI is not only empowering; it is asymmetric in its leverage, meaning a small country can develop effective AI software without the industrial might needed to research, develop and test a new weapons system. It is a powerful way for a country to leapfrog the competition, producing potent designs that will give it the edge needed to win a war.

Russia has declared this the new frontier for military research. President Vladimir Putin, in an address in 2017, said that whoever became the leader in the sphere of AI would become the ruler of the world. To back that up, the same year Russia's Military-Industrial Committee approved the integration of AI into 30 percent of the country's armed forces by 2030.

Current realities are different, and so far Russian ventures into this field have proven patchy. The Uran-9 unmanned combat vehicle performed poorly in the urban battlefields of Syria in 2018, often not understanding its surroundings or able to detect potential targets. Despite these setbacks, it was inducted into the Russian military in 2019, a clear sign of the drive in senior Russian military circles to field robotic units with increasing autonomy as they develop in complexity.

China, too, has clearly stated that a major focus of its research and development is how to win at "intelligent(ised) warfare". In a report into China's embrace and use of AI in military applications, the Brookings Institution wrote that it will include "command decision making, military deductions that could change the very mechanisms for victory in future warfare". Current areas of focus are AI-enabled radar, robotic ships and smarter cruise and hypersonic missiles, all areas of research that other countries are also pursuing.

An American military pilot flies a Predator drone from a ground command post during a night border mission [File: AP]

The development of military artificial intelligence, giving systems increasing autonomy, offers military planners a tantalising glimpse of victory on the battlefield, but the weapons themselves, and the countermeasures that would be aimed against them in a war of the near future, remain largely untested.

Countries like Russia and China with their revamped and streamlined militaries are no longer looking to achieve parity with the US; they are looking to surpass it by researching heavily into the weapons of the future.

Doctrine is key: how these new weapons will integrate into future war plans and how they can be leveraged for their maximum effect on the enemy.

Any quantitative leap in weapons design is always a concern, as it gives a country the belief that it could be victorious in battle, thus lowering the threshold for conflict.

As war speeds up even further, it will increasingly be left in the hands of these systems to do the fighting, to give recommendations and, ultimately, to make the decisions.

See more here:

Project Force: AI and the military a friend or foe? - Al Jazeera English

Posted in Ai | Comments Off on Project Force: AI and the military a friend or foe? – Al Jazeera English
