

Loyal Markets on the FX Market and AI Technology – GlobeNewswire

BELIZE CITY, Belize, Sept. 04, 2020 (GLOBE NEWSWIRE) -- With forex trading growing in popularity along with the artificial intelligence revolution, companies like Loyal Markets are playing their part in helping the industry realise the full potential of artificial intelligence in trading.

Both the forex and technology industries are changing and accelerating at an unprecedented rate. As regulation shifts to keep up with the growth, brokers are competing to unveil the latest technological advancements. As such, most have now expanded their offerings to include on-the-go trading through mobile apps. The challenge in the competitive field of forex trading, therefore, is to create a solution that stands out from the pack: one that adheres to regulatory changes while also meeting the needs of a new trading generation.

Loyal Markets has been using artificial intelligence to create a proprietary system that combines the machine learning of AI with the discretion of humans to analyse trading insights and to find trading patterns and trends with high odds of success.

Some of the most valuable information for retail investors in forex trading is currency patterns and trends. Investors of Loyal Markets can now select from a range of AI trading solutions on the trading platform to assist in their trading decisions.

"With the Intraday Pattern Feed and Trend Prediction Engine, using artificial intelligence to trade forex currency is now significantly simpler," said Will Colmore. "Retail traders and independent investment advisors can use the same technology as Wall Street firms to find patterns early."

This technology can also report the percentage of past outcomes that confirmed a trade signal. Pre-calculated through backtesting, this information enables Loyal Markets' Fund Management team to make informed decisions about a pattern using the AI's predictions.

About Loyal Markets

Loyal Markets is one of the world's leading brokerage firms. The company's mission is to expand internationally and become a global financial powerhouse. Uniting a workforce of specialized investment professionals globally, Loyal Markets also boasts comprehensive administrative support, state-of-the-art artificial intelligence and excellent risk control protocols.

Media Contact
Company: Loyal Markets
Contact Person: Will Colmore
Email: contact@loyalmarkets.com
Website: https://www.loyalmarkets.com
Telephone: +501 4892 5899
Address: 1782 Coney Dr, Belize City, Belize


Healthcare AI: How one hospital system is using technology to adapt to COVID-19 – TechRepublic

One Illinois hospital system is using artificial intelligence and other technologies to keep patients safe and the hospital in full operation during a pandemic.

TechRepublic's Karen Roby spoke with Jay Roszhart of Memorial Health System's Ambulatory Group in Illinois about artificial intelligence (AI) in hospitals. The following is an edited transcript of their conversation.

Karen Roby: The American Hospital Association estimates that hospitals have lost more than $200 billion because of the COVID-19 pandemic. Hospital leaders are always looking for ways to get patients back into doctors' offices and the hospitals in a safe and secure way.

SEE: TechRepublic Premium editorial calendar: IT policies, checklists, toolkits, and research for download (TechRepublic Premium)

Talk a little bit just to start us off here about the population that you serve there in Illinois.

Jay Roszhart: We're in Springfield, Ill. We're serving around half a million to a million people across 9 to 27 counties, depending on if you're looking at our primary or secondary service areas. We do that with five hospitals, as well as a physician services group. We've got well over 200 providers. It's physicians and advanced practice providers now, as well as a home health organization, home hospice, durable medical equipment, behavioral health, etc. We're a large integrated system. About $1.4 billion in revenue when we don't have a COVID-19 year.

Karen Roby: It's been a challenging year, to say the least.

Jay Roszhart: It has. The very first month of the pandemic, we saw our volumes decline about 40%. We actually saw about $100 million loss in one month, which is a larger loss than we've ever had in our history. Primarily, that is due to having to shut down things like clinics, having to shut down some of our more elective procedures, and figure out the processes that we needed to keep people safe so that they could still get the care they need, but in a safe way.

Karen Roby: Talk a little bit about the AI-based program and how that's helping you achieve just that.

Jay Roszhart: We're actually back to about 100% of our pre-COVID volumes in our ambulatory areas; that's our physician clinics. We do about 300,000 of those visits a year. And the way we've gotten back is by making it as safe as possible. One of the things we're doing is rolling out a chatbot to actually be the virtual waiting room. The silver lining in this pandemic is that it's forced hospitals, health systems, and physician practices to innovate and to think a little bit more consumer-friendly and a little bit more technology-forward.

We've always been technology-forward on the direct caregiving side, but this is more on some of the other things. You ever go to a physician office and have to fill out 15 forms? Think about all of the waste during that period of time when you are checking in, scheduling an appointment, giving them your insurance card. Somebody is entering that manually into a system. Somebody is then billing for it. Somebody is then coding for it. Somebody is then adjudicating that claim on the insurance side. There is all kinds of waste and all kinds of manual process that really is not related to a physician actually giving you care. We're using AI to try and automate that. We're using a chatbot to try and automate as much of that manual process as we can.

SEE: Natural language processing: A cheat sheet (TechRepublic)

Karen Roby: If there's one thing the pandemic has taught companies, school systems, and health systems, it's where they're lacking when it comes to technology. Some have moved really far down the line with digital transformation, whereas others are way behind and having to catch up. Talk a little bit about what a patient sees when they're interacting now. What is the message being sent that you can now come in safely to the hospital or the doctor's office? What does that look like for the patient?

Jay Roszhart: To keep people safe, what we're doing is we're doing as much as we can outside the physical four walls of the hospital. Whether that is telehealth visits through a platform like Zoom that we're on right now, or whether that's through in-person visits. But moving as much outside of the four walls of the clinic or the hospital as possible. Right now, what a patient would experience is they would actually get a text message saying, "Hey, you've got an upcoming appointment. Click on this link."

It would confirm who they are, and they would actually go through their phone ... their cell phone device, to actually complete all of that paperwork that they would normally complete sitting in the waiting room to do all of the registration, all of the check-in, to get the information on their insurance. It would also give them reminders about the day of in terms of, "Hey, your doctor's running a little bit ahead of time. If you can be here early, great." Or, "Hey, your doctor might be running a little bit late. If you want to push back your arrival time by five minutes, that's fine." And essentially all the way to the point where as soon as the patient's ready to be seen, they can walk straight into the office, bypass the waiting room, go straight to the doctor, and be seen by the doctor. That's the goal of this.

Karen Roby: How do you see long-term that things have changed or will change as a result of the last six months?

Jay Roszhart: I think long-term, first of all, the social and financial implications of COVID-19 are going to be long lasting. They really are. From a financial perspective, hospitals, health systems, everywhere are going to be looking for ways in which they can reduce costs, improve efficiencies, and attract and keep patients loyal to their health systems, and loyal to their physician practices, and loyal to their primary care panels.

SEE: AI on the high seas: Digital transformation is revolutionizing global shipping (free PDF) (TechRepublic)

One of the ways they're going to do that is to be more consumer-focused and a little bit more in touch with what a consumer seeking healthcare in the local community wants. In our local community, one of the things that we know they want is a little bit more technology, but they still want that personal touch. One thing that our solution has done in partnership with LifeLink, the company that's helping us develop the chatbot and the AI solution, is ensure that personal touch. They use a term called "person in the loop AI," which means that if the AI or the chatbot ever gets to the point where it doesn't understand something, you can link up to a real person.

You can escalate to that real person, who can intervene in real time to assist that individual, and still have that personal touch and that personal interaction. For our patients, that's really important. It's really important that they feel that local face. It's also really important from our safety standpoint that we can continue to deliver the care we want to deliver and the care patients need, but in a way that does not jeopardize their safety during a pandemic, when we should be social distancing, and we should be masking, and keeping people out of waiting rooms, etc.

Karen Roby: Obviously no one could anticipate where we would be right now if you asked in February where we'd be in August or September. But do you feel like you guys have moved through the most difficult part of it now, things are coming together, and the technology is starting to work to the benefit of your patients? Do you feel like you've made it through or over those speed bumps?

Jay Roszhart: I sure hope so. But hope's not a strategy. We are continually developing strategies to ensure if we do have any more speed bumps or if we do see the positivity rate grow, that we know what we're going to do next. I will say we're in a much better position now than we were back in March when this was really developing and starting to heat up. And that's despite our positivity rate being a lot higher than it was back in March. Not only nationally, but here locally. To me, that's a testament to some of the work that we've done. It speaks to our ability to really adapt much more quickly in the future. That adaptation is going to have to continue to come. Because regardless of what happens with the positivity rate, I think the financial and social implications that have occurred are going to force that adaptation even quicker.


Artificial Intelligence: How realistic is the claim that AI will change our lives? – Bangkok Post


Artificial Intelligence (AI) stakes a claim on productivity, corporate dominance, and economic prosperity with Shakespearean drama. AI will change the way you work and spend your leisure time, and it stakes a claim on your identity.

First, an AI primer.

Let's define intelligence before we get onto the artificial kind. Intelligence is the ability to learn. Our senses absorb data about the world around us. We can take a few data points and make conceptual leaps. We see light, feel heat, and infer the notion of "summer."

Our expressive abilities provide feedback, i.e., our data outputs. Intelligence is built on data. When children play, they engage in endless feedback loops through which they learn.

Computers, too, are deemed intelligent if they can compute, conceptualise, see and speak. A particularly fruitful area of AI is getting machines to enjoy the same sensory experiences that we have. Machines can do this, but they require vast amounts of data. They do it by brute force, not cleverness. For example, they determine that an image shows a cat by breaking the pixel data into little steps, repeating until done.

Key point: What we do and what machines do is not so different, but AI is more about data and repetition than it is about reasoning. Machines figure things out mathematically, not visually.

AI is a suite of technologies (machines and programs) that have predictive power, and some degree of autonomous learning.

AI consists of three building blocks: data, machine-learning algorithms, and predictions.

An algorithm is a set of rules to be followed when solving a problem. The speed and volume of data that can be fed into algorithms is more important than the "smartness" of the algorithms.

Let's examine these three parts of the AI process:

The raw ingredient of intelligence is data. Data is learning potential. AI is mostly about creating value through data. Data has become a core business asset because insights can be extracted from it. The more you have, the more you can do. Companies with a Big Data mindset don't mind filtering through lots of low-value data. The power is in the aggregation of data.

Building quality datasets for input is critical too, so human effort must first be spent obtaining, preparing and cleaning data. The computer does the calculations and provides the answers, or output.

Conceptually, Machine Learning (ML) is the ability to learn a task without being explicitly programmed to do so. ML encompasses algorithms and techniques that are used in classification, regression, clustering or anomaly detection.

ML relies on feedback loops. The data is used to make a model, which is then tested for how well it fits the data. The model is revised to fit the data better, and the process repeats until the model cannot be improved any more. Algorithms can be trained with past data to find patterns and make predictions.
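That loop is easy to see in miniature. Below is an illustrative sketch (not from the article; the data and numbers are invented) of a model being used, tested against the data, and revised until it stops improving:

```python
import numpy as np

# Toy data: noisy points along y = 2x + 1
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2 * x + 1 + rng.normal(0, 1, 100)

w, b = 0.0, 0.0  # initial model: y = 0x + 0
lr = 0.01        # how far each revision moves the model

for step in range(5000):
    pred = w * x + b            # use the model on the data
    err = pred - y              # test how well the model fits
    w -= lr * (err * x).mean()  # revise the model...
    b -= lr * err.mean()        # ...to better fit the data

print(f"learned: y = {w:.2f}x + {b:.2f}")  # approaches y = 2.00x + 1.00
```

Every pass through the loop is one feedback cycle: predict, measure the error, adjust.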

Key point: AI expands the set of tools that we have to gain a better grasp of finding trends or structure in data, and make predictions. Machines can scale way beyond human capacity when data is plentiful.

Prediction is the core purpose of ML. For example, banks want to predict fraudulent transactions. Telecoms want to predict churn. Retailers want to predict customer preferences. AI-enabled businesses make their data assets a strategic differentiator.
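As a sketch of what such a prediction task looks like in practice (synthetic records and invented feature names; a real churn model would be trained on the telecom's own customer history):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic customer records: monthly_spend, support_calls, tenure_months
rng = np.random.default_rng(1)
n = 1000
X = np.column_stack([
    rng.uniform(10, 100, n),  # monthly_spend
    rng.poisson(2, n),        # support_calls
    rng.uniform(1, 60, n),    # tenure_months
])
# Invented ground truth: frequent callers with short tenure churn more often
churned = X[:, 1] * 5 - X[:, 2] * 0.5 + rng.normal(0, 3, n) > 0

# Train on some customers, then check predictions on customers held out
X_train, X_test, y_train, y_test = train_test_split(X, churned, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```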

Prediction is not just about the future; it's about filling in knowledge gaps and reducing uncertainty. Prediction lets us generalise, an essential form of intelligence. Prediction and intelligence are tied at the hip.

Let's examine the wider changes unfolding.

AI increases our productivity. The question is how we distribute the resources. If AI-enhanced production only requires a few people, what does that mean for income distribution? All the uncertainties are on how the productivity benefits will be distributed, not how large they will be.

Caution:

ML is already pervasive on the internet. Will the democratisation of access brought on by the internet continue to favour global monopolies? Unprecedented economic power rests in a few companies (you can guess which ones) with global reach. Can the power of channelling our collective intelligence continue to be held by these companies, which are positioned to influence our private interests with their economic interests?

Nobody knows if AI will produce more wealth or economic precariousness. Absent various regulatory measures, it is inevitable that it will increase inequality and create new social gaps.

Let's examine the impact on everyone.

As with all technology advancements, there will be changes in employment: the number of people employed, the nature of jobs and the satisfaction we will derive from them. However, with AI all classes of labour are under threat, including management. Professions involving analysis and decision-making will become the province of machines.

New positions will be created, but nobody really knows if new jobs will sufficiently replace former ones.

We will shift more to creative or empathetic pursuits. To the extent incomes fall short, should we be rewarded for contributing in our small ways to the collective intelligence? Universal basic income is one option, though it remains theoretical.

Our use of technology (mobile phones, web clicks, sensors) leaves a digital trail that is fed into corporate and governmental computers. For governments, AI opens new doors to perform surveillance, predictive policing, and social shaming. For corporates, it's not clear whether surveillance capitalism, the commercialisation of your personal data, will be personalised to you, or for you. Will it direct you where they want you to go, rather than where you want to go?

How will your data be a measure of you?

The interesting angle emerging is whether we will be hackable. That's when the AI knows more about you than you know about yourself. At that point you become completely influenceable, because you can be made to think and react as directed by governments and corporates.

We do need artificial forms of intelligence because our prediction abilities are limited, especially when handling big data and multiple variables. But for all its stunning accomplishments, AI remains very specific. Learning machines are circumscribed to very narrow areas of learning. The DeepMind system that wins systematically at Go can't eat soup with a spoon or predict the next financial crisis.

Filtering and personalisation engines have the potential to both accommodate and exploit our interests. The degree of change will be propelled, and restrained, by new regulatory priorities. The law always lags behind technology, so expect the slings and arrows of our outrageous fortune.

Author: Greg Beatty, J.D., Business Development Consultant. For further information please contact gregfieldbeatty@gmail.com

Series Editor: Christopher F. Bruton, Executive Director, Dataconsult Ltd, chris@dataconsult.co.th. Dataconsult's Thailand Regional Forum provides seminars and extensive documentation to update business on future trends in Thailand and in the Mekong Region.


NASAs impressive new AI can predict when a hurricane intensifies – The Next Web

Meteorologists have gotten pretty damn good at forecasting a hurricane's track. But they still struggle to calculate when it will intensify, as it's seriously hard to understand what's happening inside a tropical cyclone.

A new machine learning model developed by NASA could dramatically improve their calculations, and give people in a hurricane's path more time to prepare.

Scientists at the space agency's Jet Propulsion Laboratory in Southern California developed the system after searching through years of satellite data.

They discovered three strong signals that a hurricane will become more severe: abundant rainfall inside the storm's inner core; the amount of ice water in the clouds within the tropical cyclone; and the temperature of the air flowing away from the eye of the hurricane.

[Read: We asked 3 CEOs what tech trends will dominate post-COVID]

The team then used IBM Watson Studio to build a model that analyzes all these factors, as well as those already used by the National Hurricane Center, a US government agency that monitors hazardous tropical weather.

The researchers trained the model to detect when a hurricane will undergo rapid intensification (which happens when wind speeds increase by 56 kmph or more within 24 hours) on storms that swept across the US between 1998 and 2008. They then tested it on a separate set of storms that hit the country from 2009 to 2014. Finally, they compared the system's forecasts to the model used by the National Hurricane Center for the latter set of storms.
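The paper's model isn't reproduced in the article, but the setup it describes, training on one era of storms and testing on a later one, is a standard supervised-learning pattern. A toy sketch with invented data (the feature names are assumptions standing in for the three signals above, not NASA's code):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

def make_storms(n):
    # Three invented predictors per storm snapshot, standing in for
    # inner-core rain rate, cloud ice water content, and outflow temperature
    X = rng.normal(size=(n, 3))
    # Toy label: "rapid intensification" (+56 kmph within 24 hours) is more
    # likely when all three signals are elevated
    y = X.sum(axis=1) + rng.normal(0, 1, n) > 1.5
    return X, y

X_train, y_train = make_storms(500)  # stands in for the 1998-2008 storms
X_test, y_test = make_storms(200)    # stands in for the 2009-2014 storms

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("accuracy on later storms:", model.score(X_test, y_test))
```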

The team says their model was 60% more likely to predict a hurricane's winds would increase by at least 56 kmph within 24 hours. For hurricanes whose winds shot up by at least 64 kmph, the new system had a 200% higher chance of detecting these events.

The team is now testing the model on storms during the current hurricane season. If that proves successful, it could help minimize the loss of life and property caused when future tropical cyclones hit.

You can read a research paper on the model in the journal Geophysical Research Letters.


Published September 3, 2020 10:23 UTC


A London AI Hub, a Facility Bigger than the Louvre, Are Among the Newest Footprint Expansions in the Life Sciences Industry – BioSpace

GlaxoSmithKline has opened a new $13 million research hub in London focused on artificial intelligence. The new hub is close to a similar research facility owned by Internet giant Google, which is using AI in its own life sciences research.

The GSK site will draw on the expertise of other AI-focused companies as it moves forward with drug discovery efforts, Pharmophorum reported. The drug developer intends to rely on AI companies to investigate the gene-related causes of some diseases, as well as to screen for potential drugs, according to the report. GSK's new London facility will become the home of 30 scientists and engineers. The employees based at the facility are expected to begin collaborating with companies such as Cerebras, the Crick Institute and the Alan Turing Institute.

As GSK moves forward with its new AI-focused research, Chief Executive Officer Emma Walmsley told the London Evening Standard that she hopes the new site will become a beacon for jobs, attracting machine learning experts and programmers who might traditionally eye Silicon Valley.

"Using technologies like AI is a critical part of helping us to discover and develop medicines for serious diseases," Walmsley said, according to the report.

In addition to the AI employees in London, GSK also has other employees skilled in the discipline based in San Francisco and Boston.

GSK isn't the only company expanding its footprint. Korea's Samsung Biologics is spending $2 billion on a new manufacturing plant that is expected to become the largest of its kind across the globe. The Wall Street Journal quipped that the Samsung site will be larger than the Louvre, the former royal residence and current museum in Paris that takes up 652,500 square feet.

The Samsung site, which will be approximately 230,000 square meters, will support manufacturing of biologics used by some of the biggest drugmakers in the world, including Bristol Myers Squibb and GSK. In an interview with The Wall Street Journal, CEO Kim Tae-han said the demand for biologics used in the effort to combat COVID-19 highlighted the need for a larger-than-expected facility.

"Covid-19 is giving us more opportunity than crisis," Kim said, according to the report.

There is also growth taking place in the United States. The Boston Business Journal reported that four life science companies are leasing a 214,440-square-foot, four-story lab building in Lexington, Mass. The building will become the home of Dicerna Pharmaceuticals, Frequency Therapeutics, Integral Health and Voyager Therapeutics, the Journal reported.

The four-story building was constructed by King Street Properties with life science companies in mind. Although it was not built for a specific client, it drew interest from prospective tenants across the region, King Street Properties told the Journal.

The Journal also reported the breakdown of the amount of space to be used by each company at the Lexington site.


Robotics and AI leaders spearheading the battle with COVID-19 – ShareCafe

Alex Cook's 13 May 2020 blog highlighted the role of robotics and artificial intelligence (A.I.) technologies in fighting the spread of COVID-19.

In today's post, we look behind the ticker of the BetaShares Global Robotics and Artificial Intelligence ETF (ASX: RBTZ) at some of the leading companies in this space, and at how they have contributed to fighting the pandemic, or are well placed to benefit from economic, social and geopolitical shifts borne out of the crisis.

The most visually obvious contribution of robotics and A.I. to combating COVID-19 has been the development of autonomous robots in healthcare, such as Omron's LD-UVC, shown in Figure 1 below. Omron makes up 4.5% of RBTZ's index (as at 21 August 2020). Their ground-breaking LD-UVC disinfects a premises by eliminating 99.9% of bacteria and viruses, both airborne and droplet, with a precise dosage of UVC energy [1].

Figure 1: The LD UVC, developed by Omron Asia Pacific, in conjunction with Techmetics Robotics

Reducing the risk of human exposure to the coronavirus is one application of robotics, while scaling up our capacity for clinical testing is another critical element of the fight.

Swiss healthcare company Tecan Group, which makes up 5.3% of RBTZ's index (as at 21 August 2020), is a market leader in laboratory instruments, reagents and smart consumables used to automate diagnostic workflows in life sciences and clinical testing laboratories.

Tecan has experienced strong demand for its products to help in the global fight against the coronavirus pandemic, resulting in a substantial increase in sales and a surge in orders in the first half of 2020.

Automation is critical for countries attempting to scale up their COVID-19 testing capacity. Tecan is aiming to double production of its laboratory automation solutions and disposable pipette tip products, and has accessed emergency stockpiles to keep up with the massive demand [2].

Californian company Nvidia makes up 9.4% of the index which RBTZ aims to track (as at 21 August 2020), making it the Fund's largest holding. Nvidia is at the forefront of deep learning, artificial intelligence, and accelerated analytics. Nvidia was able to design and build the world's seventh-fastest supercomputer in three weeks, a task that normally takes many months, to be used by the U.S. Argonne National Laboratory to research ways to stop the coronavirus [3].

Supercomputers are proving to be a critical tool in many facets of responding to the disease, including predicting the spread of the virus, optimising contact tracing, allocating resources and providing decisions for physicians, designing vaccines and developing rapid testing tools.

Then there are companies and products that are helping us adapt to a post-COVID world and beyond.

Keyence Corporation, from Japan, has positioned itself at the forefront of several key trends in an era of increasing factory automation. In the wake of the COVID-19 crisis, factories have never faced such an urgent need to replace humans with machines to keep production lines running.

Keyence specialises in automation systems for manufacturing, food processing and pharma: machine vision systems, sensors, laser markers, measuring instruments and digital microscopes. Think precision tools and quality-control sensors that eliminate or detect infinitesimal assembly-line mistakes, improving throughput and reducing wastage and costly shutdowns.

Its focus on product innovation and its direct-sales model give it a competitive advantage, making it better able to adapt to new manufacturing processes and workflows while introducing high-value client solutions.

Keyence has maintained an operating profit margin above 50%, has no net debt, and managed to increase its dividend for the 2020 financial year, becoming Japan's third-largest company by market value [4].

One unfortunate consequence of the virus crisis has been the straining of international relations and a deterioration of the rules-based order. AeroVironment is a global leader in unmanned aircraft systems, or drones, and tactical missile systems. It is the number one supplier of small drones to the U.S. military. The Australian Defence Force is also an AeroVironment customer [5], with spending on drone and military technology expected to increase after the release of the 2020 Defence Strategic Update in July [6].

Beyond weapons systems, AeroVironment is also leading the evolution in stratospheric unmanned flight with the development of the Sunglider solar-powered high-altitude pseudo-satellite (HAPS), currently undergoing testing at Spaceport America in New Mexico. AeroVironment recently announced it was building a drone helicopter that will be deployed to Mars along with NASA's Perseverance rover in 2021 [7]. The Mars Helicopter will be the first aircraft to attempt controlled flight on another planet, in its mission searching for signs of habitable conditions and evidence of past microbial life.

A simple and cost-effective method of accessing the dynamic and fast-growing robotics and A.I. thematic is available on the ASX through the BetaShares Global Robotics and Artificial Intelligence ETF (ASX: RBTZ), which invests in companies from across the globe involved in robotics and artificial intelligence.

This includes exposure to the companies mentioned in this article, and other leaders expected to benefit from the increased adoption and utilisation of robotics and A.I. Over the 12 months to 31 July 2020, RBTZ returned 23.7%, outperforming the broad global MSCI World Index (AUD) shares benchmark by 20.6% [8].

There are risks associated with an investment in the Fund, including concentration risk, robotics and A.I. companies risk, smaller companies risk and currency risk. For more information on risks and other features of the Fund, please see the Product Disclosure Statement, available at www.betashares.com.au.

ENDNOTES


AI enhanced content coming to future Android TVs – Android Authority

Whenever an event like IFA rolls around, the artificial intelligence buzzword emerges to dazzle prospective customers and investors. However, the number of genuinely impressive use cases for AI is increasing. TCL, one of the industry's biggest TV brands, showcased the AI capabilities of its second-generation AiPQ Engine onstage at IFA. Get ready for Android TVs and other smart TVs with AI enhancements in the near future.

TCL's little chip leverages machine learning to recognize parts of video content, such as landscape backgrounds, or faces to ensure accurate skin mapping. The AI processor can also adjust audio playback based on the scene content or music, and raise or lower volume based on ambient sounds in your living room. TCL also envisions it being used to dynamically upscale 4K content using super-resolution enhancements.

The bottom line is that this AI display processor can detect and enhance both audio and visual content dynamically, rather than relying on adaptive presets and standard settings. The video embedded below showcases some of the processor's features.

It looks pretty nifty, especially when combined with TCL's other TV innovations. These include QLED and mini-LED display technology, living room hands-free voice controls, and pop-up cameras for making calls and chatting on social media. Keep an eye out for future TCL Android TVs sporting these enhanced AI capabilities.

See also: AI-enhanced displays are coming to affordable smartphones


Law and Justice Powered by Artificial Intelligence? It’s Already a Reality – JD Supra

The AI wave in law is not coming; it's already here, and it's already transforming law firms.

Change happens faster than we predict. It is also happening more frequently. Consider: China is launching an online AI arbitrator this year. The United Nations wants to improve access to justice through AI judges and has been actively working on this for four years. A handful of firms have built digital assistants to help legal teams comply with case rules, reducing time and expenses that are not actually billable.

Now factor in COVID-19. While it has been a pox on our lives, it has also been a great accelerator for innovation. With physical courtrooms closed, it accelerated the adoption of virtual courtrooms. Law firms that never thought a remote workforce would be effective are now wondering why they need huge offices when people seem to be working more effectively from home. Both the courts and firms are also turning more to AI-powered solutions to improve operational collaboration and efficiency, as well as to establish deeper engagement with petitioners and clients.

Society has entered the fourth industrial revolution, which will trigger massive changes in how all industries operate. Artificial intelligence (AI) is one of the biggest drivers fueling this revolution. Most law firms, court systems, and government agencies agree that AI will have a huge impact in three areas.

The question, though, is where are we currently with AI in these three impact areas? Much further along than people probably realize.

For many years, there wasn't much resonance in law firms regarding AI. Most of the interest was in wondering how laws would change and anticipating how this would drive new legal matters. In personal interactions with law firm leadership, there was a lack of trust in the technology. Many managing partners expressed disbelief that a machine could do some of the things a lawyer does, or possibly do them as well, if not better. "Even at a 99% confidence level, lawyers ask about the 1%," states Susan Wortzman, Partner at MT>3, a division of McCarthy Tetrault. She emphasizes the need to build trust in the technology before firms will embrace adoption, and it is a much higher bar to get there than in other industries. That's why MT>3 has focused on creating buy-in for so long. They make sure people understand the technology, and their process includes trust-building activities and system validation. Wortzman also states that law firms must overcome the perception that AI is a pure corporate cost by emphasizing the industry pressures for better efficiencies in legal operations.

Wortzman outlines an effective (and effort-intensive) approach. Asked to help one of the largest international law firms, I was sitting in a room with a few of the managing partners and the firm's Operations and IT leadership. Wanting to use AI, the firm had outlined a series of use cases ranging from better calendar management to timesheet management. Reviewing their list, I made the blunt statement that AI wasn't really needed for these items. One of the managing partners pounded the table with his fist and said, "That's what I thought! What's the big deal with AI? It's all hype, right?"

Not to be dissuaded, I said, let's see if there's something you need where AI could actually add value. So, I asked this partner what his biggest problem was, and it turned out to be talent management. The firm had issues in recruiting and assessing talent. They could not effectively figure out if a person would succeed as a trial lawyer, rainmaker, etc. In fact, they had let go of people they thought were mediocre only to see these people turn into superstar lawyers for other firms.

Well, AI can help with exactly this, and I explained how it could be applied to solve the problem. Their first reaction was disbelief. Even after walking them through similar success stories in other industries and explaining how the firm could implement this, there was still some skepticism that a machine could do this better, but the law firm moved forward with it.


Then, something changed two years ago that triggered a tsunami of interest and investment by the law firms. I was suddenly getting a flood of requests to help firms figure out competitive advantages by building their own AI solutions. Even the mega firms were moving rapidly and launching several AI projects at once. Moreover, some established lawyers were slowing down their practice of law to pursue entrepreneurial AI ventures. What was the inflection point that triggered this rapid change? Well, it was a combination of a few things starting with client needs.

"AI is like sex in high school: everybody says they're doing it but very few actually are," says Richard Robbins, Director of Knowledge Management at Sidley Austin. AI is a buzzword, and it spurs interest and engagement from clients. However, clients are fueling the move to AI by pressuring revenue-model changes in law firms. The past decade produced a lot of case rules on what can be billed or expensed. Now, clients want more cost certainty and fixed-bid pricing. To accomplish this, law firms are looking to streamline their legal operations, and they're finding a powerful ally in AI tools. The AI software in e-discovery, case rules, and admin tools such as timesheets and calendars has reduced costs and freed up more resource time for things that are billable, or that contribute to more robust cost estimates from which a firm can confidently quote a fixed price for clients. Robbins stresses that people hire firms for the wisdom and knowledge of the lawyers. Thus, the more mundane administrative tasks are immaterial, so there is a natural fit in finding technology solutions and opportunities.

Beyond automation, AI solutions can also provide insight that might be invaluable for firms and clients. Consider the matter of the chicken and Walmart. A man (who, ironically, is a dentist) bought a whole chicken from Walmart. When he bit into the gizzard, he broke his tooth on a stone, so he sued Walmart for damages. Typically, Walmart might have settled this case out of court. However, they use an AI-powered solution from Legalmation, a company started by three lawyers. Legalmation's AI can read court documents and generate interrogatories. In reviewing this case, the AI called out the material fact that when chickens eat, they swallow stones, which get stored in the gizzard. Therefore, by eating the gizzard, the plaintiff should have been aware of this fact and should not hold Walmart accountable. This was the argument that allowed Walmart to get a favorable ruling. Now, how many lawyers might have unearthed this material fact? Or considered there was enough promise to even do the research? This is the added value we can get from AI insights.

Changing competitors into clients is another driver of AI adoption and innovation for firms. Remember the gold rush stories and the people who actually got rich? It was the people who sold the equipment. It is not enough simply to use these AI tools, because most firms will be using them. A few firms have realized that if they build their own tools, they will have a huge competitive advantage, and those tools can then be licensed to other firms, turning them into clients.

Per Jeff Richardson, Partner and Chair of the Technology Committee at Adams and Reese LLP, "Lawyers who use tools will be better attorneys and provide better quality of service. In turn, firms that create the best tools will have a large pool of lawyers who would like to use those tools." It's a model that is quickly taking root, especially at a time when firms are starting to hire business executives to run the operations of the firm, in effect treating things more like a traditional business.

Lastly, coming to grips with how rapidly law and the future of the profession are changing is a big driver for firms using AI. Consider the recent case in which Alexa was called as a witness in a double-homicide trial. If you're the defense attorney, how do you cross-examine Alexa? Now take this a step further with the ongoing rise in the use of robots (which are already factory workers, bartenders, tour guides, hotel receptionists, and elderly caretakers).


China is already rolling out an AI arbitrator in virtual courtrooms. The United Nations has been actively pursuing AI robot judges. Firms are starting to realize that the nature of the work is changing (less administrative, more complex tasks like case strategy or business development). However, how do they get ready to try a case in front of a robot judge? What if the opposing counsel is a robot? As Richardson points out, "Law school teaches you to think like a lawyer but not really what it means to be a lawyer, unless you're in a clinical program." In the near future, firms will be looking to AI to help create "information arbitrage" opportunities (as Robbins coined it). That is, the firms that can better identify information sets (even something small like chicken gizzards) and transform them into actionable insights will be the ones sought by clients.

"In the future, firms not using AI will fall away," states Wortzman. The AI wave in law is not coming; it's already here, and it's already transforming law firms. It's a newer model, but one that aligns with the needs of clients, changing laws and regulations, and how lawyers will perform work. Any change is difficult, in part because many people don't like change. This, however, is a time of adapt-and-overcome or die. So where should lawyers start? As Chuck Rossman, Chief Operations Officer at MT>3, eloquently puts it: "Don't be afraid of change. Don't change for change's sake. Explore." Firms need to marry legal, business, compliance, and IT as a collaborative team. The best AI solutions in law have come from people looking to solve a pain point in the system or the firm by thinking beyond just automation (faster, cheaper, fewer errors) and finding innovation: a better way of performing legal services.

*

Neil Sahota is an IBM Master Inventor, United Nations (UN) Artificial Intelligence (AI) subject matter expert, and Professor at UC Irvine. With 20+ years of business experience, Neil works to inspire clients and business partners to foster innovation and develop next generation products/solutions powered by AI.

neil@neilsahota.com http://www.neilsahota.com

See original here:

Law and Justice Powered by Artificial Intelligence? It's Already a Reality - JD Supra

Posted in Ai

Are China and South Korea quietly dominating AI innovation? – Tech Wire Asia

China is developing its artificial intelligence (AI) industry to accelerate its national strategy of Made in China 2025. Source: Shutterstock

Artificial intelligence (AI) has already been identified as a crucial technology front, as nations and companies jockey to gain the edge in developing AI-driven applications. The potential impact of AI in today's business cannot be overstated, with AI considered a force multiplier because of its capacity to amplify company resources and maximize output.

Technology powerhouses are well aware of the ability of AI to transform businesses in a variety of ways, which explains why so much money is being poured into AI startups. Spend on AI systems is expected to top US$77.6 billion in 2022, according to one IDC report, while another commissioned by Microsoft illustrated that AI will almost double the rate of innovation and workforce productivity in the Asia Pacific (APAC) region in the next three years.

With plenty of innovation being driven by AI, protecting these artificial intelligence inventions becomes crucial as well. And not just for organizations: the US government has pledged to boost spending on AI next year by as much as US$1.5 billion, with US chief technology officer Michael Kratsios stating that the Trump administration had taken "unprecedented action to prioritize American leadership in AI [...]" as the technology is increasingly seen as having strategic implications for the innovation leaders.

For now, the US maintains a wide lead in AI development as well as in the number of artificial intelligence patents that have been granted. But over the past two years, Chinese and South Korean technology firms have significantly increased their filing of AI patent applications.

According to patent application statistics released last week, China's patent office processed 4,636 AI patent applications over the last two years, or 64.8% of all such IP claims since 2018. Patent figures compiled by RS Components also list Chinese companies and universities as dominating the list of top patent filers.

China's patent office has processed around two-thirds of all AI applications in the past couple of years, but the single entity with the most AI patents filed is LG Electronics of South Korea, with 731 applications. Mirroring the 5G patent battle, where Korean and Chinese firms are also the leading patent filers not just in APAC but worldwide, the second-most patents have been applied for by Ping An Technology, a Chinese AI technology developer and cloud provider, with 308 patent applications in total.

China has also recently shown signs of realizing AI's strategic importance, with the government just amending its list of technologies that are restricted or banned from being exported out of the country to include AI.

It is worth noting that while South Korea and especially China are coming on strong in filing AI patent applications, the runaway leader in terms of AI-related patents is still Intel Corp., which has been granted nearly 45,600 patents around artificial intelligence alone.

As it stands, both China and South Korea have earmarked AI as one of the cornerstone technologies to help revitalize their post-pandemic economic recovery. In fact, recent AI talent analysis by MacroPolo found that China is the largest source of top-tier AI researchers, with 29% of top AI talent coming from Chinese universities, ahead of 20% from the US.

Joe Devanesan | @thecrystalcrown

Joe's interest in tech began when, as a child, he first saw footage of the Apollo space missions. He still holds out hope to either see the first man on Mars, or Jetsons-style flying cars in his lifetime.


Human-centered redistricting automation in the age of AI – Science Magazine

Redistricting, the constitutionally mandated, decennial redrawing of electoral district boundaries, can distort representative democracy. An adept map drawer can elicit a wide range of election outcomes just by regrouping voters (see the figure). When there are thousands of precincts, the number of possible partitions is astronomical, giving rise to enormous potential for manipulation. Recent technological advances have enabled new computational redistricting algorithms, deployable on supercomputers, that can explore trillions of possible electoral maps without human intervention. This leaves us to wonder if Supreme Court Justice Elena Kagan was prescient when she lamented, "(t)he 2010 redistricting cycle produced some of the worst partisan gerrymanders on record. The technology will only get better, so the 2020 cycle will only get worse" (Gill v. Whitford). Given the irresistible urge of biased politicians to use computers to draw gerrymanders, and the capability of computers to autonomously produce maps, perhaps we should just let the machines take over. The North Carolina Senate recently moved in this direction when it used a state lottery machine to choose from among 1,000 computer-drawn maps. However, improving the process and, more importantly, the outcomes results not from developing technology but from our ability to understand its potential and to manage its (mis)use.

It has taken many years to develop the computing hardware, derive the theoretical basis, and implement the algorithms that automate map creation (both generating enormous numbers of maps and uniformly sampling them) (1-4). Yet these innovations have been easy compared with the very difficult problem of ensuring fair political representation for a richly diverse society. Redistricting is a complex sociopolitical issue for which the role of science and the advances in computing are nonobvious. Accordingly, we must not allow a fascination with technological methods to obscure a fundamental truth: The most important decisions in devising an electoral map are grounded in philosophical or political judgments about which the technology is irrelevant. It is nonsensical to completely transform a debate over philosophical values into a mathematical exercise.

As technology advances, computers are able to digest progressively larger quantities of data per time unit. Yet more computation is not equivalent to more fairness. More computation fuels an increased capacity for identifying patterns within data. But more computation has no relationship with the moral and ethical standards of an evolving and developing society. Neither computation nor even an equitable process guarantees a fair outcome.

The way forward is for people to work collaboratively with machines to produce results not otherwise possible. To do this, we must capitalize on the strengths and minimize the weaknesses of both artificial intelligence (AI) and human intelligence. Ensuring representational fairness requires metacognition that integrates creative and benevolent compromises. Humans have the advantage over machines in metacognition. Machines have the advantage in producing large numbers of rote computations. Although machines produce information, humans must infuse values to make judgments about how this information should be used (5).

Markedly different outcomes can emerge when six Republicans and six Democrats in these 12 geographic units are grouped into four districts. A 50-50 party split can be turned into a 3:1 advantage for either party. When redistricting a state with thousands of precincts, the potential for political manipulation is enormous.
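The figure's point can be checked by brute force: enumerate every way to split 12 voters (6 R, 6 D) into four districts of three and tally the seat outcome of each grouping. The sketch below ignores geography and contiguity, which real plans must respect:

```python
from itertools import combinations

voters = ['R'] * 6 + ['D'] * 6  # 12 geographic units: 6 Republican, 6 Democratic

def partitions(units):
    """Yield every way to split `units` into unordered groups of 3."""
    if not units:
        yield []
        return
    first, rest = units[0], units[1:]
    for pair in combinations(range(len(rest)), 2):
        district = [first] + [rest[i] for i in pair]
        remaining = [u for i, u in enumerate(rest) if i not in pair]
        for others in partitions(remaining):
            yield [district] + others

outcomes = {}
for plan in partitions(list(range(12))):
    r_seats = sum(1 for d in plan if sum(voters[i] == 'R' for i in d) >= 2)
    outcomes[r_seats] = outcomes.get(r_seats, 0) + 1

print(outcomes)  # seat splits ranging from 1-3 to 3-1
```

Even in this toy setting there are 15,400 distinct plans, and the same electorate yields anything from a 1-3 to a 3-1 seat split.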

Accordingly, machines can be tasked with the menial aspects of cognition: the meticulous exploration of the astronomical number of ways in which a state can be partitioned. This helps us classify and understand the range of possibilities and the interplay of competing interests. Machines enhance and inform intelligent decision-making by helping us navigate the unfathomably large and complex informational landscape. Left to their own devices, humans have shown themselves to be unable to resist the temptation to chart biased paths through that terrain.

The ideal redistricting process begins with humans articulating the initial criteria for the construction of a fair electoral map (e.g., population equality, compactness measures, constraints on breaking political subdivisions, and representation thresholds). Here, the concerns of many different communities of interest should be solicited and considered. Note that this starting point already requires critical human interaction and considerable deliberation. Determining what data to use, and how, is not automatable (e.g., citizen voting age versus voting age population, relevant past elections, and how to forecast future vote choices). Partisan measures (e.g., mean-median difference, competitiveness, likely seat outcome, and efficiency gap) as well as vote prediction models, which are often contentious in court, should be transparently specified.
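Of the partisan measures named above, the efficiency gap is the easiest to state precisely: it compares the two parties' "wasted" votes, meaning votes cast for a losing candidate plus winning votes beyond the majority threshold. A minimal illustration (the example numbers and sign convention are ours, not the article's):

```python
def efficiency_gap(district_votes):
    """district_votes: list of (party_a_votes, party_b_votes) per district.
    A positive result means the map wastes more of party A's votes."""
    wasted_a = wasted_b = total = 0
    for a, b in district_votes:
        need = (a + b) // 2 + 1       # votes needed to win the district
        if a > b:
            wasted_a += a - need      # A's surplus winning votes are wasted
            wasted_b += b             # all of B's losing votes are wasted
        else:
            wasted_a += a
            wasted_b += b - need
        total += a + b
    return (wasted_a - wasted_b) / total

# Party A wins one packed district and narrowly loses three others
print(efficiency_gap([(90, 10), (45, 55), (45, 55), (45, 55)]))  # 0.38
```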

Once we have settled on the inputs to the algorithm, the computational analysis produces a large sample of redistricting plans that satisfy these principles. Trade-offs usually arise (e.g., adhering to compactness rules might require splitting jagged cities). Humans must make value-laden judgments about these trade-offs, often through contentious debate.

The process would then iterate. After some contemplation, we may decide, perhaps, on two, not three, majority-minority districts so that a particular town is kept together. These refined goals could then be specified for another computational analysis round with further deliberation to follow. Sometimes a Pareto improvement principle applies, with the algorithm assigned to ascertain whether, for example, city splits or minority representation can be maintained or improved even as one raises the overall level of compliance with other factors such as compactness. In such a process, computers assist by clarifying the feasibility of various trade-offs, but they do not supplant the human value judgments that are necessary for adjusting these plans to make them humanly rational. Neglecting the essential human role is to substitute machine irrationality for human bias.

Automation in redistricting is not a substitute for human intelligence and effort; its role is to augment human capabilities by regulating nefarious intent with increased transparency, and by bolstering productivity by efficiently parsing and synthesizing data to improve the informational basis for human decision-making. Redistricting automation does not replace human labor; it improves it. The critical goal for AI in governance is to design successful processes for human-machine collaboration. This process must inhibit the ill effects from sole reliance on humans as well as overreliance on machines. Human-machine collaboration is key, and transparency is essential.

The most promising institutional route in the near term for adopting this human-machine line-drawing process is through independent redistricting commissions (IRCs) that replace politicians with a balanced set of partisan citizen commissioners. IRCs are a relatively new concept and exist in only some states. They have varied designs. In eight states, a commission has primary responsibility for drawing the congressional plan. In six, they are only advisory to the legislature. In two states, they have no role unless the legislature fails to enact a plan. IRCs also vary in the number of commissioners, partisan affiliation, how the pool of applicants is created, and who selects the final members.

The lack of a blueprint for an IRC allows each to set its own rules, paving the way for new approaches. Although no best practices have yet emerged for these new institutions, we can glean some lessons from past efforts about how to integrate technology into a partisan balanced deliberation process. For example, Mexico's process integrated algorithms but struggled with transparency, and the North Carolina Senate relied heavily on a randomness component. Both offer lessons and help us refine our understanding of how to keep bias from creeping into the process.

Once these structural decisions are made, we must still contend with the fact that devising electoral maps is an intricate process, and IRCs generally lack the expertise that politicians and their staffs have cultivated from decades of experience. In addition, as the bitter partisanship of the 2011 Arizona citizen commission demonstrated, without a method to assess the fairness of proposals, IRCs can easily deadlock or devolve into lengthy litigation battles (6). New technological tools can aid IRCs in fulfilling their mandate by compensating for this experience deficiency as well as providing a way to benchmark fairness conceptualizations.

To maintain public confidence in their processes, IRCs would need to specify the criteria that guide the computational algorithm and implement the iterative process in a transparent manner. Open deliberation is crucial. For instance, once the range of maps is known to produce, say, a seven-to-eight likely split in Democrat-to-Republican seats 35% of the time, an eight-to-seven likely Democrat-to-Republican split 40% of the time, and something outside these two choices 25% of the time, how does an IRC choose between these partisan splits? Do they favor a split that produces more compact districts? How do they weigh the interests of racial minorities versus partisan considerations?

Regardless of what technology may be developed, in many states the majority party of the state legislature assumes the primary role in creating a redistricting plan, and, with rare exceptions, enjoys wide latitude in constructing district lines. There is neither a requirement nor an incentive for these self-interested actors to consent to a new process or to relinquish any of their constitutionally granted control over redistricting.

All the same, technological innovation can still have benefits by ameliorating informational imbalance. Consider redistricting Ohio's 16 congressional seats. A computational analysis might reveal that, given some set of prearranged criteria (e.g., equal population across districts, compact shapes, a minority district, and keeping particular communities of interest together), the number of Republican congressional seats usually ends up being 9 out of 16, and almost never more than 11. Although the politicians could still then introduce a map with 12 Republican seats, they would now have to weigh the potential public backlash from presenting electoral districts that are believed, a priori, to be overtly and excessively partisan. In this way, the information that is made more broadly known through technological innovation induces a new pressure point on the system whereby reform might occur.
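To make the informational mechanics concrete, here is a minimal sketch of the kind of ensemble analysis described above. The map generator and precinct vote totals are fabricated placeholders; a real analysis would draw contiguous, equal-population plans honoring the prearranged criteria.

```python
import random
from collections import Counter

def random_plan(n_precincts, n_districts=16, seed=None):
    """Hypothetical stand-in for a real map generator: randomly assigns
    precincts to districts. Real generators enforce contiguity, equal
    population, compactness, and communities of interest."""
    rng = random.Random(seed)
    return [rng.randrange(n_districts) for _ in range(n_precincts)]

def republican_seats(plan, votes, n_districts=16):
    """Tally each district's votes under a plan and count Republican wins."""
    totals = [[0, 0] for _ in range(n_districts)]
    for district, (dem, rep) in zip(plan, votes):
        totals[district][0] += dem
        totals[district][1] += rep
    return sum(1 for dem, rep in totals if rep > dem)

# Fabricated precinct-level vote totals, for illustration only.
rng = random.Random(0)
votes = [(rng.randint(200, 800), rng.randint(200, 800)) for _ in range(2000)]

# Sample 1,000 plans and report how often each seat split occurs -- the
# a priori benchmark against which an outlier map would look extreme.
splits = Counter(
    republican_seats(random_plan(len(votes), seed=i), votes)
    for i in range(1000)
)
for seats, count in sorted(splits.items()):
    print(f"{seats} Republican seats in {count / 1000:.0%} of sampled maps")
```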

Although politicians might not welcome the changes that technology brings, they cannot prevent the ushering in of a new informational era. States are constitutionally granted the right to enact maps as they wish, but their processes in the emerging digital age are more easily monitored and assessed. Whereas before, politicians exploited an information advantage, scientific advances can decrease this disparity and subject the process to increased scrutiny.

Although science has the potential to loosen the grip that partisanship has held over the redistricting process, we must ensure that the science behind redistricting does not, itself, become partisanship's latest victim. Scientific research is never easy, but it is especially vulnerable in redistricting where the technical details are intricate and the outcomes are overtly political.

We must be wary of consecrating research aimed at promoting a particular outcome or believing that a scientist's credentials absolve partisan tendencies. In redistricting, it may seem obvious to some that the majority party has abused its power, but validating research that supports that conclusion because of a bias toward such a preconceived outcome would not improve societal governance. Instead, use of faulty scientific tests as a basis for invalidating electoral maps allows bad actors to later overturn good maps with the same faulty tests, ultimately destroying our ability to legally distinguish good from bad. Validating maps using partisan preferences under the guise of science is more dangerous than partisanship itself.

The courts must also contend with the inconvenient fact that although their judgments may rely on scientific research, scientific progress is necessarily and excruciatingly slow. This highlights a fundamental incompatibility between the precedential nature of the law and the unrelenting need for high-quality science to take time to ponder, digest, and deliberate. Because of the precedential nature of legal decision-making, enshrining underdeveloped ideas has harmful path-dependent effects. Hence, peer review by the relevant scientific community, although far from perfect, is clearly necessary. For redistricting, technical scientific communities as well as the social scientific and legal communities are all relevant and central, with none taking over the role of another.

The relationship of technology with the goals of democracy must not be underappreciatedor overappreciated. Technological progress can never be stopped, but we must carefully manage its impact so that it leads to improved societal outcomes. The indispensable ingredient for success will be how humans design and oversee the processes we use for managing technological innovation.

Acknowledgments: W.K.T.C. has been an expert witness for A. Philip Randolph Institute v. Householder, Agre et al. v. Wolf et al., and The League of Women Voters of Pennsylvania et al. v. The Commonwealth of Pennsylvania et al.

Read this article:

Human-centered redistricting automation in the age of AI - Science Magazine

Posted in Ai

Banks aren't as stupid as enterprise AI and fintech entrepreneurs think – TechCrunch

Announcements like Selina Finance's $53 million raise and another $64.7 million raise the next day for a different banking startup spark enterprise artificial intelligence and fintech evangelists to rejoin the debate over how banks are stupid and need help or competition.

The complaint is banks are seemingly too slow to adopt fintech's bright ideas. They don't seem to grasp where the industry is headed. Some technologists, tired of marketing their wares to banks, have instead decided to go ahead and launch their own challenger banks.

But old-school financiers aren't dumb. Most know the buy-versus-build choice in fintech is a false choice. The right question is almost never whether to buy software or build it internally. Instead, banks have often worked to walk the difficult but smarter path right down the middle, and that's accelerating.

That's not to say banks haven't made horrendous mistakes. Critics complain about banks spending billions trying to be software companies, creating huge IT businesses with huge redundancies in cost and longevity challenges, and investing in ineffectual innovation and intrapreneurial endeavors. But overall, banks know their business way better than the entrepreneurial markets that seek to influence them.

First, banks have something most technologists don't have enough of: domain expertise. Technologists tend to discount the exchange value of domain knowledge, and that's a mistake. Without critical discussion, deep product-management alignment, and crisp, clear business usefulness, technology becomes abstracted from the material value it seeks to create.

Second, banks are not reluctant to buy because they don't value enterprise artificial intelligence and other fintech. They're reluctant because they value it too much. They know enterprise AI gives a competitive edge, so why should they get it from the same platform everyone else is attached to, drawing from the same data lake?

Competitiveness, differentiation, alpha, risk transparency and operational productivity will be defined by how highly productive, high-performance cognitive tools are deployed at scale in the incredibly near future. The combination of NLP, ML, AI and cloud will accelerate competitive ideation by an order of magnitude. The question is, how do you own the key elements of competitiveness? It's a tough question for many enterprises to answer.

If they get it right, banks can obtain the true value of their domain expertise and develop a differentiated edge where they don't just float along with every other bank on someone's platform. They can define the future of their industry and keep the value. AI is a force multiplier for business knowledge and creativity. If you don't know your business well, you're wasting your money. Same goes for the entrepreneur. If you can't make your portfolio absolutely business relevant, you end up being a consulting business pretending to be a product innovator.

So are banks at best cautious, and at worst afraid? They don't want to invest in the next big thing only to have it flop. They can't distinguish what's real from hype in the fintech space. And that's understandable. After all, they have spent a fortune on AI. Or have they?

It seems they have spent a fortune on stuff called "AI": internal projects with not a snowball's chance in hell of scaling to the volume and concurrency demands of the firm. Or they have become enmeshed in huge consulting projects staggering toward some lofty objective that everyone knows deep down is not possible.

This perceived trepidation may or may not be good for banking, but it certainly has helped foster the new industry of the challenger bank.

Challenger banks are widely accepted to have come around because traditional banks are too stuck in the past to adopt new ideas. Investors too easily agree. In recent weeks, American challenger bank Chime unveiled a credit card, U.S.-based Point launched, and German challenger bank Vivid launched with the help of Solarisbank, a fintech company.

Traditional banks are spending resources on hiring data scientists too, sometimes in numbers that dwarf those of the challenger banks. Legacy bankers want to listen to their data scientists on questions and challenges rather than pay more for an external fintech vendor to answer or solve them.

This arguably is the smart play. Traditional bankers are asking themselves why they should pay for fintech services that they can't 100% own, or how they can buy the right bits and retain the parts that amount to a competitive edge. They don't want that competitive edge floating around in a data lake somewhere.

From banks' perspective, it's better to "fintech" internally or else there's no competitive advantage; the business case is always compelling. The problem is a bank is not designed to stimulate creativity in design. JPMC's COIN project is a rare and fantastically successful one, though it is an example of a super alignment between creative fintech and a bank able to articulate a clear, crisp business problem, a Product Requirements Document for want of a better term. Most internal development is playing games with open source, with the shine of the alchemy wearing off as budgets are looked at hard with respect to return on investment.

A lot of people are going to talk about setting new standards in the coming years as banks onboard these services and buy new companies. Ultimately, fintech firms and banks are going to join together and make the new standard as new options in banking proliferate.

So, there's a danger to spending too much time learning how to do it yourself and missing the boat as everyone else moves ahead.

Engineers will tell you that untutored management can fail to steer a consistent course. The result is an accumulation of technical debt as development-level requirements keep zigzagging. Laying too much pressure on your data scientists and engineers can also lead to technical debt piling up faster. A bug or an inefficiency is left in place. New features are built as workarounds.

This is one reason why in-house-built software has a reputation for not scaling. The same problem shows up in consultant-developed software. Old problems in the system hide underneath new ones and the cracks begin to show in the new applications built on top of low-quality code.

So how to fix this? What's the right model?

It's a bit of a dull answer, but success comes from humility. It needs an understanding that big problems are solved with creative teams, each understanding what they bring, each being respected as equals and managed with a completely clear articulation of what needs to be solved and what success looks like.

Throw in some Stalinist project management and your probability of success goes up an order of magnitude. So, the successes of the future will see banks having fewer but way more trusted fintech partners that jointly value the intellectual property they are creating. They'll have to respect that neither can succeed without the other. It's a tough code to crack. But without it, banks are in trouble, and so are the entrepreneurs that seek to work with them.

Read the original:

Banks aren't as stupid as enterprise AI and fintech entrepreneurs think - TechCrunch

Posted in Ai

How AI is being used to socially distance audiences at ‘Tenet’ and why Netflix is no threat, according to this movie theater chain boss – MarketWatch

Elizabeth Debicki, left, and John David Washington in a scene from director Christopher Nolan's "Tenet." Melinda Sue Gordon/Associated Press

Sophisticated algorithms are being used by one of Europe's biggest movie theater chains to help with social distancing.

Vue International, which has around 230 cinemas in the U.K., Germany, Taiwan, Italy, Poland and other European countries, has been using artificial intelligence to optimize screening times and is making adjustments to control the flow of audiences into auditoriums.

Tim Richards, who founded privately owned Vue cinemas around 20 years ago, said 10 years' worth of data had been fed into computers pre-COVID to decide on the timing and frequency for screening movies.

This has now been adapted to control the flow of customers into the cinemas by staggering screening times. It is being linked with seating software that cocoons customers within their family bubbles, or on their own, a safe distance away from other customers.
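As a rough illustration of what such seating software has to do, here is a minimal greedy sketch of bubble-aware seat allocation. The function name, grid layout, and two-seat buffer are assumptions for illustration; Vue's actual system is proprietary and also coordinates with the staggered screening times.

```python
def cocoon_bookings(rows, seats_per_row, party_sizes, gap=2):
    """Greedy sketch: place each party (a household booking together) left
    to right, leaving `gap` empty seats between parties and assuming each
    party fits within one row. Real systems also enforce row-to-row
    spacing and tie into the staggered showtimes."""
    plan, row, col = [], 0, 0
    for party in party_sizes:
        if col + party > seats_per_row:   # party won't fit here: next row
            row, col = row + 1, 0
        if row >= rows:
            break                         # auditorium 'full' at reduced capacity
        plan.append((row, list(range(col, col + party))))
        col += party + gap                # buffer seats cocoon the bubble
    return plan

print(cocoon_bookings(rows=10, seats_per_row=12, party_sizes=[2, 4, 1, 3, 5]))
```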

Read: Heres an overlooked way to play the stuck-at-home trend in the stock market

Richards, speaking at a press briefing on Monday evening, said: "It took me 17 years to build the group up to 230 cinemas. What happened just a few months ago was apocalyptic.

"We have planned for crises such as a cinema being shut and blockbusters tanking, but not all the cinemas being down. Our big [cost] exposures are studios, people, and rent; we were quickly focused on our burn rate and liquidity."

Last month it was reported that Vue was lining up £100 million ($133 million) in additional debt financing. The firm is owned by Alberta Investment Management Corporation and pension fund Omers. Richards and other managers hold a 27% stake.

Vue has been slowly reopening its cinemas around Europe over the past few weeks.

"We have been using AI to help determine what is played, at what screen, and at which cinemas [to optimize revenues]," he said. "Our operating systems have been tweaked to social distance customers. It recognizes if you are with family and it will cocoon you. At the moment we are probably able to use 50% of cinemas' capacities."

"We can control the number of people in the foyer at any one time. Crowds would not be conducive to helping customers feel comfortable coming back. Every member of staff went through two days of safety training."

Richards said when he did reopen his movie theaters there was pent-up demand from customers but no new movies to screen.

"We still managed at a 50% run rate with classic movies that were not only already available on streaming services but on terrestrial television as well. People just wanted to get out of their homes and have some kind of normalcy."

Christopher Nolan's complex thriller Tenet is the first major new film to be released, and Richards said: "We are seeing Tenet performing at the same levels as Inception and Interstellar did, which has been amazing.

"It will be a bumpy road in some areas but we expect a return to normalcy in six months; it will take a couple of months to get people comfortable again with their positions."

He said entertainment giant Disney DIS has a strong lineup of movie theater releases, despite sending Mulan straight to its streaming channel.

Fears that streaming service Netflix NFLX is a threat to the industry, as movie lovers become used to watching films at home, are unfounded, he said.

Opinion:Is Mulan worth $30? The answer, and other streaming picks for September 2020

"Netflix has been disruptive for everything in the home," he said. "We are out of the home, so Netflix is complementary to us because most people who like film like film on all formats.

"I've seen the demise of the industry predicted definitely five or six times. We have been countercyclical: during downturns we are reasonably priced, so people come out and enjoy what we have to offer."

Here is the original post:

How AI is being used to socially distance audiences at 'Tenet' and why Netflix is no threat, according to this movie theater chain boss - MarketWatch

Posted in Ai

This AI tool helps healthcare workers look after their mental health – The European Sting

Credit: Unsplash

This article is brought to you thanks to the collaboration of The European Sting with the World Economic Forum.

Author: Francis Lee, Psychiatrist-in-Chief, NewYork-Presbyterian Hospital; Conor Liston, Director, Sackler Institute, Weill Cornell Medicine; and Laura L. Forese, Executive Vice President & Chief Operating Officer, NewYork-Presbyterian Hospital

As the COVID-19 pandemic continues to exert pressure on global healthcare systems, frontline healthcare workers remain vulnerable to developing significant psychiatric symptoms. These effects have the potential to further cripple the healthcare workforce at a time when a second wave of the coronavirus is considered likely in the fall, and workforce shortages already pose a serious challenge.

Studies show that healthcare workers are also less likely to proactively seek mental health services due to concerns about confidentiality, privacy and barriers to accessing care. Thus, there is an obvious and pressing need for scalable tools to act as an early warning system to alert healthcare workers when they are at risk of depression, anxiety or trauma symptoms and then rapidly connect them with the help they need. To address the mental health needs of the 47,000 employees and affiliated physicians in our hospital system, NewYork-Presbyterian (NYP) has developed an artificial intelligence (AI)-enabled digital tool that screens for symptoms, provides instant feedback, and connects participants with crisis counselling and treatment referrals.

Called START (Symptom Tracker And Resources for Treatment), this screening tool enables healthcare workers to confidentially and anonymously track changes in their mental health status. This tool is unique in that it not only provides immediate feedback to participants on the severity of their symptoms but also connects them to existing mental healthcare resources. Participants are asked every two weeks to complete a short battery of questions that assess symptoms of depression, anxiety, trauma and perceived stress, as well as potential risk factors for poor mental health and ability to function at work.

To maximise engagement, the psychiatric symptom questions in the START platform are drawn from widely validated psychiatric screening tools and adaptively selected using AI algorithms that capture the most relevant clinical symptom data in a time-efficient manner. This is achieved in two ways. First, the START platform automatically selects the most informative questions based on a participant's previous responses in a minimum amount of time (around five to seven minutes). Second, it focuses on questions that are reliably correlated with particular functional connectivity patterns in depression-related brain networks. Much like our national airport network, brain networks are organised into a system of hubs that facilitate efficient information flow, just as hub airports like O'Hare and JFK connect passengers with smaller regional destinations. Disrupted connections between brain hubs may contribute to specific symptoms and behaviours in depression.
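To make the adaptive-selection idea concrete, here is a deliberately simplified sketch, not NYP's actual algorithm: it assumes each question carries a difficulty score and picks the unasked item closest to a running severity estimate, a crude proxy for choosing the most informative question. All identifiers and values are hypothetical.

```python
from typing import Optional

def next_question(asked, responses, question_bank) -> Optional[tuple]:
    """Keep a running severity estimate from answers so far (0-3 scale),
    then ask the unasked question whose difficulty sits closest to that
    estimate -- a crude proxy for 'most informative next item'."""
    estimate = sum(responses) / len(responses) if responses else 1.5
    remaining = [q for q in question_bank if q[0] not in asked]
    return min(remaining, key=lambda q: abs(q[1] - estimate), default=None)

# Hypothetical item bank: (question id, difficulty on the 0-3 answer scale).
bank = [("phq_interest", 0.5), ("phq_mood", 1.0),
        ("gad_worry", 2.0), ("pcl_intrusion", 2.8)]
print(next_question(asked={"phq_mood"}, responses=[2], question_bank=bank))
# -> ('gad_worry', 2.0)
```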

For example, in previous work (see figure below), our group has found that psychiatric symptoms like anhedonia (a loss of interest in pleasurable activities) are reliably correlated with functional magnetic resonance imaging (fMRI) measures of connectivity in reward-related brain regions, whereas symptoms like anxiety and insomnia are correlated with differing connectivity alterations in other brain areas.

At the end of the survey, participants receive feedback on their results and are provided with options for connecting with existing and accessible mental healthcare resources. For those who need psychiatric care for their symptoms, we integrated the START platform with a telemedicine urgent counselling service at NYP that is available seven days a week and which provides faculty and staff across NYP hospitals with quick and free access to confidential and supportive virtual counselling by trained mental health professionals, a special feature of this tool and our COVID-19 response. This is important, because if treatment resources are not made immediately available and easily accessible to our healthcare workers, they may be less likely to seek help when they need it.

Within one week of deploying the symptom tracker, the utilization of our urgent counselling services had more than doubled, resulting in numerous referrals to mental health professionals. Another key element contributing to the increase in utilization was frequent communication from NYP leadership about the Symptom Tracker and the availability of crisis support. In the near future, a mobile cognitive behavioral therapy (CBT) app developed at NYP (by a group led by Francis Lee) will be linked to START to target specific mood, anxiety, and trauma symptom profiles, and is currently being tested in a clinical trial for safety and efficacy.

Ultimately, we hope that such emerging digital tools will transform mental health services not only for our healthcare workers but also for larger populations affected by the pandemic.

Go here to see the original:

This AI tool helps healthcare workers look after their mental health - The European Sting

Posted in Ai

Building up its AI operations, GSK opens a $13M London hub with plans to woo talent now trekking to Silicon Valley – Endpoints News

Continuing its efforts to ramp up global AI operations, GlaxoSmithKline has opened a £10 million ($13 million-plus) research base in King's Cross, London.

The AI hotspot is already home to Google's DeepMind, and the Francis Crick and Alan Turing research institutes. GSK said it hopes to tap into the huge London tech talent pool and attract candidates who might otherwise head to Silicon Valley.

"It's a vibrant ecosystem that has everything from outstanding medicine as well as also being a big tech corridor. DeepMind is there. Google is there. It's near the Crick Institute, and of course modern computing was born, basically, with Alan Turing and the Turing Institute," GSK R&D president Hal Barron said at a London Tech Week fireside chat. "So we are quite convinced that both the talent and the ecosystem will enable us to build a very vibrant hub in London, getting the top talent, the best thinkers and people to be able to interact with us in GSK to take technology and help us turn it into medicines."

The company believes AI has the power to vastly improve its drug discovery process. It claims that genetically validated drugs are twice as likely to be successful. And GSK has lots of genetic data to work with. The new workspace, located in the Stanley Building, has already lured in 30 scientists, 10 of whom are in the company's AI fellow program.

In fact, many biotechs are now turning to AI, which they believe can speed up successful development by analyzing hundreds of genes at once or rapidly screening billions of molecules.

"GSK is focused on finding better medicines and vaccines, not just better products but finding them in better ways, so we are using functional genomics, human genetics and artificial intelligence and machine learning," the company said in a statement.

It also has AI researchers based in San Francisco and Boston, and aims to reach 100 AI-focused employees by mid-2021. "Our goal is to have the best and brightest people in the world join us," Barron said.

"In AI, we are scouring the planet for the best people. These folks are very rare to find. Competition is high and there aren't a large number of them," Tony Wood, GSK's SVP of medicinal science and technology, told The Guardian in December.

The new London hub has the capacity for 60 to 80 staff members. Now all that's left to do is fill it.

Continued here:

Building up its AI operations, GSK opens a $13M London hub with plans to woo talent now trekking to Silicon Valley - Endpoints News

Posted in Ai

Reimagining creativity and AI to boost enterprise adoption – TechTarget

An AI algorithm capable of thought and creation has the potential to enhance applications and unlock better analysis with less oversight for organizations. However, it still remains out of reach. Until then, AI has an important role to play in augmenting human creativity.

Since the inception of artificial intelligence, researchers have had a goal to create a machine capable of matching or surpassing a human's skills of reasoning and expression. Advancing AI past self-training to computational creativity will require going beyond data augmentation into original thought.

Currently, machine learning specializes in limited data creativity, with algorithms that can train on historical data and allow organizations to make better-informed decisions with analytics. These algorithms use training data sets to "predict" future outcomes and generate new data.

"There are dozens of examples in which different algorithms that, given the observation of real data, are capable of generating very plausible fictitious data, which is almost indistinguishable from real data," Haldo Sponton, vice president of technology and head of AI development at digital consultant firm Globant.

Algorithms can create data, but only when prompted to and only from something that has already been created -- current algorithms can only mirror training data. This falls short of the insular creativity the technology hoped to reach.
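A toy example of this mirroring, under the assumption that "data creativity" here means sampling from a distribution fitted to historical observations; the data and model choice are illustrative, not Globant's:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Fit a density model to fabricated 'historical' observations, then sample
# fictitious data points. The samples look plausible precisely because they
# mirror the training distribution -- nothing here is genuinely new.
rng = np.random.default_rng(0)
real_data = rng.normal(loc=100.0, scale=15.0, size=500)
model = gaussian_kde(real_data)    # learn the shape of the real data
fictitious = model.resample(5)[0]  # draw new, plausible look-alikes
print(fictitious)                  # new values, drawn from the old pattern
```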

To Sponton, creativity is as universal as it is individual. Each being has the ability to be creative, but each individual has a unique approach to creation. Creativity is that ability to use imagination or have original ideas, as well as the ability to create. It is a fundamental feature of human intelligence, and AI cannot ignore it as a step to further advancement.

As AI processes more information, or takes on more intricate tasks, it can evolve and learn to make better decisions. What would make an AI creative is more than just training algorithms and learning outputs, but building from scratch and creating something new, unrelated to existing data.

"This evolution is really valuable, but true creativity has yet to be achieved," said Jess Kennedy, co-founder of Beeline, a SaaS company based in Jacksonville, Fla.

A creative machine capable of both learning and creating on its own would have tremendous potential in marketplace as well as enterprise settings.

A creative algorithm would be able to create data and discover trends without prompting and without supervision. This would mean less maintenance for an organization's data science team and lead to even greater insights, as they wouldn't have to be modeled on existing correlations.


Overall, a creative AI would have the ability to find the best way to approach most any problem presented to it by an organization. Anything from hunting for anomalies in data sets to prevent fraud to making conversations with virtual assistants feel more natural.

"Tools based on AI algorithms will generate new creative processes, new ways of creating and thinking, new horizons to explore," Sponton said.

At the moment, artificial intelligence has not reached that level of advancement, and the enterprise applications of true creativity are out of reach. Apart from the difficulty of developing an AI capable of creativity, proving that it has had an original idea is a further challenge.

There are some applications of creativity among existing AI technologies. Neural networks are at the point where they can identify tasks in the creative process. Supervised and unsupervised learning can find meaningful connections and patterns within an organization's data set. These systems and approaches have already proven their capabilities in the enterprise, from recommendations for users online to advanced analytics for business intelligence and analytics vendors.

The combination of creativity and AI has reached an impressive level, but the way we look at it may be hindering enterprise applications. Instead of focusing on developing an AI that can stand alone and be considered creative, experts note that AI is already successfully helping to further human creativity.

"AI has been used to create things like art and music, but it has been based on existing information and data provided to the AI interface in order to do so," Kennedy said.

This allows for the creation of traditionally creative materials by AI but falls short of that ultimate goal of a creative AI. This does, however, allow for a uniquely nonhuman approach to the creation of artistic works.

"Artists around the world are already adopting this technology for musical composition, for the creation of plastic works and even choreographies or sculptures (just appreciate the work of choreographer Wayne McGregor or plastic artist Sarah Meyohas)," Sponton said.

Adding another layer into the field of creative arts opens up new opportunities for expression and beauty for those working in the field. Instead of taking the human aspect out of this field, this augmentation role for AI finds a balance between creative AI and solely human creations.

"The truth is that these algorithms generate new data, such as images or music, which can be considered a result of the imitation of the human creative process," Sponton said.

AI is not at the stage where it can stand on its own and create, but for now, it serves a valuable role of creating data, analyzing processes and augmenting the creation process. When the time comes for an AI to take the next step, however, we may even have to redefine creativity.

See the original post:

Reimagining creativity and AI to boost enterprise adoption - TechTarget

Posted in Ai

Facebook and NYU use artificial intelligence to make MRI scans four times faster – The Verge

If you've ever had an MRI scan before, you'll know how unsettling the experience can be. You're placed in a claustrophobia-inducing tube and asked to stay completely still for up to an hour while unseen hardware whirs, creaks, and thumps around you like a medical poltergeist. New research, though, suggests AI can help with this predicament by making MRI scans four times faster, getting patients in and out of the tube quicker.

The work is a collaborative project called fastMRI between Facebook's AI research team (FAIR) and radiologists at NYU Langone Health. Together, the scientists trained a machine learning model on pairs of low-resolution and high-resolution MRI scans, using this model to predict what final MRI scans look like from just a quarter of the usual input data. That means scans can be done faster, meaning less hassle for patients and quicker diagnoses.

"It's a major stepping stone to incorporating AI into medical imaging," Nafissa Yakubova, a visiting biomedical AI researcher at FAIR who worked on the project, tells The Verge.

The reason artificial intelligence can be used to produce the same scans from less data is that the neural network has essentially learned an abstract idea of what a medical scan looks like by examining the training data. It then uses this to make a prediction about the final output. Think of it like an architect who's designed lots of banks over the years. They have an abstract idea of what a bank looks like, and so they can create a final blueprint faster.

"The neural net knows about the overall structure of the medical image," Dan Sodickson, professor of radiology at NYU Langone Health, tells The Verge. "In some ways what we're doing is filling in what is unique about this particular patient's [scan] based on the data."

The fastMRI team has been working on this problem for years, but today, they are publishing a clinical study in the American Journal of Roentgenology, which they say proves the trustworthiness of their method. The study asked radiologists to make diagnoses based on both traditional MRI scans and AI-enhanced scans of patients' knees. The study reports that when faced with both traditional and AI scans, doctors made the exact same assessments.

"The key word here on which trust can be based is interchangeability," says Sodickson. "We're not looking at some quantitative metric based on image quality. We're saying that radiologists make the same diagnoses. They find the same problems. They miss nothing."

This concept is extremely important. Although machine learning models are frequently used to create high-resolution data from low-resolution input, this process can often introduce errors. For example, AI can be used to upscale low-resolution imagery from old video games, but humans have to check the output to make sure it matches the input. And the idea of AI imagining an incorrect MRI scan is obviously worrying.

The fastMRI team, though, says this isn't an issue with their method. For a start, the input data used to create the AI scans completely covers the target area of the body. The machine learning model isn't guessing what a final scan looks like from just a few puzzle pieces. It has all the pieces it needs, just at a lower resolution. Secondly, the scientists created a check system for the neural network based on the physics of MRI scans. That means at regular intervals during the creation of a scan, the AI system checks that its output data matches what is physically possible for an MRI machine to produce.

"We don't just allow the network to create any arbitrary image," says Sodickson. "We require that any image generated through the process must have been physically realizable as an MRI image. We're limiting the search space, in a way, making sure that everything is consistent with MRI physics."
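One common way to implement this kind of physics check in MRI reconstruction, and a plausible reading of what the team describes, is a data-consistency step in k-space (the frequency domain MRI machines actually measure). The sketch below is an assumption-laden simplification, not fastMRI's published code: it forces the model's output to agree with every frequency the scanner actually sampled.

```python
import numpy as np

def data_consistency(reconstruction, measured_kspace, mask):
    """Wherever k-space was actually measured (mask == True), overwrite the
    network's predicted frequencies with the scanner's real measurements,
    so the output can never contradict the acquired data."""
    predicted_kspace = np.fft.fft2(reconstruction)
    merged = np.where(mask, measured_kspace, predicted_kspace)
    return np.fft.ifft2(merged)

# Toy example: keep only every fourth k-space line (a quarter of the data).
rng = np.random.default_rng(0)
image = rng.random((64, 64))             # stand-in for a ground-truth scan
mask = np.zeros((64, 64), dtype=bool)
mask[::4, :] = True
measured = np.fft.fft2(image) * mask     # what the scanner would record
network_output = rng.random((64, 64))    # stand-in for the model's guess
consistent = data_consistency(network_output, measured, mask)
```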

Yakubova says it was this particular insight, which only came about after long discussions between the radiologists and the AI engineers, that enabled the project's success. "Complementary expertise is key to creating solutions like this," she says.

The next step, though, is getting the technology into hospitals where it can actually help patients. The fastMRI team is confident this can happen fairly quickly, perhaps in just a matter of years. The training data and model they've created are completely open access and can be incorporated into existing MRI scanners without new hardware. And Sodickson says the researchers are already in talks with the companies that produce these scanners.

Karin Shmueli, who heads the MRI research team at University College London and was not involved with this research, told The Verge this would be a key step to move forward.

"The bottleneck in taking something from research into the clinic is often adoption and implementation by manufacturers," says Shmueli. She added that work like fastMRI was part of a wider trend incorporating artificial intelligence into medical imaging that was extremely promising. "AI is definitely going to be more in use in the future," she says.

Read the rest here:

Facebook and NYU use artificial intelligence to make MRI scans four times faster - The Verge

Posted in Ai

Too many AI researchers think real-world problems are not relevant – MIT Technology Review

Any researcher who's focused on applying machine learning to real-world problems has likely received a response like this one: "The authors present a solution for an original and highly motivating problem, but it is an application and the significance seems limited for the machine-learning community."

These words are straight from a review I received for a paper I submitted to the NeurIPS (Neural Information Processing Systems) conference, a top venue for machine-learning research. I've seen the refrain time and again in reviews of papers where my coauthors and I presented a method motivated by an application, and I've heard similar stories from countless others.

This makes me wonder: If the community feels that aiming to solve high-impact real-world problems with machine learning is of limited significance, then what are we trying to achieve?

The goal of artificial intelligence (pdf) is to push forward the frontier of machine intelligence. In the field of machine learning, a novel development usually means a new algorithm or procedure, or, in the case of deep learning, a new network architecture. As others have pointed out, this hyperfocus on novel methods leads to a scourge of papers that report marginal or incremental improvements on benchmark data sets and exhibit flawed scholarship (pdf) as researchers race to top the leaderboard.

Meanwhile, many papers that describe new applications present both novel concepts and high-impact results. But even a hint of the word "application" seems to spoil the paper for reviewers. As a result, such research is marginalized at major conferences. Their authors' only real hope is to have their papers accepted in workshops, which rarely get the same attention from the community.

This is a problem because machine learning holds great promise for advancing health, agriculture, scientific discovery, and more. The first image of a black hole was produced using machine learning. The most accurate predictions of protein structures, an important step for drug discovery, are made using machine learning. If others in the field had prioritized real-world applications, what other groundbreaking discoveries would we have made by now?

This is not a new revelation. To quote a classic paper titled "Machine Learning that Matters" (pdf), by NASA computer scientist Kiri Wagstaff: "Much of current machine learning research has lost its connection to problems of import to the larger world of science and society." The same year that Wagstaff published her paper, a convolutional neural network called AlexNet won a high-profile competition for image recognition centered on the popular ImageNet data set, leading to an explosion of interest in deep learning. Unfortunately, the disconnect she described appears to have grown even worse since then.

Marginalizing applications research has real consequences. Benchmark data sets, such as ImageNet or COCO, have been key to advancing machine learning. They enable algorithms to train and be compared on the same data. However, these data sets contain biases that can get built into the resulting models.

More than half of the images in ImageNet (pdf) come from the US and Great Britain, for example. That imbalance leads systems to inaccurately classify images in categories that differ by geography (pdf). Popular face data sets, such as the AT&T Database of Faces, contain primarily light-skinned male subjects, which leads to systems that struggle to recognize dark-skinned and female faces.


When studies on real-world applications of machine learning are excluded from the mainstream, it's difficult for researchers to see the impact of their biased models, making it far less likely that they will work to solve these problems.

One reason applications research is minimized might be that others in machine learning think this work consists of simply applying methods that already exist. In reality, though, adapting machine-learning tools to specific real-world problems takes significant algorithmic and engineering work. Machine-learning researchers who fail to realize this and expect tools to work off the shelf often wind up creating ineffective models. Either they evaluate a model's performance using metrics that don't translate to real-world impact, or they choose the wrong target altogether.

For example, most studies applying deep learning to echocardiogram analysis try to surpass a physician's ability to predict disease. But predicting normal heart function (pdf) would actually save cardiologists more time by identifying patients who do not need their expertise. Many studies applying machine learning to viticulture aim to optimize grape yields (pdf), but winemakers "want the right levels of sugar and acid, not just lots of big watery berries," says Drake Whitcraft of Whitcraft Winery in California.

Another reason applications research should matter to mainstream machine learning is that the field's benchmark data sets are woefully out of touch with reality.

New machine-learning models are measured against large, curated data sets that lack noise and have well-defined, explicitly labeled categories (cat, dog, bird). Deep learning does well for these problems because it assumes a largely stable world (pdf).

But in the real world, these categories are constantly changing over time or according to geographic and cultural context. Unfortunately, the response has not been to develop new methods that address the difficulties of real-world data; rather, there's been a push for applications researchers to create their own benchmark data sets.

The goal of these efforts is essentially to squeeze real-world problems into the paradigm that other machine-learning researchers use to measure performance. But the domain-specific data sets are likely to be no better than existing versions at representing real-world scenarios. The results could do more harm than good. People who might have been helped by these researchers' work will become disillusioned by technologies that perform poorly when it matters most.

Because of the field's misguided priorities, people who are trying to solve the world's biggest challenges are not benefiting as much as they could from AI's very real promise. While researchers try to outdo one another on contrived benchmarks, one in every nine people in the world is starving. Earth is warming and sea level is rising at an alarming rate.

As neuroscientist and AI thought leader Gary Marcus once wrote (pdf): "AI's greatest contributions to society could and should ultimately come in domains like automated scientific discovery, leading among other things towards vastly more sophisticated versions of medicine than are currently possible. But to get there we need to make sure that the field as a whole doesn't first get stuck in a local minimum."

For the world to benefit from machine learning, the community must again ask itself, as Wagstaff once put it: What is the field's objective function? If the answer is to have a positive impact in the world, we must change the way we think about applications.

Hannah Kerner is an assistant research professor at the University of Maryland in College Park. She researches machine learning methods for remote sensing applications in agricultural monitoring and food security as part of the NASA Harvest program.

Visit link:

Too many AI researchers think real-world problems are not relevant - MIT Technology Review

Posted in Ai

Global AI in Healthcare Diagnosis Market 2020-2027 – AI in Future Epidemic Outbreaks Prediction and Response – ResearchAndMarkets.com – Business Wire

DUBLIN--(BUSINESS WIRE)--The "Artificial Intelligence in Healthcare Diagnosis Market Forecast to 2027 - COVID-19 Impact and Global Analysis by Diagnostic Tool; Application; End User; Service; and Geography" report has been added to ResearchAndMarkets.com's offering.

The global artificial intelligence (AI) in healthcare diagnosis market was valued at US$ 3,639.02 million in 2019 and is projected to reach US$ 66,811.97 million by 2027; it is expected to grow at a CAGR of 44% during 2020-2027.

The growth of the market is mainly attributed to factors such as the rising adoption of AI in disease identification and diagnosis, and increasing investments in AI healthcare startups. However, the lack of a skilled workforce and ambiguity in regulatory guidelines for medical software are the factors hindering the growth of the market.

Artificial intelligence in healthcare is one of the most significant technological advancements in medicine so far. The involvement of multiple startups in the development of AI-driven imaging and diagnostic solutions is a major factor contributing to the growth of the market. China, the US, and the UK are emerging as popular hubs for healthcare innovations.

Additionally, the British government has announced the establishment of a National Artificial Intelligence Lab that would collaborate with the country's universities and technology companies to conduct research on cancer, dementia, and heart diseases. The UK-based startups have received benefits from the government's robust library of patient data, as British citizens share their anonymous healthcare data with the British National Health Service. As a result, the number of artificial intelligence startups in the healthcare sector has significantly grown in the past few years, and the trend is expected to be the same in the coming years.

Based on diagnostic tool, the global artificial intelligence in healthcare diagnosis market is segmented into medical imaging tool, automated detection system, and others. The medical imaging tool segment held the largest share of the market in 2019, and the market for automated detection system is expected to grow at the highest CAGR during the forecast period.

Based on application, the global artificial intelligence in healthcare diagnosis market is segmented into eye care, oncology, radiology, cardiovascular, and others. The oncology segment held the largest share of the market in 2019, and the radiology segment is expected to register the highest CAGR during the forecast period.

Based on service, the global artificial intelligence in healthcare diagnosis market is segmented into tele-consultation, tele-monitoring, and others. The tele-consultation segment held the largest share of the market in 2019; however, the tele-monitoring segment is expected to register the highest CAGR during the forecast period.

Based on end user, the global artificial intelligence in healthcare diagnosis market is segmented into hospital and clinic, diagnostic laboratory, and home care. The hospital and clinic segment held the largest share of the market in 2019 and is expected to register the highest CAGR during the forecast period.

Key Topics Covered

1. Introduction

1.1 Scope of the Study

1.2 Report Guidance

1.3 Market Segmentation

1.3.1 Artificial Intelligence in Healthcare Diagnosis Market - By Diagnostic Tool

1.3.2 Artificial Intelligence in Healthcare Diagnosis Market - By Application

1.3.3 Artificial Intelligence in Healthcare Diagnosis Market - By Service

1.3.4 Artificial Intelligence in Healthcare Diagnosis Market - By End User

1.3.5 Global Artificial Intelligence in Healthcare Diagnosis Market - By Geography

2. Artificial Intelligence in Healthcare Diagnosis Market - Key Takeaways

3. Research Methodology

3.1 Coverage

3.2 Secondary Research

3.3 Primary Research

4. Artificial Intelligence in Healthcare Diagnosis Market - Market Landscape

4.1 Overview

4.2 PEST Analysis

4.2.1 North America - PEST Analysis

4.2.2 Europe - PEST Analysis

4.2.3 Asia-Pacific - PEST Analysis

4.2.4 Middle East & Africa - PEST Analysis

4.2.5 South & Central America

4.3 Expert Opinion

5. Artificial Intelligence in Healthcare Diagnosis Market - Key Market Dynamics

5.1 Market Drivers

5.1.1 Rising Adoption of Artificial Intelligence (AI) in Disease Identification and Diagnosis

5.1.2 Increasing Investment in AI Healthcare Start-ups

5.2 Market Restraints

5.2.1 Lack of skilled AI Workforce and Ambiguous Regulatory Guidelines for Medical Software

5.3 Market Opportunities

5.3.1 Increasing Potential in Emerging Economies

5.4 Future Trends

5.4.1 AI in Epidemic Outbreak Prediction and Response

5.5 Impact Analysis

6. Artificial Intelligence in Healthcare Diagnosis Market - Global Analysis

6.1 Global Artificial Intelligence in Healthcare Diagnosis Market Revenue Forecast and Analysis

6.2 Global Artificial Intelligence in Healthcare Diagnosis Market, By Geography - Forecast and Analysis

6.3 Market Positioning of Key Players

7. Artificial Intelligence in Healthcare Diagnosis Market - By Diagnostic Tool

7.1 Overview

7.2 Artificial Intelligence in Healthcare Diagnosis Market Revenue Share, by Diagnostic Tool (2019 and 2027)

7.3 Medical Imaging Tool

7.4 Automated Detection System

7.5 Others

8. Artificial Intelligence in Healthcare Diagnosis Market Analysis, By Application

8.1 Overview

8.2 Artificial Intelligence in Healthcare Diagnosis Market Revenue Share, by Application (2019 and 2027)

8.3 Eye Care

8.4 Oncology

8.5 Radiology

8.6 Cardiovascular

8.7 Others

9. Artificial Intelligence in Healthcare Diagnosis Market - By End-User

9.1 Overview

9.2 Artificial Intelligence in Healthcare Diagnosis Market, by End-User, 2019 and 2027 (%)

9.3 Hospital and Clinic

9.4 Diagnostic Laboratory

9.5 Home Care

10. Artificial Intelligence in Healthcare Diagnosis Market - By Service

10.1 Overview

10.2 Artificial Intelligence in Healthcare Diagnosis Market, by Service, 2019 and 2027 (%)

10.3 Tele-Consultation

10.4 Tele-Monitoring

10.5 Others

11. Artificial Intelligence in Healthcare Diagnosis Market - Geographic Analysis

11.1 North America: Artificial Intelligence in Healthcare Diagnosis Market

11.2 Europe: Artificial Intelligence in Healthcare Diagnosis Market

11.3 Asia-Pacific: Artificial Intelligence in Healthcare Diagnosis Market

11.4 Middle East and Africa: Artificial Intelligence in Healthcare Diagnosis Market

11.5 South & Central America: Artificial Intelligence in Healthcare Diagnosis Market

12. Impact of COVID-19 Pandemic on Global Artificial Intelligence in Healthcare Diagnosis Market

12.1 North America: Impact Assessment of COVID-19 Pandemic

12.2 Europe: Impact Assessment of COVID-19 Pandemic

12.3 Asia-Pacific: Impact Assessment of COVID-19 Pandemic

12.4 Rest of the World: Impact Assessment of COVID-19 Pandemic

13. Artificial Intelligence in Healthcare Diagnosis Market - Industry Landscape

13.1 Overview

13.2 Growth Strategies Done by the Companies in the Market, (%)

13.3 Organic Developments

13.4 Inorganic Developments

14. Company Profiles

14.1 General Electric Company

14.2 Aidoc

14.3 Arterys Inc.

14.4 icometrix

14.5 IDx Technologies Inc.

14.6 MaxQ AI Ltd.

14.7 Caption Health, Inc.

14.8 Zebra Medical Vision, Inc.

14.9 Siemens Healthineers AG

More here:

Global AI in Healthcare Diagnosis Market 2020-2027 - AI in Future Epidemic Outbreaks Prediction and Response - ResearchAndMarkets.com - Business Wire

Posted in Ai

No, AI and Big Data Are Not Going to Win the Next Great Power Competition – The Defense Post

Artificial Intelligence and Big Data, two buzzwords that are colloquially interchangeable but subtly nuanced, are not silver bullets poised to handily solve all of the US military's problems.

Unpopular opinion: the US military and the defense industrial complex are currently giving up one heck of a run to the Chinese Communist Party. Note how I didn't say that we're losing yet.

This century saw a solid first quarter, with US domination that witnessed the rise and fall of a competing Soviet Union and the establishment of global American hegemony both militarily and economically. We have enjoyed decades of unipolar dominance. When you're on top, it typically feels like you could never lose.

However, the rapidly shifting political landscape and return to great power competition have America reeling. The Chinese Communist Party and its People's Liberation Army are mounting a comeback.

While we were worried about cracking terrorist networks of bearded men with AK-47s in caves, the Chinese were speeding towards advanced technologies, hypersonic weapons, and the very defenses required to put up a front against the predictable American military machine.

The Chinese have seized this strategic opportunity. While the West was distracted, Beijing sunk billions of dollars into anti-access and area denial capabilities (A2/AD), a defensive posture aimed at the American way of fighting. They have also amassed massive amounts of data required to weaponize and harness the benefits of Big Data.

Chinese tech companies and government-sponsored research initiatives have built massive data sets while we were preoccupied with Iraq and Afghanistan. These are precisely the requisite data sets necessary to train Machine Learning algorithms and AI neural networks.

While we were building social networks by word of mouth of terrorist cells, the Chinese were collecting intelligence and building advanced systems for moving data with little regard for civil liberties, privacy, or data protection. Not that I advocate for it, but it is amazing what you can do when you ignore ethics or societal norms.

In defense tech news, all I read about is AI solving the joint, all-domain command and control problem, or Big Data providing a potential solution for some multi-domain capability gap. Perhaps we just desire an easy, one-size-fits-all solution in the form of a Big Data Band-Aid?

Indeed, it seems like our greatest adversary and the second greatest existential threat to the American way of life after nuclear war has already found the elixir of life in Big Data, so why can't we?

For starters, artificial intelligence is not the Terminator. It is not a killing machine that is easily weaponized, deployed, and employed to combat adversary capabilities. Even the most cutting edge artificial intelligence tools today are narrow in scope and limited in application.

While that will change eventually, algorithms are currently fantastic for vehicle routing, search engine optimization, facial recognition, asking Siri to set a timer, and other modern technical conveniences that we all carry around in our pockets. These are simple applications of AI. These are not weaponized, military applications that result in warheads on foreheads or power projection.

AI is great at parsing through billions of bits quickly and making sense of it all; creating information from data is its strong suit. This is not complicated. At their core, these algorithms rely on data configuration and formatting to sort and shape vast matrices full of different variables, perform some sort of reduction or matrix operation, and compare this reduction to a set of user-programmed decision criteria.
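As a toy illustration of that reduce-and-compare loop, here is a sketch in which everything, from the synthetic readings to the alert threshold, is invented for the example:

```python
import numpy as np

# Shape the data into a matrix, perform a reduction, and compare the
# result against user-programmed decision criteria -- the loop described
# above. The threshold is the human contribution; the math decides nothing.
observations = np.random.rand(1_000_000, 8)   # fabricated sensor readings
scores = observations.mean(axis=1)            # the 'reduction or matrix operation'
ALERT_THRESHOLD = 0.75                        # user-programmed decision criterion
flagged = np.flatnonzero(scores > ALERT_THRESHOLD)
print(f"{flagged.size} of {scores.size} records flagged for a human decision")
```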

There is a difference between artificial intelligence and decision making. AI facilitates expedited data to decision throughput, but it does not make its own decisions in a vacuum.

Next, AI is a slightly more complicated version of the matrix math you were probably introduced to in algebra. This advanced linear algebra is advanced applied statistics. By itself, it does not result in a major weapons system delivering effects against an adversary position. Just like space and cyber effects at your favorite large force exercise, you cannot simply sprinkle some Big Data on top and bring added military capability to bear to win the 21st-century fight.

On their own, AI and Big Data do not result in increased competition by the US military. They don't produce a capability to which the Chinese Communist Party has no solution. While they can expedite paths through a particular kill-web to deliver effects, they aren't a standalone military capability.

Another reason why AI and Big Data won't solve the A2/AD problem is the laws of physics. The US Indo-Pacific Command Area of Responsibility poses a geography problem for the US military. It requires ships and airplanes to travel farther to even get to the fight. Missiles can only go so far and fast, and AI does not provide a silver-bullet hypersonic solution.

A2/AD is also a logistics nightmare. Posturing the supplies and equipment at disparate operating locations anywhere from the Philippines to Guam or to Alaska to support even a limited regional conflict is a hard nut to crack, and AI does not by itself solve the agile, global logistics problem.

I might sound exceptionally contrarian in my simplification of AI and Big Data. In truth, I'm a huge proponent of defense applications for AI and Big Data. Our military's future hinges on it.

For the Department of Defense (DoD) to harness AI and to weaponize Big Data, the US military machine and industrial base need to integrate artificial intelligence into military systems.

The current generation of developmental systems needs to bake in advanced algorithms that take the human brain, as a data filter, out of the loop while introducing fusion, machine/deep learning, and the power of computation to military applications.

The old way of filtering data and enabling the military operator's tactical decision-making is irrelevant today. If the DoD can't shift, adapt, and embrace this change, it's doomed to fight the last war for the rest of this century.

The DoD, like many contemporary large organizations, will face many hurdles in weaponizing artificial intelligence capabilities.

One of the main challenges in this transition is simple integration. That's something the DoD already isn't good at. To abuse an overused example, the F-22 and F-35, arguably the world's most advanced tactical fighters, cannot communicate via their tactical data links. While they were both developed by Lockheed Martin, their data links use different standards for their waveforms and are not interoperable. To oversimplify two prodigiously complex weapon systems, the F-22 is using AM radio and the F-35 uses FM.

This is partially the government's fault but also the fault of the big defense contractors. Back to my data link example: in the 21st century, these capabilities are software-driven. However, major defense contractors are hardware companies.

During the early years of the American century, they mobilized and bent metal to create some of the last generation's most capable machines. That said, they have a comparative advantage only in producing hardware, not in the software required to fight in the 21st century.

For the DoD to be successful in harnessing AI for the next conflict, it needs to foster relationships with organizations in the tech space that have mastered software development for AI applications.

The key is integrating AI and Big Data capabilities into military applications of all kinds, across the full spectrum of military operations. Global logistics, command and control, persistent ISR, and advanced weapons are all applications for AI that the tech space has not yet touched.

The traditional bloated defense contractor is not resourced for this, nor does it have the right skill sets. Only seasoned developers outside the typical defense industrial base have the know-how to actually succeed with this integration.

AI alone won't compete with Chinese military capabilities. Applying the tenets of Big Data and weaponizing them to field advanced and lethal military capabilities is the future of power competition.

The Chinese are catching up and may one day challenge American global military dominance, but applied AI capabilities and advanced data science just might be the key to preserving American hegemony and protecting American interests at home and abroad.

Alex Hillman is an analyst and engineer in the defense space. A US Air Force Academy graduate, he holds master's degrees in operations research, systems engineering, and flight test engineering, and has previously served in various technical and leadership roles for the USAF. Alex is a graduate of the United States Air Force Test Pilot School and a former US Department of State Critical Language Scholar for Russian.

Disclaimer: The views and opinions expressed here are those of the author and do not necessarily reflect the editorial position of The Defense Post.

The Defense Post aims to publish a wide range of high-quality opinion and analysis from a diverse array of people. Do you want to send us yours? Click here to submit an op-ed.

Original post:

No, AI and Big Data Are Not Going to Win the Next Great Power Competition - The Defense Post

Posted in Ai

Want to Teach An AI Novelty? First, Teach It Monopoly. Then Throw Out the Rules. – ScienceBlog.com

Researchers from the USC Viterbi School of Engineering's Information Sciences Institute (ISI) have partnered with Purdue University to take part in the Defense Advanced Research Projects Agency (DARPA)-funded program that seeks to develop the science that will allow AI systems to adapt to novelty, or new conditions that haven't been seen before.

Take an AI that has been trained to play a standard game of Monopoly. What if you change the rules so that you can buy houses and hotels without first getting a monopoly? What if the game is set to end after 100 turns instead of waiting for bankruptcies? These are both novelties which would affect the optimal strategy to win.
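A sketch of how such rule perturbations might be parameterized; the class and field names are invented for illustration and are not taken from the DARPA program:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MonopolyRules:
    """Defaults follow the standard game; perturb any field to inject novelty."""
    require_monopoly_to_build: bool = True   # houses/hotels need a monopoly
    max_turns: Optional[int] = None          # None = play until bankruptcy
    num_dice: int = 2

standard = MonopolyRules()
novel = MonopolyRules(require_monopoly_to_build=False, max_turns=100)
print(novel)
```

An agent trained only against `standard` has no guarantee of playing sensibly under `novel`; detecting the change and adapting to it is the research problem.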

And yet, as Mayank Kejriwal, the principal investigator on the project and a USC Viterbi research assistant professor, added, even today the most advanced AIs are ill-equipped to deal with this sort of novelty.

"Even though there have been lots of advancements in AI, they are very task-specific," Kejriwal said. "The moment you introduce changes that the AI is not specifically equipped to handle, you have to go back and retrain the program. There is no general AI, something that can adapt to novel situations. We are really in uncharted waters because there is no science of novelty."

"That's the significance of this project," he added. "It's not just about improving some specific AI module. By developing a science of novelty, we are laying the foundation for future generations of AI."

The Science of Artificial Intelligence and Learning for Open-world Novelty (SAIL-ON) program began in November 2019 and will continue until 2023. At the program's end, the Department of Defense hopes to use the research in a range of applications, from autonomous disaster-relief robots to self-driving military vehicles. The USC and Purdue collaborative team has been allocated $1.2 million from DARPA and will likely receive more as the program goes on.

In some respects, AI has already surpassed human capabilities. Kejriwal cited AlphaZero as an example: a computer program that uses machine learning to play board games such as chess and Go, and that can now beat even the most advanced human players.

Unfortunately, because of an inability to handle novelty, most successful applications of AI such as AlphaZero are limited to tasks with fixed rules and objectives.

"If we want AI systems to operate successfully in real-world environments, we need them to handle things they haven't seen before," Kejriwal added; the real world is full of new situations.

"COVID-19 is a perfect example of a novelty," Kejriwal said. "It's not like we are trained to deal with this, but we figured it out and adapted. An AI would not have known what to do."

As an example, he spoke about an AI security system whose purpose was to protect an online retailer from different types of cyber-attacks. When the pandemic caused people to panic-buy toilet paper from the retailer, the AI saw more such requests than ever before. Not understanding the influence of the pandemic, the system assumed it was under attack and blocked all of the valid requests. Faced with this novel situation, the AI was unable to adapt.
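A toy reconstruction of that failure mode, with made-up traffic numbers: a detector calibrated on pre-pandemic request volume will flag a legitimate demand surge exactly as it would a flood attack.

```python
import statistics

# Hourly request counts the detector was calibrated on (pre-pandemic, made up).
baseline = [1000, 1100, 950, 1050, 980, 1020]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def looks_like_attack(requests_per_hour, k=3):
    """Flag traffic more than k standard deviations above 'normal'."""
    return requests_per_hour > mean + k * stdev

# Panic-buying: real customers, but far outside the training distribution,
# so the detector blocks them exactly as it would a flood of bot requests.
print(looks_like_attack(9000))   # True -- a false positive
```

Nothing in the detector distinguishes a pandemic-driven surge from an attack; adapting would require recognizing that the world itself has changed.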

"There are infinitely many possibilities in a real-world environment," Kejriwal said, which means there's no way an AI can anticipate everything that might happen. "Short of anticipating every single possibility, how do you actually learn to deal with novelty in the same way that a human does?" he asked. "In this project, we want to establish an entire paradigm for doing this, which doesn't exist currently."

While the program aims to develop general solutions for handling novelty across many fields, each group chose specific domains for testing. Researchers at ISI are working in the domain of board games, specifically Monopoly, while their counterparts at Purdue focus on ride-sharing.

In the context of Monopoly, as in the real world, there are infinitely many ways to introduce novelty.

In addition to the possible rule changes mentioned previously, Kejriwal explained that you could add more dice, have different paths to choose from, alter the objective of the game, or even introduce incentives for teamwork.

"The AI has to adapt to all of this, and it doesn't know beforehand what types of novelties can happen," he said.

Similarly, for an AI system that governs a ride-sharing app, there are so many possible real-time changes that there's no way to account for them all individually. Vaneet Aggarwal, an associate professor at Purdue and one of the project leaders, talked about the importance of adaptability for AI in this field.

"We want the algorithms to be scalable to different things that happen around us," he said. "It should adapt to different countries, different cities, different rules, as well as any unexpected events like road blockages."

Aggarwal added that the underlying science of novelty developed in the project would be useful for far more than just ride-sharing or game-playing. "It would be applicable in any place where decision-making has to happen under uncertain conditions," he said.

Link:

Want to Teach An AI Novelty? First, Teach It Monopoly. Then Throw Out the Rules. - ScienceBlog.com

Posted in Ai
