The 10 most innovative artificial intelligence companies of 2020 – Fast Company

Artificial intelligence has reached the inflection point where it's less of a trend than a core ingredient across virtually every aspect of computing. These companies are applying the technology to everything from treating strokes to detecting water leaks to understanding fast-food orders. And some of them are designing the AI-ready chips that will unleash even more algorithmic innovations in the years to come.

For enabling the next generation of AI applications with its Intelligence Processing Unit AI chip

As just about every aspect of computing is being transformed by machine learning and other forms of AI, companies can throw intense algorithms at existing CPUs and GPUs. Or they can embrace Graphcore's Intelligence Processing Unit, a next-generation processor designed for AI from the ground up. Capable of reducing the necessary number crunching for tasks such as algorithmic trading from hours to minutes, the Bristol, England, startup's IPUs are now shipping in Dell servers and as an on-demand Microsoft Azure cloud service.

Read more about why Graphcore is one of the Most Innovative Companies of 2020.

For tutoring clients like Chase to fluency in marketing-speak

Ever tempted to click on the exciting discount offered to you in a marketing email? That might be the work of Persado, which uses AI and data science to generate the marketing language that might work best on you. The company's algorithms learn what a brand hopes to convey to potential customers and suggest the most effective approach, and it works. In 2019, Persado signed contracts with large corporations like JPMorgan Chase, which struck a five-year deal to use the company's AI across all its marketing. Persado claims that it has doubled its annual recurring revenue in the last three years.

For becoming a maven in discerning customer intent via messaging apps

We may be a long way from AI being able to replace a friendly and knowledgeable customer-service representative. But LivePerson's Conversational AI is helping companies get more out of their human reps. The machine-learning-infused service routes incoming queries to the best agent, learning as it goes so that it grows more accurate over time. It works over everything from text messaging to WhatsApp to Alexa. With Conversational AI and LivePerson's chat-based support, the company's clients have seen a two-times increase in agent efficiency and a 20% boost in sales conversions compared to voice interactions.

For catalyzing care after a patient's stroke

When a stroke victim arrives at the ER, it can sometimes be hours before they receive treatment. Viz.ai makes an artificial intelligence program that analyzes the patient's CT scan, then organizes all the clinicians and facilities needed to provide treatment. This sets up workflows that happen simultaneously, instead of one at a time, which collapses how long it takes for someone to receive treatment and improves outcomes. Viz.ai says that its hospital customer base grew more than 1,600% in 2019.

For transforming sketches into finished images with its GauGAN technology

GauGAN, named after post-Impressionist painter Paul Gauguin, is a deep-learning model that acts like an AI paintbrush, rapidly converting text descriptions, doodles, or basic sketches into photorealistic, professional-quality images. Nvidia says art directors and concept artists from top film studios and video-game companies are already using GauGAN to prototype ideas and make rapid changes to digital scenery. Computer scientists might also use the tool to create virtual worlds used to train self-driving cars, the company says. The demo video has more than 1.6 million views on YouTube.

For bringing savvy to measuring the value of TV advertising and sponsorship

Conventional wisdom has it that precise targeting and measuring of advertising is the province of digital platforms, not older forms of media. But Hive's AI brings digital-like precision to linear TV. Its algorithms ingest video and identify its subject matter, allowing marketers to associate their ads with relevant content, such as running a car commercial after a chase scene. Hive's Mensio platform, offered in partnership with Bain, melds the company's AI-generated metadata with info from 20 million households to give advertisers new insights into the audiences their messages target.

For moving processing power to the smallest devices, with its low-power chips that handle voice interactions

Semiconductor company Syntiant builds low-power processors designed to run artificial intelligence algorithms. Because the company's chips are so small, they're ideal for bringing more sophisticated algorithms to consumer tech devices, particularly when it comes to voice assistants. Two of Syntiant's processors can now be used with Amazon's Alexa Voice Service, which enables developers to more easily add the popular voice assistant to their own hardware devices without needing to access the cloud. In 2019, Syntiant raised $30 million from the likes of Amazon, Microsoft, Motorola, and Intel Capital.

For plugging leaks that waste water

Wint builds software that can help stop water leaks. That might not sound like a big problem, but in commercial buildings, Wint says that more than 25% of water is wasted, often due to undiscovered leaks. That's why the company launched a machine-learning-based tool that can identify leaks and waste by looking for water-use anomalies. Then, managers for construction sites and commercial facilities are able to shut off the water before pipes burst. In 2019, the company's attention to water leaks helped it grow its revenue by 400%, and it has attracted attention from Fortune 100 companies, one of which reports that Wint has reduced its water consumption by 24%.
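
Wint hasn't published its model, but the core idea it describes, flagging usage readings that break sharply from recent patterns, can be sketched with a simple rolling-statistics check. The snippet below is a minimal illustration of that general approach, assuming interval meter readings; the window size, threshold, and data format are invented for the example.

```python
import numpy as np

def flag_anomalies(flow, window=96, z_threshold=4.0):
    """Flag water-flow readings that deviate sharply from recent history.

    flow: 1-D array of meter readings (e.g., liters per 15-minute interval).
    window: number of trailing readings used as the baseline.
    z_threshold: standard deviations above baseline that count as anomalous.
    """
    flow = np.asarray(flow, dtype=float)
    flags = np.zeros(len(flow), dtype=bool)
    for i in range(window, len(flow)):
        baseline = flow[i - window:i]
        mean, std = baseline.mean(), baseline.std() + 1e-9
        flags[i] = (flow[i] - mean) / std > z_threshold
    return flags

# Example: steady overnight usage followed by a sustained spike (a possible burst pipe)
readings = np.concatenate([np.random.normal(5, 1, 200), np.random.normal(40, 2, 20)])
print(np.where(flag_anomalies(readings))[0])  # indices of suspicious readings
```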

For serving restaurants an intelligent order taker across app, phone, and drive-through

If you've ever ordered food at a drive-through restaurant and discovered that the items you got weren't the ones you asked for, you know that the whole affair is prone to human error. Launched in 2019, Interactions' Guest Experience Platform (GXP) uses AI to accurately field such orders, along with ones made via phone and text. The technology is designed to unflinchingly handle complex custom orders, and yes, it can ask you if you want fries with that. Interactions has already handled 3 million orders for clients you've almost certainly ordered lunch from recently.

For giving birth to Kai (born from the same Stanford research as Siri), who has become a finance whiz

Kasisto makes digital assistants that know a lot about personal finance and know how to talk to human beings. Its technology, called KAI, is the AI brains behind virtual assistants offered by banks and other financial institutions to help their customers get their business done and make better decisions. Kasisto was incubated at the Stanford Research Institute, and KAI branched from the same code base and research that birthed Apple's Siri assistant. Kasisto says nearly 18 million banking customers now have access to KAI through mobile, web, or voice channels.


AI could help to reduce diabetic screening backlog in wake of COVID-19 – AOP

Scientists highlight that machine learning could safely halve the number of images that need to be assessed by humans


The study, which was published in the British Journal of Ophthalmology, used the AI technology EyeArt to analyse 120,000 images from 30,000 patient scans in the English Diabetic Eye Screening Programme.

The technology had 95.7% accuracy for detecting eye damage that would require specialist referral, and 100% accuracy for moderate to severe retinopathy.

The researchers, from St George's, University of London, Moorfields Eye Hospital, UCL, Homerton University Hospital, Gloucestershire Hospitals and Guy's and St Thomas' NHS Foundation Trusts, highlight that the introduction of the technology to the diabetic screening programme could save £10 million per year in England alone.

Professor Alicja Rudnicka, from St George's, University of London, said that using machine learning technology could safely halve the number of images that need to be assessed by humans.

"If this technology is rolled out on a national level, it could immediately reduce the backlog of cases created due to the coronavirus pandemic, potentially saving unnecessary vision loss in the diabetic population," she emphasised.

Moorfields Eye Hospital consultant ophthalmologist Adnan Tufail highlighted that most AI technology is tested by developers or companies, but this research was an independent study involving images from real-world patients.

"The technology is incredibly fast, does not miss a single case of severe diabetic retinopathy and could contribute to healthcare system recovery post-COVID," he said.


No Jitter Roll: AI Routing in the Contact Center, Voice Analytics | No Jitter – No Jitter

This week we share announcements around intelligent contact center routing, voice analytics tools, a secure access service edge (SASE) and Google Cloud integration, a personalized videoconferencing kit, and CPaaS funding.

Nice Aims to Hyper-Personalize Customer Engagement

"For each interaction, Enlighten AI Routing evaluates data from Enlighten AI and other datasets to get a holistic view of the customer and determine the most influential data for that engagement," Nice said in its press release. Likewise, Enlighten AI Routing assesses agent-related data, such as recent training successes, active listening skills, and empathy, to optimize agent assignments, Nice said.

Enlighten AI, which uses machine learning to self-learn and improve datasets with each interaction, then can provide agents with real-time interaction guidance, Nice said. "[Agents] can see the impact of their actions on the customer center and are given advice on how to adjust their tone, speed, and other key behaviors, such as demonstrating ownership, to improve it," Barry Cooper, president of the Nice Workforce and Customer Experience Division, said when introducing the product at Interactions.

TCN Launches Voice Analytics for Contact Center

TCN is initially offering Voice Analytics as a free 60-day trial.

Versa Networks, Google Cloud Team on Integration

Konftel Personalizes Meeting Experience

The Konftel Personal Video Kit, available now, is priced at $279.

IntelePeer Gets Funding Boost

Ryan Daily, No Jitter associate editor, contributed to this article.


AI creates fictional scenes out of real-life photos – Engadget

Researcher Qifeng Chen of Stanford and Intel fed his AI system 5,000 photos from German streets. Then, with some human help, it can build slightly blurry made-up scenes. The image at the top of this article is an example of the network's output.

To create an image a human needs to tell the AI system what goes where. Put a car here, put a building there, place a tree right there. It's paint by numbers and the system generates a wholly unique scene based on that input.
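
The workflow Chen describes is semantic image synthesis: the human supplies a label map saying which object class occupies each region, and a trained network renders it as an image. The sketch below only illustrates that input-output shape with made-up class IDs and a tiny, untrained stand-in generator; Chen's real system is a far larger network trained on the street photos.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 4  # hypothetical classes: 0=road, 1=building, 2=tree, 3=car

# "Paint by numbers": a 64x64 grid where each cell holds a class ID.
label_map = torch.zeros(64, 64, dtype=torch.long)
label_map[40:, :] = 0        # road along the bottom
label_map[:40, :30] = 1      # building on the left
label_map[10:40, 45:60] = 2  # tree on the right
label_map[45:55, 20:35] = 3  # car on the road

# One-hot encode the label map into a (1, NUM_CLASSES, 64, 64) tensor.
one_hot = nn.functional.one_hot(label_map, NUM_CLASSES).permute(2, 0, 1).float().unsqueeze(0)

# Stand-in generator: real systems use deep, trained networks; this tiny
# untrained one only demonstrates the data flow from label map to RGB image.
generator = nn.Sequential(
    nn.Conv2d(NUM_CLASSES, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(32, 3, kernel_size=3, padding=1),
    nn.Sigmoid(),  # RGB values in [0, 1]
)

image = generator(one_hot)  # (1, 3, 64, 64) synthetic "photo"
print(image.shape)
```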

Chen's AI isn't quite good enough to create photorealistic scenes just yet. It doesn't know enough to fill in all those tiny pixels. It's not going to replace the high-end special effects houses that spend months building a world. But, it could be used to create video game and VR worlds where not everything needs to look perfect in the near future.

Intel plans on showing off the tech at the International Conference on Computer Vision in October.


How to Stop Sharing Sensitive Content with AWS AI Services – Computer Business Review


You can use API, CLI, or Console

AWS has released a new tool that allows customers of its AI services to more easily stop sharing their datasets with Amazon for product improvement purposes: something that is currently a default opt-in for many AWS AI services.

Until this week, AWS users had to actively raise a support ticket to opt out of content sharing. (The default opt-in can see AWS take customers' AI workload datasets and store them for its own product development purposes, including outside of the region that end-users had explicitly selected for their own use.)

AWS AI services affected include facial recognition service Amazon Rekognition, voice recording transcription service Amazon Transcribe, natural language processing service Amazon Comprehend and more, listed below.

(AWS users can otherwise choose where data and workloads reside; something that is vital for many for compliance and data sovereignty reasons).

Opting in to sharing is still the default setting for customers: something that appears to have surprised many, as Computer Business Review reported this week.

The company has, however, now updated its opt-out options to make it easier for customers to set opting out as a group-wide policy.

Users can do this in the console, by API or command line.

Users will need permission to run organizations:CreatePolicy

Console:

Command Line Interface (CLI) and API
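
As a rough illustration of the API route, the snippet below creates and attaches an organization-wide AI services opt-out policy with boto3. The policy type and JSON shown follow AWS's documented opt-out policy format at the time of writing, but treat it as a sketch and verify against AWS's current guide before relying on it.

```python
import json
import boto3

# Run with management-account credentials; the AISERVICES_OPT_OUT_POLICY type
# must be enabled for the organization, and the caller needs
# organizations:CreatePolicy and organizations:AttachPolicy permissions.
org = boto3.client("organizations")

# Opt every AI service out of content storage/use for service improvement.
opt_out_content = {
    "services": {
        "default": {
            "opt_out_policy": {"@@assign": "optOut"}
        }
    }
}

policy = org.create_policy(
    Name="ai-services-opt-out",
    Description="Opt out of AWS AI services using content for service improvement",
    Type="AISERVICES_OPT_OUT_POLICY",
    Content=json.dumps(opt_out_content),
)

# Attach at the organization root so the policy applies to all member accounts.
root_id = org.list_roots()["Roots"][0]["Id"]
org.attach_policy(PolicyId=policy["Policy"]["PolicySummary"]["Id"], TargetId=root_id)
```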

Editor's note: AWS has been keen to emphasise a difference between "content" and "data" following our initial report, asking us to correct our claim that AI customer data was being shared by default with Amazon, including sometimes outside selected geographical regions. It is, arguably, a curious distinction. The company appears to want to emphasise that the opt-in is only for AI datasets, which it calls "content".

(As one tech CEO puts it to us: "Only a lawyer that never touched a computer might feel smart enough to venture into content, not data wonderland.")

AWS's own new opt-out page initially disputed that characterisation.

It read: "AWS artificial intelligence (AI) services collect and store data as part of operating and supporting the continuous improvement life cycle of each service."

"As an AWS customer, you can choose to opt out of this process to ensure that your data is not persisted within AWS AI service data stores." [Our italics.]

AWS has since changed the wording on this page to the more anodyne "You can choose to opt out of having your content stored or used for service improvements" and asked us to reflect this. For AWS's full new guide to creating, updating, and deleting AI services opt-out policies, meanwhile, see here.


AI model developed to identify individual birds without tagging – The Guardian

For even the most sharp-eyed of ornithologists, one great tit can look much like another.

But now researchers have built the first artificial intelligence tool capable of identifying individual small birds.

Computers have been trained to learn to recognise dozens of individual birds, which could potentially save scientists arduous hours in the field with binoculars, as well as the catching of birds to fit coloured rings to their legs.

"We show that computers can consistently recognise dozens of individual birds, even though we cannot ourselves tell these individuals apart," said André Ferreira, a PhD student at the Centre for Functional and Evolutionary Ecology (CEFE-CNRS) in France. "In doing so, our study provides the means of overcoming one of the greatest limitations in the study of wild birds: reliably recognising individuals."

Ferreira began exploring the potential of artificial intelligence while in South Africa, where he studied the co-operative behaviour of the sociable weaver, a bird which works with others to build the world's largest nest.

He was keen to understand the contribution of each individual to building the nest, but found it hard to identify individual birds from direct observation because they were often hiding in trees or building parts of the nest out of sight. The AI model was developed to recognise individuals simply from a photograph of their backs while they were busy nest-building.

Together with researchers at the Max Planck Institute of Animal Behaviour, in Germany, Ferreira then demonstrated that the technology can be applied to two of the most commonly-studied species in Europe: wild great tits and captive zebra finches.

For AI models to accurately identify individuals they must be trained with thousands of labelled images. This is easy for companies such as Facebook with access to millions of pictures of people voluntarily tagged by users, but acquiring labelled photographs of birds or animals is more difficult.

The researchers, also from institutes in Portugal and South Africa, overcame this challenge by building feeders with camera traps and sensors. Most birds in the study populations already carried a tag similar to microchips which are implanted in cats and dogs. Antennae on the bird feeders read the identity of the bird from these tags and triggered the cameras.

The AI models were trained with these images and then tested with new images of individuals in different contexts. Here they displayed an accuracy of more than 90% for wild great tits and sociable weavers, and 87% for the captive zebra finches, according to the study in the British Ecological Society journal Methods in Ecology and Evolution.
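
For readers curious what that training step looks like in practice, the sketch below fine-tunes a generic pretrained image classifier on photos labelled by individual bird ID, in the spirit of the study's approach. The folder layout, image size, model choice and epoch count are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout: data/train/<bird_id>/*.jpg, collected by the tag-triggered feeder cameras.
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a network pretrained on generic photos, then replace the final
# layer so it predicts one class per individual bird.
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()

# Evaluation on images from new contexts would then report per-individual
# accuracy, analogous to the ~90% figures described in the study.
```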

While some larger individual animals can be recognised by the human eye because of distinctive patterns (a leopard's spots, or a pine marten's chest markings, for example), AI models have previously been used to identify individual primates, pigs and elephants. But the potential of the models had not been explored in smaller creatures outside the laboratory, such as birds.

According to Ferreira, the lead author of the study, the use of cameras and AI computer models could save on expensive fieldwork, spare animals from procedures such as catching and marking them, and enable much longer-term studies.

"It is not very expensive to put a remote camera on a study population for eight years but for somebody to stay there and do the fieldwork for that long is not always possible," he said. "It removes the need for the human to be a data collector so researchers can spend more time thinking about the questions instead of collecting the data."

The AI model is currently only able to re-identify individuals it has been shown before. But the researchers are now trying to build more powerful AI models that can identify larger study groups of birds, and distinguish individuals even if they have never seen them before. This would enable new individuals without tags to be recorded by the cameras and recognised by the computers.


How AI will help freelancers – VentureBeat

When it comes to artificial intelligence (AI), there's a common question: Will AI technology render certain jobs obsolete? The common prediction is yes, many jobs will be lost as a result. But what do the numbers say? Even more importantly, what does logical thought suggest?

You don't need a special degree to understand the textbook relationship between automation and jobs. The basic theory is that for every job that's automated by technology, there's one less job available to a human who once performed the same task. In other words, if a machine can make a milkshake with the push of a button, then there's no need for the person who previously mixed the shake by hand. If a robot can put a car door on a vehicle in a manufacturing plant, then there's no need for the workers who previously placed the doors on by hand. You get the idea.

But does this theory really hold up on a reliable basis, or is it merely a theory that works in textbooks and PowerPoint presentations? One study or report doesn't discount a theory, but a quick glance at some recent numbers paints a different picture that requires a careful look at this pressing issue.

Upwork, an online platform that connects freelancers with clients, recently published data from its website that shows AI was the second-fastest-growing in-demand skill over the first quarter of 2017.

"With artificial intelligence (AI) at the forefront of the conversation around what the future of work holds, it's no surprise it is the fastest-growing tech skill and the second fastest-growing skill overall," Upwork explained in a statement. "As AI continues to shape our world and influence nearly every major industry and country, companies are thinking about how to incorporate it into their business strategies, and professionals are developing skills to capitalize upon this accelerating tech trend. While some speculate that AI may be taking jobs away, others argue it's creating opportunity, which is evidenced by demand for freelancers with this skill."

This latter opinion might be a contrarian view, but in this case, the data supports it. At this point, there really isn't a whole lot of data that says AI is killing jobs. Consider that as you form your opinions.

From a logical point of view, you have to consider the fact that the professional services industry isn't going anywhere. Sure, there might be automated website builders and accounting software, but the demand for freelancers isn't going to suddenly disappear. Growth in AI isn't going to replace web developers, accountants, lawyers, and consultants. If anything, it's going to assist them and make them more efficient and profitable.

"Everything we love about civilization is a product of intelligence," says Max Tegmark, president of the Future of Life Institute. "So amplifying our human intelligence with artificial intelligence has the potential of helping civilization flourish like never before, as long as we manage to keep the technology beneficial."

When you look at freelancers, in particular, you can already see how AI is having a positive impact in the form of powerful tools and resources that can complement and expand existing skillsets. Here are a couple of tools:

We've only just begun. These technologies will look totally rudimentary when we look back in a few years and recap the growth of AI. However, for now, they serve as an example of what the future holds for the freelance labor market.

The future of work, particularly as it deals with expected growth in AI, is anybody's guess. Ask AI expert Will Lee about his expectations and he'll say there are two possible futures:

The first future Lee sees is one where AI has led to high unemployment and people are forced to freelance in order to get by. The only problem is it will be difficult for freelancers to differentiate themselves from the crowd because they'll be offering the same exact services as everyone else.

In this first possibility, people struggle to recognize their value and the uneducated freelance labor force is swallowed up by superior automated technology. But then there's a second possibility, where AI technology actually fuels growth in the freelance economy and humans and machines harmoniously work together.

"In the second possibility, we've built a sustainable freelance market based on each individual's special skills and passions," Lee says. "Each freelancer is able to find work and make a living due to their ability to differentiate themselves from others in the freelance market."

Experts in the AI field, folks like Will Lee who dedicate their working lives to understanding the impact technology will have on labor, don't know how this is going to unfold. It could be disastrous, or it could be highly beneficial. Making rash statements about how AI is going to collapse the freelance economy is unwise. You don't know how things will unfold, and it's better to remain optimistic that everything will work out for the greater good.

One thing is certainly clear: Technology is changing, and the ramifications of this evolution will be felt in every capacity of business. We'll know a lot more in three to five years, so hold on and enjoy the ride.

Larry Alton is a contributing writer at VentureBeat covering artificial intelligence.


Google to launch AI Research Lab in Bengaluru – Economic Times

Google is launching an Artificial Intelligence (AI) Research Lab in Bengaluru in order to create products not just for India but for the rest of the world, the Mountain View-headquartered Internet giant said during its flagship Google for India event on Thursday. The Lab will be led by Manish Gupta, an ACM (Association for Computing Machinery) Fellow.

The slew of new initiatives in India also includes a tie-up with state-run BSNL for expanding Wi-Fi hotspots in villages in Gujarat, Bihar and Maharashtra. This comes after the company launched a project to connect 500 railway stations in the country; it has since claimed to have connected close to 5,000 venues across four continents. Google also announced a phone line, in partnership with Vodafone Idea, that lets users get their queries answered in English and Hindi even from a 2G phone. The firm has also added an array of local Indian languages across its products such as Search, Bolo, Discover and Google Assistant, among others.

"We want to adapt our products for Indians instead of asking Indians to adapt to Google technology," said Caesar Sengupta, vice president, Next Billion Users and Payments, Google, adding that when the company creates products for India, it creates for the world.

Ravi Shankar Prasad, union minister for electronics and IT, said that during his meeting with Google chief Sundar Pichai in Mountain View last week, he asked him to make India a launch pad for products. "I told him that what becomes successful in India will become successful globally. I am very happy that they are doing that," added Prasad.

Google is also expanding its Google Pay payment app, which already has 67 million users and has processed transactions worth $110 billion so far, with a new app for business. Google is also bringing its existing tokenisation technology to India, which will make payments available to debit and credit card holders through a system of tokenised cards, paying for things with a digital token on the phone rather than the actual card number.

It can be used to pay merchants that accept the Bharat QR code and those that have NFC, by just tapping the card. Google said that this will make payments much more secure, apart from making online payment much more seamless by doing away with the need to enter card details. It will be rolled out over the next few weeks for Visa cards in partnership with banks such as HDFC Bank, Axis Bank and SBI Cards, among others. The feature will be rolled out with Mastercard and RuPay cards in the coming months.

It is also launching a Spot Platform, which enables merchants to create exclusive shop fronts within the Google Pay app using simple JavaScript, saving them from building expensive websites or applications. Applications such as UrbanClap, Goibibo, MakeMyTrip, RedBus, Eat.Fit and Oven Story are already on board the Spot Platform. Google will also extend entry-level neighbourhood job search through the Spot platform on the Google Pay app. This will mainly target the unorganised segment in areas such as retail and hospitality, added Sengupta. Google is partnering with NSDC for this initiative, along with firms such as Swiggy, and it is first being rolled out in Delhi-NCR before a nationwide rollout.


Toby Wals’ 2062: The World that AI made – The Tribune

Joseph Jude

What makes us, humans, remarkable? Is it our physical strength? Or is it our intelligence? Perhaps our desire to live in communities? Some of our cousins in the animal kingdom have all those qualities. So what differentiates us?

Our early ancestors were limited in their ability to learn life-skills for two reasons: they had to be physically around those who knew the skills, and they had to learn only through signs. When we started to speak, learning became easier and faster. When we invented writing, ideas spread wider. Hence language is the differentiating feature, Prof Toby Walsh argues in this book.

Even with speech and writing, learning suffers in a vital way. A speaker translates thought into speech; likewise, the listener turns what is heard back into thought. Information is lost in these multiple translations. Digital companies like Tesla and Apple have eliminated such loss of information. They can transmit what one device learns to millions of other devices as code. Every device immediately knows what every other device learns, without the need for translation.

Professor Walsh expands this idea to imagine the next species in our homo family: Homo digitalis. He says homo digitalis will be both biological and digital, living both in our brains and in the broader digital space.

In exploring this parallel living more deeply, he debunks the possibility of the singularity, the anticipated time when we will have an intelligent machine that can cognitively improve itself. Only futurists and philosophers believe the singularity is inevitable. AI researchers, on the other hand, think we will have machines that reach human-level intelligence while surpassing our intelligence in several narrow domains. Experts believe we would have such machines by around 2062, which explains the title of the book. The book focuses on the impact such machines will have on our lives. Interwoven into this focus are the steps we can take now to shape that future for the better.

Professor Walsh deals with ramifications of AI-augmented humans on work, privacy, politics, equality, and wars. He paints neither a dystopian nor a utopian future. As an academician, he carefully constructs his arguments on research, data, and trends.

When we ponder artificial intelligence, we conjure up an intellect superior to humans. What might come as a surprise is how artificial machine intelligence will be. Take flight, for example: we fly, but not like birds. A machine might simulate a storm, but it won't get wet. So values and fairness determined by machines will be unnatural. This should motivate us to take steps to shape a better future, one that isn't unnatural.

The book excels in painting a realistic picture of an AI-based future. However, it falters on the steps we can take to avoid a disastrous fate. The author promotes government regulation as a primary tool to control AI. Did UN-backed regulations prevent monstrous state actors from acquiring chemical weapons and using them on their citizens? As and when lethal autonomous weapons are manufactured, how difficult would it be for determined non-state actors to obtain them? Another mistake is treating AI as a singular technology. Governments should regulate AI not at the input side (collecting, storing, and processing data), but where it is used: loan processing, police departments, weapons, and so on.

There is no question that technology as powerful as AI should be regulated. But regulation alone wont work. We need a comprehensive approach that involves academia, citizen activists, steering groups and task forces. The Internet is one example around us.

One data point in the book should hit Indians the hardest. I wish it becomes the wake-up call for policymakers, entrepreneurs, and concerned citizens: Tianjin, a single city in China, outspends India on AI. The best time to invest in AI was yesterday; the next best time is now.


Storytelling & Diversity: The AI Edge In LA – Forbes

LA is known as the land of storytellers, but when it comes to its own story, the entertainment business is still front and center. In fact, LA has been at the core of a flourishing AI scene for decades. From the 1920s through today, elite mathematicians and engineers have been putting their knowledge to work for a multitude of industries such as health, aerospace, and media, with relatively little visibility in the tech limelight.

Now, these industries are poised to bring together a convergence of knowledge across cutting edge technologies and LA may finally have its day in the spotlight as a focal point for cross-disciplinary innovation.

LA's history in technology has its roots in the aerospace world: because of its perfect weather and vast open spaces, the region became an ideal setting for the aerospace industry to plant itself in the early 1900s.

The engineering know-how and nature of aviation work fueled the manufacturing sector in Southern California during WWII and eventually became the birthplace to the internet as we know it, when UCLA, funded by the Department of Defense, sent the first virtual message via ARPANET in the same year we first landed a man on the moon.


Through busts and booms, engineering talent was both attracted to the area and nurtured at many well known and respected educational institutions such as Caltech, USC, and UCLA, helping to augment the labor pool as well as becoming important sources of R&D.

This engineering talent continued to extend its branches out into other industries, such as health and wellness, which are natural extensions for a population already obsessed with youth, fitness and body perfection.

Today, LA sits as a unifying center for life sciences, entertainment, media, and aerospace with frontier technologies such as AI pushing innovation across these core industries and providing a platform for new discoveries, cures, and social interactions.

Dave Whelan, chief executive officer of BioscienceLA, believes diversity is LA's secret weapon when it comes to its potential to become the global epicenter for AI innovation. He notes LA's widely diverse diaspora, which makes it a perfect place to train AI.

"The entire world's global population resides in LA. If you look at AI for healthcare, you have the raw materials in patient and health data that provide the widest range of possibilities. Combine that with the mix of the creative workforce, diversity of economies, and SoCal mindset, all together making LA a prime center for innovation that has yet to rightly take its place in the sun when compared to the attention that Silicon Valley receives."

The AI opportunity to save lives is particularly meaningful, especially in today's pandemic times. How do we apply AI in a way that can help with early detection, identify clusters, sequence DNA, or source the right treatments? Many aspects of life sciences are computational, and mathematical biologists have been entrenched in LA for some time providing services such as computational epidemiology, which is a multidisciplinary field that leverages computer science and mathematics to understand the spread of diseases and other public health issues.
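
To make "computational epidemiology" concrete, here is the kind of toy model the field builds on: a classic SIR (susceptible-infected-recovered) simulation of disease spread. The parameters below are invented for illustration and are not drawn from any real outbreak or from the work of the people quoted here.

```python
# A minimal SIR simulation: track how many people are susceptible (s),
# infected (i), and recovered (r) as a disease spreads through a population.
def simulate_sir(population=1_000_000, initial_infected=10,
                 beta=0.3, gamma=0.1, days=180):
    s, i, r = population - initial_infected, initial_infected, 0
    history = []
    for day in range(days):
        new_infections = beta * s * i / population   # contacts that transmit
        new_recoveries = gamma * i                   # infected people recovering
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((day, s, i, r))
    return history

peak_day, _, peak_infected, _ = max(simulate_sir(), key=lambda row: row[2])
print(f"Epidemic peaks around day {peak_day} with ~{peak_infected:,.0f} people infected")
```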

Brian Dolan, CEO and founder of VerdantAI, who has his roots in statistical genetics and biomathematics, has seen the converging evolution of the tech scene in LA and is actively committed to building out the AI ecosystem. His startup studio is focused on launching new AI companies into the market and partnering with large enterprises to help them turn their data into products.

"It's not hard to argue that now is the time to focus on the big problems, like COVID and climate change. We need curious, dedicated, intelligent people to take these things on, and Los Angeles certainly offers that kind of talent. Our innovation diversity goes beyond demographics and into industries, geographies, and even ecologies. No other city can really offer that."

Brian's previous company, Deep 6 AI, applies artificial intelligence to the clinical trial process by finding patients for medical trials and getting life-saving cures to people more quickly. Today, Brian and his team at Verdant are incubating technologies to optimize carbon-neutral supply chain networks, leveraging advanced medical NLP technology to read medical texts and create precision digital health experiences, and working on a mental health solution aimed at addiction and recovery.

Building a thriving ecosystem takes time and imagination. AI is both a disruptive force and a major opportunity, but dispelling the myths around AI is important in order to map out its impact and full potential.

Ronni Kimm, founder of Collective Future, uses future visioning to help bring outside perspectives into organizations. Future visioning is important for accelerating innovation as it provides the ability to respond to, and proactively be part of, the stories of change. Her design and innovation studio helps bring strategic transformation to companies from a top-down and bottom-up perspective.


"Health sciences and life sciences have some of the most interesting challenges in the world, but there are not enough stories to help people understand how powerful approaches such as predictive analytics in health science can dramatically impact successful organ transplants or predict at-risk patient complications," says Ronni. "I see storytelling as one of the most important aspects of accelerating technology; creating more stories around these incredible innovations is where LA can excel in building resilient ecosystems and bringing more of these technologies to market."

Today LA sits at the center of multiple industries, where talent pools cross-pollinate and inspire new ideas. Its diverse and colorful population offers data not readily available in other geographies, making it ideal for big data applications that leverage AI. Its educational institutions feed and train new labor pools and its proximity to creative fields inspires new ways to leverage technology in traditional industries.

Ideas such as bringing the spatial web to life, holograms to offer new methods of care, and digital twins to create cross reality environments are just some of the ideas coming to life in LA.

As technology continues to advance, be sure to be on the lookout for more stories about the rise and influence of AI across these massive industries.


The imaging AI field is exploding, but it carries unique challenges – Healthcare IT News

The use of machine learning and artificial intelligence to analyze medical imaging data has grown significantly in recent years with 60 new products approved by the U.S. Food and Drug Administration in 2020 alone.

But AI scaling, particularly in the medical field, can face unique challenges, said Elad Benjamin, general manager of Radiology and AI Informatics at Philips, during an Amazon Web Services presentation this week.

"The AI field is experiencing an amazing resurgence of applications and tools and products and companies," said Benjamin.

The question many companies are seeking to answer is: "How do we use machine learning and deep learning to analyze medical imaging data and identify relevant clinical findings that should be highlighted to radiologists or other imaging providers to provide them with the decision support tools they need?" Benjamin asked.

He outlined three main business models being pursued:

Benjamin described common challenges and bottlenecks in the process of developing and marketing AI tools, noting that some were specifically hard to tackle in healthcare.

Gathering data at scale is one hurdle, he noted, and diversity of information is critical and sometimes difficult to achieve.

And labeling data, for instance, is the most expensive and time-consuming process, and requires a professional's perspective (as opposed to in other industries, when a layperson could label an image as a "car" or a "street" without too much trouble).

Receiving feedback and monitoring are critical too.

"You need to understand how your AI tools are behaving in the real world," Benjamin said. "Are there certain subpopulations where they are less effective? Are they slowly reducing in their quality because of a new scanner or a different patient population that has suddenly come into the fold?"

Benjamin said Philips with the help of AWS tools such as HealthLake, SageMaker and Comprehend is tackling these bottlenecks.

"Without solving these challenges it is difficult to scale AI in the healthcare domain," he said.

Kat Jercich is senior editor of Healthcare IT News. Twitter: @kjercich. Email: kjercich@himss.org. Healthcare IT News is a HIMSS Media publication.


AI cameras may be used to detect social distancing as US is reopening – Business Insider

As businesses across the United States have gradually begun to reopen, a growing number of companies are investing in camera technology powered by artificial intelligence to help enforce social distancing measures when people may be standing too closely together.

"[If] I want to manage the distance between consumers standing in a line, a manager can't be in all places at once," Leslie Hand, vice president of retail insights for the International Data Corporation, told Business Insider. "Having a digital helper that's advising you when folks are perhaps in need of some advice is useful."

Businesses throughout the country have started operating again under restrictions, such as enforcing social distancing measures, requiring customers to wear masks, and reducing capacity. New York City, which was the epicenter of the virus' outbreak in the US, is set to enter Phase II of its reopening plan on Monday.

The White House's employer guidelines for all phases of reopening include developing policies informed by best practices, particularly social distancing. And some experts believe smart cameras can help retailers and other companies detect whether such protocols are being followed.

"There's some technology coming out on the horizon that will be able to be incorporated into the nuts and bolts that you already have in your store," Barrie Scardina, head of Americas retail for commercial real estate services firm Cushman & Wakefield, said to Business Insider.

Some companies have already begun experimenting with such technologies. Amazon said on June 16 that it developed a camera system that's being implemented in some warehouses to detect whether workers are following social distancing guidelines. The company's so-called "Distance Assistant" consists of a camera, a 50-inch monitor, and a local computing device, which uses depth sensors to calculate distances between employees.

When a person walks by the camera, the monitor would show whether that person is standing six feet apart from nearby colleagues by overlaying a green or red circle around the person. Green would indicate the person is properly socially distanced, while red would suggest the people on camera may be too close together. Amazon is open-sourcing the technology so that other companies can implement it as well.
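
Stripped of the hardware, the check described above is a pairwise-distance test against a six-foot threshold. Below is a minimal sketch of that logic, assuming person positions have already been extracted from the depth sensor; the coordinate format and threshold handling are illustrative, not Amazon's released code.

```python
import math

SIX_FEET_METERS = 1.83  # six feet expressed in meters

def label_people(positions, threshold=SIX_FEET_METERS):
    """Return 'green'/'red' for each detected person.

    positions: list of (x, z) floor coordinates in meters, e.g. derived from
    a depth sensor after detecting people in the camera frame.
    """
    labels = ["green"] * len(positions)
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if math.dist(positions[i], positions[j]) < threshold:
                labels[i] = labels[j] = "red"  # too close: flag both people
    return labels

# Example frame: two people one meter apart, one person standing well away
print(label_people([(0.0, 2.0), (1.0, 2.0), (5.0, 6.0)]))  # ['red', 'red', 'green']
```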

Motorola Solutions also announced new analytics technology in May that enables its Avigilon security cameras to detect whether people are social distancing and wearing masks. The system uses AI to collect footage and statistical patterns that can be used to provide notifications to organizations about when guidelines around wearing face masks or honoring social distancing measures are being breached.

Pepper Construction, a Chicago-based construction company, has also begun using software from a company called SmartVid.io to keep an eye on where workers may be grouping, as Reuters reported in late April.

Scardina offered some examples illustrating how smart cameras can help retailers enforce social distancing. Workers can use such technologies to see where customers are clustering so that they can make decisions about how to arrange furniture and fixtures within the store. If a table needs to be moved further away from another display because customers don't have space to stand six feet apart, AI camera technology can help retailers spot this.

As far as how widespread that technology will become in stores, Scardina says it will depend on factors such as a retailer's budget and the size of the shop.

While more companies may be investing in either developing or implementing new camera technologies, there will inevitably be challenges that arise when putting them into practice, says Pieter J. den Hamer, senior director of artificial intelligence for Gartner Research.

Not only could implementing such tech raise privacy concerns, but there are also practical limitations. A camera may not know if two people standing close together belong to the same household, for example.

All 50 states have reopened at some capacity, putting an end to stay-at-home orders that had been in effect since March to curb the coronavirus' spread, and some states are now seeing a spike in cases. The New York Times recently reported that at least 14 states have experienced positive cases that have outpaced the average number of administered tests.

The coronavirus has killed at least 117,000 people in the US and infected more than 2.1 million as of June 18, according to the Times, and experts predict there will be a second wave. But President Trump has said the country won't be closing again.

"It's a very, very complex debate full of dilemmas," den Hamer said. "Should we prioritize opening up the economy, or should we prioritize the protection of our privacy?"


Creativity and AI: The Next Step – Scientific American

In 1997 IBM's Deep Blue famously defeated chess Grand Master Garry Kasparov after a titanic battle. It had actually lost to him the previous year, though he conceded that it seemed to possess a weird kind of intelligence. To play Kasparov, Deep Blue had been pre-programmed with intricate software, including an extensive playbook with moves for openings, middle game and endgame.

Twenty years later, in 2017, Google unleashed AlphaGo Zero which, unlike Deep Blue, was entirely self-taught. It was given only the basic rules of the far more difficult game of Go, without any sample games to study, and worked out all its strategies from scratch by playing millions of times against itself. This freed it to think in its own way.

These are the two main sorts of AI around at present. Symbolic machines like Deep Blue are programmed to reason as humans do, working through a series of logical steps to solve specific problems. An example is a medical diagnosis system in which a machine deduces a patient's illness from data by working through a decision tree of possibilities.

Artificial neural networks like AlphaGo Zero are loosely inspired by the wiring of the neurons in the human brain and need far less human input. Their forte is learning, which they do by analyzing huge amounts of input data or rules such as the rules of chess or Go. They have had notable success in recognizing faces and patterns in data and also power driverless cars. The big problem is that scientists don't know as yet why they work as they do.

But it's the art, literature and music that the two systems create that really points up the difference between them. Symbolic machines can create highly interesting work, having been fed enormous amounts of material and programmed to do so. Far more exciting are artificial neural networks, which actually teach themselves and which can therefore be said to be more truly creative.

Symbolic AI produces art that is recognizable to the human eye as art, but it's art that has been pre-programmed. There are no surprises. Harold Cohen's AARON algorithm produces rather beautiful paintings using templates that have been programmed into it. Similarly, Simon Colton at Goldsmiths, University of London, programs The Painting Fool to create a likeness of a sitter in a particular style. But neither of these ever leaps beyond its program.

Artificial neural networks are far more experimental and unpredictable. The work springs from the machine itself without any human intervention. Alexander Mordvintsev set the ball rolling with his Deep Dream and its nightmare images, spawned from convolutional neural networks (ConvNets), that seem almost to spring from the machine's unconscious. Then there's Ian Goodfellow's GAN (Generative Adversarial Network), with the machine acting as the judge of its own creations, and Ahmed Elgammal's CAN (Creative Adversarial Network), which creates styles of art never seen before. All of these generate far more challenging and difficult works: the machine's idea of art, not ours. Rather than being a tool, the machine participates in the creation.
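
For readers unfamiliar with the GAN idea mentioned above, the sketch below shows the core training loop on toy one-dimensional data: a generator produces fakes, a discriminator judges real against fake, and each improves against the other. It is a minimal, assumed-PyTorch illustration of the technique, not the code behind any of the art systems described.

```python
import torch
import torch.nn as nn

# Toy GAN: learn to generate samples resembling a 1-D Gaussian centered at 3.
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # samples from the "real" distribution
    fake = generator(torch.randn(64, 8))    # the generator's current attempts

    # Discriminator: judge real samples as 1 and fakes as 0.
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator call its fakes real.
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, generated values should cluster roughly around 3.0.
print(generator(torch.randn(5, 8)).detach().flatten())
```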

In AI-created music the contrast is even starker. On the one hand, we have François Pachet's Flow Machines, loaded with software to produce sumptuous original melodies, including a well-reviewed album. On the other, researchers at Google use artificial neural networks to produce music unaided. But at the moment their music tends to lose momentum after only a minute or so.

AI-created literature illustrates best of all the difference in what can be created by the two types of machines. Symbolic machines are loaded with software and rules for using it and trained to generate material of a specific sort, such as Reuters news reports and weather reports. A symbolic machine equipped with a database of puns and jokes generates more of the same, giving us, for example, a corpus of machine-generated knock-knock jokes. But as with art their literary products are in line with what we would expect.

Artificial neural networks have no such restrictions. Ross Goodwin, now at Google, trained an artificial neural network on a corpus of scripts from science fiction films, then instructed it to create sequences of words. The result was the fairly gnomic screenplay for his film Sunspring. With such a lack of constraints, artificial neural networks tend to produce work that seems obscure, or should we say experimental? This sort of machine ventures into territory beyond that of our present understanding of language and can open our minds to a realm often designated as nonsense. NYU's Allison Parrish, a composer of computer poetry, explores the line between sense and nonsense. Thus, artificial neural networks can spark human ingenuity. They can introduce us to new ideas and boost our own creativity.

Proponents of symbolic machines argue that the human brain too is loaded with software, accumulated from the moment we are born, which means that symbolic machines can also lay claim to emulating the brain's structure. Symbolic machines, however, are programmed to reason from the start.

Conversely, proponents of artificial neural networks argue that, like children, machines need first to learn before they can reason. Artificial neural networks learn from the data they've been trained on but are inflexible in that they can only work from the data that they have.

To put it simply, artificial neural networks are built to learn and symbolic machines to reason, but with the proper software they can each do a little of the other. An artificial neural network powering a driverless car, for example, needs to have the data for every possible contingency programmed into it so that when it sees a bright light in front of it, it can recognize whether it's a bright sky or a white vehicle, in order to avoid a fatal accident.

What is needed is to develop a machine that includes the best features of both symbolic machines and artificial neural networks. Some computer scientists are currently moving in that direction, looking for options that offer a broader and more flexible intelligence than neural networks by combining them with the key features of symbolic machines.

At DeepMind in London, scientists are developing a new sort of artificial neural network that can learn to form relationships in raw input data and represent them in logical form as a decision tree, as in a symbolic machine. In other words, they're trying to build in flexible reasoning. In a purely symbolic machine all this would have to be programmed in by hand, whereas the hybrid artificial neural network does it by itself.

In this way, combining the two systems could lead to more intelligent solutions and also to forms of art, literature and music that are more accessible to human audiences while also being experimental, challenging, unpredictable and fun.


With 5G+AI Twin Engines – Qualcomm, WIMI and Samsung Bring New Opportunities to the Industry – Yahoo Finance

HONG KONG, CHINA / ACCESSWIRE / July 21, 2020 / The arrival of 5G will bring new growth opportunities for the market, and AI will usher in a new wave of growth in the 5G era. 4G technology brought opportunities for the server market to accelerate, and 5G is expected to continue that tradition and give the server industry good long-term prospects. Undeniably, the rollout of 4G drove an increase in users, and operators invested heavily in building data centers to meet user demand, which led to a wave of server procurement. Compared with 3G and 4G, 5G improves speed by about 10 times, a qualitative leap for the development of the server market. In the future, 5G rates are expected to increase by tens of times, which will undoubtedly inject more vitality into the market. For example, industries that were previously limited by data-processing speed are expected to break through bottlenecks and achieve substantial growth. Once these new industries take off because of the combination of AI and 5G, the amount of data generated will be hundreds of times greater than in the 4G era.

So, what does 5G bring to AI? This can be explained in three ways.

The first is data. The rapid development of AI is built on big data, which has become the massive learning material for AI systems. 5G provides the base for AI to create more data: AI by its nature needs more data, and 5G can increase data volume a hundredfold while making data structures more diversified and complex. While 5G and AI support each other, the problem at hand is that computing power has not yet broken through, and how to process data more efficiently is another topic.

At the control level, with the finalization of the R16 standard and the advance of R17, 5G's massive-connection features are better supported. With 5G, we have access to more devices, more devices that AI can control, and correspondingly more scenarios for AI. Indoors, users can now control more kinds of household appliances, from TVs and lights to refrigerators and purifiers; outdoors, we can control the car. Where we could once control only the mobile phone, cars, wearable devices and more have now joined in. This allows the boundaries of AI control to be greatly expanded, but the depth of control is still limited.

Finally, 5G is even more important in practical applications. For example, AI is not widely used in mobile phones now; intelligent voice is an important function, and mobile phone manufacturers are pushing personal AI assistants, but they are not smart enough at all. A large part of the reason is that the data is too limited.

With the rapid development and maturity of AI technology, more and more industries are combining with artificial intelligence to seek greater development. The main advantages of combining various industries with artificial intelligence are breakthroughs in algorithms, computing power, data, products, engineering and solutions. The new fields of artificial intelligence, big data and cloud computing, with fast deployment and large market space, have attracted many resources in recent years. Giants in various industries, new algorithm companies and start-ups are all actively planning for the 5G era.

Qualcomm

For more than a decade, Qualcomm has been working on AI to empower many industries. In the wave of 5G and AI innovation, chips are an important part of the industrial chain.

Qualcomm has a solid technology accumulation in the fields of AI, mobile computing and connectivity, combining leading 5G connectivity with AI research and development, and has a complete cloud-to-end AI solution. In this process, Qualcomm has formed a close connection with the AI industry and established solid partnerships with a number of leading AI ecosystem partners in China to jointly build the future of artificial intelligence. In 2018, Qualcomm established Qualcomm AI Research to further consolidate the company's internal research on cutting-edge artificial intelligence. That same year, Qualcomm set up a $100 million AI venture capital fund to invest in start-ups revolutionizing AI technology around the world. At present, Qualcomm Ventures has invested in a number of leading AI innovation enterprises in China.


Qualcomm has been in China for more than 20 years and established a research and development center in Shanghai in 2010. In 2016, Qualcomm established its first semiconductor manufacturing test facility in the world, Qualcomm Communications Technology (Shanghai) Co., Ltd., in the Pudong New Area, bringing its internationally advanced products and technologies to China and demonstrating Qualcomm's commitment to continued investment in China, closer integration with Chinese industries, and serving its customers. In addition, Qualcomm is working with Chinese partners, including Shanghai enterprises, on innovations in 5G, artificial intelligence, cloud computing, big data, and other fields, so as to promote the development of 5G and artificial intelligence in Shanghai, boost "new infrastructure," and advance China's new technology industries and digital economy.

WiMi Hologram Cloud

Compared with the 4G era, 5G networks offer higher speeds, lower latency, and massive connectivity: speeds tens of times faster than 4G, latency below 1 ms, and more than 50 billion connected devices worldwide. On this basis, the three 5G application scenarios have been defined: enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (uRLLC), and massive machine-type communication (mMTC). It is these three scenarios that have given rise, in the 5G era, to applications such as AR holography, autonomous driving, telemedicine, and the interconnection of everything. Extending carrier-grade cellular communication from person-to-person interaction to communication between things may lead to a new revolution in human society.

The hologram industry has broad prospects and great potential, with explosive growth expected in the future. By 2025, China's holographic cloud market is expected to exceed RMB 450 billion, growing about 78% annually, while the global holographic cloud market is expected to exceed USD 500 billion, growing about 68% annually.

WiMi Hologram Cloud, a representative domestic visual-AI enterprise, operates across multiple links of the holographic AR technology chain, including holographic visual presentation, holographic interactive software development, holographic AI computer-vision synthesis, holographic AR online advertising, holographic contactless AR SDK payment, 5G holographic communication software development, and holographic AI face recognition and face-swapping development.

Thanks to the bandwidth gains of 5G communication networks, high-end holographic applications are increasingly used in social media, communication, navigation, home applications, and other scenarios. WiMi Hologram Cloud is building a holographic cloud platform over 5G networks based on two core technologies: holographic AI facial recognition and holographic AI facial modification.

WiMi Hologram Cloud plans to keep improving and strengthening its existing technologies, maintain its industry lead, and create an ecosystem-based business model. Its holographic face recognition and face-swapping technologies are currently applied in its existing holographic advertising and entertainment businesses, and the technology is being upgraded to break into more areas of the industry. WiMi Hologram Cloud aims to build a commercial ecosystem based on holographic applications.

WiMi Hologram Cloud boasts world-leading 3D computer-vision technology and SaaS platform technology. It uses AI algorithms to turn ordinary images into holographic 3D content, which is widely used in holographic advertising, entertainment, education, communication, and other fields. With core technologies such as holographic face recognition, holographic face swapping, and holographic digital life, the company is seeking market collaboration and investment opportunities around the world. In the future, WiMi Hologram Cloud aims to expand its hologram ecosystem in international markets and become a global leader in the hologram cloud industry.

With the advent of the 5G era, the industry believes that holographic communication can use 5G's high network speeds to transmit data-heavy 3D video signals, presenting a more realistic world to users, delivering a qualitative leap in interactivity, and potentially becoming a disruptive technology for online social interaction. Tech giants such as Samsung and Facebook are already conducting research and development in this field, a sign of its broad application prospects. The number of domestic companies working on holographic projection has also grown sharply; according to available statistics it has surpassed a thousand, and the market capacity has risen to the ten-billion level.

Samsung

Samsung introduced Digital Cockpit 2020 at CES. It uses 5G to link the vehicle's internal and external functions and provide a connected experience for drivers and passengers. This is the third joint development between Samsung Electronics and Harman, combining Samsung's strengths in communications technology, semiconductors, and displays with Harman's automotive expertise, and it lets users in the car interact seamlessly with the office and the home.

On the AI front, Samsung's CES showing was a classic case of thinking big with something small: its Ballie AI robot is only a little bigger than a baseball, yet it attracted remarkable attention because of the wide range of applications it opens up.

The chubby AI robot moves by rolling and acts as a steward for the home AIoT system. Ballie is controlled by a smartphone and equipped with artificial intelligence features, voice operation, and a built-in camera to recognize and respond to users and help them with a variety of home tasks. It can respond to requests much like a pet, and can serve as a wake-up call, fitness assistant, or time keeper, or manage other smart devices in the home (such as TVs and vacuum cleaners).

At CES, Samsung laid out its own understanding of, and R&D achievements around, 5G and AI, two of the most important technology trends. Whether in tablets, vehicle operating systems, or small AI robots, Samsung signaled that it will pour innovation into 5G plus AI across the board. Its existing results are already attractive, but Samsung will clearly bring even more possibilities.

The mutual empowerment of 5G and artificial intelligence will bring new growth opportunities for the Internet of Things. AI use cases for IoT span consumer, industrial/enterprise, and smart-city settings, including manufacturing automation and robotics, home and enterprise intelligent security, smart displays and speakers, smart agriculture, smart-home control hubs and appliances, sustainable cities and infrastructure, and digital logistics and retail.

In the future, 5G and AI will also touch every aspect of life and many industries, including education, healthcare, retail, manufacturing, and transportation. According to industry estimates, the adoption rate of artificial intelligence in key segments such as smartphones, PCs/tablets, extended reality (XR), cars, and the Internet of Things will rise from less than 10 percent last year to 100 percent by 2025. Driven by this trend, on-device AI will become a standard feature on many key platforms. 5G and AI will also bring huge economic benefits: as 5G becomes fully commercialized, it will empower many industries and generate up to $13.2 trillion in goods and services globally by 2035, while AI-derived business value is projected to reach $3.9 trillion by 2022.

Media contact
Company: Mobius Trend
Contact: Trends & Insights Team
E-Mail: cs@mobiustrend.com
Website: http://www.mobiusTrend.com
YouTube: https://www.youtube.com/channel/UCOlz-sCOlPTJ_24rMgR6JLw

SOURCE: Mobius Trend

View source version on accesswire.com: https://www.accesswire.com/598262/With-5GAI-Twin-Engines--Qualcomm-WIMI-and-Samsung-Bring-New-Opportunities-to-the-Industry


Flytxt Applauded by Frost & Sullivan for Improving Telcos’ Marketing Agility with Its AI/ML Applications – Yahoo Finance

Flytxt's AI solutions aid rapid decision making and contextualize interactions to help telcos take customer engagement to the next level

LONDON, Aug. 24, 2021 /PRNewswire/ -- Frost & Sullivan recognizes Flytxt with the 2021 Global Company of the Year Award for its artificial intelligence (AI) in telecom marketing. As the telecommunications industry transitioned from rule-based to augmented/autonomous marketing, Flytxt adapted its technology using AI, data analytics, and machine learning (ML) to enable hyper-personalization at scale.

2021 Global AI in Telecom Marketing Company of the Year Award

"Flytxt's uniquely differentiated software applications and best practices help telco marketers with data-driven decisions that maximize customer lifetime value," said Hemangi Patel, Senior Research Analyst for Frost & Sullivan. "Its AI/ML applications handle decisions and actions dynamically and contextually, rapidly analyzing high data volumes to arrive at the best opportunities to uplift customer value. Flytxt's out-of-the-box solutions are easy to deploy and maintain without burdening in-house data engineers and scientists."

Flytxt's proprietary customer value management (CVM) technology (data model, embedded analytics, explainable AI, and privacy preservation) is offered through a broad set of solutions used by more than 70 telcos globally. The company helps enterprises deliver comprehensive data-driven digital experiences via its omnichannel CVM solution, which packages AI, analytics, and marketing automation. CVM-in-a-box is a tightly packaged solution that lets smaller enterprises and business units benefit from AI-driven marketing rapidly. The CVM accelerator solutions provide AI and analytics purpose-built to augment enterprises' existing customer engagement systems and achieve the desired CVM goals faster.

"Flytxt's autonomous and explainable AI applications drive marketing optimization at scale. These applications ensure that enterprises will never miss any opportunity to maximize customer value across numerous micro-moments and contexts," noted Ruman Ahmed, Best Practices Research Analyst for Frost & Sullivan. "Its AI/ML solutions deliver the right set of decisioning variables and logic to meet changing market dynamics in different markets. With its continued AI/ML innovation and proven results in various use cases across multiple markets, Flytxt emerges as the AI and analytics partner of choice for telcos to drive customer lifetime value."


Each year, Frost & Sullivan presents a Company of the Year award to the organization that demonstrates excellence in terms of growth strategy and implementation in its field. The award recognizes a high degree of innovation with products and technologies and the resulting leadership in customer value and market penetration. The Best Practices Awards recognize companies in a variety of regional and global markets for demonstrating outstanding achievement and superior performance in areas such as leadership, technological innovation, customer service, and strategic product development.

About Frost & Sullivan

For six decades, Frost & Sullivan has been world-renowned for helping investors, corporate leaders, and governments navigate economic change and identify disruptive technologies, Mega Trends, new business models, and companies to action, resulting in a continuous flow of growth opportunities that drive future success. Contact us: Start the discussion.

Contact:

Tarini Singh
P: +91-20 6718 9725
E: Tarini.Singh@frost.com

About Flytxt

Flytxt is a Dutch company and a pioneer in marketing automation and AI technology, specializing in Customer Lifetime Value (CLTV) management solutions for subscription and usage businesses such as telecom, banking, utilities, (online) media and entertainment, and travel. Our solutions are used by more than 100 enterprises, including 70 leading telecom operators across the world, to increase customer lifetime value through increased upsell, cross-sell, and retention.

Contact:
Pravin Vijay
P: +91-9745961333
E: Pravin.vijay@flytxt.com


View original content to download multimedia:https://www.prnewswire.com/news-releases/flytxt-applauded-by-frost--sullivan-for-improving-telcos-marketing-agility-with-its-aiml-applications-301360358.html

SOURCE Frost & Sullivan


7 AI Stocks to Buy for the Increasing Digitization of Healthcare – InvestorPlace

The increased move to digitization is only one of several trends the healthcare industry has embraced in the past few years. Transferring paper-based information to digital formats gives health professionals faster access to data, but the benefits don't stop there. To turn the stored information into something useful, the industry needs systems that find patterns, recognize what is important, and perform predictive analysis. On that basis, investors should consider AI stocks.

The digitization of healthcare-related data will favor companies that lead in artificial intelligence. The rise of AI will not lead to job losses for healthcare professionals; instead, it will enable companies to automate repetitive tasks and free their staff to do other, more valuable work.

Here are seven AI stocks to buy for the increasing digitization of healthcare:

How might AI-powered systems contribute to a better healthcare system? Electronic Healthcare Records (EHRs) have a rich dataset to back up the benefits of AI. As medical costs for patients increase at an uncontrollable rate, the industry will want to invest in AI solutions to lessen the load.


International Business Machines reported lower year-over-year revenue for the second quarter: revenue fell 5.42% Y/Y to $18.12 billion, though it earned $2.18 a share. Watson is a central brand for IBM's AI offering, as well as part of its hybrid cloud strategy, which IBM says may help its clients work through both complex and regulated workloads.

According to IBM, Watson helps users predict and shape future outcomes, automate complex processes, and optimize their employees' time. In healthcare, for example, AI can help professionals surface treatment options, support patient needs, and spot similarities and patterns.

Data courtesy of Stockrover

As a tech stock, IBM trades at a steep discount, with a price-to-earnings multiple well below both industry and S&P 500 averages. Markets are punishing IBM stock for the slow growth in its legacy businesses.

IBM still has plenty of work ahead in building Watson's AI doctor. Until it gets beyond the hype and delivers on tasks such as helping make diagnoses, IBM will rely on business growth from its other units, including Red Hat and Cloud Paks.


China-based Baidu established a health internet hospital on March 18. It also recently established Baidu Health Technology. The company is committing to the online healthcare industry with strong experience in big data and AI technologies.

Baidu's value score is on par with the index, as shown in the table below. As its role in healthcare grows, its price-to-sales ratio should expand to match the industry average, lifting Baidu stock as a result:

Data courtesy of Stockrover

Baidu said last year that it would donate AI-integrated fundus screening machines to 500 medical centers. Already, the donation is paying off. The AI-powered camera screens the eye fundus and produces a screening report in mere seconds. Because China has a shortage of ophthalmologists, Baidu is helping to increase the availability of patient care.

In the near term, the company will build out its Baidu Health unit. That effort has included holding more than 100 live broadcast events on COVID-19. Baidu Health also helps users register for doctor appointments, get information on hospitals and doctors, and connect with doctors for online consultations.

On Wall Street, the average price target for Baidu stock is $146.67 (per Tipranks).


In 2019, Medtronic launched its first AI system for colonoscopy. The company said the GI Genius module uses advanced artificial intelligence to highlight the presence of pre-cancerous lesions with a visual marker in real time, serving as an ever-vigilant second observer. A new era of diagnostic endoscopy should improve the detection of lesions a doctor might otherwise miss, ultimately saving more lives.

Data courtesy of Stockrover

Above, Medtronic stock scores a 92/100 on quality. The market is ignoring its strong gross margins relative to the S&P 500.

Chairman and CEO Omar Ishrak recently explained how the model for personalized medicine is becoming a reality. That will depend on developing AI solutions in the healthcare market. In doing so, the company will empower physicians.

By giving doctors clinical and behavioral data, providers will have more information available. Making better-informed decisions will increase the effectiveness of patient treatments.


More of Stryker's customers are ordering robots. As robotic surgery procedures increase, the role of artificial intelligence in healthcare will rise in importance, too. Stryker is a leader in orthopedic robotics. In the second quarter, the company posted strong orders, thanks to its continued push for innovation. Joint replacement surgeries, for example, are growing above the market rate.

Data courtesy of Stockrover

Stryker's price-to-free-cash-flow ratio is below that of the industry. Given its strong role in AI in healthcare, Stryker stock is trading at a discount.

On its conference call, Stryker's VP of investor relations, Preston Wells, said, "Whether they're competitive accounts that are in or out, we're really just going to all of those different accounts and trying to find areas to place Mako."

Wells further implied that the addressable market will get larger as customers ask for more solutions from Mako. The robotic arm uses 3D CT-based planning software, so surgeons will know more about the patient's anatomy, enabling them to offer a personalized joint replacement.


Nuance shares have risen steadily from sub-$15 lows to around $27. In its second quarter, the company posted organic revenue growth of 11% Y/Y. Enterprise revenue grew 19%, the highest in 10 years. Dragon Medical One is the flagship growth driver for Nuance; demand for that service grew 46% Y/Y.

Below, most analysts rate Nuance stock with a strong-buy recommendation:

Data courtesy of Stockrover

Nuance accelerated its AI innovation and continued developing machine-learning-based tools, which will improve workflow and productivity in healthcare. Dragon Medical One contributed to the strong first-half growth in annual recurring revenue.

Nuance scaled its international markets by launching Dragon Medical One in five new European countries. The product is a cloud-based speech recognition solution that will improve the productivity of healthcare workers. It securely captures the patient's narrative and reduces the workload of clinicians.

The rise in telemedicine during the global pandemic will drive Nuance's AI business higher.


Google's mandate for DeepMind is to build products that support care teams and improve patient outcomes. Google has expertise in cloud storage, data security, and app development, and it will work to develop mobile medical assistants for clinicians.

Data courtesy of Stockrover

Alphabet's growth should outpace the S&P 500 index over the next year. The 95/100 growth score suggests the stock will outperform the broader market over that period, too.

In diagnostics, DeepMind will help healthcare workers detect eye disease from scans or assist in cancer radiotherapy treatment. More recently, Google's pending acquisition of Fitbit will accelerate the search giant's development of wearables in healthcare. And since these devices track the wearer's health metrics, it will have plenty of user data to work with.

That volume of data will require machine learning and AI to decipher meaningful patterns. Without AI, Google cannot perform the kind of initial diagnoses that may potentially save a wearer's life.

Google hasn't yet received the European Union's blessing on the deal, and a full-scale investigation would delay the Fitbit acquisition. But should it clear, the company's positioning in AI in healthcare will strengthen.


Alibaba has all the requisite backend systems in place for AI in healthcare. Alibaba Cloud has AI-powered solutions that are solving real-world problems. And BABA is solving healthcare problems by analyzing clinical and hospital operations.

The company said that the system uses 700 core indicators that come from medical institutions and regional medical operations. By feeding real-world data to the AI, the system will have higher accuracy and reliability. Its AI platform may perform image and voice recognition. Medical institutions get diagnosis support from Alibabas AI.

Alibaba's new AI system has real-world importance: it will save lives. The system detects coronavirus with 96% accuracy in mere seconds; by contrast, it takes a human around 15 minutes to make a diagnosis.

The fair value of Alibaba stock is $325.72. The value score is low but the growth score is 100/100:

Data courtesy of Stockrover

Alibaba trained the system to detect coronavirus by introducing images and data from 5,000 confirmed coronavirus cases.

Disclosure: As of this writing, the author did not hold a position in any of the aforementioned securities.


Doctors are using AI to triage covid-19 patients. The tools may be here to stay – MIT Technology Review

The pandemic, in other words, has turned into a gateway for AI adoption in health care, bringing both opportunity and risk. On the one hand, it is pushing doctors and hospitals to fast-track promising new technologies. On the other, this accelerated process could allow unvetted tools to bypass regulatory processes, putting patients in harm's way.

"At a high level, artificial intelligence in health care is very exciting," says Chris Longhurst, the chief information officer at UC San Diego Health. "But health care is one of those industries where there are a lot of factors that come into play. A change in the system can have potentially fatal unintended consequences."

Before the pandemic, health-care AI was already a booming area of research. Deep learning, in particular, has demonstrated impressive results for analyzing medical images to identify diseases like breast and lung cancer or glaucoma at least as accurately as human specialists. Studies have also shown the potential of using computer vision to monitor elderly people in their homes and patients in intensive care units.

But there have been significant obstacles to translating that research into real-world applications. Privacy concerns make it challenging to collect enough data for training algorithms; issues related to bias and generalizability make regulators cautious to grant approvals. Even for applications that do get certified, hospitals rightly have their own intensive vetting procedures and established protocols. "Physicians, like everybody else, we're all creatures of habit," says Albert Hsiao, a radiologist at UCSD Health who is now trialing his own covid detection algorithm based on chest x-rays. "We don't change unless we're forced to change."

As a result, AI has been slow to gain a foothold. "It feels like there's something there; there are a lot of papers that show a lot of promise," said Andrew Ng, a leading AI practitioner, in a recent webinar on its applications in medicine. "But it's not yet as widely deployed as we wish."


Pierre Durand, a physician and radiologist based in France, experienced the same difficulty when he cofounded the teleradiology firm Vizyon in 2018. The company operates as a middleman: it licenses software from firms like Qure.ai and a Seoul-based startup called Lunit and offers the package of options to hospitals. Before the pandemic, however, it struggled to gain traction. Customers were interested in the artificial-intelligence application for imaging, Durand says, but they could not find the right place for it in their clinical setup.

The onset of covid-19 changed that. In France, as caseloads began to overwhelm the health-care system and the government failed to ramp up testing capacity, triaging patients via chest x-ray, though less accurate than a PCR diagnostic, became a fallback solution. Even for patients who could get genetic tests, results could take at least 12 hours and sometimes days to return, too long for a doctor to wait before deciding whether to isolate someone. By comparison, Vizyon's system using Lunit's software, for example, takes only 10 minutes to scan a patient and calculate a probability of infection. (Lunit says its own preliminary study found that the tool was comparable to a human radiologist in its risk analysis, but this research has not been published.) "When there are a lot of patients coming," Durand says, "it's really an attractive solution."

Vizyon has since signed partnerships with two of the largest hospitals in the country and says it is in talks with hospitals in the Middle East and Africa. Qure.ai, meanwhile, has now expanded to Italy, the US, and Mexico on top of existing clients. Lunit is also now working with four new hospitals each in France, Italy, Mexico, and Portugal.

In addition to the speed of evaluation, Durand identifies something else that may have encouraged hospitals to adopt AI during the pandemic: they are thinking about how to prepare for the inevitable staff shortages that will arise after the crisis. Traumatic events like a pandemic are often followed by an exodus of doctors and nurses. "Some doctors may want to change their way of life," he says. "What's coming, we don't know."

Hospitals' new openness to AI tools hasn't gone unnoticed. Many companies have begun offering their products for a free trial period, hoping it will lead to a longer contract.

"It's a good way for us to demonstrate the utility of AI," says Brandon Suh, the CEO of Lunit. Prashant Warier, the CEO and cofounder of Qure.ai, echoes that sentiment. "In my experience outside of covid, once people start using our algorithms, they never stop," he says.

Both Qure.ai's and Lunit's lung screening products were certified by the European Union's health and safety agency before the crisis. In adapting the tools to covid, the companies repurposed the same functionalities that had already been approved.


Qure.ai's qXR, for example, uses a combination of deep-learning models to detect common types of lung abnormalities. To retool it, the firm worked with a panel of experts to review the latest medical literature and determine the typical features of covid-induced pneumonia, such as opaque patches in the image that have a ground-glass pattern and dense regions on the sides of the lungs. It then encoded that knowledge into qXR, allowing the tool to calculate the risk of infection from the number of telltale characteristics present in a scan. A preliminary validation study the firm ran on over 11,000 patient images found that the tool was able to distinguish between covid and non-covid patients with 95% accuracy.
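The article describes that last step only at a high level: detectors flag radiological findings, and the number of telltale features present drives a risk estimate. The short Python sketch below is a purely hypothetical, rule-based illustration of that idea; the feature names, weights, and function are invented for illustration and are not Qure.ai's implementation, which relies on deep-learning models rather than a hand-written rule.

# Illustrative sketch only: a toy feature-count risk score, not Qure.ai's qXR code.
# Feature names and weights are hypothetical assumptions for this example.
FEATURE_WEIGHTS = {
    "ground_glass_opacity": 0.4,      # opaque patches with a ground-glass pattern
    "peripheral_consolidation": 0.3,  # dense regions on the sides of the lungs
    "bilateral_involvement": 0.2,
    "multifocal_lesions": 0.1,
}

def covid_risk_score(findings):
    """Combine boolean findings into a 0-1 risk score by summing the weights
    of the telltale characteristics detected in the scan."""
    score = sum(weight for name, weight in FEATURE_WEIGHTS.items()
                if findings.get(name, False))
    return min(score, 1.0)

if __name__ == "__main__":
    # Example: a scan in which two of the four telltale features were detected.
    scan_findings = {
        "ground_glass_opacity": True,
        "peripheral_consolidation": True,
    }
    print(f"Estimated covid risk: {covid_risk_score(scan_findings):.2f}")  # 0.70

In a real system, the boolean findings would themselves come from the deep-learning detectors, and the combination step would typically be learned and validated rather than hand-weighted as it is in this sketch.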

But not all firms have been as rigorous. In the early days of the crisis, Malik exchanged emails with 36 companies and spoke with 24, all pitching him AI-based covid screening tools. "Most of them were utter junk," he says. "They were trying to capitalize on the panic and anxiety." The trend makes him worry: hospitals in the thick of the crisis may not have time to perform due diligence. "When you're drowning so much," he says, "a thirsty man will reach out for any source of water."

Kay Firth-Butterfield, the head of AI and machine learning at the World Economic Forum, urges hospitals not to weaken their regulatory protocols or formalize long-term contracts without proper validation. "Using AI to help with this pandemic is obviously a great thing to be doing," she says. "But the problems that come with AI don't go away just because there is a pandemic."

UCSD's Longhurst also encourages hospitals to use this opportunity to partner with firms on clinical trials. "We need to have clear, hard evidence before we declare this as the standard of care," he says. "Anything less would be a disservice to patients."



Paris-based Monk raises €2.1 million to expand its AI-based car damage inspection system – EU-Startups

French AI startup Monk, which has built a unique system for car damage detection, has closed a €2.1 million seed round led by Iris Capital, alongside Plug and Play and key business angels including Patrick Sayer (former CEO of Eurazeo), Yannis Yahiaoui (founder of Adot), and Arthur Waller (founder of PriceMatch and Pennylane).

Monk was founded in 2019 when Aboubakr Laraki (CEO) and Fayçal Slaoui (CTO), both specialized in AI and image recognition, met and shared the conviction that the market for AI-based damage detection was still at its earliest stage and required an expert approach. From the very beginning they partnered with Getaround, a leader in the peer-to-peer car-rental market, which provided car-damage claims material; the result proved game-changing compared with the solutions then available in the industry.

Monk's solution is based on a ground-breaking artificial-intelligence technology that detects damage on any car from pictures taken by users, renters, and/or drivers, for a fraction of the price of traditional solutions. Monk has already convinced several professionals in the car logistics and rental industry, as well as a Tier 1 European car manufacturer (partnership to be announced later this year).

"Among all the solutions we've tested to automatically detect damage on vehicles from photos provided by our users, not only did Monk eclipse the competition, but their results also exceeded our expectations by far," said P. Beret, VP of Risk at Getaround.

While the company is only starting its sales outreach, this new funding round will support Monk's R&D programme, the recruitment of new team members, especially data scientists, and its business expansion across Europe.

"Monk's mission is to transform the mobility and insurance market by bringing trust and efficiency whenever a car changes hands. We've built an AI-based, hardware-free inspection system that instantly assesses any vehicle's condition from photos or videos. From day one, the challenge proposed by Getaround was equivalent to climbing Everest. Internally it paved the way for a strong culture of breaking down walls, and externally the product we ended up with has resonated widely in the automotive and insurance industries. We've been lucky to quickly deploy our product in other contexts and build high-quality customer relationships that we aim to consolidate and develop in the coming months. We are proud to work with our new partners, who understand our challenges very well. This funding will help us boost our R&D and scale our product-market fit internationally," commented Aboubakr Laraki, Monk's CEO and co-founder.

"Monk has the potential to address many issues related to car damage. They're starting with car rental claims processes, in an industry on the verge of being drastically transformed by the recent crisis. But the insurance industry is also looking for tools to simplify and optimize its underwriting and claim appraisal processes, a $200 billion market today, where it would allow for more efficient, optimized, and faster settlement. This would represent tremendous savings and better customer satisfaction for insurers. We believe Monk has the potential to solve these issues with its cutting-edge technology," declared Julien-David Nitlech, Managing Partner at Iris Capital.
