Black Lives Matter mural placed on top of building in 5 Points neighborhood – WRAL.com

By Kirsten Gutierrez, WRAL reporter

Raleigh, N.C. – A new mural in Raleigh's Five Points neighborhood is sure to catch your eye the next time you drive through.

The mural is meant to call for action and demand change.

At the 5 Points intersection in Raleigh, you'll notice familiar faces.

Those faces were placed on top of the Shops at 1700 building Friday as a showcase of solidarity. Among the people now covering the edges of the roof are Michael Brown, Jordan Baker, Trayvon Martin, and Tamir Rice. On the other side of the building are Eric Garner, George Floyd, Breonna Taylor and Ahmaud Arbery.

"It's great for one thing. It's part of getting together. Just hope everyone gets on one accord and gets together," said 5 Points resident Edgar Cross.

Many who live in the neighborhood are just starting to notice the mural and believe it's a great way to spread awareness.

"[It's] such a great response to see here in a very conservative neighborhood that doesn't have any murals yet. I think it speaks high volumes of education and what we want our community to reflect as well as our downtown community," said Carolyn Walker, a 5 Points resident.

So far, the posters have spoken far louder than expected.

"I think that's the purpose of it, to make change," said Cross.

The owner of the building said they planned to keep the mural up for as long as the posters last.


Keke Palmer condemns rumors that her show was canceled because of Black Lives Matter activism – NBC News

Keke Palmer addressed rumors that her show "Strahan, Sara and Keke" was canceled because of her public support for the Black Lives Matter movement, calling such speculation "dangerous."

"I want to speak on this simply because I hate the narrative that if you speak your mind as a Black person that you will in some way be punished," Palmer wrote in an Instagram post Sunday, in reference to a meme that read: "Ain't it weird how Keke Palmer was seen protesting and preaching to the police about racism in our country then ABC decides to cancel her show."


"I have seen this going around and at first I ignored [it], but in this climate I realized this is a dangerous message to send to our generation and the generation coming up," Palmer added. "If anything, my speaking out showed the corporations I work with how important my voice is and anyone that has a POV."

The 26-year-old actress began appearing on the ABC daytime talk show "Strahan and Sara" in 2019, filling in for both co-hosts Michael Strahan and Sara Haines on various occasions before she joined the show as an official co-host last August.

ABC pulled the show off the air in March, replacing it with "Pandemic: What You Need to Know," a daily coronavirus report hosted by Amy Robach. Though ABC has not announced whether "Strahan, Sara and Keke" has officially been canceled, Page Six reported earlier this month that the show will be permanently taken off the air. Neither ABC nor Palmer, who was nominated for a Daytime Emmy Award in the Outstanding Entertainment Talk Show Host category in May, responded to NBC News' requests for comment.

Though the future of "Strahan, Sara and Keke" is uncertain, Palmer confirmed that she would no longer appear on the show.

"This business is dynamic and instead of thinking of me as a 'series regular' see me as a brand that works with the corporation Disney/ABC News and this particular show I was on is no longer," Palmer wrote. "When I see such fear mongering comments I want to speak out so that no one ever feels or thinks that speaking out will cost them their job! Im sure it can and has before, but lets also recognize when it has not. That way more of us with our own minds speak out against any injustices we see."

Palmer has been a vocal advocate for racial justice and went viral earlier this year for urging National Guard members to walk alongside marchers at a Black Lives Matter protest.

"Trust me, walking in my truth has always made my blessings OVERFLOW and connect to those that are like minded and not with those that are not," Palmer wrote. "Do not believe this lie."

Gwen Aviles is a trending news and culture reporter for NBC News.


How Graffiti Artists Are Propelling the Vision of the Black Lives Matter Movement – Artsy

Walls covered in graffiti and street art can offer a synopsis of social movements. Recently, in response to police brutality and the killings of George Floyd, Breonna Taylor, Ahmaud Arbery, Tony McDade, and many others, artists worldwide have been ignited, taking to the streets to express themselves. Syrian artists Aziz Asmar and Anis Hamdoun painted "I can't breathe" across a fragment of wall in the northwest Idlib province; Italian artist Jorit Agoch made a mural of Floyd along with revolutionaries Angela Davis, Martin Luther King Jr., Malcolm X, and Vladimir Lenin in Naples; and on the Berlin Wall, Eme Freethinker portrayed Floyd and his final words. Driven by the necessity for reform and resistance, these artists are reclaiming public spaces.

In recent years, as the Black Lives Matter movement has gained momentum and protests have occurred internationally, graffiti has increasingly been used to propel its vision. The inherently political medium's storytelling powers have become a way for communities to raise awareness, express themselves, and even educate the public.

On June 20th in Cleveland, Ohio, artists Stamy Paul and Ricky Smith led a group of local artists, graffiti writers, and activists in creating a Black Lives Matter street mural. While such efforts have proliferated since Mayor Muriel Bowser unveiled Black Lives Matter Plaza in Washington, D.C., the Cleveland mural, like some others across the U.S., is more elaborate and artful. Each bold letter encompasses kaleidoscopic images of fire, characters, words (such as "unity"), and messages ("Black women are beautiful").


Personalization at Scale: Is AI the Most Realistic Way Forward? – CMSWire


Today's brands are producing an enormous amount of highly personalized content. The reason, according to research by Adobe, is that 67% of consumers want brands to tailor content to them, and 42% even get annoyed when content isn't personalized.

The sheer amount of content and the complexity of matching this content to various audiences, however, could be too much to handle manually for most brands. Does that mean artificial intelligence (AI) is the only realistic way forward?

We asked marketing experts why they need large-scale personalization and how they're scaling the personalization strategies at their companies.

"It seems strange to talk about large-scale personalization, when the very idea of personalization is to narrowly focus on an individual's needs or desires," stated Geoff Webb, VP of strategy at PROS, "yet that is exactly the challenge that faces businesses now." Today's consumers want their individual needs met immediately, and don't want to be treated as just another member of a business's broader target market.

Vendors must now be able to demonstrate a clear response to a specific need in a buyer, while doing so at scale, Webb explained, across many buyers, globally, through multiple purchasing channels. This need for personalization goes for B2B markets as well, which have buyers willing to pay more for solutions that meet their specific business needs. "Failing to do so risks losing individual deals to a more nimble competitor," warned Webb, "and ultimately the buyer themselves entirely."


"When you consider the scale of many businesses – the number of products, customers, configurations, options, sales channels, price points and so on," said Webb, "it becomes clear that the volume of information that must be consumed and analyzed is simply too great for human methods." Even if companies do attempt to grind through the data manually, it often takes too long to offer the responsiveness consumers expect.

"AI has therefore become one of the central pillars in delivering personalization at scale and speed," continued Webb. AI technologies paired with a good customer dataset can spot trends and make discoveries faster and better than humans in many cases. "B2B businesses of the future will be masters of analytics and extracting insight from oceans of data, very, very quickly," Webb stated. For this, he believes AI will play a critical role.

Jeffrey MacIntyre, principal at Bucket Studio, however, believes the smartest teams know that information architecture (IA) is far more critical than AI. "There is no scale without structure," he explained. "You cannot afford to forego understanding the data around your customer journey and the contextual delivery of content." The reality is that content creation and design are prerequisites to algorithms – work that machines simply cannot deliver.

For Webb, personalization shouldn't be the end goal itself, but a means of supporting the company's overall sales and customer engagement strategy. "We personalize so that we can achieve exactly the right outcome for the customer and the vendor together," he said. That means that for brands to foster personalized interactions, they first need to listen to their customers and then engage with them.

"The truly transformative part of this," Webb continued, "is that the process of engaging with a customer enables us to learn more about their desires." Every interaction is an opportunity to learn more about the customer, which can be factored into the next interaction. "It becomes a virtuous, reinforcing cycle in which the relationship moves from transactional to long-term," Webb explained. "When this happens, both the buyer and the vendor derive more value from the relationship."

"AI is not the sensible way forward for most organizations to scale personalization efforts," added MacIntyre. He believes personalization doesn't need to be launched on every platform at once, but should be slowly implemented over time. "Incrementalism is not just smart scoping with personalization, it's oftentimes the best way to generate learnings, momentum and a sponsor's confidence."

"AI is able to deliver the right kind of advice, insight and information to shape how businesses deliver exactly what a customer wants," said Webb. But brands with the most successful personalization strategies need to know how to feed AI the right data.


Weird AI illustrates why algorithms still need people – The Next Web

These days, it can be very hard to determine where to draw the boundaries around artificial intelligence. What it can and can't do is often not very clear, nor is where its future is headed.

In fact, there's also a lot of confusion surrounding what AI really is. Marketing departments have a tendency to somehow fit AI into their messaging and rebrand old products as AI and machine learning. The box office is filled with movies about sentient AI systems and killer robots that plan to conquer the universe. Meanwhile, social media is filled with examples of AI systems making stupid (and sometimes offensive) mistakes.

"If it seems like AI is everywhere, it's partly because artificial intelligence means lots of things, depending on whether you're reading science fiction or selling a new app or doing academic research," writes Janelle Shane in You Look Like a Thing and I Love You, a book about how AI works.

Shane runs the famous blog AI Weirdness, which, as the name suggests, explores the weirdness of AI through practical and humorous examples. In her book, Shane taps into her years-long experience and takes us through many examples that eloquently show what AI – or, more specifically, deep learning – is and what it isn't, and how we can make the most out of it without running into the pitfalls.

While the book is written for the layperson, it is definitely a worthy read for people who have a technical background, and even for machine learning engineers who don't know how to explain the ins and outs of their craft to less technical people.

In her book, Shane does a great job of explaining how deep learning algorithms work. From stacking up layers of artificial neurons, feeding examples, backpropagating errors, using gradient descent, and finally adjusting the network's weights, Shane takes you through the training of deep neural networks with humorous examples such as rating sandwiches and coming up with knock-knock "who's there?" jokes.
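For readers curious what that loop looks like in practice, here is a minimal, self-contained sketch in Python with PyTorch. It is an illustration of the steps Shane describes, not code from the book, and the sandwich-rating data is an invented stand-in:

    import torch
    import torch.nn as nn

    # Toy data: 100 "sandwiches", each described by 8 numeric features,
    # paired with a rating between 0 and 1 that the network should predict.
    features = torch.rand(100, 8)
    ratings = torch.rand(100, 1)

    # Stacked layers of artificial neurons.
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)  # gradient descent
    loss_fn = nn.MSELoss()

    for epoch in range(200):
        predictions = model(features)          # feed the examples forward
        loss = loss_fn(predictions, ratings)   # measure the error
        optimizer.zero_grad()
        loss.backward()                        # backpropagate the error
        optimizer.step()                       # adjust the network's weights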

All of this helps us understand the limits and dangers of current AI systems, which have nothing to do with super-smart terminator bots who want to kill all humans or software systems planning sinister plots. "[Those] disaster scenarios assume a level of critical thinking and a humanlike understanding of the world that AIs won't be capable of for the foreseeable future," Shane writes. She uses the same context to explain some of the common problems that occur when training neural networks, such as class imbalance in the training data, algorithmic bias, overfitting, interpretability problems, and more.

Instead, the threat of current machine learning systems, which she rightly describes as narrow AI, lies in considering them too smart and relying on them to solve problems broader than their scope of intelligence. "The mental capacity of AI is still tiny compared to that of humans, and as tasks become broad, AIs begin to struggle," she writes elsewhere in the book.

AI algorithms are also very unhuman and, as you will see in You Look Like a Thing and I Love You, they often find ways to solve problems that are very different from how humans would do it. They tend to ferret out the sinister correlations that humans have left in their wake when creating the training data. And if there's a sneaky shortcut that will get them to their goals (such as pausing a game to avoid dying), they will use it unless explicitly instructed to do otherwise.

"The difference between successful AI problem solving and failure usually has a lot to do with the suitability of the task for an AI solution," Shane writes in her book.

As she delves into AI weirdness, Shane sheds light on another reality about deep learning systems: they can sometimes be a needlessly complicated substitute for a commonsense understanding of the problem. She then takes us through a lot of other overlooked disciplines of artificial intelligence that can prove to be equally efficient at solving problems.

In You Look Like a Thing and I Love You, Shane also takes care to explain some of the problems that have been created as a result of the widespread use of machine learning in different fields. Perhaps the best known is algorithmic bias, the intricate imbalances in AI's decision-making which lead to discrimination against certain groups and demographics.

There are many examples where AI algorithms, using their own weird ways, discover the racial and gender biases of humans and replicate them in their decisions. And what makes it more dangerous is that they do it unknowingly and in an uninterpretable fashion.

"We shouldn't see AI decisions as fair just because an AI can't hold a grudge. Treating a decision as impartial just because it came from an AI is known sometimes as mathwashing or bias laundering," Shane warns. "The bias is still there, because the AI copied it from its training data, but now it's wrapped in a layer of hard-to-interpret AI behavior."

This mindless replication of human biases becomes a self-reinforced feedback loop that can become very dangerous when unleashed in sensitive fields such as hiring decisions, criminal justice, and loan applications.

"The key to all this may be human oversight," Shane concludes. "Because AIs are so prone to unknowingly solving the wrong problem, breaking things, or taking unfortunate shortcuts, we need people to make sure their brilliant solution isn't a head-slapper. And those people will need to be familiar with the ways AIs tend to succeed or go wrong."

Shane also explores several examples in which not acknowledging the limits of AI has resulted in humans being enlisted to solve problems that AI can't. Also known as The Wizard of Oz effect, this invisible use of often-underpaid human workers is becoming a growing problem as companies try to apply deep learning to anything and everything and look for an excuse to put an AI-powered label on their products.

"The attraction of AI for many applications is its ability to scale to huge volumes, analyzing hundreds of images or transactions per second," Shane writes. "But for very small volumes, it's cheaper and easier to use humans than to build an AI."

All the egg-shell-and-mud sandwiches, the cheesy jokes, the senseless cake recipes, the mislabeled giraffes, and all the other weird things AI does bring us to a very important conclusion. "AI can't do much without humans," Shane writes. A far more likely vision for the future, even one with the widespread use of advanced AI technology, is one in which AI and humans collaborate to solve problems and speed up repetitive tasks.

While we continue the quest toward human-level intelligence, we need to embrace current AI as what it is, not what we want it to be. "For the foreseeable future, the danger will not be that AI is too smart but that it's not smart enough," Shane writes. There's every reason to be optimistic about AI and every reason to be cautious. It all depends on how well we use it.

This article was originally published by Ben Dickson on TechTalks, a publication that examines trends in technology, how they affect the way we live and do business, and the problems they solve. But we also discuss the evil side of technology, the darker implications of new tech and what we need to look out for. You can read the original article here.

Published July 18, 2020 13:00 UTC


AIMed Launches New AI Community Group With Jvion as its Founding Member – Yahoo Finance

LONDON, July 21, 2020 /PRNewswire/ -- AIMed, a leading provider of global clinician-led artificial intelligence thought leadership, education and editorial content in healthcare and medicine, has announced today the launch of the new AIMed Community Group - AIMedConnect. AIMed is delighted to welcome Jvion, a key healthcare clinical artificial intelligence company, on board as a founding member. For more than a decade, Jvion has delivered clinical-AI solutions to the healthcare industry to identify individuals on a risk trajectory whose outcomes can be modified with the proper interventions.

AIMedConnect will extend AIMed's vision and bring about a revolution that embraces a new paradigm of medicine and healthcare propelled by Artificial Intelligence (AI) and related new technologies.

AIMedConnect is a unique opportunity for the medical, healthcare and technology industries. It's a strategic platform for innovators to receive vital feedback from practicing clinicians and other stakeholders to ensure their new AI solutions are fit for purpose before commercialization, as well as to drive increased adoption of AI technology. It's also where participating members, whether looking for a way to get started in AI or already possessing an advanced data science background, can be exposed to the various pre-market creations and share their thoughts on how they would like new technologies to be developed.

AIMed CEO Freddy White said, "Over the last four years, we have consistently heard challenges coming from both executives and clinicians on the adoption and implementation of AI in healthcare. By launching the AIMed User Group, we are, for the first time, connecting key stakeholders together in a unique forum to collaborate, innovate and deliver solutions that will drive better patient outcomes and efficiency across the industry. I am thrilled with the initial response and look forward to the collective output from this group."

"Jvion is excited to be a founding member of the AIMEDConnect. This is a unique opportunity to foster cross-industry collaboration and drive innovation," said Jay Deady, CEO of Jvion. "We look forward to driving alignment and benefit across the industry."

AIMed Chairman and Founder, Chief Intelligence and Innovation Officer of Children's Hospital of Orange County (CHOC) Dr. Anthony Chang added, "AIMed announces the inception of an AIMed user group concept called AIMedConnect. These user groups will regularly convene clinician/hospital users and payors as well as service providers, data scientists, and entrepreneurs. This is a singularly unique opportunity to communicate and network with all stakeholders in the artificial intelligence in healthcare community in an honest and productive conversation to maximize utility and safety of artificial intelligence in healthcare.

Jvion, one of the leading healthcare AI companies, will be a founding member of AIMedConnect. This signals Jvion's extraordinary commitment to using artificial intelligence to prevent harm and lower cost, thereby bringing the highest value to healthcare."

About AIMed

A platform established in 2014 with the goal of bringing together healthcare, business and technology experts to start a revolution in medicine and healthcare brought about by the use of artificial intelligence (AI) and related new technologies. AIMed organizes year-round education and networking opportunities through a series of events, magazine and online content.

About Jvion

Jvion enables providers, payers and other healthcare entities to prevent avoidable patient harm and lower costs through its clinical AI solution. An industry first, the Jvion CORE goes beyond simple predictive analytics and machine learning to identify patients on a trajectory to becoming high risk and for whom intervention will likely be successful. Jvion determines the interventions that will more effectively reduce risk and enable clinical action. And it accelerates time to value by leveraging established patient-level intelligence to drive engagement across hospitals, populations, and patients. To date, the Jvion CORE has been deployed across about 50 hospital systems and 300 hospitals, which report average reductions of 30% for preventable harm incidents and annual cost savings of $13.7 million. For more information, visit http://www.jvion.com.

CONTACTS

Freddy White, +44 796 8565 401, freddy@ai-med.io

Lexi Herosian, lexi@scratchmm.com

View original content: http://www.prnewswire.com/news-releases/aimed-launches-new-ai-community-group-with-jvion-as-its-founding-member-301096963.html

SOURCE AIMed


Wyze will try pay-what-you-want model for its AI-powered person detection – The Verge

Smart home company Wyze is experimenting with a rather unconventional method for providing customers with artificial intelligence-powered person detection for its smart security cameras: a pay-what-you-want business model. On Monday, the company said it would provide the feature for free as initially promised, after it had to disable it due to an abrupt end to its licensing deal with fellow Seattle-based company Xnor.ai, which was acquired by Apple in November of last year. But Wyze, taking a page out of the old Radiohead playbook, is hoping some customers might be willing to chip in to help it cover the costs.

AI-powered person detection uses machine learning models to train an algorithm to differentiate between the movement of an inanimate object or animal and that of a human being. It's now a staple in the smart security camera market, but it remains rather resource-intensive to provide and expensive as a result. It is more expensive than Wyze at first realized, in fact. That's a problem after the company promised last year that when its own version of the feature was fully baked, it would be available for free without requiring a monthly subscription, as many of its competitors do for similar AI-powered functions.
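Wyze has not published the internals of its pipeline, but the general technique is standard: run each camera frame through a neural network trained for object detection and keep only the detections labeled as a person. The sketch below uses an off-the-shelf pretrained model from torchvision as an assumed stand-in, not Wyze's actual system, and "frame.jpg" is a hypothetical captured frame:

    import torch
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    PERSON_CLASS_ID = 1  # "person" in the COCO label map this model was trained on

    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

    frame = to_tensor(Image.open("frame.jpg"))  # hypothetical camera frame
    with torch.no_grad():
        detections = model([frame])[0]

    # Keep confident detections of people; ignore pets, cars, and swaying trees.
    for label, score, box in zip(detections["labels"], detections["scores"],
                                 detections["boxes"]):
        if label == PERSON_CLASS_ID and score > 0.8:
            print(f"person at {box.tolist()} (confidence {score.item():.2f})")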

Yet now Wyze says it's going to try a pay-what-you-want model in the hopes it can use customer generosity to offset the bill. Here's how the company broke the good (and bad) news in its email to the customers eligible for the promotion, which includes those that were enjoying person detection on Wyze cameras up until the Xnor.ai contract expired at the end of the year:

Over the last few months, we've had this service in beta testing, and we're happy to report that the testing is going really well. Person Detection is meeting our high expectations, and it's only going to keep improving over time. That's the good news.

The bad news is that it's very expensive to run, and the costs are recurring. We greatly under-forecasted the monthly cloud costs when we started working on this project last year (we've also since hired an actual finance guy). The reality is we will not be able to absorb these costs and stay in business.

Wyze says that while it would normally charge a subscription for a software service that involves recurring monthly costs, it told about 1.3 million of its customers that it would not charge for the feature when it did arrive, even if it required the company to pay for pricey cloud-based processing. "We are going to keep our promise to you. But we are also going to ask for your help," Wyze writes.

It sounds risky, and Wyze admits that the plan may not pan out:

When Person Detection for 12-second event videos officially launches, you will be able to name your price. You can select $0 and use it for free. Or you can make monthly contributions in whatever amount you think it's worth to help us cover our recurring cloud costs. We will reevaluate this method in a few months. If the model works, we may consider rolling it out to all users and maybe even extend it to other Wyze services.

If Wyze is able to recoup its costs by relying on the goodwill of customers, it could set the company up to try more experimental pricing models. After all, radical pricing strategies and good-enough quality are how Wyze became a bit of a trailblazer in the smart home camera industry, and it could work out for them again if customers feel like the feature works so well it warrants chipping in a few bucks a month.


Why The Future of Cybersecurity Needs Both Humans and AI Working Together – Security Boulevard

As we look to the future of cybersecurity, we must consider the recent past and understand what the pandemic has taught us about our security needs.

Many cybersecurity platforms proved inadequate when a large percentage of the world's workforce abruptly shifted to remote work in the spring of 2020. Companies found themselves fighting against the limitations of their own cybersecurity platforms.

Modern systems enhanced with self-learning AI capabilities have fared best in the face of the pandemic's impact on networking.

For others, immediate, manual interventions were the only thing standing between enterprise security and the bad actors who had been standing by waiting for a global event of this scale.

They swooped in almost immediately, targeting governments, hospital systems, and a wide swath of commercial enterprises. Everything from ransomware to DDoS attacks to phishing schemes ramped up right alongside the upheaval so many companies were experiencing in the early days of the pandemic.

Many inadequate systems were enhanced with some form of AI, but relied on what employees had taught them. No one could have predicted such a dramatic shift in behavior, and systems that were trained to alert on unexpected behavior – like a sudden rush of remote connections – floundered.

Security analysts were unable to keep up with the constant stream of false positives. Threat hunting is time-consuming for teams under typical network conditions. The pandemic exacerbated this challenge.

Bad actors had been standing by, waiting for an event that would impact thousands of global networks all at once.

As companies examine their security systems, the question they'll need to answer isn't "Should we bring AI on board?" but rather "What kind and how much AI do we need?"

A recent WhiteHat Security survey revealed that more than 70 percent of respondents cited AI-based tools as contributing to more efficiency. More than 55 percent of mundane tasks have been replaced by AI, freeing up analysts for other departmental tasks.

Still, not all enterprises or employees are excited by the prospect of bringing more AI on board, especially AI that requires less intervention. This is an understandable response: employees worry that AI will replace their jobs.

Multitalented human employees are not only part of the self-learning AI solution, they are integral. Respondents to the WhiteHat survey cited the importance of creativity and experience as critical for adequate security.

A combined approach appears to be the most reliable path forward for cybersecurity. Security teams that incorporate AI to handle mundane tasks and reduce overarching issues like false positives, while keeping the focus on the human element, will fare better.

Third-wave self-supervised AI platforms handle unusual network activity with more nuance. When the shift to remote work hit these networks, self-learning AI quickly reestablished a new normal. Instead of triggering hundreds or thousands of false positives, these systems rapidly adjusted and started looking for behavior that didn't match the new frame of reference.
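MixMode does not disclose how its baseline works, but the behavior described here, absorbing a sustained shift into a new normal rather than alerting on it forever, can be illustrated with a toy rolling-baseline detector. Everything below, from the class name to the thresholds, is invented for illustration:

    from collections import deque

    class RollingBaseline:
        """Toy evolving baseline: flags values that deviate from recent history."""

        def __init__(self, window=1000, threshold=4.0):
            self.history = deque(maxlen=window)
            self.threshold = threshold

        def observe(self, value):
            """Return True if `value` is anomalous against the current baseline."""
            anomalous = False
            if len(self.history) >= 30:  # wait until a minimal baseline exists
                mean = sum(self.history) / len(self.history)
                var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
                std = max(var ** 0.5, 1e-9)
                anomalous = abs(value - mean) / std > self.threshold
            # Every observation updates the baseline, so a sustained shift
            # (like a surge in remote connections) becomes the new normal.
            self.history.append(value)
            return anomalous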

In the meantime, security analysts could focus on shoring up vulnerabilities created by the pandemic in other ways.

Creative problem solving has never been more crucial for teams facing today's unprecedented challenges. Qualities like intuition and experience-based decision-making are invaluable, and even the most advanced AI cannot replace them.

What machines can do is augment the important, nuanced work that human security professionals do. Talented security analysts waste time sifting through false positives and handling many other mundane tasks while keeping a constant eye on the network.

Tools that reduce manual interventions also reduce errors and improve employee satisfaction.

Machines will never be able to entirely replicate or take over the work security professionals do, so it's essential for companies to look for security platforms that underscore the talents of human security analysts. Security teams that view AI as one part of a complete, multi-faceted approach will benefit the most from these improvements.

Future-facing companies must evaluate their ability to weather the cybersecurity emergencies of tomorrow. Typical AI-enhanced platforms can help but are fundamentally limited. Without a complete understanding of your network's baseline and how it can change in response to unexpected events, no security platform can detect every threat.

MixMode's third-wave AI solution develops an accurate, evolving baseline of network behavior and then responds smartly to aberrations and unexpected network behavior.

Reach out to our client service team today to set up a demo.



Immervision uses AI for better wide-angle smartphone videos and photos – VentureBeat

Immervision has announced real-time video distortion correction software to help create professional quality videos on smartphones. The Montreal company also revealed an off-the-shelf 125-degree wide-angle lens, enabling mobile phone makers to improve their next-generation smartphone cameras. The software algorithms are now available for mobile phone makers to license from Immervision's exclusive distribution partner Ceva and promise to enhance images through artificial intelligence and machine learning.

The wider field of view (FOV) in phones creates more apparent distortion than you would see with other cameras. But the software algorithms from Immervision help correct stretched bodies and can adjust proportions of objects, lines, and faces in real time. "The AI can take a line that looks like a banana and straighten it out," said Alessandro Gasparini, executive vice president of operations and chief commercial officer at Immervision, in an interview with VentureBeat.

Whether the goal is to leave the preset as is, fully customize it, let end users decide, allow phone orientation to dictate, or leverage machine learning to control the result, Immervision said it can help phone makers differentiate their hardware. Gasparini said the algorithms offer real-time distortion correction in both videos and pictures, adjusting the perspective, capturing more of a scene with less distortion, and correcting line and object distortion.

Above: Immervision fixes curved building lines caused by smaller fields of view.

Image Credit: Immervision
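Immervision's algorithms are proprietary, but the underlying operation, remapping pixels according to a model of the lens's distortion, can be illustrated with OpenCV. The camera matrix and distortion coefficients below are placeholder values; in practice they come from calibrating the specific lens:

    import cv2
    import numpy as np

    image = cv2.imread("wide_angle_frame.jpg")  # hypothetical input frame
    h, w = image.shape[:2]

    # Placeholder intrinsics; real values come from lens calibration.
    camera_matrix = np.array([[w, 0, w / 2],
                              [0, w, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    # A negative k1 term models the barrel distortion typical of wide-angle lenses.
    dist_coeffs = np.array([-0.30, 0.08, 0.0, 0.0, 0.0])

    corrected = cv2.undistort(image, camera_matrix, dist_coeffs)
    cv2.imwrite("corrected_frame.jpg", corrected)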

While the majority of tier one phone makers have wide-angle lenses in their phones, tier two and tier three mobile brands have yet to adopt them. Immervision's technology has been preconfigured on popular sensors, including Sony, Omnivision, and Samsung, and has one lens with ready-to-use software, reducing camera customization and integration time. The lens is 6.4 millimeters high and ranges from eight megapixels to 20 megapixels in terms of image quality.

"We design lenses for the mobile industry, with action cameras and broadcast cameras of different sizes, different resolutions, and different fields of view," Gasparini said.

Immervision surveyed users to find out what kind of image quality and distortion issues mattered most. "Some lenses with low FOV numbers can make people on the edges of photos look fatter than they are, and that really makes people mad," Gasparini said. Most smartphones have lenses that are 100 to 130 degrees FOV. Immervision's competition in this market includes Apple and Samsung, which do their own work. But Immervision aims to arm the rest of the industry with the same kind of high-quality cameras.

Above: Immervision helps a camera get a better view of a scene.

Image Credit: Immervision

Gasparini said Immervision specializes in a combination of optical design and image processing, with different types of engineers under the same roof.

"We find ourselves to be one of the largest independent optical design firms in the world," he said. "If you look at some of the companies that manufacture optics today for smartphones, they might have one or two optical designers in their factory. Actually, we have more, and we have cross-pollination of different competencies in our company."

Immervision was founded in 2000 and employs around 30 people. Gasparini said the company has managed good profit margins as it works to help cameras better reproduce reality.

"Software can do certain magic on images. But there are limitations," Gasparini said. "And there are challenges the next generation has dealing with more video. The new smartphones are cinematographic, and more people will be shooting short films and movies with them. This will increase the challenge of processing them in real time."


Deepfakes and the New AI-Generated Fake Media Creation-Detection Arms Race – Scientific American

Falsified videos created by AI – in particular, by deep neural networks (DNNs) – are a recent twist to the disconcerting problem of online disinformation. Although fabrication and manipulation of digital images and videos are not new, the rapid development of AI technology in recent years has made the process of creating convincing fake videos much easier and faster. AI-generated fake videos first caught the public's attention in late 2017, when a Reddit account with the name Deepfakes posted pornographic videos generated with a DNN-based face-swapping algorithm. Subsequently, the term deepfake has been used more broadly to refer to all types of AI-generated impersonating videos.

While there are interesting and creative applications of deepfakes, they are also likely to be weaponized. We were among the early responders to this phenomenon, and developed the first deepfake detection method, based on the lack of realistic eye-blinking in the early generations of deepfake videos, in early 2018. Subsequently, there has been a surge of interest in developing deepfake detection methods.

DETECTION CHALLENGE

A climax of these efforts is this year's Deepfake Detection Challenge. Overall, the winning solutions are a tour de force of advanced DNNs (an average precision of 82.56 percent by the top performer). These provide us effective tools to expose deepfakes that are automated and mass-produced by AI algorithms. However, we need to be cautious in reading these results. Although the organizers have made their best effort to simulate situations where deepfake videos are deployed in real life, there is still a significant discrepancy between the performance on the evaluation data set and a more realistic data set; when tested on unseen videos, the top performer's accuracy dropped to 65.18 percent.

In addition, all solutions are based on clever designs of DNNs and data augmentations, but provide little insight beyond the black-box-type classification algorithms. Furthermore, these detection results do not reflect the actual detection performance of the algorithm on a single deepfake video, especially ones that have been manually processed and perfected after being generated by the AI algorithms. Such crafted deepfake videos are more likely to cause real damage, and careful manual post-processing can reduce or remove artifacts that the detection algorithms are predicated on.

DEEPFAKES AND ELECTIONS

The technology for making deepfakes is at the disposal of ordinary users; there are quite a few software tools freely available on GitHub, including FakeApp, DFaker, faceswap-GAN, faceswap and DeepFaceLab – so it's not hard to imagine the technology could be used in political campaigns and other significant social events. However, whether we are going to see any form of deepfake videos in the upcoming elections will be largely determined by non-technical considerations. One important factor is cost. Creating deepfakes, albeit much easier than ever before, still requires time, resources and skill.

Compared to other, cheaper approaches to disinformation (e.g., repurposing an existing image or video to a different context), deepfakes are still an expensive and inefficient technology. Another factor is that deepfake videos can usually be easily exposed by cross-source fact-checking, and are thus unable to create long-lasting effects. Nevertheless, we should still be on alert for crafted deepfake videos used in an extensive disinformation campaign, or deployed at a particular time (e.g., within a few hours of voting) to cause short-term chaos and confusions.

FUTURE DETECTION

The competition between the making and detection of deepfakes will not end in the foreseeable future. We will see deepfakes that are easier to make, more realistic and harder to distinguish. The current bottleneck – the lack of detail in the synthesis – will be overcome by combining with GAN models. The training and generating time will be reduced with advances in hardware and in lighter-weight neural network structures. In the past few months we have seen new algorithms that are able to deliver a much higher level of realism or run in near real time. The latest form of deepfake videos will go beyond simple face swapping, to whole-head synthesis (head puppetry), joint audiovisual synthesis (talking heads) and even whole-body synthesis.

Furthermore, the original deepfakes were only meant to fool human eyes, but recently there are measures to make them indistinguishable to detection algorithms as well. These measures, known as counter-forensics, take advantage of the fragility of deep neural networks by adding targeted invisible noise to the generated deepfake video to mislead the neural network-based detector.
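That targeted invisible noise is what the machine learning literature calls an adversarial perturbation. A minimal sketch of the classic fast gradient sign method (FGSM) shows the idea; `detector` here is a hypothetical neural fake-vs-real classifier, not any specific published detector:

    import torch

    def evade_detector(detector, fake_frame, epsilon=2 / 255):
        """Nudge each pixel slightly so the detector scores the frame as real."""
        frame = fake_frame.clone().requires_grad_(True)
        score = detector(frame)  # detector's estimated probability of "fake"
        loss = torch.nn.functional.binary_cross_entropy(
            score, torch.zeros_like(score))  # push the score toward "real"
        loss.backward()
        # Step each pixel by at most epsilon against the gradient: invisible
        # to human eyes, but enough to flip the detector's decision.
        adversarial = frame - epsilon * frame.grad.sign()
        return adversarial.clamp(0, 1).detach()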

To curb the threat posed by increasingly sophisticated deepfakes, detection technology will also need to keep up the pace. As we try to improve the overall detection performance, emphasis should also be put on increasing the robustness of the detection methods to video compression, social media laundering and other common post-processing operations, as well as intentional counter-forensics operations. On the other hand, given the propagation speed and reach of online media, even the most effective detection method will largely operate in a postmortem fashion, applicable only after deepfake videos emerge.

Therefore, we will also see developments of more proactive approaches to protect individuals from becoming the victims of such attacks. This can be achieved by poisoning the would-be training data to sabotage the training process of deepfake synthesis models. Technologies that authenticate original videos using invisible digital watermarking or control capture will also see active development to complement detection and protection methods.

Needless to say, deepfakes are not only a technical problem, and as the Pandora's box has been opened, they are not going to disappear in the foreseeable future. But with technical improvements in our ability to detect them, and the increased public awareness of the problem, we can learn to co-exist with them and to limit their negative impacts in the future.


AI Fights Fraud: How the Use of AI Technologies in Banking Forges the Fight against Fraudsters – PaymentsJournal

Virtually every credit card and debit card user has had their card suspended due to suspicious activity – and unfortunately, fraud has not slowed with the rest of the world during the pandemic. In fact, since the beginning of the COVID-19 outbreak, 40% of financial services firms have seen an increase in fraudulent activity, according to a LIMRA survey, leading notable banks and even the FBI to issue fraud alerts to their communities.

Over the past few years, many technologies have come onto the market that help banks and credit unions catch out-of-the-ordinary activity and alert the card holder as quickly as possible. However, with more people making deposits and taking part in financial activities digitally via apps and chatbots due to current stay-at-home orders, the onus is solely on the technology to detect the fraudulent activity. Now more than ever, banks and other financial service providers need to implement AI technologies so they can become even more capable of identifying fraudulent patterns and data points that rudimentary, rule-based software can easily miss. Here are the three ways AI technology helps banks with fraud detection:

In recent years, companies have invested in AI primarily to improve efficiency by automating mundane tasks like data entry. However, according to a recent report from MIT Technology Review, organizations have expanded its use to improve the customer experience by increasing personalization and bringing a deeper level of customer understanding. This use of AI is particularly important for communicating with customers who could potentially be the target of fraudulent activity.

Detecting fraud is critical for banks to build trust with their customers. Leveraging a technology like conversational AI can alert banks to fraud warning signs so they can instantly notify the affected customer, give them the option to verify those suspicious transactions and then suggest next steps for fraud resolution. Banks should specifically look toward conversational AI providers who offer solutions with natural language understanding (NLU), which digests text and voice, translates it into computer language and produces a text and audio output in a natural way that humans can easily understand. This goes beyond simply offering customers an experience personalized just by their name and account details – it creates a more human interaction that connects them interpersonally through a language they are most familiar with, fostering trust between the customer and financial service provider.

Anti-money laundering (AML) is another area where banks are beginning to tap into the power of AI. With hundreds of thousands of wire transfers a day totaling trillions of dollars – not to mention the various privacy laws designed to protect customers – it's almost impossible to identify every instance of money laundering. Nevertheless, banks are required to do everything possible to identify and help combat money laundering. While banks have been using rule-based software to identify money laundering for some time, AI offers a significant improvement as it learns, grows and adapts with each experience. Much of this is due to AI's ability to process large quantities of data and see trends, patterns and outliers in a much larger context than the average human could easily discern.
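As a concrete, simplified illustration of that outlier spotting, unsupervised anomaly detection can flag transfers that do not fit the learned pattern. The sketch below uses scikit-learn's IsolationForest on invented transaction features; production AML systems are far more elaborate:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Each row: [amount_usd, transfers_last_24h, distinct_counterparties]
    normal = rng.normal([2000, 2, 1], [500, 1, 0.5], size=(10000, 3))
    suspicious = np.array([[9800, 40, 25]])  # a structuring-like burst
    transfers = np.vstack([normal, suspicious])

    model = IsolationForest(contamination=0.001, random_state=0).fit(transfers)
    flags = model.predict(transfers)  # -1 = outlier, 1 = normal

    print("flagged rows:", np.where(flags == -1)[0])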

As part of the fight against financial crime, governments across the world require their financial institutions to put in place AML compliance programs that oversee internal AML policies and ensure the organization remains compliant with important regulations. However, managing AML legislation has proven to be a challenging task for compliance officers. According to Accenture's 2019 Compliance Risk Study, compliance officers have reported being overworked and exhausted, resulting in potentially detrimental human-caused errors. As a result, there is an increased urgency to improve compliance productivity and shift operations from a check-the-box to a risk-prevention outlook.

Organizations that incorporate AI into their businesses are forced to re-imagine their processes – a common barrier to technology adoption. For example, with traditional compliance processes, humans might look at 15% of a bank's loans to ensure things are being done correctly, while AI processes can review 85% of the data. This not only improves accuracy, but it also means banking employees can be freed up to do more meaningful work.

With the rise of AI, banks have a new tool to handle any number of tasks that are traditionally time-consuming, labor intensive and prone to mistakes. Whether it be document processing, anti-money laundering, fraud detection, risk prevention or customer service, AI offers a level of support that is unparalleled in the history of banking. Best of all, with an increasing focus on privacy, AI represents a viable way to use that data in a safe, trusting manner.



I attended a virtual conference with an AI version of Deepak Chopra. It was bizarre and transfixing – KRDO

This past week I watched doctor and wellness advocate Deepak Chopra lead a short meditation over Zoom.

"Close your eyes. Bring your awareness to your heart. And mentally ask yourself only four questions: Who am I? What do I want? What am I grateful for? What's my purpose?" Chopra said on Wednesday morning. He was speaking at a technology conference as part of a discussion, talking to fellow panelists including Twitter cofounder Biz Stone and venture capitalist Cyan Banister.

The group kept their eyes closed as Chopra continued to speak. After another moment of guided meditation, he finished up; everyone opened their eyes.

"How was that?" Chopra asked.

"It went great!" said Stone.

"Wonderful!" chimed in Banister.

"So weird!" I muttered to myself.

I don't have anything against meditation. I was reacting to the fact that Chopra, Stone, Banister and the two other people I'd been viewing via Zoom – Laura Ulloa, a peace activist, and Lars Buttler, cofounder and CEO of the AI Foundation and moderator of this panel discussion – were all digital personas created with artificial intelligence.

That is, each one of them looked and sounded a lot like the person they were meant to represent. But these ersatz versions of their flesh-and-blood counterparts were built by Buttler's AI Foundation, a San Francisco company and nonprofit that promotes the idea that each of us should have our own AI identity.

Each avatar was trained by the person they emulate: The human was filmed making different consonant and vowel sounds, as well as answering a slew of questions to help the AI counterpart learn about how they speak and who they are. They are meant to be digital extensions that can communicate on behalf of their real selves. It's an idea that sounds both creepy and full of possibilities. Imagine sending your AI proxy to handle a day of work meetings, while you read a book.

The conversation, which mostly centered around what it's like to have your own personal AI agent ("neat," according to Stone's AI, as it could still be around after he dies), was part of the second annual Virtual Beings Summit. Last year, this conference took place in San Francisco, with attendees watching speaker sessions at Fort Mason; this year, it was conducted online.

The conference, according to its website, is meant for exploring the growing impact of next-gen avatars on social networks, commerce, and the arts.

While the AI folks talking and meditating appeared to be logged in to the Zoom session from different locations (Banister on a bench outdoors in front of a thicket of bamboo, Buttler in the AI Foundation office, and so on), the conference also included numerous speaker sessions hosted within the video game Animal Crossing, with each speaker embodied by a cute character. Regular people like me could view it all from afar via Zoom.

Watching the panel of AI creations was transfixing, due to its proximity to realness and its feeling of spontaneity. It's the first conversation I've seen conducted in real time by AI creations modeled after actual humans, without a script. While there were shortcomings, such as the AI version of Buttler repeatedly saying "Sorry about this" as a technical glitch delayed Chopra's AI from getting online, it was fascinating to watch the AI speakers interact. At one point, Buttler's AI asked Chopra's if he's often asked questions about the universe, and the result felt weirdly natural.

"Ah, yes," he answered. "People are often curious about what I believe the purpose of existence is."

What struck me immediately was the simultaneous sense of awe and Uncanny Valley-unease I felt just looking at the AI beings engage in conversation.

They had a number of the mannerisms of regular people: Banister blinked regularly, Buttler's Adam's apple moved occasionally, and Stone's shoulders shifted every so often.

But they were unlike their real-world counterparts in some obvious ways. They were very human-like, but still looked kind of like animated characters and mostly existed only from the shoulders up. Their voices sounded stiff when they replied to questions, and there was always an unnaturally long pause between a query and answer. When they spoke, their mouths moved more like those of animatronic puppets than people or even cartoon characters. At the very end of the discussion, the real-life Buttler joined, making the strangeness of these AI creations even more pronounced.

While the event wasn't scripted, the real-life Buttler told me that his AI had the set goal of asking the panelists what their purpose was, and each AI panelist was set to listen for its name being spoken so it would only give an answer when addressed directly. In general, if an AI being doesn't know the answer to a question, Buttler said, it's supposed to ask its corresponding human about it at another time.

Most of the panelists have a connection to the AI Foundation: Stone is part of its AI council and nonprofit board, Chopra is also on the nonprofit board, and Banister is an investor.

Buttler said each human owns their AI counterpart, and they were all aware that the AI version of themselves would be participating in this panel discussion. Different AI beings have been trained for different lengths of time; Buttler said Stone trained his AI for a few hours, while Chopra has spent dozens of hours working on his. The more you train it, the implication is, the more it will be a true representation of yourself.

"We don't want to replace human beings," Buttler said, soon adding, "These are extensions of real people that help them do their jobs better."

Banister, who watched part of the panel that included her AI avatar, wants the AI version of herself to be able to listen to pitches from entrepreneurs, enabling her to hear many more ideas than she ever could herself.

In the present, though, she sees a more practical benefit to having her own AI persona.

"For the first time ever I wasn't stressed out about giving a talk," she said. "So that was super nice."


These young immigrant brothers are teaching A.I. to high-schoolers for free: We want to give kids ‘a lucky break’ – CNBC

Since 20-something brothers Haroon and Hamza Choudery immigrated to Brooklyn, New York, from rural Pakistan in 1998, their lives have been changed by technology in both amazing and devastating ways.

Technology provides a nice living for the brothers: Haroon, 26, has a well-paying AI job at a healthcare company, and Hamza, 24, works at WeWork.

But their uncle has seen the other side.

The Chouderys' uncle used his life savings to finance a New York City taxi medallion in 2013 (which, at the time, cost as much as $1.3 million). But thanks to technology, the ride-share boom left the medallion worth just 20% of its original value, Haroon says.

"As you can imagine, starting from scratch after over two decades of working as a taxi driver had a devastating effect on the trajectory of his life."

This whiplash – technology launching their careers while devastating their elder – also had an effect on Haroon and Hamza. Inspired in part by the experience, the brothers co-founded a nonprofit called AI for Anyone.

The idea behind the AI literacy organization is to use "our privilege to help those that are less privileged avoid getting into situations where their livelihoods are destroyed, whether it be through automation replacing their jobs or whether it be through automation being designed to accommodate the needs of more affluent and more well-off people and not really taking the underrepresented populations into account when they're making their decisions," Haroon says.

Both in the classroom and online, AI for Anyone teaches students the basics of artificial intelligence, increases awareness of AI's role in society and shows how the technology can be used.

"We had support that really gave us a lot of lucky breaks," Haroon says, referring to the opportunities they were afforded after coming to the US. "We want to ... help give [kids] a lucky break in the form of some knowledge that may help them make a pivot in their lives," he says.

It wasn't just about lucky breaks for Haroon and Hamza. There was a lot of hard work too. But it is true that the brothers have lived some version of the American Dream.

After coming to the US when Haroon was 6 and Hamza was 4, their family lived with nine relatives in a two-bedroom apartment in Brooklyn, and later on a poultry farm on the Eastern Shore of Maryland. Their father worked any number of jobs, from baker to taxi driver to tow truck driver.

Haroon and Hamza Choudery with their father, Shabbir Choudery, and their sister, Rahat Choudery.

Photo courtesy A.I. for Anyone

Haroon (left) and Hamza Choudery in Pakistan.

Photo courtesy A.I. for Anyone

In 2011, Haroon won a Gates Millennium Scholarship, which gave him a full ride (including tuition, housing, food and transportation) both to Penn State for undergrad and to the University of California, Berkeley, where he got his master's in information and data science. After college, Haroon did data science work for Mark Cuban Companies and was a technology consultant at Deloitte Consulting. He is now a data scientist at Komodo Health.

Hamza graduated magna cum laude from the University of Maryland. He previously worked at Facebook, and now works in business operations at WeWork.

Today, living in New York City, the brothers could easily spend a couple of dollars on a cup of coffee, the same amount their family had to live on for a month in Pakistan. Living in both realities, Hamza says, "has contextualized the poverty and it has also contextualized the success."

That contrast became the brothers' "call to action" to launch their education initiative.

In 2017, Haroon, Hamza and their friend Mac McMahon started AI for Anyone with $5,000 of their own money.

The idea was to educate those who might be at risk of having their livelihood affected by artificial intelligence and arm them for the future.

The organization's team goes to schools to present workshops that teach kids who might not learn about AI otherwise, because as important as it is to the future of work, it is not part of a regular high school curriculum.

So far, AI for Anyone has taught approximately 50 workshops, reaching over 55,000 people, according to the Chouderys. It also has a monthly newsletter, All About AI, with over 33,000 subscribers, as well as a new podcast, AI For You. (One episode features an interview with Hod Lipson, a renowned professor in the AI space.)

The nonprofit is now funded by corporate sponsorships from Hypergiant and Komodo Health, so the workshops are free to students and teachers.

Haroon Choudery, a co-founder of A.I. for Anyone, teaching a workshop.

Photo courtesy A.I. for Anyone

Even the pandemic has not stopped AI for Anyone; the team has taken its seminars virtual.

The first virtual workshop in April was a partnership with the Mark Cuban Foundation, the billionaire tech entrepreneur's philanthropy, via a connection Haroon made through the work he did at Mark Cuban Companies.

"When COVID-19 hit, Haroon and I reconnected and realized we were both thinking about ways to teach AI in a bite-sized way to kids stuck at home," saysRyan Kline, an associate at Mark Cuban Companies. "AI for Anyone is doing great work in fundamental AI education, reaching wide audiences of young students."

They collaborated to digitize the AI for Anyone workshop. Students who want to learn more can then be funneled into the Mark Cuban Foundation's Intro to AI Bootcamps, a collaboration among the Mark Cuban Foundation, Microsoft, and Walmart that was announced in 2019.

Cuban posted about the workshop on LinkedIn.

"We see AI for Anyone as providing a spark for hundreds of students to advance their AI learning, and hope that many AI for Anyone graduates will apply to participate in the Mark Cuban Foundation Bootcamps as we expand nationwide," Kline says.

AI for Anyone is still growing, but the purpose is clear for the founders.

"A.I. for Anyone works [as] one of the most appropriate and most fitting ways for us to use our privilege to give back to those that are less privileged than us," Haroon says.

See also:

Amid the coronavirus pandemic, many companies could replace their workers with robots

Merck CEO on success: I was one of a 'few inner city black kids' who rode bus 90 minutes to school

Barack Obama: This is what you can do to reform the system that leads to police brutality

The rest is here:

These young immigrant brothers are teaching A.I. to high-schoolers for free: We want to give kids 'a lucky break' - CNBC

Here are 3 ways AI could help you commute safely after COVID-19 – World Economic Forum

As cities around the world are emerging from lockdowns to stop the spread of COVID-19, public transport companies are facing new challenges. They will have to avoid overcrowding buses and trains to reduce the risk of coronavirus transmission, while ensuring that overall passenger numbers are high enough to sustain the system. Meanwhile, commuters are tentatively returning to public transport, but will only embrace it widely if they see it as a safe, fast and convenient way of reaching their destinations.

Here are three ways cutting-edge technologies such as artificial intelligence can help us all travel with ease, by crunching huge amounts of data, devising optimal schedules and journeys, and adapting them to the rapidly evolving situation:

Overcrowding is known to pose a major transmission risk for COVID-19 and other diseases. If countries want to avoid a second wave of infections, one approach is to flatten the typical morning and afternoon peaks in passenger numbers. At least in the medium term, rush hour is likely to be replaced by an even spread of passengers over the course of the day.

Countries around the world are already implementing or preparing for staggered work shifts and school schedules. This helps prevent overcrowding in offices and classrooms, and also spreads out commutes. For public transport companies, this can mean putting on relatively frequent trains and buses all day long, rather than running back-to-back transport during rush hour and more infrequent service during quieter times.

Social distancing and public transport

Image: Optibus

An all-day service has several benefits. It reduces passenger density and facilitates social distancing. It also means drivers are less likely to have to split their shifts and work during busy mornings and afternoons, with sometimes inconvenient off-time in between. Passengers benefit, too, because they can rely on fairly frequent trains and buses all day long.

However, this new model also comes with challenges. Any changes in COVID-19 infection numbers, including local outbreaks and surges, are likely to affect ridership demand. This can happen so quickly that transport providers won't have much time to prepare and adjust. The only way to deal with the uncertainty is to have the flexibility and technological capability to react within days or even hours, and implement the kinds of schedule changes that would have previously taken months of preparation.

This is where artificial intelligence comes in. Transport planning involves vast amounts of data. The number of drivers on duty, the level of passenger demand, the number of available buses and trains, as well as rules such as the maximum hours drivers can work between breaks, and the length of each break, are just some of the many different factors that need to be taken into account.

With the help of algorithms, transit officials can easily create different scenarios based on changes in any of these factors. They can enter changes to the routes and travel times and see the schedule update automatically. They can also handily compare the costs and revenues of the different scenarios. This allows them to respond quickly to broader events that will affect people's movement, be it lockdowns or staggered shifts. It also means they can quickly put on extra trains and buses if needed.
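
To make the scenario-comparison idea concrete, here is a minimal sketch in Python; the cost and ridership figures are invented assumptions for illustration, not Optibus's actual model.

```python
# Minimal sketch of comparing transit schedule scenarios by cost and revenue.
# All figures below are invented assumptions, not any vendor's real numbers.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    trips_per_day: int
    drivers: int
    riders_per_trip: float = 20.0   # assumed average occupancy
    avg_fare: float = 2.50          # assumed fare (USD)
    cost_per_driver: float = 220.0  # assumed daily driver cost
    cost_per_trip: float = 35.0     # assumed fuel/maintenance per trip

    def revenue(self) -> float:
        return self.trips_per_day * self.riders_per_trip * self.avg_fare

    def cost(self) -> float:
        return self.drivers * self.cost_per_driver + self.trips_per_day * self.cost_per_trip

scenarios = [
    Scenario("rush-hour heavy", trips_per_day=400, drivers=60),
    Scenario("all-day even spread", trips_per_day=380, drivers=52),
    Scenario("all-day plus extra capacity", trips_per_day=420, drivers=58),
]

# Rank scenarios by operating margin, the comparison planners make by hand.
for s in sorted(scenarios, key=lambda s: s.revenue() - s.cost(), reverse=True):
    print(f"{s.name}: margin = {s.revenue() - s.cost():,.0f}")
```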

Mobility systems must be resilient, safe, inclusive, responsive, and sustainable. This is why #WeAllMove, a mobility service match-making platform, was launched in April 2020 by Wunder Mobility in partnership with the World Economic Forum COVID Action Platform. The platform highlights the importance of leveraging multi-stakeholder collaboration across governments, providers, commuters and more.

#WeAllMove consolidates information about a variety of mobility options available in any city, from mode share to ride share and transit. The independent platform, co-hosted by mobility providers operating globally, will integrate private, public and joint mobility services into a single search and output engine, ensuring a better new mobility normal can be forged, regardless of the crisis ahead.

Since its launch in April 2020, it has grown to include 130 mobility service providers offering tailored services in over 300 cities and 40 countries. By bringing public and private stakeholders together, the platform can ensure business continuity for an array of mobility providers, and help secure jobs and services that depend on mobility.

Smart contingency planning

Artificial intelligence (AI) can help us solve seemingly intractable transport problems. Take this example from our own business, which provides advanced technology to public transportation agencies and operators in various countries.

One of our customers wanted to add 5% more trips to their schedule in order to spread passenger numbers over more journeys and reduce the risk of coronavirus transmission. However, they had 14% fewer drivers on hand. It seemed like an impossible problem. And yet, our AI-driven software found a way of adding more trips with fewer drivers. It did so by optimizing the schedule and extending the average shift by just 45 minutes, while adding in any necessary breaks to adhere to labour and safety regulations. If the driver shortage becomes less severe, transportation providers can easily change their preferences, and the platform will automatically restore the length of the shift.
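
A quick back-of-the-envelope check shows why the optimization, not the longer shifts alone, does most of the work here; the baseline figures below are assumptions, since the article gives only percentages.

```python
# Sanity-check arithmetic on the +5% trips / -14% drivers example.
# Baseline driver count and shift length are assumed for illustration.
drivers, shift_hours = 100, 8.0

hours_before = drivers * shift_hours                    # 800 driver-hours
hours_after = (drivers * 0.86) * (shift_hours + 0.75)   # 86 drivers x 8.75 h

print(hours_after / hours_before)   # ~0.94: still about 6% fewer hours
# Trips rose 5% on roughly 6% fewer driver-hours, so about 12% more trips
# per driver-hour had to come from tighter scheduling (less idle and
# deadhead time), not from the 45-minute shift extension alone.
```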

AI-powered systems also allow operators to quickly provide extra buses to ensure social distancing. Putting on an extra bus may sound simple, but it requires contingency planning, in the form of complex algorithms that can rapidly create alternative scenarios. These enable agencies and operators to figure out which part of the transportation network needs to change to ensure that extra vehicles and drivers are available when needed.

Take a tiny family-owned operator that owns five buses and offers only 50 trips a day. This already results in over 1 billion potential vehicle combinations. Even with a medium-sized transit provider, the numbers become so large that they can't be analysed by humans alone. In big transit-friendly cities like London, some 9,700 buses carry around 2.2 billion passengers a year. Algorithms can sift through all those combinations and choose the optimal solutions within minutes or even seconds.
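
That scale is easy to verify. Under the simplest possible model, independently assigning each of the 50 trips to one of the 5 buses, the count vastly exceeds the article's conservative figure (a real feasibility model would prune many of these combinations, which is presumably where "over 1 billion" comes from):

```python
# Counting naive trip-to-bus assignments for the tiny five-bus operator.
buses, trips = 5, 50
assignments = buses ** trips
print(f"{assignments:.2e}")   # ~8.88e+34 possible assignments

# Even at a billion evaluations per second, exhaustive search would take
# on the order of 10^18 years -- hence heuristic, AI-driven optimization.
print(assignments / 1e9 / (3600 * 24 * 365))
```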

Image: Optibus

Monitor the impact of altered service

Transport providers have to ensure that all those living along certain routes can still get to their destinations even if schedules are changed at short notice.

With data-driven planning systems, transport officials can enter demographic data from public sources, such as income levels in specific areas. They can then look at a map that overlays the suggested routes with this data. This allows officials to quickly see the potential impact of any changes for residents, including those who may not have the means to use private forms of transport. Such mapping is a fast and simple way of making public transit as inclusive as possible.

All told, there are many different ways social distancing may affect public transit as we encounter the changes that accompany a gradual exit from lockdown. We may not know exactly what transit demand will look like in the coming months, but principles like cooperating with local governments and private institutions to flatten peak times, creating and comparing multiple transit scheduling scenarios, and monitoring the impact that any service changes will have on residents can help transportation providers navigate the unknown roads ahead.

Read the original:

Here are 3 ways AI could help you commute safely after COVID-19 - World Economic Forum

Are these the edge-case trends of AI in 2020? – Tech Wire Asia

Artificial intelligence (AI) continues to hold its title as the top buzzword of enterprise tech, but its appeal is well-founded. We now seem to be shifting from the era of businesses simply talking about AI, to actually getting hands-on, exploring the ways it can be used to tackle real-world challenges.

AI is increasingly providing solutions to problems old and new. But while the technology is proving itself incredibly powerful, not all of its potential is necessarily positive. Here, we explore some of the more edge-case applications of AI taking place this year.

Advances in deep learning and AI continue to make deepfakes more realistic. This technology has already proven itself dangerous in the wrong hands; many predict that deepfakes could provide a dangerous new medium for information warfare, helping to spread misinformation or fake news. The majority of its use, however, is in the creation of non-consensual pornography, which most frequently targets celebrities owing to the large number of data samples in the public domain. Deepfake technology has also been used in highly sophisticated phishing campaigns.

Beyond illicit ingenuity in shady corners of cyberspace, the fundamental technology is proving itself a valuable tool in a few other disparate places. Gartner's Andrew Frank called the technology a potential asset to enterprises in personalized content production: "Businesses that utilize mass personalization need to up their game on the volume and variety of content that they can produce, and GANs' [generative adversarial networks'] simulated data can help."
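
For readers unfamiliar with the acronym, the sketch below shows the core of a GAN, two networks trained against each other, on a toy two-dimensional dataset. It is a minimal PyTorch illustration of the general technique, not any vendor's production system.

```python
# Minimal GAN sketch: a generator learns to mimic a toy data distribution
# by trying to fool a discriminator. Illustrative only.
import torch
import torch.nn as nn

latent_dim = 8
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for real data: noisy points on a circle.
    theta = torch.rand(n, 1) * 6.2832
    return torch.cat([theta.cos(), theta.sin()], dim=1) + 0.05 * torch.randn(n, 2)

for step in range(2000):
    # Discriminator step: separate real samples from generated ones.
    real, fake = real_batch(), G(torch.randn(64, latent_dim)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: produce samples the discriminator labels as real.
    fake = G(torch.randn(64, latent_dim))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```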

Last year, a video featuring David Beckham speaking in nine different languages for a Malaria No More campaign was released. The content was the result of video manipulation algorithms and showed how the technology can be used for a positive outcome: reaching a multitude of different audiences quickly with accessible, localized content in an engaging medium.

Meanwhile, a UK-based autonomous vehicle software company has developed deepfake technology that is able to generate thousands of photo-realistic images in minutes, which helps it train autonomous driving systems in lifelike scenarios, meaning the vehicle makers can accelerate the training of systems when off the road.

The Financial Times also reported on a growing divide between traditional computer-generated graphics, which are often expensive and time-consuming, and the recent rise of deepfake tech, which has famously been used to recreate a young Harrison Ford as Han Solo in footage from the recent Star Wars films.

Facial recognition is enabling convenience, whether it's a quick passport check-in process at the airport (remember those?) or the swanky facial software in newer phone models. But AI's use in facial recognition now extends to surveillance, security, and law enforcement. At best, it can cut through some of the noise of traditional policing. At worst, it's susceptible to its own in-built biases, with recorded instances of systems trained on misrepresentative datasets leading to gender and ethnicity biases.

Facial recognition has been dragged to the fore of discussion, following its use at BLM protests and the wrongful arrest of Robert Julian-Borchak Williams at the hands of faulty AI algorithms earlier this year. A number of large tech firms, including Amazon and IBM, have withdrawn their technology from use by law enforcement.

AI has a long way to go to match the expertise of our human brains when it comes to recognizing faces. Faces are complex and changeable, and algorithms can be easily confused. There's a roadmap of hope for the format, though, thanks to further advances in deep learning. As an AI system matches two faces correctly or incorrectly, it remembers the steps and creates a network of connections, picking up past patterns and repeating them or altering them slightly.
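
What the passage loosely describes is, in most modern systems, embedding-based matching: a trained network maps each face to a vector, and two faces "match" when their vectors are close. Here is a hedged sketch of just the comparison step; the network itself and the threshold value are stand-ins, not any specific product's parameters.

```python
# Sketch of embedding-based face matching. The embeddings would come from
# a trained network; the threshold is a tunable stand-in value.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(emb_a: np.ndarray, emb_b: np.ndarray, threshold: float = 0.6) -> bool:
    # The threshold is tuned on labeled pairs; a misrepresentative training
    # set shifts where this boundary sits for different demographic groups,
    # which is one concrete mechanism behind the biases described above.
    return cosine_similarity(emb_a, emb_b) >= threshold
```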

Facial recognition's controversies have furthered discussions around ethical AI, allowing us to clearly understand the tangible impact of misrepresentative datasets in training AI models, which are equally worrying in other applications and use cases, such as recruitment. As the technology is deployed into more and more areas of the world around us, its dependability, neutrality and compliance with existing laws become all the more critical.

With every promising advance in technology comes another challenge, and a recent CBInsights paper warns of AI's role in the rise of new-age hacks.

Sydney-based researchers at Skylight Cyber reported finding an inherent bias in an AI model developed by cybersecurity firm Cylance, and were able to create a universal bypass that allowed malware to go undetected. They were able to understand how the AI model works and the features it uses to reach decisions, and to create tools to fool it time and again. There's also the potential for a new crop of hackers and malware to poison data, corrupting AI algorithms and disrupting the usual detection of malicious versus normal network behaviour. This problematic level of manipulation doesn't do a lot for the plaudits that many cybersecurity firms give to products that use AI.

AI is also being used by the attackers themselves. In March last year, scammers were thought to have leveraged AI to impersonate the voice of a business executive at a UK-based energy business, requesting from an employee the transfer of hundreds of thousands of dollars to a fraudulent account. More recently, it's emerged that these concerns are valid, and that not a whole lot of sophistication is required to pull such attacks off. As seen in the case of Katie Jones, a fake LinkedIn account used to spy on and phish information from its connections, an AI-generated image was enough to dupe unsuspecting businessmen into connecting and potentially sharing sensitive information.

Meanwhile, some believe AI-driven malware could be years away, if on the horizon at all, but IBM has researched how existing AI models can be combined with current malware techniques to create challenging new breeds in a project dubbed DeepLocker. Comparing its potential capabilities to a sniper attack, as opposed to traditional malware's "spray and pray" approach, IBM said DeepLocker was designed for stealth: "It flies under the radar, avoiding detection until the precise moment it recognizes a specific target."

There's no end to innovation when it comes to cybercrime, and we seem set for some sophisticated, disruptive activity to emerge from the murkier shadows of AI.

Automated machine learning, or AutoML (a term coined by Google), reduces or completely removes the need for skilled data scientists to build machine learning models. Instead, these systems allow users to provide training data as an input, and receive a machine learning model as an output.

AutoML software companies may take a few different approaches. One approach is to take the data and train every kind of model, picking the one that works best. Another is to build one or more models that combine the others, which sometimes gives better results. Businesses in fields ranging from motor vehicles to data management, analytics and translation are seeking refined machine learning models through the use of AutoML. With a marked shortage of AI experts, this technology will help democratise the tech and cut down computing costs.
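
A toy version of the first approach, train several model families and keep the one with the best validation score, fits in a few lines of scikit-learn; the dataset and the candidate list are chosen purely for illustration.

```python
# "Train every kind of model, pick the best" in miniature, via scikit-learn.
# Dataset and candidates are illustrative stand-ins for an AutoML search.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=200),
    "gradient_boosting": GradientBoostingClassifier(),
}

# Score each family with 5-fold cross-validation and keep the winner.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

The second approach the article mentions, combining models, corresponds to wrapping the same candidates in an ensemble such as scikit-learn's VotingClassifier rather than picking a single winner.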

Despite its name, AutoML has so far relied heavily on human input, with coded instructions and programs that tell a computer what to do. Users still have to code and tune algorithms to serve as building blocks for the machine to get started. There are pre-made algorithms that beginners can use, but it's not quite automatic.

Google computer scientists believe they have come up with a new AutoML method that can generate the best possible algorithm for a specific function, without human intervention. The new method is dubbed AutoML-Zero, and it works by continuously trying algorithms against different tasks and improving upon them through a process of elimination, much like Darwinian evolution.
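
AutoML-Zero itself evolves whole programs from primitive operations, but the evolutionary loop it rests on can be shown with a much smaller stand-in problem, here evolving two coefficients toward a target line rather than evolving algorithms.

```python
# Toy evolutionary search in the spirit of AutoML-Zero: mutate candidates,
# eliminate the unfit, repeat. Real AutoML-Zero evolves whole programs.
import random

data = [(x, 3 * x + 1) for x in range(-5, 6)]   # target function: y = 3x + 1

def fitness(coeffs):
    # Negative squared error on the task: higher is fitter.
    a, b = coeffs
    return -sum((a + b * x - y) ** 2 for x, y in data)

population = [[random.uniform(-2, 2), random.uniform(-2, 2)] for _ in range(20)]

for _ in range(300):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                       # elimination step
    population = survivors + [
        [c + random.gauss(0, 0.1) for c in random.choice(survivors)]
        for _ in range(15)                           # mutated offspring
    ]

best = max(population, key=fitness)
print(best)   # converges toward [1.0, 3.0], i.e. y = 3x + 1
```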

AI and machine learning may be streamlining processes, but they are doing so at some cost to the environment.

AI is computationally intensive (it uses a whole load of energy), which explains why a lot of its advances have been top-down. As more companies look to cut costs and utilize AI, the spotlight will fall on the development and maintenance of energy-efficient AI devices, and tools that can be used to turn the tide by pointing AI expertise towards large-scale energy management.

Artificial intelligence also has a role in augmenting energy efficiency.

In 2018, China's data centers produced 99 million metric tons of carbon dioxide (equivalent to 21 million cars on the road). Worldwide, data centers consume 3 to 5 percent of total global electricity, and that share will continue to rise as we rely more on cloud-based services. Savvy to the need to go green, tech giants are now employing AI systems that gather data from sensors every five minutes and use algorithms to predict how different combinations of actions will positively or negatively affect energy use. AI tools can also spot issues with cooling systems before they happen, avoiding costly shutdowns and outages for cloud customers.
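
The pattern described, fit a model to periodic sensor readings and then score candidate control actions against it, can be sketched as follows; the sensor features and the synthetic data are assumptions for illustration, not any company's real telemetry.

```python
# Sketch of learning how conditions and control actions relate to energy
# use, then scoring candidate action combinations. Data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 5000
# Assumed features: outside temp (C), IT load (0-1), fan speed (0-1),
# chiller setpoint (C), sampled as if logged every five minutes.
X = rng.uniform([10, 0.2, 0.3, 16], [35, 1.0, 1.0, 24], size=(n, 4))
# Synthetic ground truth for energy draw (kW), plus sensor noise.
y = 50 + 2.0 * X[:, 0] + 80 * X[:, 1] + 25 * X[:, 2] - 1.5 * X[:, 3] \
    + rng.normal(0, 3, n)

model = GradientBoostingRegressor().fit(X, y)

# For current conditions (28 C outside, 80% load), compare control actions.
candidates = np.array([[28.0, 0.8, fan, setpoint]
                       for fan in (0.4, 0.6, 0.8)
                       for setpoint in (18.0, 20.0, 22.0)])
best = candidates[np.argmin(model.predict(candidates))]
print("lowest predicted energy at fan/setpoint:", best[2], best[3])
```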

From low-power AI processors in edge technologies to large-scale renewable energy solutions (that's AI dictating the angle of solar panels, and predicting wind power output from weather forecasts), there are positive moves happening as we enter the 2020s. More green-conscious, AI-intensive tech firms are popping up all the time, and we look forward to seeing how they navigate the double-edged sword of energy-guzzling AI being used to mitigate the guzzling of energy.

See more here:

Are these the edge-case trends of AI in 2020? - Tech Wire Asia

Will this creepy AI platform put artists out of a job? – Digital Arts

How do you use Artbreeder, and should you use it?

When discussions of AI and its effect on the labour force come up, the creative industry usually shrugs its shoulders. After all, how can artificial intelligence accurately replicate an artist's singular vision?

It's an understandable stance, but recent weeks may have sent a little shiver down creatives' spines. Only yesterday, Fast Company revealed that a designer commissioned by clients turned out to be an AI system employed by one very adventurous design firm.

AI art generator Artbreeder, meanwhile, has grabbed attention through Bas Uterwijk's photo portraits of historical figures, all of which were generated from classic painted portraits. The photos, as seen on Designboom, are remarkably polished and authentic. But would it work the other way around? Can equally excellent art stem from machine-manipulated photos?

Artbreeder isn't the only AI art platform out there, but it's certainly grabbed the most attention. The website combines and manipulates any kind of image to produce countless variations using the magic of machine learning, giving you the option to make landscapes, anime figures, portraits and more (and don't worry, all these images remain automatically private unless you choose otherwise).

You can upload an image and let the website do the heavy lifting, or toggle features using its Edit-Genes tab. On portraits this allows you to change colours, race and accessories (adding facial hair or glasses); the same applies to cartoonish anime and 'furry' creations, minus the last two options. Artbreeder also animates everything except anime characters, which might sadden the weeaboos out there.

First up, note that you can only upload photos for the portrait and landscape sections. Everything else gives you a series of random images to play about with using slider controls; refresh the set if you'd prefer different options.

You can play with a given selection for the portraits and landscapes workspaces, but these are the fun ones which let you upload any image, as many have been doing in recent months.

With these photos you can generate new faces and landscapes as sourced from the original, either in photorealistic or impressionist form. Just use the slider controls with the 'Children' tab for these; if you get bored, you can 'Crossbreed' your upload with a public image from the database or another of your own.
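
Under the hood of GAN-based tools, "crossbreeding" amounts to blending the latent vectors of two images and decoding the mix. Here is a hedged sketch; the `generator` and the 512-dimensional latent size are stand-ins, and Artbreeder's internals may well differ.

```python
# Sketch of latent-vector "crossbreeding" in a GAN-based image tool.
# The generator and latent size are illustrative stand-ins.
import numpy as np

def crossbreed(z_a: np.ndarray, z_b: np.ndarray, weight: float = 0.5) -> np.ndarray:
    # weight=0 returns the first parent, weight=1 the second;
    # values in between produce the blended "children" the sliders expose.
    return (1 - weight) * z_a + weight * z_b

z_parent_a = np.random.randn(512)   # latent code of the first image
z_parent_b = np.random.randn(512)   # latent code of the second image
child = crossbreed(z_parent_a, z_parent_b, weight=0.3)
# image = generator(child)          # decode with the trained GAN
```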

Just be careful as you slide: whatever catches your eye won't be there when you toggle back as each 'push' creates a whole new series of variants. Save what you see as soon as you see it to avoid any disappointment.

Note that each image you upload is instantly converted, meaning it won't look like the one on your phone or hard drive. This variant of the original is what all the Children are based on, but you can tinker with this master copy using the Edit-Genes tab mentioned earlier.

As a free account user, you can upload up to five images in total. In order to do anything with your personal imagery, you have to click on the relevant page (Portrait, Landscape) and upload from there; images aren't stored in the cloud for you to share across sub-platforms. In other words, if you upload four pics in one of the Portrait tabs, you'll only be able to upload one Landscape pic, and that'll be the only one you can manipulate aside from the ones given to you by Artbreeder.

There are two Portrait pages. The other is marked as 'Old', presumably meaning it uses older networks, rather than generating more classical-style portraits as you might think; I preferred the results of the more current option.

No matter what section you use, you can create and download as many animations as you like. There is a limit on how many high-res versions you can download of your creations; paying allows you to save and upload more.

But would we recommend paying? And should artists be worried about websites like this? Judging by the portraits created, the answer right now would be no. Generated paintings are very rarely lifelike (if that's what you're looking for), and I noticed a limited range of painting styles; most faces also look creepy and 'off', and there's barely anything cosy here. It can also take a while to upload a photo due to the popularity of the site; depending on the time of day, processing can last up to an hour when queued.

The more impressive results, though, came from landscapes. While again not entirely faithful recreations, turning my industrial cityscape into a fantasy mountain scene, Artbreeder's BigGAN models produced some very impressive worlds; here's the original version I uploaded for you to compare with its 'remixes' below it.

Game environment artists may be intrigued by these results, and no doubt the software at work is improving as we speak. With an improved website, Artbreeder could be a very creepy 'rival' for digital artists indeed.

Related: The AI robots getting a retro Saul Bass branding

Link:

Will this creepy AI platform put artists out of a job? - Digital Arts

Sitecore Leverages AI for Smarter Management of Images and Video, Reduces Content Creation Pressure and Saves Brands Time and Money With New Sitecore…

SAN FRANCISCO, July 21, 2020 /PRNewswire/ -- Sitecore, the global leader in digital experience management software, released version 3.4 of Sitecore Content Hub today, offering new robust, streamlined and scalable capabilities to help brands accelerate their digital transformations.

Brands today manage thousands of digital assets to create engaging digital experiences for their customers. Leading organizations are quickly realizing the necessity of augmenting their content management with world-class digital asset management capabilities to support the current content explosion and improve their marketing efficacy and efficiency in the process. Sitecore Content Hub version 3.4 offers enhanced Digital Asset Management (DAM) capability with leading edge innovations in artificial intelligence (AI) and video capabilities, in addition to improving workflows and ease of use with extended integration to third-party solutions.

A recent study by marketing advisory firm Econsultancy1 found that 65% of marketers spend more of their time creating content than on any other activity supporting digital campaigns. To offer time savings, Sitecore Content Hub 3.4 adds Content AI to analyze image similarity, so brands have instantaneous access to alternative images in the DAM, with more options made immediately available to fit their particular need. Marketers can then choose to reuse or repurpose these rather than create new ones, saving their company time and money. For example, when using a photo of a family on the beach, Content Hub will recognize the makeup of the photo and offer similar images of groups on the beach, reducing content creation timelines and expense for marketing teams.
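
Image-similarity features of this kind are typically built on embeddings: each asset is encoded once as a vector, and a query returns its nearest neighbours. Here is a hedged sketch of the lookup step; the embeddings are random stand-ins, and Sitecore has not published its implementation.

```python
# Sketch of similar-image lookup over a digital asset library.
# Embeddings are random stand-ins for vectors from a trained image model.
import numpy as np

def top_k_similar(query_emb: np.ndarray, asset_embs: np.ndarray, k: int = 5):
    # Cosine similarity of the query against every asset, highest first.
    q = query_emb / np.linalg.norm(query_emb)
    a = asset_embs / np.linalg.norm(asset_embs, axis=1, keepdims=True)
    return np.argsort(-(a @ q))[:k]

library = np.random.randn(10_000, 256)   # precomputed asset embeddings
query = np.random.randn(256)             # embedding of, say, a beach photo
print(top_k_similar(query, library))     # indices of the closest assets
```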

Consumption of video content has grown and continues to grow exponentially, with 85% of U.S. internet users saying they watch online video content,2 and viewers reporting that they spent 59% more time watching online videos in 2019 than they had three years before.3 To help brands manage their video content more effectively, Content Hub version 3.4 also includes the ability to automatically generate metadata as well as transcripts for video using AI analysis from Microsoft Azure Cognitive Services. In addition, video management capabilities now include support for time- and range-based annotation, cropping and subtitles, giving marketers more automated capabilities to make their videos more consumable by different audiences.

"Our vision has always been to streamline work and take the tedium out of the creative process for our customers," said Tom De Ridder, CTO, Sitecore. "With version 3.4 of Content Hub, we've taken several massive steps to further drive this mission with new capabilities, enhancements and integrations in a SaaS product that makes it easier for all content stakeholders in a distributed environment to work together to solve the content crisisall in a single solution that distributes content to all channels."

Recent research from SoDA found that 95% of marketers say producing and publishing personalized digital content more quickly is a priority, and 39% say manual processes are holding them back.4 Additional workflow and ease-of-use functionality new to version 3.4 helps support these demands, including smarter navigation, mass-edit templates, and on-the-fly tagging. Now, Marketing Resource Management and Content Marketing Platform (MRM and CMP) integration empowers DAM users to manage both agile workflows of content items and timeline-based project management workflows from a single location.

Content Hub 3.4 also improves workflow and ease-of-use for DAM users who use Adobe Creative Cloud. With a DAM search panel, users can upload, check in, and check out assets from InDesign, Photoshop, and Illustrator. They can preview work-in-progress within InDesign documents directly in the DAM without packaging, and package finished assets directly into the DAM from InDesign.

CHILI publisher integration

Producing graphics the traditional way can be fraught with challenges for today's marketer. It takes time, requires money, and involves risk. Marketers face brand compliance and governance challenges, scale challenges, and time-to-market challenges.

With Content Hub 3.4, enhanced Web to Print capabilities with tight integration to CHILI publisher are now available to simplify and automate graphic production across digital and print publishing. This integration connects Content Hub data and assets to CHILI Smart Templates that enable preset customizable elements while embedding brand identity guidelines. Users can now self-service anywhere in the world with just a browser to deliver customized, brand-compliant, and ready-to-use printed assets at scale.

JavaScript development kit

And to make it easier for assets and content to flow to and from other applications in the marketing technology stack, Sitecore has also released a JavaScript SDK, making Content Hub more extensible with accelerated and minimized development and integration efforts for third-party solutions.

To learn more about Content Hub, please visit Sitecore.com. Developers interested in the new Content Hub JavaScript SDK can find more details available here.

About Sitecore

Sitecore delivers a digital experience platform that empowers the world's smartest brands to build lifelong relationships with their customers. A highly decorated industry leader, Sitecore is the only company bringing together content, commerce, and data into one connected platform that delivers millions of digital experiences every day. Leading companies including American Express, ASOS, Carnival Cruise Lines, Kimberly-Clark, L'Oréal, and Volvo Cars rely on Sitecore to provide more engaging, personalized experiences for their customers. Learn more at Sitecore.com.

Sitecore and Sitecore Content Hub are registered trademarks or trademarks of Sitecore Corporation A/S in the USA and other countries. All other brand names, product names or trademarks belong to their respective holders.

Sources: Econsultancy1, Statista2, Limelight Networks3, SoDA4

Contact: Shannon Lyman, Sr. Director, Communications at Sitecore, [emailprotected]

SOURCE Sitecore

http://www.sitecore.net

See original here:

Sitecore Leverages AI for Smarter Management of Images and Video, Reduces Content Creation Pressure and Saves Brands Time and Money With New Sitecore...

With 5G+AI Twin Engines – Qualcomm, WIMI and Samsung Bring New Opportunities to the Industry – Yahoo Finance

HONG KONG, CHINA / ACCESSWIRE / July 21, 2020 / The arrival of 5G will create new growth opportunities, and AI is set to ride a new wave of expansion in the 5G era. 4G accelerated the development of the server market, and 5G is expected to continue that trend, giving the server industry strong long-term prospects. The rollout of 4G drove user growth, and operators invested heavily in building data centers to meet demand, triggering a surge in server procurement. Compared with 3G and 4G, 5G improves speeds by roughly a factor of ten, a qualitative leap for the server market, and 5G data rates are expected to rise by tens of times more in the future, injecting still more vitality into the market. Industries previously limited by data processing speed could break through those bottlenecks and grow substantially; once such industries take off on the combination of AI and 5G, the volume of data generated will be hundreds of times greater than in the 4G era.

So, what does 5G bring to AI? This can be explained in three ways.

The first is data. AI's rapid development rests on big data, which serves as the raw learning material for AI systems. AI fundamentally needs more data, and 5G provides the foundation for creating it: data volumes can grow a hundredfold, while data structures become more diverse and complex. But even as 5G and AI reinforce each other, computing power has yet to make a comparable breakthrough, and processing all that data efficiently remains an open problem.

The second is control. With the 3GPP R16 standard finalized and R17 advancing, 5G's massive-connectivity features are better supported. 5G brings more devices online, more devices that AI can control, and correspondingly more scenarios for AI. Indoors, users can now control more kinds of household appliances, from TVs and lights to refrigerators and purifiers; outdoors, cars, wearable devices and more have joined what was once just the mobile phone. This greatly expands the boundaries of AI control, though the depth of that control remains limited.

Finally, there are practical applications. AI is not yet widely used in mobile phones: intelligent voice is an important feature, and manufacturers are pushing personal AI assistants, but these are not yet smart enough, in large part because they have too little data to learn from.

As AI technology matures, more and more industries are combining with it to pursue growth, drawing on breakthroughs in algorithms, computing power, data, products, engineering, and solutions. Fast-deploying fields with large addressable markets, such as artificial intelligence, big data, and cloud computing, have attracted substantial resources in recent years, and industry giants, algorithm companies, and start-ups alike are actively positioning themselves for the 5G era.

Qualcomm

For more than a decade, Qualcomm has been working on AI to empower many industries. In the wave of 5G and AI innovation, chips are an important part of the industrial chain.

Qualcomm has deep technical experience in AI, mobile computing, and connectivity, combining leading 5G connectivity with AI research and development in a complete cloud-to-edge AI solution. Along the way, it has built close ties with the AI industry and solid partnerships with a number of leading AI ecosystem partners in China to jointly build the future of artificial intelligence. In 2018, Qualcomm established Qualcomm AI Research to consolidate its internal research on cutting-edge AI, and that same year set up a $100 million AI venture capital fund to invest in start-ups revolutionizing AI technology around the world. Qualcomm Ventures has since invested in a number of leading AI innovators in China.


Qualcomm has operated in China for more than 20 years and opened a research and development center in Shanghai in 2010. In 2016 it established its first semiconductor manufacturing test facility in the world, Qualcomm Communications Technology (Shanghai) Co., Ltd., in the Pudong New Area, bringing its advanced products and technologies to China and demonstrating its commitment to investing there, integrating more closely with Chinese industries, and serving local customers. Qualcomm is also working with Chinese partners, including Shanghai enterprises, on innovations in 5G, artificial intelligence, cloud computing, big data, and other fields, to advance the development of 5G and AI in Shanghai, boost "new infrastructure," and promote the growth of China's new technology industries and digital economy.

WiMi Hologram Cloud

Unlike 4G, 5G networks offer higher speeds, lower latency, and massive connectivity: tens of times faster than 4G, with latency below 1 millisecond, and more than 50 billion connected devices expected worldwide. From these capabilities come 5G's three application scenarios: enhanced mobile broadband (eMBB), ultra-reliable low-latency communication (uRLLC), and massive machine-type communication (mMTC). These scenarios are giving rise to more applications in the market, such as AR holography, autonomous driving, telemedicine, and the interconnection of everything. Extending carrier-grade cellular communication from interaction between people to communication between things may lead to a new revolution in human society.

The hologram industry has broad prospects and great potential, with explosive growth expected. By 2025, China's holographic cloud market is projected to exceed 450 billion RMB, growing by 78% annually, while the global holographic cloud market is projected to exceed 500 billion USD, growing by 68% annually.

WiMi Hologram Cloud, a representative of China's visual AI enterprises, operates across multiple links of the holographic AR technology chain, including holographic visual presentation, holographic interactive software development, holographic AI computer vision synthesis, holographic AR online advertising, holographic AR SDK payment, 5G holographic communication software development, and the development of holographic AI facial recognition and facial modification.

With the bandwidth improvements of 5G communication networks, high-end holographic applications are increasingly used in social media, communication, navigation, home applications, and other scenarios. WiMi Hologram Cloud is building a holographic cloud platform over 5G networks based on two core technologies: holographic AI facial recognition and holographic AI facial modification.

WiMi plans to continue improving and strengthening its existing technologies, maintain its industry leadership, and create ecosystem business models. Its holographic face recognition and face-swapping technologies are currently applied in its existing holographic advertising and entertainment businesses, and the technology is being upgraded to make breakthroughs in more areas of the industry. WiMi Hologram Cloud aims to build a commercial ecosystem based on holographic applications.

WiMi Hologram Cloud claims world-leading 3D computer vision and SaaS platform technology. It uses AI algorithms to turn ordinary images into holographic 3D content, widely applied in holographic advertising, entertainment, education, communication, and other fields. With core technologies such as holographic face recognition, face swapping, and holographic digital life, the company is seeking market collaboration and investment opportunities around the world, aiming to expand its hologram ecosystem in international markets and become a global leader in the holographic cloud industry.

With the advent of the 5G era, the industry believes holographic image communication can exploit 5G's high speeds to transmit data-heavy 3D video signals, showing users a more realistic world, delivering a qualitative leap in interactivity, and potentially becoming a disruptive technology for online social interaction. Samsung, Facebook, and other tech giants are already active in this field of research and development, a sign of its broad application prospects. The number of Chinese companies working on holographic projection has also grown sharply, reportedly surpassing a thousand, while the market has risen to the tens-of-billions level.

Samsung

Samsung introduced Digital Cockpit 2020 at CES. It uses 5G to link the internal and external functions of a vehicle and provide an interconnected experience for drivers and passengers. The third joint development between Samsung Electronics and Harman, it combines Samsung's strengths in communications technology, semiconductors, and displays with Harman's automotive expertise, letting users in the car interact seamlessly with the office and home.

On another AI front, Samsung's CES showing was a classic case of "big things in small packages": its Ballie intelligent AI robot is only a little bigger than a baseball, yet it attracted enormous attention for the seemingly limitless applications it suggests.

The chubby AI robot moves by rolling and acts as a steward for the home AIoT system. Controlled by a smartphone, Ballie is equipped with artificial intelligence features, voice operation, and a built-in camera to recognize and respond to users and help them with a variety of home tasks. It responds to requests rather like a pet, and can serve as a wake-up call, fitness assistant, or time tracker, or manage other smart devices in the home (like TVs and vacuum cleaners).

At CES, Samsung laid out its own understanding of, and R&D achievements along, the two most important technology trends of 5G and AI. Whether in tablet PCs, vehicle operating systems, or small AI robots, Samsung signaled its determination to pour all-round innovation into 5G + AI; its existing achievements are attractive enough, but the company will clearly bring us more possibilities.

The mutual empowerment of 5G and artificial intelligence will bring new growth opportunities to the Internet of Things. AI use cases for IoT span home, industrial/enterprise, and smart-city settings, including manufacturing automation and robotics, intelligent security for homes and businesses, smart displays and speakers, intelligent agriculture, home control centers and smart appliances, sustainable cities and infrastructure, and digital logistics and retail.

In the future, 5G and AI will touch every aspect of life and many industries, including education, healthcare, retail, manufacturing, and transportation. By one estimate, AI adoption in key market segments such as smartphones, PCs/tablets, extended reality (XR), cars, and the Internet of Things will rise from less than 10 percent last year to 100 percent by 2025, making on-device AI a standard feature on many key platforms. The economic benefits are projected to be huge: as 5G becomes fully commercialized, it could empower many industries and generate up to $13.2 trillion in goods and services globally by 2035, while AI-derived business value could reach $3.9 trillion by 2022.

Media contact
Company: Mobius Trend
Contact: Trends & Insights Team
E-Mail: cs@mobiustrend.com
Website: http://www.mobiusTrend.com
YouTube: https://www.youtube.com/channel/UCOlz-sCOlPTJ_24rMgR6JLw

SOURCE: Mobius Trend

View source version on accesswire.com: https://www.accesswire.com/598262/With-5GAI-Twin-Engines--Qualcomm-WIMI-and-Samsung-Bring-New-Opportunities-to-the-Industry

Read more:

With 5G+AI Twin Engines - Qualcomm, WIMI and Samsung Bring New Opportunities to the Industry - Yahoo Finance

GNS Healthcare Presents Novel Use of AI to Identify Drivers of Response to Immune Checkpoint Inhibitor Therapy – PRNewswire

CAMBRIDGE, Mass., July 21, 2020 /PRNewswire/ -- GNS Healthcare (GNS), a leading AI and simulation company, presents results that validate the use of AI to accurately classify tumors based on their immunogenicity and predict response to immune checkpoint inhibitor (ICI) therapy using real-world data. The study showcases the power of causal AI to capture biomarkers and mechanisms, beyond PD(L)1 and tumor mutation burden (TMB), that are consistent with known immunology. These markers, including CXCL13 upregulation and STK11 mutation, are in line with targets currently being explored for stratification of responders vs. non-responders to ICI therapy, cohort selection, enrichment of future immuno-oncology trials, and improvement of ICI efficacy through combination therapy.

The study applied AI to tumor data from The Cancer Genome Atlas (TCGA) to identify the drivers of immune response. Data from nearly 700 NSCLC and over 400 HNSCC patients were fed into REFS, GNS's causal AI and simulation platform, which reverse-engineered in silico patients that accurately classified tumors based on their response. Macrophage activation and polarization, driven in part by metabolic reprogramming, was identified as the primary driver of tumor immunogenicity, which can allow for a more targeted approach to patient care and clinical trial design.

"Over the past decade we have seen nearly a dozen immuno-oncology treatments approved but treatment protocols are still based only on a few biomarkers. The presentation of our work is not only a validation of how AI can extract critical insights from real-world data, but also a milestone in our mission to make precision oncology a reality," said Colin Hill, GNS Healthcare CEO and Co-Founder.

The findings from these in silico patients can be used by biopharma companies to select optimal patient populations for clinical trials based on likelihood of response and to discover novel biomarkers that make tumors more susceptible to immune therapy, irrespective of response to PD(L)1 therapy. The findings are also beginning to unlock the value of investments in real-world and clinical data to inform future trial design, enable discovery of novel drug targets, and better position drugs across global markets.

Listen to a deep-dive webinar discussing the results here or view the poster presented at ASCO-SITC and reach out to the GNS Healthcare team to learn more.

About GNS Healthcare: GNS Healthcare is an AI-driven precision medicine company developing in silico patients from real-world and clinical data. In silico patients reveal the complex system of interactions underlying disease progression and drug response, enabling the simulation of drug response at the individual patient level. This in turn makes it possible to precisely match therapeutics to patients and rapidly discover key insights across drug discovery, clinical development, commercialization, and payer markets. GNS's REFS causal AI and simulation technology integrates and transforms a wide variety of patient data types into in silico patients across oncology, auto-immune diseases, neurology, and cardio-metabolic diseases. GNS partners with the world's leading biopharmaceutical companies and health plans and has validated its science and technology in over 50 peer-reviewed papers and abstracts. https://gnshealthcare.com

Media Contact: Simona Gilman, Marketing, [emailprotected]

SOURCE GNS Healthcare

http://gnshealthcare.com

See original here:

GNS Healthcare Presents Novel Use of AI to Identify Drivers of Response to Immune Checkpoint Inhibitor Therapy - PRNewswire

Patients aren’t being told about the AI systems advising their care – STAT

Since February of last year, tens of thousands of patients hospitalized at one of Minnesota's largest health systems have had their discharge planning decisions informed with help from an artificial intelligence model. But few if any of those patients have any idea about the AI involved in their care.

That's because frontline clinicians at M Health Fairview generally don't mention the AI whirring behind the scenes in their conversations with patients.

At a growing number of prominent hospitals and clinics around the country, clinicians are turning to AI-powered decision support tools, many of them unproven, to help predict whether hospitalized patients are likely to develop complications or deteriorate, whether they're at risk of readmission, and whether they're likely to die soon. But these patients and their family members are often not informed about or asked to consent to the use of these tools in their care, a STAT examination has found.


The result: Machines that are completely invisible to patients are increasingly guiding decision-making in the clinic.

"Hospitals and clinicians are operating under the assumption that you do not disclose, and that's not really something that has been defended or really thought about," Harvard Law School professor Glenn Cohen said. Cohen is the author of one of only a few articles examining the issue, which has received surprisingly scant attention in the medical literature even as research about AI and machine learning proliferates.


In some cases, there's little room for harm: Patients may not need to know about an AI system that's nudging their doctor to move up an MRI scan by a day, like the one deployed by M Health Fairview, or to be more thoughtful, such as with algorithms meant to encourage clinicians to broach end-of-life conversations. But in other cases, lack of disclosure means that patients may never know what happened if an AI model makes a faulty recommendation that is part of the reason they are denied needed care or undergo an unnecessary, costly, or even harmful intervention.

That's a real risk, because some of these AI models are fraught with bias, and even those that have been demonstrated to be accurate largely haven't yet been shown to improve patient outcomes. Some hospitals don't share data on how well the systems work, justifying the decision on the grounds that they are not conducting research. But that means patients are denied information not only about whether the tools are being used in their care, but also about whether the tools are actually helping them.

The decision not to mention these systems to patients is the product of an emerging consensus among doctors, hospital executives, developers, and system architects, who see little value but plenty of downside in raising the subject.

They worry that bringing up AI will derail clinicians' conversations with patients, diverting time and attention away from actionable steps that patients can take to improve their health and quality of life. Doctors also emphasize that they, not the AI, make the decisions about care. An AI system's recommendation, after all, is just one of many factors that clinicians take into account before making a decision about a patient's care, and it would be absurd to detail every single guideline, protocol, and data source that gets considered, they say.

Internist Karyn Baum, who's leading M Health Fairview's rollout of the tool, said she doesn't bring up the AI to her patients "in the same way that I wouldn't say that the X-ray has decided that you're ready to go home." She said she would never tell a fellow clinician not to mention the model to a patient, but in practice, her colleagues generally don't bring it up either.

Four of the health system's 13 hospitals have now rolled out the hospital discharge planning tool, which was developed by the Silicon Valley AI company Qventus. The model is designed to identify hospitalized patients who are likely to be clinically ready to go home soon and flag steps that might be needed to make that happen, such as scheduling a necessary physical therapy appointment.

Clinicians consult the tool during their daily morning huddle, gathering around a computer to peer at a dashboard of hospitalized patients, estimated discharge dates, and barriers that could prevent that from occurring on schedule. A screenshot of the tool provided by Qventus lists a hypothetical 76-year-old patient, N. Griffin, who is scheduled to leave the hospital on a Tuesday, but the tool prompts clinicians to consider that he might be ready to go home Monday, if he can be squeezed in for an MRI scan by Saturday.

Baum said she sees the system as "a tool to help me make a better decision, just like a screening tool for sepsis, or a CT scan, or a lab value, but it's not going to take the place of that decision." To her, it doesn't make sense to mention it to patients. If she did, Baum said, she could end up in a lengthy discussion with patients curious about how the algorithm was created.

That could take valuable time away from the medical and logistical specifics that Baum prefers to spend time talking about with patients flagged by the Qventus tool. Among the questions she brings up with them: How are the patient's vital signs and lab test results looking? Does the patient have a ride home? How about a flight of stairs to climb when they get there, or a plan for getting help if they fall?

Some doctors worry that while well-intentioned, the decision to withhold mention of these AI systems could backfire.

"I think that patients will find out that we are using these approaches, in part because people are writing news stories like this one about the fact that people are using them," said Justin Sanders, a palliative care physician at Dana-Farber Cancer Institute and Brigham and Women's Hospital in Boston. "It has the potential to become an unnecessary distraction and undermine trust in what we're trying to do in ways that are probably avoidable."

Patients themselves are typically excluded from the decision-making process about disclosure. STAT asked four patients who have been hospitalized with serious medical conditions (kidney disease, metastatic cancer, and sepsis) whether they'd want to be told if an AI-powered decision support tool were used in their care. They expressed a range of views: Three said they wouldn't want to know if their doctor was being advised by such a tool. But a fourth patient spoke out forcefully in favor of disclosure.

"This issue of transparency and upfront communication must be insisted upon by patients," said Paul Conway, a 55-year-old policy professional who has been on dialysis and received a kidney transplant, both consequences of managing kidney disease since he was a teenager.

The AI-powered decision support tools being introduced in clinical care are often novel and unproven. But does their rollout constitute research?

Many hospitals believe the answer is no, and they're using that distinction as justification for the decision not to inform patients about the use of these tools in their care. As some health systems see it, these algorithms are tools being deployed as part of routine clinical care to make hospitals more efficient. In their view, patients consent to the use of the algorithms by virtue of being admitted to the hospital.

At UCLA Health, for example, clinicians use a neural network to pinpoint primary care patients at risk of being hospitalized or frequently visiting the emergency room in the next year. Patients are not made aware of the tool because it is considered a part of the health system's quality improvement efforts, according to Mohammed Mahbouba, who spoke to STAT in February when he was UCLA Health's chief data officer. (He has since left the health system.)

"This is in the context of clinical operations," Mahbouba said. "It's not a research project."
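UCLA's model itself is not public, so what follows is a rough illustration of the general technique only: a small feed-forward network (here, scikit-learn's MLPClassifier) trained to score patients' risk from EHR-style features. The feature set, synthetic data, and 0.7 outreach threshold are all assumptions.

```python
# A rough, hypothetical illustration of a neural-network risk model of the kind
# described above; not UCLA's model. Data and threshold are invented.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Toy stand-ins for EHR-derived features, e.g. age, number of chronic
# conditions, ER visits last year, number of active medications (standardized).
X = rng.normal(size=(1000, 4))
# Synthetic outcome: hospitalized or frequent ER use within the next year.
y = (X @ np.array([0.8, 1.2, 1.5, 0.5]) + rng.normal(size=1000) > 1.0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
model.fit(X, y)

risk = model.predict_proba(X)[:, 1]   # per-patient risk scores
flagged = np.where(risk > 0.7)[0]     # patients surfaced for outreach
print(f"{len(flagged)} of {len(X)} patients flagged for care-management outreach")
```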

Oregon Health and Science University uses a regression-powered algorithm to monitor the majority of its adult hospital patients for signs of sepsis. The tool is not disclosed to patients because it is considered part of hospital operations.

"This is meant for operational care, it is not meant for research. So similar to how you'd have a patient aware of the fact that we're collecting their vital sign information, it's a part of clinical care. That's why it's considered appropriate," said Abhijit Pandit, OHSU's chief technology and data officer.
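The article describes OHSU's tool only as "regression-powered," so the following is a minimal sketch of what such a screen could look like in principle: a logistic regression over a few vital signs. The features, synthetic labels, and 0.8 alert threshold are invented for illustration and are not OHSU's actual model.

```python
# A minimal, hypothetical regression-based sepsis screen. Features,
# synthetic labels, and the alert threshold are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy vitals: heart rate, respiratory rate, temperature (C), systolic BP.
X = rng.normal(loc=[85, 18, 37.0, 120], scale=[15, 4, 0.8, 20], size=(5000, 4))

# Synthetic labels loosely echoing SIRS-style intuition:
# high heart rate, respiratory rate, and temperature, low blood pressure.
logit = (0.05 * (X[:, 0] - 90) + 0.15 * (X[:, 1] - 20)
         + 1.2 * (X[:, 2] - 38) - 0.04 * (X[:, 3] - 100))
y = (logit + rng.normal(scale=1.0, size=5000) > 0).astype(int)

screen = LogisticRegression(max_iter=1000).fit(X, y)
risk = screen.predict_proba(X)[:, 1]

# Alert nurses only above a threshold: a decision-support signal, not a decision.
alerts = risk > 0.8
print(f"{alerts.sum()} of {len(X)} patient-checks would trigger a sepsis alert")
```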

But there is no clear line that neatly separates medical research from hospital operations or quality control, said Pilar Ossorio, a professor of law and bioethics at the University of Wisconsin-Madison. And researchers and bioethicists often disagree on what constitutes one or the other.

"This has been a huge issue: Where is that line between quality control, operational control, and research? There's no widespread agreement," Ossorio said.

To be sure, there are plenty of contexts in which hospitals deploying AI-powered decision support tools are getting patients' explicit consent to use them. Some do so in the context of clinical trials, while others ask permission as part of routine clinical operations.

At Parkland Hospital in Dallas, where the orthopedics department has a tool designed to predict whether a patient will die in the next 48 hours, clinicians inform patients about the tool and ask them to sign onto its use.

"Based on the agreement we have, we have to have patient consent explaining why we're using this, how we're using it, how we'll use it to connect them to the right services, etc.," said Vikas Chowdhry, the chief analytics and information officer for a nonprofit innovation center incubated out of Parkland Health System in Dallas.

Hospitals often navigate those decisions internally, since manufacturers of AI systems sold to hospitals and clinics generally don't make recommendations to their customers about what, if anything, frontline clinicians should say to patients.

Jvion, a Georgia-based health care AI company that markets a tool that assesses readmission risk in hospitalized patients and suggests interventions to prevent another hospital stay, encourages the handful of hospitals deploying its model to exercise their own discretion about whether and how to discuss it with patients. But in practice, the AI system usually doesn't get brought up in these conversations, according to John Frownfelter, a physician who serves as Jvion's chief medical information officer.

"Since the judgment is left in the hands of the clinicians, it's almost irrelevant," Frownfelter said.

When patients are given an unproven drug, the protocol is straightforward: They must explicitly consent to enroll in a clinical study authorized by the Food and Drug Administration and monitored by an institutional review board. And a researcher must inform them about the potential risks and benefits of taking the medication.

That's not how it works with AI systems being used for decision support in the clinic. These tools aren't treatments or fully automated diagnostic tools, and they don't directly determine what kind of therapy a patient may receive; any of those roles would make them subject to more stringent regulatory oversight.

Developers of AI-powered decision support tools generally don't seek approval from the FDA, in part because the 21st Century Cures Act, which was signed into law in 2016, was interpreted as taking most medical advisory tools out of the FDA's jurisdiction. (That could change: In guidelines released last fall, the agency said it intends to focus its oversight powers on AI decision-support products meant to guide treatment of serious or critical conditions, but whose rationale cannot be independently evaluated by doctors, a definition that lines up with many of the AI models that patients aren't being informed about.)

The result, for now, is that disclosure around AI-powered decision support tools falls into a regulatory gray zone, which means the hospitals rolling them out often lack incentive to seek informed consent from patients.

"A lot of people justifiably think there are many quality-control activities that health care systems should be doing that involve gathering data," Wisconsin's Ossorio said. "And they say it would be burdensome and confusing to patients to get consent for every one of those activities that touch on their data."

In contrast to the AI-powered decision support tools, there are a few commonly used algorithms subject to the regulation laid out by the Cures Act, such as the type behind the genetic tests that clinicians use to chart a course of treatment for a cancer patient. But in those cases, the genetic test is extremely influential in determining what kind of therapy or drug a patient may receive. Conversely, there's no similarly clear link between an algorithm designed to predict whether a patient may be readmitted to the hospital and the way they'll be treated if and when that occurs.

Still, Ossorio would support an ultra-cautious approach: "I do think people throw a lot of things into the operations bucket, and if it were me, I'd say just file for institutional review board approval and either get consent or justify why you could waive it."

Further complicating matters is the lack of publicly disclosed data showing whether and how well some of the algorithms work, as well as their overall impact on patients. The public doesn't know whether OHSU's sepsis-prediction algorithm actually predicts sepsis, nor whether UCLA's admissions tool actually predicts admissions.

Some AI-powered decision support tools are supported by early data presented at conferences and published in journals, and several developers say they're in the process of sharing results: Jvion, for example, has submitted for publication a study showing a 26% reduction in readmissions when its readmissions risk tool was deployed; that paper is currently in review, according to Jvion's Frownfelter.

But asked by STAT for data on their tools' impact on patient care, several hospital executives declined or said they hadn't completed their evaluations.

A spokesperson from UCLA said it had yet to complete an assessment of the performance of its admissions algorithm.

A spokesperson from OHSU said that according to its latest report, run before the Covid-19 pandemic began in March, its sepsis algorithm had been used on 18,000 patients, of whom it had flagged 1,659 as at risk, with nurses indicating concern for 210 of them. He added that the tool's impact on patients, as measured by hospital death rates and length of time spent in the facility, was inconclusive.
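For scale, the proportions implied by those reported figures are easy to work out (a simple calculation using only the numbers above):

```python
# Proportions implied by the OHSU figures reported above.
monitored, flagged, nurse_concern = 18_000, 1_659, 210

print(f"flagged: {flagged / monitored:.1%} of monitored patients")          # ~9.2%
print(f"nurse concern: {nurse_concern / flagged:.1%} of flagged patients")  # ~12.7%
```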

"It's disturbing that they're deploying these tools without having the kind of information that they should have," said Wisconsin's Ossorio. "Before you use a tool to do medical decision-making, you should do the research."

Ossorio said it may be the case that these tools are merely being used as an additional data point and not to make decisions. But if health systems don't disclose data showing how the tools are being used, there's no way to know how heavily clinicians may be leaning on them.

"They always say these tools are meant to be used in combination with clinical data and it's up to the clinician to make the final decision. But what happens if we learn the algorithm is relied upon over and above all other kinds of information?" she said.

There are countless advocacy groups representing a wide range of patients, but no organization exists to speak for those who've unknowingly had AI systems involved in their care. They have no way, after all, of even identifying themselves as part of a common community.

STAT was unable to identify any patients who learned after the fact that their care had been guided by an undisclosed AI model, but asked several patients how they'd feel, hypothetically, about an AI system being used in their care without their knowledge.

Conway, the patient with kidney disease, maintained that he would want to know. He also dismissed the concern raised by some physicians that mentioning AI would derail a conversation. "Woe to the professional that as you introduce a topic, a patient might actually ask questions and you have to answer them," he said.

Other patients, however, said that while they welcomed the use of AI and other innovations in their care, they wouldn't expect or even want their doctor to mention it. They likened it to not wanting to be privy to numbers around their prognosis, such as how much time they might expect to have left, or how many patients with their disease are still alive after five years.

"Any of those statistics or algorithms are not going to change how you confront your disease, so why burden yourself with them, is my philosophy," said Stacy Hurt, a patient advocate from Pittsburgh who received a diagnosis of metastatic colorectal cancer in 2014, on her 44th birthday, when she was working as an executive at a pharmaceutical company. (She is now doing well and is approaching five years with no evidence of disease.)

Katy Grainger, who lost the lower half of both legs and seven fingertips to sepsis, said she would have supported her care team using an algorithm like OHSU's sepsis model, so long as her clinicians didn't rely on it too heavily. She said she also would not have wanted to be informed that the tool was being used.

"I don't monitor how doctors do their jobs. I just trust that they're doing it well," she said. "I have to believe that I'm not a doctor and I can't control what they do."

Still, Grainger expressed some reservations about the tool, including the idea that it may have failed to identify her. At 52, Grainger was healthy and fairly young when she developed sepsis. She had been sick for days and visited an urgent care clinic, which gave her antibiotics for what they thought was a basic bacterial infection, but which quickly progressed to a serious case of sepsis.

"I would be worried that [the algorithm] could have missed me. I was young (well, 52), healthy, in some of the best shape of my life, eating really well, and then boom," Grainger said.

Dana Deighton, a marketing professional from Virginia, suspects that if an algorithm had scanned her data back in 2013, it would have made a dire prediction about her life expectancy: She had just been diagnosed with metastatic esophageal cancer at age 43, after all. But she probably wouldn't have wanted to hear about an AI's forecast at such a tender and sensitive time.

"If a physician brought up AI when you are looking for a warmer, more personal touch, it might actually have the opposite and worse effect," Deighton said. (She's doing well now; her scans have turned up no evidence of disease since 2015.)

Harvard's Cohen said he wants to see hospital systems, clinicians, and AI manufacturers come together for a thoughtful discussion around whether they should be disclosing the use of these tools to patients. "And if we're not doing that, then the question is why aren't we telling them about this when we tell them about a lot of other things," he said.

Cohen said he worries that uptake and trust in AI and machine learning could plummet if patients were to find out, after the fact, that there's a rash of this being used without anyone ever telling them.

"That's a scary thing," he said, "if you think this is the way the future is going to go."

This is part of a yearlong series of articles exploring the use of artificial intelligence in health care that is partly funded by a grant from the Commonwealth Fund.

See original here:

Patients aren't being told about the AI systems advising their care - STAT