
Category Archives: Ai

There's no Tiananmen Square in the new Chinese image-making AI – MIT Technology Review

Posted: September 15, 2022 at 10:06 pm

When a demo of the software was released in late August, users quickly found that certain words (both explicit mentions of political leaders' names and words that are potentially controversial only in political contexts) were labeled as sensitive and blocked from generating any result. China's sophisticated system of online censorship, it seems, has extended to the latest trend in AI.

It's not rare for similar AIs to limit users from generating certain types of content. DALL-E 2 prohibits sexual content, faces of public figures, or medical treatment images. But the case of ERNIE-ViLG underlines the question of where exactly the line between moderation and political censorship lies.

The ERNIE-ViLG model is part of Wenxin, a large-scale project in natural-language processing from China's leading AI company, Baidu. It was trained on a data set of 145 million image-text pairs and contains 10 billion parameters (the values that a neural network adjusts as it learns, which the AI uses to discern the subtle differences between concepts and art styles).

That means ERNIE-ViLG has a smaller training data set than DALL-E 2 (650 million pairs) and Stable Diffusion (2.3 billion pairs) but more parameters than either one (DALL-E 2 has 3.5 billion parameters and Stable Diffusion has 890 million). Baidu released a demo version on its own platform in late August and then later on Hugging Face, the popular international AI community.

The main difference between ERNIE-ViLG and Western models is that the Baidu-developed one understands prompts written in Chinese and is less likely to make mistakes when it comes to culturally specific words.

For example, a Chinese video creator compared the results from different models for prompts that included Chinese historical figures, pop culture celebrities, and food. He found that ERNIE-ViLG produced more accurate images than DALL-E 2 or Stable Diffusion. Following its release, ERNIE-ViLG has also been embraced by those in the Japanese anime community, who found that the model can generate more satisfying anime art than other models, likely because it included more anime in its training data.

But ERNIE-ViLG will be defined, as the other models are, by what it allows. Unlike DALL-E 2 or Stable Diffusion, ERNIE-ViLG does not have a published explanation of its content moderation policy, and Baidu declined to comment for this story.

When the ERNIE-ViLG demo was first released on Hugging Face, users inputting certain words would receive the message "Sensitive words found. Please enter again," which was a surprisingly honest admission about the filtering mechanism. However, since at least September 12, the message has read "The content entered doesn't meet relevant rules. Please try again after adjusting it."


Artificial intelligence is playing a bigger role in cybersecurity, but the bad guys may benefit the most – CNBC

Posted: at 10:06 pm

Security officers keep watch in front of an AI (Artificial Intelligence) sign at the annual Huawei Connect event in Shanghai, China, September 18, 2019.

Aly Song | Reuters

Artificial intelligence is playing an increasingly important role in cybersecurity for both good and bad. Organizations can leverage the latest AI-based tools to better detect threats and protect their systems and data resources. But cyber criminals can also use the technology to launch more sophisticated attacks.

The rise in cyberattacks is helping to fuel growth in the market for AI-based security products. A July 2022 report by Acumen Research and Consulting says the global market was $14.9 billion in 2021 and is estimated to reach $133.8 billion by 2030.

An increasing number of attacks such as distributed denial-of-service (DDoS) and data breaches, many of them extremely costly for the impacted organizations, are generating a need for more sophisticated solutions.

Another driver of market growth was the Covid-19 pandemic and shift to remote work, according to the report. This forced many companies to put an increased focus on cybersecurity and the use of tools powered with AI to more effectively find and stop attacks.

Looking ahead, trends such as the growing adoption of the Internet of Things (IoT) and the rising number of connected devices are expected to fuel market growth, the Acumen report says. The growing use of cloud-based security services could also provide opportunities for new uses of AI for cybersecurity.

Among the types of products that use AI are antivirus/antimalware, data loss prevention, fraud detection/anti-fraud, identity and access management, intrusion detection/prevention system, and risk and compliance management.

Up to now, the use of AI for cybersecurity has been somewhat limited. "Companies thus far aren't going out and turning over their cybersecurity programs to AI," said Brian Finch, co-leader of the cybersecurity, data protection & privacy practice at law firm Pillsbury Law. "That doesn't mean AI isn't being used. We are seeing companies utilize AI but in a limited fashion," mostly within the context of products such as email filters and malware identification tools that have AI powering them in some way.

"Most interestingly we see behavioral analysis tools increasingly using AI," Finch said. "By that I mean tools analyzing data to determine behavior of hackers to see if there is a pattern to their attacks timing, method of attack, and how the hackers move when inside systems. Gathering such intelligence can be highly valuable to defenders."

In a recent study, research firm Gartner interviewed nearly 50 security vendors and found a few patterns for AI use among them, says research vice president Mark Driver.

"Overwhelmingly, they reported that the first goal of AI was to 'remove false positives' insofar as one major challenge among security analysts is filtering the signal from the noise in very large data sets," Driver said."AI can trim this down to a reasonable size, which is much more accurate.Analysts are able to work smarter and faster to resolve cyber attacks as a result."

In general, AI is used to help detect attacks more accurately and then prioritize responses based on real world risk, Driver said. And it allows automated or semi-automated responses to attacks, and finally provides more accurate modelling to predict future attacks. "All of this doesn't necessarily remove the analysts from the loop, but it does make the analysts' job more agile and more accurate when facing cyber threats," Driver said.

On the other hand, bad actors can also take advantage of AI in several ways. "For instance, AI can be used to identify patterns in computer systems that reveal weaknesses in software or security programs, thus allowing hackers to exploit those newly discovered weaknesses," Finch said.

When AI is combined with stolen personal information or with collected open source data such as social media posts, cyber criminals can use it to create large numbers of phishing emails to spread malware or collect valuable information.

"Security experts have noted that AI-generated phishing emails actually have higher rates of being opened [for example] tricking possible victims to click on them and thus generate attacks than manually crafted phishing emails," Finch said. "AI can also be used to design malware that is constantly changing, to avoid detection by automated defensive tools."

Constantly changing malware signatures can help attackers evade static defenses such as firewalls and perimeter detection systems. Similarly, AI-powered malware can sit inside a system, collecting data and observing user behavior up until it's ready to launch another phase of an attack or send out information it has collected with relatively low risk of detection. This is partly why companies are moving towards a "zero trust" model, where defenses are set up to constantly challenge and inspect network traffic and applications in order to verify that they are not harmful.

But Finch said, "Given the economics of cyberattacks (it's generally easier and cheaper to launch attacks than to build effective defenses), I'd say AI will be on balance more hurtful than helpful. Caveat that, however, with the fact that really good AI is difficult to build and requires a lot of specially trained people to make it work well. Run-of-the-mill criminals are not going to have access to the greatest AI minds in the world."

Cybersecurity programs might have access to "vast resources from Silicon Valley and the like [to] build some very good defenses against low-grade AI cyber attacks," Finch said. "When we get into AI developed by hacker nation states [such as Russia and China], their AI hack systems are likely to be quite sophisticated, and so the defenders will generally be playing catch up to AI-powered attacks."


A terrifying AI-generated woman is lurking in the abyss of latent space – TechCrunch

Posted: at 10:06 pm

There's a ghost in the machine. Machine learning, that is.

We are all regularly amazed by AI's capabilities in writing and creation, but who knew it had such a capacity for instilling horror? A chilling discovery by an AI researcher finds that the latent space comprising a deep learning model's memory is haunted by at least one horrifying figure: a bloody-faced woman now known as Loab.

(Warning: Disturbing imagery ahead.)

But is this AI model truly haunted, or is Loab just a random confluence of images that happens to come up in various strange technical circumstances? Surely it must be the latter (unless you believe spirits can inhabit data structures), but it's more than a simple creepy image: it's an indication that what passes for a brain in an AI is deeper and creepier than we might otherwise have imagined.

Loab was discovered (encountered? summoned?) by a musician and artist who goes by Supercomposite on Twitter (this article originally used her name, but she said she preferred to use her handle for personal reasons, so it has been substituted throughout). She explained the Loab phenomenon in a thread that achieved a large amount of attention for a random creepy AI thing, something there is no shortage of on the platform, suggesting it struck a chord (minor key, no doubt).

Supercomposite was playing around with a custom AI text-to-image model (similar to, but not, DALL-E or Stable Diffusion) and specifically experimenting with "negative prompts."

Ordinarily, you give the model a prompt, and it works its way toward creating an image that matches it. If you have one prompt, that prompt has a weight of one, meaning that's the only thing the model is working toward.

You can also split prompts, saying things like "hot air balloon::0.5, thunderstorm::0.5", and it will work toward both of those things equally. (This isn't really necessary, since the language part of the model would also accept "hot air balloon in a thunderstorm," and you might even get better results.)

But the interesting thing is that you can also have negative prompts, which cause the model to work away from that concept as actively as it can.
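Here is a rough sketch, in Python, of the embedding arithmetic that weighted and negative prompts imply. The `embed` function is a stand-in for a real text encoder such as CLIP, and real generators apply these weights during sampling rather than as a single vector sum:

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in for a real text encoder (e.g., CLIP): returns an
    arbitrary unit vector so the arithmetic below is runnable."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)

def combine(prompts: dict[str, float]) -> np.ndarray:
    """Weighted sum of prompt embeddings: positive weights pull the
    generation toward a concept, negative weights push away from it."""
    target = sum(w * embed(p) for p, w in prompts.items())
    return target / np.linalg.norm(target)

# "hot air balloon::0.5, thunderstorm::0.5": work toward both equally
both = combine({"hot air balloon": 0.5, "thunderstorm": 0.5})

# "Brando::-1": head as directly away from the concept as possible
opposite = combine({"Brando": -1.0})
```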

This process is far less predictable, because no one knows how the data is actually organized in what one might anthropomorphize as the "mind" or "memory" of the AI, known as latent space.

"The latent space is kind of like you're exploring a map of different concepts in the AI. A prompt is like an arrow that tells you how far to walk in this concept map and in which direction," Supercomposite told me.

Here's a helpful rendering of a much, much simpler latent space in an old Google translation model working on a single sentence in multiple languages:

The latent space of a system like DALL-E is orders of magnitude larger and more complex, but you get the general idea. If each dot here was a million spaces like this one, it's probably a bit more accurate. Image Credits: Google

"So if you prompt the AI for an image of a face, you'll end up somewhere in the middle of the region that has all of the images of faces and get an image of a kind of unremarkable average face," she said. "With a more specific prompt, you'll find yourself among the frowning faces, or faces in profile, and so on. But with a negatively weighted prompt, you do the opposite: You run as far away from that concept as possible."

But what's the opposite of "face"? Is it the feet? Is it the back of the head? Something faceless, like a pencil? While we can argue it amongst ourselves, in a machine learning model it was decided during the process of training, meaning that however visual and linguistic concepts got encoded into its memory, they can be navigated consistently, even if they may be somewhat arbitrary.

Image Credits: Supercomposite

We saw a related concept in a recent AI phenomenon that went viral because one model seemed to reliably associate some nonsense words with birds and insects. But it wasn't that DALL-E had a secret language in which "Apoploe vesrreaitais" means birds; it's just that the nonsense prompt basically had it throwing a dart at a map of its mind and drawing whatever the dart landed near, in this case birds, because the first word is kind of similar to some scientific names. So the arrow just pointed generally in that direction on the map.

Supercomposite was playing with this idea of navigating the latent space, having given the prompt "Brando::-1", which would have the model produce whatever it thinks is the very opposite of "Brando." It produced a weird skyline logo with nonsense but somewhat readable text: "DIGITA PNTICS."

Weird, right? But again, the model's organization of concepts wouldn't necessarily make sense to us. Curious, Supercomposite wondered if she could reverse the process. So she put in the prompt "DIGITA PNITICS skyline logo::-1". If this image was the opposite of "Brando," perhaps the reverse was true too, and it would find its way to, perhaps, Marlon Brando?

Instead, she got this:

Image Credits: Supercomposite

Over and over she submitted this negative prompt, and over and over the model produced this woman, with bloody, cut or unhealthily red cheeks and a haunting, otherworldly look. Somehow, this woman (whom Supercomposite named Loab, for the text that appears in the top-right image there) reliably is the AI model's best guess for the most distant possible concept from a logo featuring nonsense words.

What happened? Supercomposite explained how the model might think when given a negative prompt for a particular logo, continuing her metaphor from before.

"You start running as fast as you can away from the area with logos," she said. "You maybe end up in the area with realistic faces, since that is conceptually really far away from logos. You keep running, because you don't actually care about faces, you just want to run as far away as possible from logos. So no matter what, you are going to end up at the edge of the map. And Loab is the last face you see before you fall off the edge."
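Her "edge of the map" description corresponds to a simple geometric reading of negative prompts: pick whatever is least similar to the concept. A toy numpy illustration, with invented vectors standing in for a real model's latent space:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_points = rng.normal(size=(10_000, 64))  # stand-in "map" of concepts
logo = rng.normal(size=64)                     # stand-in "logo" embedding

# Cosine similarity of every point on the map to the logo concept
sims = latent_points @ logo / (
    np.linalg.norm(latent_points, axis=1) * np.linalg.norm(logo)
)

# A fully negative prompt keeps moving until similarity is minimized:
# "the last face you see before you fall off the edge."
farthest = latent_points[np.argmin(sims)]
print(f"minimum similarity to 'logo': {sims.min():.3f}")
```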

Image Credits: Supercomposite

Negative prompts don't always produce horrors, let alone so reliably. Anyone who has played with these image models will tell you it can actually be quite difficult to get consistent results for even very straightforward prompts.

Put in one for "a robot standing in a field" four or 40 times and you may get as many different takes on the concept, some hardly recognizable as robots or fields. But Loab appears consistently with this specific negative prompt, to the point where it feels like an incantation out of an old urban legend.

You know the type: Stand in a dark bathroom looking at the mirror and say "Bloody Mary" three times. Or even earlier folk instructions of how to reach a witch's abode or the entrance to the underworld: Holding a sprig of holly, walk backward 100 steps from a dead tree with your eyes closed.

"DIGITA PNITICS skyline logo::-1" isn't quite as catchy, but as magic words go the phrase is at least suitably arcane. And it has the benefit of working. Only on this particular model, of course; every AI platform's latent space is different, though who knows if Loab may be lurking in DALL-E or Stable Diffusion too, waiting to be summoned.

Loab as an ancient statue, but it's unmistakably her. Image Credits: Supercomposite

In fact, the incantation is strong enough that Loab seems to infect even split prompts and combinations with other images.

"Some AIs can take other images as prompts; they basically can interpret the image, turning it into a directional arrow on the map just like they treat text prompts," explained Supercomposite. "I used Loab's image and one or more other images together as a prompt; she almost always persists in the resulting picture."

Sometimes more complex or combination prompts treat one part as more of a loose suggestion. But ones that include Loab seem not just to veer toward the grotesque and horrifying, but to include her in a very recognizable fashion. Whether shes being combined with bees, video game characters, film styles or abstractions, Loab is front and center, dominating the composition with her damaged face, neutral expression and long dark hair.

It's unusual for any prompt or imagery to be so consistent, let alone to haunt other prompts the way she does. Supercomposite speculated on why this might be.

"I guess because she is very far away from a lot of concepts and so it's hard to get out of her little spooky area in latent space. The cultural question, of why the data put this woman way out there at the edge of the latent space, near gory horror imagery, is another thing to think about," she said.

Although it's an oversimplification, latent space really is like a map, and the prompts like directions for navigating it, and the system draws whatever ends up being around where it's asked to go, whether it's well-trodden ground like "still life by a Dutch master" or a synthesis of obscure or disconnected concepts: "robots battle aliens in a cubist etching by Doré." As you can see:

Image Credits: TechCrunch / DALL-E

A purely speculative explanation of why Loab exists has to do with how that map is laid out. As Supercomposite suggested, it's likely that she emerged simply because company logos and horrific, scary imagery are very far from one another conceptually.

A negative prompt doesn't mean "take 10 data steps in the other direction"; it means "keep going as far as you can," and it's more than possible that images at the farthest reaches of an AI's latent space have more extreme or uncommon values. Wouldn't you organize it that way, with stuff that has lots of commonalities or cross-references in the center (however you define that) and weird, wild stuff that's rarely relevant out at the edge?

Therefore negative prompts may act like a way to explore the frontier of the AI's mind map, skimming the concepts it deems too outlandish to store among prosaic concepts like happy faces, beautiful landscapes or frolicking pets.

Image Credits: Devin Coldewey

The unnerving fact is no one really understands how latent spaces are structured or why. There is of course a great deal of research on the subject, and some indications that they are organized in some ways like how our own minds are which makes sense, since they were more or less built in imitation of them. But in other ways they have totally unique structures connecting across vast conceptual distances.

To be clear, it's not as if there is some clutch of images specifically of Loab waiting to be found; they're definitely being created on the fly, and Supercomposite told me there's no indication the digital cryptid is based on any particular artist or work. That's why latent space is latent! These images emerged from a combination of strange and terrible concepts that all happen to occupy the same area in the model's memory, much like how, in the Google visualization earlier, languages were clustered based on their similarity.

From what dark corner or unconscious associations sprang Loab, fully formed and coherent? We can't yet trace the path the model took to reach her location; a trained model's latent space is vast and impenetrably complex.

The only way we can reach the spot again is through the magic words, spoken while we step backward through that space with our eyes closed, until we reach the witch's hut that can't be approached by ordinary means. Loab isn't a ghost, but she is an anomaly, yet paradoxically she may be one of an effectively infinite number of anomalies waiting to be summoned from the farthest, unlit reaches of any AI model's latent space.

It may not be supernatural, but it sure as hell ain't natural.


PyTorch Takes AI/ML Back to Its Research, Open Source Roots – thenewstack.io

Posted: at 10:06 pm

Meta's decision to launch the PyTorch Foundation and contribute the PyTorch machine learning framework to the Linux Foundation indicates the maturity of the technology and a move closer to its open source roots.

"AI adoption seems to be stuck and sinking, outside of some applications of AI in text and image generation, natural language processing, computer vision and some around pattern detection and predictive analytics," said Ronald Schmelzer, principal analyst at Cognilytica, a firm that focuses on AI research.

"Open source has been gaining much faster adoption than vendor solutions in the market, as can be seen by the difficulties encountered by many fast-moving startups, unicorns, and IPO'd companies," Schmelzer told The New Stack. "With open source technology and data leading the way in AI, it's no surprise that Meta is loosening its hold on PyTorch and letting the community guide its development."

PyTorch moves to the new, independent PyTorch Foundation, under the Linux Foundation umbrella, with a governing board composed of representatives from AMD, AWS, Google Cloud, Meta, Microsoft Azure, and Nvidia, with the intention to expand over time. The PyTorch Foundation will serve as the steward for the technology and will support PyTorch through conferences, training courses, and other initiatives, Meta AI said in a blog post.

Since the release of PyTorch 1.0 in 2018, PyTorch has grown into "the lingua franca of AI research," Meta AI said. The framework will continue to be a part of Meta's AI research and engineering work, the team said in its post. PyTorch is also a foundation of the AI research and products built by Amazon Web Services, Microsoft Azure, OpenAI, and many other companies and research institutions.

Most of those organizations are founding members of the PyTorch Foundation.

"This is a Facebook + Google + AWS vs Microsoft story," said Lawrence E. Hecht, an analyst for The New Stack. The Stack Overflow survey collected data on almost 4,000 users. They were significantly more likely to have used Google Cloud recently as compared to the study average (35% vs 20%). That's a 75% difference. It also catapults Google past Microsoft Azure (25% of PyTorch users and 23% overall), to be closer to the leader AWS (44% vs 41%).

"In many ways, AI is retreating back to some research and open source roots, and the wave of hype and interest in AI by investors and for-profit companies seems to be waning," Schmelzer said. "We're past peak on AI hype and winding down. Yeah, we're past irrational exuberance on AI and into some sober reality. Companies like C3 and DataRobot and others are really struggling now that AI is not top of the list for many organizations."

Holger Mueller, an analyst at Constellation Research, noted that, in general, it is better to have an open source framework at an independent organization. "We can also assume that Meta thinks that PyTorch is no longer where it wants to invest into solely, and maybe it is not that relevant for metaverse use cases," he said.

According to Jim Zemlin, executive director of The Linux Foundation, AI/ML is a truly open source-first ecosystem. "The majority of popular AI and ML tools and frameworks are open source. The community clearly values transparency and the ethos of open source," Zemlin said in a blog post, noting that the Linux Foundation will provide a neutral home for PyTorch.

Moreover, the PyTorch Foundation's mission is to drive the adoption of AI tooling by fostering and sustaining an ecosystem of open source, vendor-neutral projects with PyTorch. It will democratize state-of-the-art tools, libraries, and other components to make these innovations accessible to everyone. It also will focus on the business and product marketing of PyTorch and the related ecosystem, Meta said. The transition will not entail any changes to PyTorch's code and core project, including its separate technical governance structure.

As of August 2022, PyTorch was one of the five fastest-growing open source software communities in the world alongside the Linux kernel and Kubernetes, Zemlin said. From August 2021 through August 2022, PyTorch counted over 65,000 commits. Over 2,400 contributors participated in the effort, filing issues or PRs or writing documentation. These numbers place PyTorch among the most successful open source projects in history.

In January, PyTorch celebrated the fifth anniversary of its inception in Meta's AI labs. Now, all releases, features, and technical direction will continue to be driven by PyTorch's community: from individual code contributors, to those who review and commit changes, to the module maintainers.

"The creation of the PyTorch Foundation will ensure business decisions are being made in a transparent and open manner by a diverse group of members for years to come," said Soumith Chintala, PyTorch lead maintainer and AI researcher at Meta, in a blog post.

However, the technical decisions remain in the control of individual maintainers, he said.

While the business governance of PyTorch has, up to now, been unstructured, like that of a scrappy startup, the next stage is to support the interests of multiple stakeholders.

"We chose the Linux Foundation as it has vast organizational experience hosting large multi-stakeholder open source projects with the right balance of organizational structure and finding specific solutions for these projects," Chintala said. Such projects include Linux, Kubernetes, Node.js, Hyperledger and RISC-V.


How Can Dentistry Benefit from AI? It's All in the Data – insideBIGDATA

Posted: at 10:06 pm

In this special guest feature, Florian Hillen, founder and CEO of VideaHealth, points out that, like many other industries within the healthcare ecosystem, dentistry is beginning to adopt artificial intelligence (AI) solutions to improve patient care, lower costs, and streamline workflows and care delivery.

Like many other industries within the healthcare ecosystem, dentistry is beginning to adopt artificial intelligence (AI) solutions to improve patient care, lower costs, and streamline workflows and care delivery. While the dental profession is no stranger to cutting-edge technology, AI represents such a revolutionary change that few organizations have the knowledge and skill sets to implement an effective strategy.

This is particularly important when applying AI to diagnose and treat patients. Ideally, AI should exceed human-level performance in speed, efficiency and accuracy. But unlike traditional technologies that are simply powered up and put to work, AI must be trained, conditioned and trusted to perform as expected even under difficult or unusual circumstances.

This requires dental providers to implement an AI training engine (what we call an "AI factory") that incorporates key elements in the creation and conditioning of AI models. These include things like the data pipeline, labeling operations and software infrastructure, as well as the machine learning programs themselves, all of which are designed to detect a wide range of dental pathologies and provide highly tailored courses of treatment based on patients' needs.

Turning Data into Knowledge

Training AI models is no easy job. It requires enormous amounts of data and strict guidance as to how that data is presented so as not to bias the algorithm, which can skew results and lead to health inequities. With the ability to support immense computing power to process calculations very quickly, coupled with access to aggregated and centralized data stores, today's platforms can comb through hundreds of millions of data points from service providers, insurance companies, universities and other sources to ensure that results are not just accurate but impartial as well.

This is what gives AI-driven processes the ability to enhance the clinical experience. By eliminating human error and bias, AI delivers more accurate diagnoses, better treatment options and fewer mistakes that must be corrected, usually at great expense or pain, at a later date.

It is important to note the data used to inform these models is not merely textual or numeric in nature, but pictographic as well. AI scans X-rays, MRIs and other visual elements to detect decay, abscesses and even cancers, sometimes long before they become apparent to the naked eye. This technology can also be used to customize crowns, bridges and implants much more quickly and more accurately than traditional procedures.

A key problem in the dental industry is the fractured nature of most practices. The vast majority of dental practices are independently owned and operated, which makes data collection and analysis difficult at best, particularly at the scale needed to draw accurate conclusions. While this has started to change in recent years with the rise of dental service organizations (DSOs) and increased consolidation within the insurance industry, to date there has been very little progress in capturing broad data sets, which are largely subject to data privacy and protection laws.

The AI Factory Approach

New companies are looking to change this with the development of factory-style data preparation modeled on the analytics engines of Netflix and other data-driven organizations. Using highly automated processes that can be quickly scaled to accommodate massive data sets from a multitude of sources, a properly designed AI factory can streamline the analytics process to ensure high-quality data is being fed into AI models.

This, in turn, produces high-quality results much the same way that automation has improved the manufacturing of cars, food and other physical products.

Perhaps one of the most basic improvements this factory approach to AI has achieved is cavity detection. A recent FDA trial demonstrated how AI-driven software trained on a factory model can reduce the number of missed cavities by 43% and cut the number of false positives by 15%. All dentists involved in the trial, regardless of training and experience, reported a distinct improvement in the ability to make accurate diagnoses.

Dentistry is a highly specialized sector of the broader healthcare industry, and as such it relies on unique data points in order to provide effective service to patients. At the same time differences in experience levels, equipment and diagnostic capabilities vary greatly, so much so that in most cases ten different dentists will provide ten different diagnoses.

By bringing order to this environment, an AI factory not only streamlines dental care and reduces costs but greatly increases accuracy in both the assessment and treatment of patients. Professional discrepancies will remain, of course, but disagreements on data and how it should be treated will be fewer. The end result should be better health outcomes and less burden, financial and otherwise, on today's bloated, largely redundant healthcare system.

The knowledge to accomplish this feat is already out there. All that is needed is an efficient, effective means of utilizing it.


AI could revamp the once-doomed Smell-O-Vision – Inverse

Posted: at 10:06 pm

When movies made the leap from silent to sound, production companies had their eyes on the next sensory frontier in entertainment.

So in 1939, they attempted to bring smell into the cinematic experience. The Smell-O-Vision, a system that piped prepackaged scents from under movie theater seats, made its debut at the 1939 World's Fair in New York City.

It didn't go as planned. Audiences complained that the scents were out of sync with the movie, overpowering, or simply unpleasant. Despite an attempted revival in the 1960s, the technology largely fell by the wayside.

But now, more than sixty years later, science is considerably closer to making Smell-O-Vision a reality.

The Smell-O-Vision concept didn't work as planned, but it may be due for an AI revamp. LMPC/Getty Images

Computer scientists and chemical engineers at the Tokyo Institute of Technology in Japan have developed a machine-learning algorithm that can reverse-engineer a smell based on its chemical makeup.

With this technology, they hope to one day create custom scents on demand, according to a study recently published in PLOS One. It does sound tempting to smell ratatouille as it's being carefully prepared in Ratatouille.

Here's the background: Scent has played an important evolutionary role. The brain's olfactory bulb, which is responsible for processing odors, sits right next to the amygdala, which handles emotions, and the two share considerable neural overlap.

That's why certain smells tend to evoke a strong emotional response; for example, the scent of chocolate chip cookies might transport you right back to the warmth of your grandmother's kitchen, while the odor of pencil shavings may evoke college exam anxiety.

We rely on these subtle olfactory cues to tell us whether to feel safe, nervous, excited, or relaxed in a given environment.

The olfactory bulb, which processes odors, sits right next to the emotion-handling amygdala. TIM VERNON / Science Photo Library/Getty Images

People vary in how they interpret these signals. Factors like an individual's race, gender, genetics, and unique experiences can all contribute to the way they perceive a specific smell.

Amanda Holloman, a computer science PhD student at the University of Alabama who specializes in olfactory research, has noticed this subjectivity in her own work (she was not involved with the new study). "One person might smell popcorn, and another smells vanilla," she says.

What's new: The researchers ran each scent through a mass spectrometer to reveal its chemical signature. They were able to isolate about sixty components, called odorants, that make up any smell.

Then, using a machine learning algorithm, they assigned a quality (such as sweet, fruity, or astringent) to each odorant, and measured its ratio relative to the smell's other components. By tweaking these ratios, the researchers say, they may eventually be able to generate custom-tailored scents.
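As a hedged sketch of that pipeline, the snippet below fits an off-the-shelf regressor from odorant ratios to one descriptor score. The data, the "sweetness" target, and the model choice are invented for illustration; this is not the team's actual code or dataset:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)
n_samples, n_odorants = 200, 60   # ~60 odorant components, per the article

# Fake training data: per-sample odorant ratios (each row sums to 1) and a
# human-rated "sweetness" score loosely tied to the first few components.
ratios = rng.dirichlet(np.ones(n_odorants), size=n_samples)
sweetness = ratios[:, :5].sum(axis=1) + rng.normal(0, 0.05, n_samples)

model = Ridge(alpha=1.0).fit(ratios, sweetness)

# Tweaking ratios until the predicted descriptor profile matches a target
# is what would let you work backward from a description to a scent.
new_blend = rng.dirichlet(np.ones(n_odorants))
print(f"predicted sweetness: {model.predict(new_blend[None, :])[0]:.2f}")
```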

"Currently, there's just a numerical calculation," says Takamichi Nakamoto, an engineer at the Tokyo Institute of Technology and senior author of the new study. "But later, we'd like to repeat this actual smell."

Future virtual reality experiences could include custom scents to further immerse users. Liyao Xie/Moment/Getty Images

Why it matters: The team envisions an AI perfumer that can blend up a scent based on a given description. Such a device would be a boon for the cosmetic fragrance industry, cooking up new scents for shampoos, lotions, or candles, for example.

But it could also have medical applications, such as treating certain types of seizures. And it could even make for more realistic VR experiences in the Metaverse and other digital environments.

What's next: The algorithm created by Nakamoto's team was trained to recognize scents using human-generated descriptions of smells from a database.

But because olfactory computing is a relatively new field, Nakamoto says we have limited data on how people perceive odors. This can lead to algorithmic bias, an increasingly common issue in artificial intelligence.

To keep this bias from creeping in (or, at the very least, to minimize it), Holloman thinks researchers should develop a database using information from human subjects with as wide a range of different backgrounds as possible. "We need to focus more on recruitment," she says.

For now, she thinks the new study represents a promising step forward for the olfactory computing field.

Whether or not Smell-O-Vision will succeed remains to be seen (or rather, sniffed). Either way, harnessing the power of fragrance with artificial intelligence sounds pretty scent-sational.


Kintsugi named 2022 Gartner Cool Vendor in AI Governance and Responsible AI – Business Wire

Posted: at 10:06 pm

BERKELEY, Calif.--(BUSINESS WIRE)--Kintsugi, a Bay Area startup developing voice biomarker technology to detect signs of depression and anxiety from short clips of speech, has been named a 2022 Gartner Cool Vendor in AI Governance and Responsible AI, an annual award recognizing startups that are innovative, impactful, and intriguing.

Kintsugi is developing machine learning algorithms that analyze short clips of free-form speech for vocal features that correlate with clinical depression and anxiety. Its API platform, Kintsugi Voice, provides clinical decision support to healthcare practitioners by scoring patients' mental health in real time. The tool integrates seamlessly with clinical call centers, telehealth platforms, and remote patient monitoring apps.

The Gartner report highlights that Kintsugi Voice does not rely on question-and-answer methods to assess a user's mental health, nor is it based on natural language processing, because it analyzes how people speak, not the content of their speech. That furthermore allows it to operate in any language.
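A speculative sketch of what analyzing delivery rather than content can look like: purely acoustic features, no transcript. This is a generic pattern built on librosa, not Kintsugi's actual API or model:

```python
import librosa
import numpy as np

def acoustic_features(wav_path: str) -> np.ndarray:
    """Language-independent features from a short speech clip: how the
    person sounds, never what they say (no transcription step at all)."""
    y, sr = librosa.load(wav_path, sr=16_000)
    f0 = librosa.yin(y, fmin=65, fmax=400, sr=sr)       # pitch contour
    rms = librosa.feature.rms(y=y)[0]                   # loudness over time
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # timbre summary
    return np.concatenate([
        [np.nanmean(f0), np.nanstd(f0)],  # pitch level and variability
        [rms.mean(), rms.std()],          # energy level and variability
        mfcc.mean(axis=1),
    ])

# With labeled clips (feature matrix X, clinician labels y), any standard
# classifier (e.g., sklearn's LogisticRegression) turns these features into
# a risk score for a clinician to review:
#   model = LogisticRegression().fit(X, y)
#   risk = model.predict_proba(acoustic_features("clip.wav")[None, :])[:, 1]
```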

The machine learning model is based on the largest dataset in the world for voice biomarkers and mental health. It is collected from Kintsugi's award-winning consumer wellness app in over 250 international cities and in multiple languages. The startup leverages this uniquely large and continuously evolving global dataset to prevent bias and push for responsible AI in healthcare. Furthermore, it is working to broaden its impact by deploying Kintsugi Voice in different patient populations, via clinical partnerships with a number of hospitals and health organizations.

According to Gartner, companies considered to be Cool Vendors in AI Governance and Responsible AI are those developing tools that assure AI fairness, bias mitigation, explainability, privacy and compliance.

"We are thrilled to be recognized as a Gartner Cool Vendor," said Grace Chang, founder and CEO of Kintsugi. "Mental health conditions are on the rise, and are only identified by primary care practitioners 47.3% of the time. We aim to support providers with objective and accurate mental health insights, so that they can ensure that their patients get the care they need."

Kintsugi recently announced a $20M Series A, bringing the company's total capital raised to $28M since its inception in 2019. The funding round was led by New York-based global venture capital and private equity firm Insight Partners, and will further Kintsugi's mission of scaling access to mental healthcare for more of those in need.

This is the latest in a series of accolades for the startup, which was listed in Forbes 2022 AI 50 in North America, and received the 2022 Frost & Sullivan Best Practices Technology Innovation Leadership Award.

About Kintsugi:

Kintsugi is developing novel voice biomarker software to detect signs of clinical depression and anxiety from short clips of free-form speech, closing mental health care gaps across risk-bearing health systems, saving time and lives. Based in Berkeley, California, Kintsugi is on a mission to provide equitable access to mental healthcare for all.

About Gartner:

Gartner, Inc. (NYSE: IT) delivers actionable, objective insight to executives and their teams. Its expert guidance and tools enable faster, smarter decisions and stronger performance on an organizations mission critical priorities. To learn more, visit gartner.com.

Gartner disclaimer:

Gartner does not endorse any vendor, product or service depicted in our research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.


Into the metaverse: How conversational AI will build its experiential foundation – VentureBeat

Posted: at 10:06 pm


The much-hyped metaverse concept, once thought of as a futuristic, hypothetical online destination, is quickly becoming a new kind of internet. As more users interact with one another in these virtual environments, brands will realize new opportunities for engaging their target audiences. Companies such as Meta (formerly Facebook) are rapidly making plans to expand into the metaverse, altering and advancing how people will work, socialize, shop and even bank in the future.

While some completely disagree with the positive potential of this digital world, it cannot be refuted that the metaverse is a topic that many have heard of and will become increasingly ubiquitous. Gaming may be its most obvious initial use case, as consumers and gamers alike are steadily continuing to merge their physical and digital lives. This is something that's been happening since the arrival of the iPhone, a device that has become an extension of our brains and bodies. As technology progresses and advances, it's only natural that more parts of our lives will be embedded into the digital world.

With more people opting to live inside the metaverse, there will be routine tasks that require more advanced and intuitive communication. Be it mailing a letter, purchasing land or buying a burger, there must be a proper way to scale communications through artificial intelligence. Technologies like CAIP (conversational AI platforms) will allow brands to design more appealing and engaging user experiences in this burgeoning virtual environment.

As more retail, banking and technology companies begin to inhabit the metaverse, the need for intelligent and relatable customer service programs will be crucial for success in this new arena. To achieve this, it has become increasingly clear that conversational AI must be the foundation of the metaverse, especially for experience delivery.

Whether it's a virtual post office, bank, or fast food restaurant, the interactions will be between a human-controlled avatar and a chatbot, so these avatars must have a streamlined way to communicate in order to effectively reach their goal. We are seeing this begin to take shape, as Meta just released its open source language model with 175 billion parameters, a good start for the conversational AI that will give consumers access to advanced customer support systems that interact like humans. These advancements will not only allow retailers to make the customer service process easier, they can also help them create an even more immersive experience for brand loyalists.

In addition to its language model, Meta released major improvements to its hand tracking API, allowing a user's hand to move as freely as it would in the physical world. This same precedent must be set for digital conversations, especially in consumer engagement and customer support settings. Just as users need to be able to use their hands, they will need to achieve goals through conversation. If a user cannot properly communicate with an avatar (bot), the lack of human experience will likely detract from the system's ability to blend the digital and physical worlds.

The metaverse can be thought of as a series of universes that merge, where a user could seamlessly flit among their worker, gamer and social media personas based on their requirements. Many have noted that the metaverse will probably play out like an open-world MMO or MMORPG (e.g. Fortnite or World of Warcraft, respectively) which will allow metaverse avatars to interact with each other. In order for these aspirations to even begin to take shape, the proper AI must be implemented to ensure algorithms and interactions are as scalable and sustainable as possible. This will be especially important considering the number of languages spoken across the globe (there are more than 7,100).

If the metaverse aims to unite these worlds, how will it allow us to overcome language barriers? This is where conversational AI, speech recognition and self-supervised learning come into play. When providing a chatbot (or in this case, avatar) that needs to support thousands of languages, any conversational AI platform would need to train the avatars to recognize patterns of specific speech and language in order to respond effectively and efficiently to written and voice queries.

Mark Zuckerberg referred to AI as "the most important foundational technology of our time," believing that its power can unlock advancements in other fields, like VR, blockchain, AR and 5G.

With companies across all industries developing ways to monetize the metaverse, these industry giants are banking on it becoming the new (and all-encompassing) internet. Much like the internet boom in the 1990s, we just may look back at this moment and wonder how we ever survived without the metaverse.

In any universal shift of life, and especially when it comes to technology, it's important to understand the foundational elements that build these integral parts of our lives. As technology becomes more and more integral in our lives, it's important for users to truly understand the underpinnings of things like smartphones, computer programs and alternate digital universes.

Innovations like the metaverse would not be achievable without conversational AI, and as open source programs continue to develop, the adoption of this space will only grow.

Raj Koneru is CEO of Kore.ai


The FTC Is Closing in on Runaway AI – WIRED

Posted: at 10:06 pm

"Teenagers deserve to grow, develop, and experiment," says Caitriona Fitzgerald, deputy director at the Electronic Privacy Information Center (EPIC), a nonprofit advocacy group. "They should be able to test or abandon ideas while being free from the chilling effects of being watched or having information from their youth used against them later when they apply to college or apply for a job." She called for the Federal Trade Commission (FTC) to make rules to protect the digital privacy of teens.


Hye Jung Han, the author of a Human Rights Watch report about education companies selling personal information to data brokers, wants a ban on personal data-fueled advertising to children. "Commercial interests and surveillance should never override a child's best interests or their fundamental rights, because children are priceless, not products," she said.

Han and Fitzgerald were among about 80 people who spoke at the first public forum run by the FTC to discuss whether it should adopt new rules to regulate personal data collection, and the AI fueled by that data.

The FTC is seeking the public's help to answer questions about how to regulate commercial surveillance and AI. Among those questions is whether to extend the definition of discrimination beyond traditional measures like race, gender, or disability to include teenagers, rural communities, homeless people, or people who speak English as a second language.

The FTC is also considering whether to ban or limit certain practices, restrict the period of time companies can retain consumer data, or adopt measures previously proposed by congressional lawmakers, like audits of automated decision-making systems to verify accuracy, reliability, and error rates.
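To make the audit idea concrete, here is a minimal sketch of one such check: comparing a toy model's false positive rates across two groups. All data here is fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000
group = rng.choice(["A", "B"], size=n)     # e.g., a protected class
actual = rng.integers(0, 2, size=n)        # ground-truth outcomes
predicted = np.where(                      # a deliberately biased toy model
    group == "B", rng.integers(0, 2, size=n), actual
)

# False positive rate per group: a disparity here is exactly what an
# algorithmic audit would flag.
for g in ("A", "B"):
    mask = group == g
    fp = ((predicted == 1) & (actual == 0) & mask).sum()
    negatives = ((actual == 0) & mask).sum()
    print(f"group {g}: false positive rate = {fp / max(negatives, 1):.2%}")
```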

Tracking people's activity on the web is the foundation of the online economy, dating back to the introduction of cookies in the 1990s. Data brokers from obscure companies collect intimate details about people's online activity and can make predictions about individuals, like their menstrual cycles, or how often they pray, as well as collecting biometric data like facial scans.

Cookies underpin online advertising and the business models of major companies like Facebook and Google, but today it's common knowledge that data brokerages can do far more than advertise goods and services. Online tracking can bolster attempts to commit fraud, trick people into buying products or disclosing personal information, and even share location data with law enforcement agencies or foreign governments.


Warzone 2.0 AI factions will add a new layer of danger – Gamesradar

Posted: at 10:06 pm

Warzone 2.0 will have an enemy AI faction you'll need to deal with after dropping into battle, according to today's Call of Duty Next reveal event. The AI faction will add a new layer of depth, up the stakes, and make combat even more realistic - all of which makes sense for Warzone's sequel.

Other popular battle royale games like Apex Legends and Fortnite have enemy AI you can encounter in specific areas on certain maps. Apex has deadly spiders, prowler dens, and an armory full of loot protected by robots. Fortnite has had tons of AI enemies throughout its lifespan, including aggressive ones that would randomly pop up from some underground lair and characters who are only aggressive if engaged.

The Warzone 2.0 AI faction, which is currently unnamed, will likely be narratively tied to the new map, Al Mazrah. The desert-based Warzone 2.0 map is set in a fictional region of Western Asia, and it looks incredibly dense and layered, with tons of verticality (but, thankfully, not as many sniper camp spots). It's unclear if the AI faction characters will attack on sight, or if they need to be provoked, but either way, you should keep your head on a swivel - you aren't just worried about enemy players anymore.

Warzone 2.0 is set to debut on November 16. Since it's being built in the same engine as Modern Warfare 2, you might want to check out the Call of Duty: Modern Warfare 2 beta for a feel for the guns, gameplay changes (there's a new jump-to-prone maneuver that will be very helpful), and more.

Check out our full Call of Duty: Modern Warfare 2 multiplayer preview to see what to expect from the beta.

