Daily Archives: March 31, 2022

Pulsing Perceptions and Use of AI Voice Apps – ARC

Posted: March 31, 2022 at 2:37 am

The time is right for investing in the global natural language processing (NLP) market, which is projected to grow from $20.98 billion in 2021 to $127.26 billion in 2028, a CAGR of 29.4% over that forecast period.
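
As a quick sanity check on that projection, the implied compound annual growth rate can be recomputed from the quoted start and end values alone. The short Python sketch below uses only the figures cited above:

```python
# Recompute the implied CAGR from the market figures quoted above.
start_value = 20.98   # $ billions, 2021
end_value = 127.26    # $ billions, 2028
years = 2028 - 2021   # 7-year forecast period

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~29.4%, matching the cited figure
```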

To get a sense of NLP user perspectives, this past February, Applause surveyed its global crowdtesting community to gain insight into perceptions around the use of artificial intelligence (AI) voice applications such as chatbots, interactive voice response (IVR), and other conversational assistants. Check out our summary infographic for some highlights. We had over 6,600 responses from around the world. I want to share our findings and call out a few interesting points.

While just over half of respondents reported they prefer to wait for a human agent when calling a company for customer support (51%), 25% said they prefer immediate access to an automated touch tone response system and 22% prefer an automated virtual service representative that responds to voice commands.

Consumers increasingly expect businesses to have automated chatbots and automated voice systems: 31% said they always expect companies to have chatbots, and 61% said it depended on the industry. A small minority, 6.7%, stated they never expect chat functionality on a company's website or app, while 11% don't expect call centers to have IVR systems that greet them. Still, customers expect IVR more often than not: 46% always expect call centers to have IVR systems that greet them, while another 40% said their expectations varied by industry.

Webinars

Join voicebot.ai founder Bret Kinsella and Applause's Emerson Sklar as they cover lessons learned through global testing efforts and new models for conversational AI and other AI projects.

Users expect mobile apps to include voice functionality as well: 44% always expect mobile apps to have voice assistants or voice search features while 41% said it depends on the app category.

Of the 5,896 respondents (88%) who said they had used chat functionality on a website at least once, 63% said they were somewhat satisfied or extremely satisfied with the experience. Of the 19% who found the experience dissatisfying, the top three complaints were:

They could not find the answers they were looking for (29%)

The chatbot did not understand what they were asking (25%)

The chatbot wasted users' time (did not add value) before connecting them with an agent (20%)

Customers expect companies to have automated chatbots and automated voice systems to greet them, and there is tremendous ROI for companies that get the NLP experience right, such as freeing up customer service reps for higher-value activities and reducing wait times for customers. Yet developing NLP technologies requires special attention to details that many other digital products may not.

Ebooks

Natural language assistants offer advantages to businesses. Our whitepaper covers these in detail and lists best practices for creating great user experiences.

Read more:

Pulsing Perceptions and Use of AI Voice Apps - ARC

As Adoption of Artificial Intelligence Plateaus, Organizations Must Ensure Value to Avoid AI Winter, According to New O’Reilly Report – Business Wire

Posted: at 2:37 am

BOSTON--(BUSINESS WIRE)--O'Reilly, the premier source for insight-driven learning on technology and business, today announced the results of its annual AI Adoption in the Enterprise survey. The benchmark report explores trends in how artificial intelligence is implemented, including the techniques, tools, and practices organizations are using, to better understand the outcomes of enterprise adoption over the past year. This year's survey results showed that the percentage of organizations reporting AI applications in production (that is, those with revenue-bearing AI products in production) has remained constant over the last two years, at 26%, indicating that AI has passed to the next stage of the hype cycle.

"For years, AI has been the focus of the technology world," said Mike Loukides, vice president of content strategy at O'Reilly and the report's author. "Now that the hype has died down, it's time for AI to prove that it can deliver real value, whether that's cost savings, increased productivity for businesses, or building applications that can generate real value to human lives. This will no doubt require practitioners to develop better ways to collaborate between AI systems and humans, and more sophisticated methods for training AI models that can get around the biases and stereotypes that plague human decision-making."

Despite the need to maintain the integrity and security of data in enterprise AI systems, a large number of organizations lack AI governance. Among respondents with AI products in production, the number of those whose organizations had a governance plan in place to oversee how projects are created, measured, and observed (49%) was roughly the same as those that didn't (51%).

As for evaluating risks, unexpected outcomes (68%) remained the biggest focus for mature organizations, followed closely by model interpretability and model degradation (both 61%). Privacy (54%), fairness (51%), and security (42%), issues that may have a direct impact on individuals, were among the risks least cited by organizations. While there may be AI applications where privacy and fairness aren't issues, companies with AI practices need to place a higher priority on the human impact of AI.

"While AI adoption is slowing, it is certainly not stalling," said Laura Baldwin, president of O'Reilly. "There are significant venture capital investments being made in the AI space, with 20% of all funds going to AI companies. What this likely means is that AI growth is experiencing a short-term plateau, but these investments will pay off later in the decade. In the meantime, businesses must not lose sight of the purpose of AI: to make people's lives better. The AI community must take the steps needed to create applications that generate real human value, or we risk heading into a period of reduced funding in artificial intelligence."

Other key findings include:

The complete report is now available for download here: https://get.oreilly.com/ind_ai-adoption-in-the-enterprise-2022.html. To learn more about O'Reilly's AI-focused training courses, certifications, and virtual events, visit http://www.oreilly.com.

About O'Reilly

For over 40 years, O'Reilly has provided technology and business training, knowledge, and insight to help companies succeed. Our unique network of experts and innovators share their knowledge and expertise through the company's SaaS-based training and learning platform. O'Reilly delivers highly topical and comprehensive technology and business learning solutions to millions of users across enterprise, consumer, and university channels. For more information, visit http://www.oreilly.com.

Read more:

As Adoption of Artificial Intelligence Plateaus, Organizations Must Ensure Value to Avoid AI Winter, According to New O'Reilly Report - Business Wire

At GTC22, HPC and AI Get Edgy – HPCwire

Posted: at 2:37 am

From weather sensors and autonomous vehicles to electric grid monitoring and cloud gaming, the world's edge computing is getting increasingly complex, but the world of HPC hasn't necessarily caught up to these rapid innovations at the edge. At a panel at Nvidia's virtual GTC22 ("HPC, AI, and the Edge"), five experts discussed how leading-edge HPC applications can benefit from deeper incorporation of AI and edge technologies.

On the panel: Tom Gibbs, developer relations for Nvidia; Michael Bussmann, founding manager of the Center for Advanced Systems Understanding (CASUS); Ryan Coffee, senior staff scientist for the SLAC National Accelerator Laboratory; Brian Spears, a principal investigator for inertial confinement fusion (ICF) energy research at Lawrence Livermore National Laboratory (LLNL); and Arvind Ramanathan, a principal investigator for computational biology research at Argonne National Laboratory.

The edge of a deluge, and a deluge of the edge

Early in the panel, Gibbs, who served as the moderator of the discussion, termed the 2020s "the decade of the experiment," explaining that virtually every HPC-adjacent domain was in the midst of having a major experimental instrument (or a major upgrade to an existing instrument) come online. "It's really exciting; but on the other side, these are going to produce huge volumes of rich data," he said. "And how we can use and manage that data most effectively to produce new science is really one of the key questions."

Coffee agreed. "I'm actually at the X-ray laser facility at SLAC, and so I come at this from a short pulse time-resolved molecular physics perspective," he said. "And we are facing an impending data deluge as we potentially move to a million-shots-per-second data rate, and so that's pulled me over the last half decade more into computing at the edge. And so where I feed into this is: how do we actually integrate the intelligence at the sensor with what's going on with HPC in the cloud?"

"One of the major opportunities I see moving forward is: we can look at rare events now," he continued. "No one in their right mind is really going to move terabytes per second; that just doesn't make sense. However, we need the ability to record terabytes per second to watch for the anomalies that actually are driving the science that happens right now."
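
Coffee's point, recording everything locally but forwarding only the anomalies, maps onto a simple streaming filter running next to the sensor. The sketch below is a generic illustration; the threshold, window size, and synthetic data source are assumptions, not SLAC's actual pipeline.

```python
# Toy edge-side rare-event filter: score each incoming frame against a running
# baseline and forward only statistical outliers upstream. Purely illustrative;
# real facilities use detector-specific triggers and hardware pipelines.
import numpy as np
from collections import deque

def stream_frames(n_frames=10_000, frame_size=256, rng=np.random.default_rng(0)):
    """Stand-in for a detector stream: mostly noise, with occasional bursts."""
    for i in range(n_frames):
        frame = rng.normal(0.0, 1.0, frame_size)
        if rng.random() < 0.001:            # rare physical event
            frame += rng.normal(8.0, 1.0, frame_size)
        yield i, frame

history = deque(maxlen=500)                  # rolling baseline of frame energies
forwarded = []

for frame_id, frame in stream_frames():
    energy = float(np.sum(frame ** 2))
    if len(history) > 50:
        mean, std = np.mean(history), np.std(history)
        if energy > mean + 5 * std:          # anomaly: keep it and forward it
            forwarded.append(frame_id)
    history.append(energy)

print(f"forwarded {len(forwarded)} of 10000 frames upstream")
```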

Spears, hailing from fusion research at LLNL, spoke to his field as a prime example. He pointed to recent success at LLNL's ICF experiment, where the team managed to produce 1.35 megajoules of energy from the fusion reaction (almost break-even), and noted that the JET team in Europe had a similar breakthrough in the last few months. But fusion research, he said, depended on "data streams off of our cameras for experiments that last for, you know, some of the action is happening over 100 picoseconds."

Sharpening AI's edge

So: huge amounts of data at very fast timescales, with the aim being to move from once-daily experiments to many-times-per-second. Spears explained how they planned on handling this. "We're going to do things fast; we're gonna do them in hardware at the edge; we're probably gonna do them with an AI model that can do low-precision, fast compute, but that's going to be linked back to a very high-precision model that comes from a leadership-class institution."
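
A minimal sketch of that split, under generic assumptions rather than LLNL's actual stack: a small surrogate quantized to int8 runs cheaply at the edge, while a full-precision reference back at the datacenter is consulted only when the cheap answer looks uncertain.

```python
# Illustrative low-precision edge surrogate with a high-precision fallback.
# The "models" here are plain functions; weights, thresholds, and the physics
# being approximated are placeholders, not any lab's production code.
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 1))                 # full-precision surrogate weights

def high_precision_model(x):
    """Stand-in for the expensive datacenter model."""
    return float(np.tanh(x @ W).squeeze())

# Post-training quantization of the weights to int8 for the edge device.
scale = np.max(np.abs(W)) / 127.0
W_q = np.round(W / scale).astype(np.int8)

def edge_model(x):
    """Cheap low-precision inference using the int8 weights."""
    return float(np.tanh(x @ (W_q.astype(np.float32) * scale)).squeeze())

deferred = 0
for _ in range(1000):
    x = rng.normal(size=(1, 8))
    fast = edge_model(x)
    # Near the decision boundary, defer to the high-precision model.
    if abs(fast) < 0.05:
        deferred += 1
        fast = high_precision_model(x)

print(f"deferred {deferred} of 1000 inferences to the high-precision model")
```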

"You can start to see from these applications the convergence of the experiment and timescales," he said, "driving changes in the way we think about representing the physics and the model and moving that to the edge[.]"

AI, then, accelerates this same strategy, helping to whittle down the data that moves from the edge to the larger facilities. "You can use AI in terms of guiding where the experiments must go, in terms of seeing what data we might have missed," Ramanathan said.

Bussmann agreed, citing many fields employing a "live stream [of data] that will not be recorded forever, so we have to make fast decisions and we have to make intelligent decisions," he said. "We realize that this is an overarching subject across domains by now because of the capabilities that have become [widespread]."

"AI provides the capability to put a wrapper around that, train a lightweight surrogate model, and take what I actually think in my head and move it toward the edge of the computing facility," Spears said. "We can run the experiments now for two purposes: one is to optimize what's going on with the actual experiment itself, so we can be moving to a brighter beam or a higher-temperature plasma, but we can also say, 'I was wrong about what I was thinking,' because as a human, I have some weaknesses in my conception of the way the world looks. So I can also steer my experiment to the places where I'm not very good, and I can use the experiment to make my model better. And if I can tighten those loops by doing the computing at the edge, I can have these dual outcomes of making my experiment better and making my model better."

Just a matter of time

Much of the discussion in the latter half of the panel focused on how these AI and edge technologies could be used to usefully interpolate sparse or low-resolution data. "You really need these surrogate models," Ramanathan said, explaining how his drug discovery work operated in ranges of 15 to 20 orders of magnitude and that, to tackle it, it was useful to build "models that can adaptively sample this landscape without having all of this information: rare event identification."

"I can run a one-dimensional model really cheaply; I can run 500 million of those, maybe," Spears said. "A two-dimensional model is a few hundred or a thousand times more expensive. A three-dimensional model is thousands of times more expensive than that. All of those have advantages in helping me probe around in parameter space, so what a workflow tool allows us to do is make decisions back at the datacenter saying: run interactively all of these 1D simulations and let me make a decision about how much information gain I'm getting from these simulations.

"And when I think I've found a region or parameter or design space that is high-value real estate, I'll make a workflow decision to say, plant some 2D simulations instead of 1Ds, and I'll hone in on another more precise area of the high-value real estate. And then I can elevate again to the three-dimensional model, which I can only run a few times. That's all high-precision computing that's being steered on a machine like Sierra that we have at Lawrence Livermore National Laboratory."
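
Spears is describing a multi-fidelity workflow: sweep cheap 1D runs everywhere, promote promising regions to 2D, and reserve 3D for the best candidates. The sketch below captures that escalation pattern with invented cost and scoring functions; the real decision logic steering machines like Sierra is far more sophisticated.

```python
# Toy multi-fidelity steering loop: cheap 1D surveys everywhere, 2D refinement
# in promising regions, 3D only for the best candidate. Costs, budgets, and
# the objective function are invented for illustration.
import numpy as np

rng = np.random.default_rng(2)
COST = {"1D": 1, "2D": 500, "3D": 50_000}   # relative expense per run
budget = 200_000

def simulate(x, fidelity):
    """Placeholder objective: higher fidelity gives a less noisy estimate."""
    noise = {"1D": 0.5, "2D": 0.1, "3D": 0.01}[fidelity]
    return -(x - 0.7) ** 2 + rng.normal(0.0, noise)

# Stage 1: broad, cheap 1D sweep over the design space.
candidates = np.linspace(0.0, 1.0, 100_000)
scores_1d = np.array([simulate(x, "1D") for x in candidates])
budget -= len(candidates) * COST["1D"]

# Stage 2: promote the top-scoring 1D designs to 2D.
top_2d = candidates[np.argsort(scores_1d)[-100:]]
scores_2d = np.array([simulate(x, "2D") for x in top_2d])
budget -= len(top_2d) * COST["2D"]

# Stage 3: only the best 2D design earns a full 3D run.
top_3d = top_2d[np.argsort(scores_2d)[-1:]]
best = max(top_3d, key=lambda x: simulate(x, "3D"))
budget -= len(top_3d) * COST["3D"]

print(f"best design ~{best:.3f}, remaining budget {budget}")
```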

"All of our problems are logarithmically scaled, right?" added Coffee. "We have multiple scales that we want to be sensitive to; it doesn't matter which domain you're in. We all are using computers to help us do the thing that we don't do well, which is swallow data quickly enough."

"When you start talking about integrating workflows for multiple domains and they all have a similar pattern of use, doesn't that beg us to ask for an infrastructure where we bind HPC together with edge to follow a common model and a common infrastructure across domains?" he continued. "I think we're all asking for the same infrastructure. And this infrastructure is really, now, not just what happens in the datacenter, right? It's what happens in the datacenter and how it's sort of almost neurologically connected to all of the edge sensors that are distributed broadly across our culture."

See the original post here:

At GTC22, HPC and AI Get Edgy - HPCwire

Companies In The Artificial Intelligence In Healthcare Market Are Introducing AI-Powered Surgical Robots To Improve Precision As Per The Business…

Posted: at 2:37 am

LONDON, March 30, 2022 (GLOBE NEWSWIRE) -- According to The Business Research Company's research report on the artificial intelligence in healthcare market, AI-driven surgical robots are gaining prominence among the artificial intelligence in healthcare market trends. Various healthcare fields have adopted robotic surgery in recent times. Robot-assisted surgeries are performed to remove limitations during minimally invasive surgical procedures and to improve surgeons' capabilities during open surgeries. AI is widely being applied in surgical robots and is also used with machine vision to analyze scans and detect complex cases. While performing surgeries in delicate areas of the human body, robotic surgeries are more effective than manually performed surgeries. To meet healthcare needs, many technology companies are providing innovative robotic solutions.

For example, in 2020, Accuray Incorporated, a US-based company that develops, manufactures, and sells radiotherapy systems for alternative cancer treatments, launched a device called the CyberKnife S7 System, which combines speed, advanced precision, and AI-driven motion tracking for stereotactic radiosurgery and stereotactic body radiation therapy treatment.

Request for a sample of the global artificial intelligence in healthcare market report

The global artificial intelligence in healthcare market size is expected to grow from $8.19 billion in 2021 to $10.11 billion in 2022 at a compound annual growth rate (CAGR) of 23.46%. The global AI in healthcare market size is expected to grow to $49.10 billion in 2026 at a CAGR of 48.44%.

The increase in the adoption of precision medicine is one of the driving factors of the artificial intelligence in healthcare market. Precision medicine uses information about an individual's genes, environment, and lifestyle to design and improve the diagnosis and therapeutics of the patient. It is widely used for oncology cases, and due to the rising prevalence of cancer and the number of people affected by it, the demand for AI in precision medicine will increase. According to research published in The Lancet Oncology, the global cancer burden is set to increase by 75% by 2030.

Major players in the artificial intelligence in healthcare market are Intel Corporation, Nvidia Corporation, IBM Corporation, Microsoft Corporation, Google Inc., Welltok Inc., General Vision Inc., General Electric Company, Siemens Healthcare Private Limited, Medtronic, Koninklijke Philips N.V., Micron Technology Inc., Johnson & Johnson Services Inc., Next IT Corporation, and Amazon Web Services.

The global artificial intelligence in healthcare market is segmented by offering into hardware, software; by algorithm into deep learning, querying method, natural language processing, context aware processing; by application into robot-assisted surgery, virtual nursing assistant, administrative workflow assistance, fraud detection, dosage error reduction, clinical trial participant identifier, preliminary diagnosis; by end-user into hospitals and diagnostic centers, pharmaceutical and biopharmaceutical companies, healthcare payers, patients.

As per the artificial intelligence in healthcare industry growth analysis, North America was the largest region in the market in 2021. Asia-Pacific is expected to be the fastest-growing region in the global artificial intelligence in healthcare market during the forecast period. The regions covered in the global artificial intelligence in healthcare market report are Asia-Pacific, Western Europe, Eastern Europe, North America, South America, the Middle East, and Africa.

Artificial Intelligence In Healthcare Market Global Market Report 2022 - Market Size, Trends, And Global Forecast 2022-2026 is one of a series of new reports from The Business Research Company that provide artificial intelligence in healthcare market overviews, analyze and forecast market size and growth for the whole market, artificial intelligence in healthcare market segments and geographies, artificial intelligence in healthcare market trends, artificial intelligence in healthcare market drivers, artificial intelligence in healthcare market restraints, artificial intelligence in healthcare market leading competitors' revenues, profiles and market shares in over 1,000 industry reports, covering over 2,500 market segments and 60 geographies.

The report also gives in-depth analysis of the impact of COVID-19 on the market. The reports draw on 150,000 datasets, extensive secondary research, and exclusive insights from interviews with industry leaders. A highly experienced and expert team of analysts and modelers provides market analysis and forecasts. The reports identify top countries and segments for opportunities and strategies based on market trends and leading competitors' approaches.

Not the market you are looking for? Check out some similar market intelligence reports:

Robotic Surgery Devices Global Market Report 2022 - By Product And Service (Robotic Systems, Instruments & Accessories, Services), By Surgery Type (Urological Surgery, Gynecological Surgery, Orthopedic Surgery, Neurosurgery, Other Surgery Types), By End User (Hospitals, Ambulatory Surgery Centers) - Market Size, Trends, And Global Forecast 2022-2026

Artificial Intelligence (AI) In Drug Discovery Global Market Report 2022 - By Technology (Deep Learning, Machine Learning), By Drug Type (Small Molecule, Large Molecules), By Therapeutic Type (Metabolic Disease, Cardiovascular Disease, Oncology, Neurodegenerative Diseases), By End-Users (Pharmaceutical Companies, Biopharmaceutical Companies, Academic And Research Institutes) - Market Size, Trends, And Global Forecast 2022-2026

Precision Medicine Global Market Report 2022 - By Technology (Big Data Analytics, Bioinformatics, Gene Sequencing, Drug Discovery, Companion Diagnostics), By Application (Oncology, Respiratory Diseases, Central Nervous System Disorders, Immunology, Genetic Diseases), By End-User (Hospitals And Clinics, Pharmaceuticals, Diagnostic Companies, Healthcare And IT Firms) - Market Size, Trends, And Global Forecast 2022-2026

Interested to know more about The Business Research Company?

The Business Research Company is a market intelligence firm that excels in company, market, and consumer research. Located globally, it has specialist consultants in a wide range of industries including manufacturing, healthcare, financial services, chemicals, and technology.

The World's Most Comprehensive Database

The Business Research Company's flagship product, the Global Market Model, is a market intelligence platform covering various macroeconomic indicators and metrics across 60 geographies and 27 industries. The Global Market Model covers multi-layered datasets which help its users assess supply-demand gaps.

Go here to see the original:

Companies In The Artificial Intelligence In Healthcare Market Are Introducing AI-Powered Surgical Robots To Improve Precision As Per The Business...

How Nvidia is harnessing AI to improve predictive maintenance – VentureBeat

Posted: at 2:36 am

The rapidly growing sectors of edge computing and the industrial metaverse were targeted by new technology developments, like sensor architecture, released by Nvidia last week at its GTC 2022 conference. The company also debuted the Isaac Nova Orin, its latest computing and sensor architecture, powered by Nvidia Jetson AGX Orin hardware.

Nvidia's main focus is pursuing a tech-stack-based approach starting with new silicon to help manufacturers make sense of the massive amount of asset, machinery, and tools data they generate. In addition, predictive maintenance is core to many organizations' Maintenance, Repair, and Overhaul (MRO) initiatives.

CEO Jensen Huang said during his keynote that "AI [artificial intelligence] data centers process mountains of continuous data to train and refine AI models." But, Huang continued, "raw data comes in, is refined, and intelligence goes out; companies are manufacturing intelligence and operating giant AI factories."

Getting predictive maintenance, repair, and overhaul (MRO) right is a complex, data-intensive challenge for any business that relies on assets to serve customers. MRO systems have proven effective in managing the life cycle of machinery, assets, tools, and equipment. However, they haven't been able to decipher in real time the massive amount of data that discrete and process manufacturers produce every day.

As a result, IoT Analytics predicts that the global predictive maintenance market will expand from $6.9 billion in 2021 to $28.2 billion by 2026. Edge computing architectures, more contextually intelligent sensors, and advances in AI and machine learning (ML) architectures, including Nvidias Isaac Nova Orin, are combining to drive greater adoption across asset-intensive businesses.

IoT Analytics advises that the key performance indicator to watch is how effective predictive maintenance solutions are: how well they reduce unplanned operational equipment downtime.

Not knowing what's in that real-time data slows down how fast manufacturers and service companies can innovate and respond, further driving the demand for AI-based predictive maintenance solutions. Unlocking the insights hidden in real-time asset performance and maintenance data, whether from jet engines, multi-ton production equipment, or robots, isn't possible for many enterprises today.

Nvidia's announcement of the Isaac Nova Orin architecture and enhanced edge computing support is noteworthy because it's purpose-built for the many data challenges predictive maintenance has. The aircraft maintenance and MRO process is a perfect example, notable for its unpredictable process times and material requirements. As a result, airlines and their services partners rely on massive time and inventory buffers to alleviate risk, which further clouds when a jet or any other asset will be available.

Nvidia has identified an opportunity in edge computing to update legacy tech stacks that have long lacked support for maintenance or asset performance management with a new AI-driven tech stack that expands their total available market.

As a result, Nvidia is doubling down on edge computing efforts. Approximately one of every four sessions presented during the company's GTC 2022 event mentioned the concept. CEO Jensen Huang's keynote also underscored how edge computing is a core use case for the future of its architectures.

IoT and IIoT sensors excel at capturing preventative maintenance data in real time from machinery, production lines, and other large-scale assets. AI- and ML-based modeling and analysis then happen in the cloud.

For large-scale data sets and models, latency becomes a factor in how quickly the data delivers insights. That's where edge computing comes in and why it's predicted to see explosive growth in the near future. Gartner predicts that by 2023, more than 50% of all data analysis by deep neural networks (DNNs) will be at the point of capture in an edge computing network, soaring from less than 5% in 2019. And by year-end 2023, 50% of large enterprises will have a documented edge computing strategy, compared to less than 5% in 2020. As a result, the worldwide edge computing market will reach $250.6 billion in 2024, attaining a compound annual growth rate (CAGR) of 12.5% between 2019 and 2024.
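
In practice, "analysis at the point of capture" for predictive maintenance often reduces to running a small trained model directly against each sensor reading and raising an alert locally, rather than shipping the raw stream to the cloud. A minimal sketch, with an invented vibration feature set and a placeholder model rather than any vendor's stack:

```python
# Minimal edge inference loop for predictive maintenance: featurize each burst
# of vibration samples and score it locally with a small pre-trained model.
# The model, features, and alert threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)

# Placeholder "trained" logistic model mapping [rms, peak, crest_factor] to a
# probability of imminent bearing failure.
weights = np.array([2.0, 0.5, 1.5])
bias = -12.0

def failure_probability(samples: np.ndarray) -> float:
    rms = np.sqrt(np.mean(samples ** 2))
    peak = np.max(np.abs(samples))
    crest = peak / (rms + 1e-9)
    z = np.dot(weights, [rms, peak, crest]) + bias
    return 1.0 / (1.0 + np.exp(-z))

for reading_id in range(1000):
    burst = rng.normal(0.0, 1.0 + 0.002 * reading_id, size=1024)  # slow wear
    p_fail = failure_probability(burst)
    if p_fail > 0.8:
        # Only the alert (a few bytes) leaves the device, not the raw burst.
        print(f"reading {reading_id}: predicted failure risk {p_fail:.2f}")
        break
```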

Of the many sessions at GTC 2022 that included edge computing, one specifically grabbed attention in this area: "Automating Industrial Inspection with Deep Learning and Computer Vision." The presentation provided an overview of how edge computing can improve manufacturing performance with real-time insights and alerts.

Real-time production and process data interpreted at the edge is proving effective in predicting machinery repair and refurbishment rates already.

Edge computing-based models successfully predicted yield rates for the resin class and machine combination.

Nvidia sees the opportunity to expand its total available market with an integrated platform aimed at streamlining predictive maintenance. Today, many manufacturers and service organizations struggle to gain the insights they need to reduce downtimes, further expanding the total available market.

For many providers that sell the time their machinery and assets are available, predictive maintenance and MRO are central to their business models.

As asset-heavy service industries, including airlines and others, face higher fuel costs and more challenges in operating profitably, AI-based predictive maintenance will become the new technology standard.

Nvidia's decision to concentrate architectural investments in edge computing to drive predictive maintenance anticipates where the market is going.

Go here to read the rest:

How Nvidia is harnessing AI to improve predictive maintenance - VentureBeat

The LinkedIn controversy over AI-generated accounts, explained – TRT World

Posted: at 2:36 am

The technology primarily used to spread misinformation has now crept into the corporate world to ramp up sales, a new investigation finds.

When Renee DiResta, a Stanford Internet Observatory researcher, received a software sales pitch on LinkedIn, she didn't know that it would lead her down a rabbit hole of over 10,000 fake corporate accounts on LinkedIn.

With knowledge of information systems and how narratives spread, DiResta's trained eye was quick to notice something was not quite right: the profile picture of the sender, Keenan Ramsey, looked off. Her eyes were centred, she was missing an earring in one ear, and some of her hair seemed to blur into the background.

The researcher, along with her colleague Josh Goldstein, began digging into Ramsey's profile only to find that she was not a real person, and that thousands of other accounts on the website, which appeared to be generated by artificial intelligence (AI) technology, didn't exist in real life either.

Who created the profiles?

An investigation by NPR, the public radio network of the United States, found that it is a tactic now being deployed by companies on LinkedIn to ramp up their sales.

When the Stanford researcher DiResta responded to an AI-generated salesperson's message, she was finally contacted by a real employee to continue the conversation.

NPR says the AI-generated profiles allow companies to reach more potential customers without hitting LinkedIn's message limit. It also avoids the need to hire more sales staff to reach customers.

The investigation spotted more than 70 businesses that used fake profiles. Several companies said they hired third-party marketers to help with sales but had not authorised any use of AI-created profile photos and were surprised by the findings.

While the investigation couldn't determine who authorised the use of fake profiles to message users on the website, nor find any illegal activity, it did conclude that the use of fake profiles by companies illustrates how technology used to spread misinformation and propaganda has now made its way into the corporate world.

"It's not a story of mis-[information] or dis-[information], but rather the intersection of a fairly mundane business use case w/AI technology [sic], and the resulting questions of ethics & expectations," the Stanford researcher DiResta wrote in a tweet thread reacting to the investigation.

"What are our assumptions when we encounter others on social networks? What actions cross the line to manipulation?" she asked.

The researchers also notified LinkedIn about their findings. The company said it removed the fake accounts for breaking its policies after an investigation.

"Our policies make it clear that every LinkedIn profile must represent a real person. We are constantly updating our technical defences to better identify fake profiles and remove them from our community, as we have in this case," LinkedIn spokesperson Leonna Spilman said in a statement.

"At the end of the day, it's all about making sure our members can connect with real people, and we're focused on ensuring they have a safe environment to do just that."

Trustworthy faces

The fake profiles on the website, or elsewhere in the online sphere, are not easy to detect. The investigation says what created the fake salesperson profiles on LinkedIn is likely a generative adversarial network, or GAN, a technology that keeps improving. Since the technique's introduction in 2014, it has been used to analyse datasets of pictures of real people online in order to create ever more realistic images.
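
For readers unfamiliar with the technique: a GAN pairs two networks, a generator that produces candidates and a discriminator that tries to tell them from real data, and trains them against each other. The sketch below shows that adversarial loop in PyTorch on toy 2D points; the architectures, data, and training schedule are illustrative assumptions, not the system behind the LinkedIn profiles.

```python
# Minimal GAN training loop on toy 2D data (stand-in for face images).
import torch
import torch.nn as nn

latent_dim = 16
generator = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, 2))
discriminator = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=128):
    # Stand-in for a dataset of real photos: points from a fixed Gaussian.
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(2000):
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim))

    # Discriminator learns to tell real samples from generated ones.
    d_loss = bce(discriminator(real), torch.ones(real.size(0), 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(real.size(0), 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator learns to fool the discriminator.
    g_loss = bce(discriminator(fake), torch.ones(real.size(0), 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```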

"If you ask the average person on the internet, 'Is this a real person or synthetically generated?' they are essentially at chance, (relying on luck)" said Hany Farid, an expert in digital media forensics at the University of California, Berkeley, who co-authored a study with Sophie Nightingale of Lancaster University.

Farid's study previously found that AI-generated photos were designed to look more trustworthy than real faces.

Some methods may help regular internet users spot such AI-generated online content. One of them is V7 Labs' Google Chrome extension, which helps users spot fake profiles.

However, many people are unlikely to even suspect that the profiles they come across may be fake.

Farid said he finds the proliferation of AI-generated content worrying, not just the still images but also the video and audio content. He warned that it could foreshadow a new era of online deception.

Source: TRT World

See the article here:

The LinkedIn controversy over AI-generated accounts, explained - TRT World

AWS teams with THREAD for AI-enabled clinical trials management – Healthcare IT News

Posted: at 2:36 am

THREAD, which develops technology and offers consulting services for decentralized clinical trials, announced a new collaboration this week with Amazon Web Services.

WHY IT MATTERS
AWS will help develop new enhancements for the THREAD platform, bringing scalable automation and built-in AI to enable faster and more efficient trials through higher-quality data capture across the lifecycle of a clinical study.

In addition to improving access for research participants, the companies say they hope the collaboration will speed up the ability to offer and initiate co-created and configured trials by reducing the start-up time to onboard customers by up to 30%.

Another goal is to enable customers to reduce inefficiencies by 30% and achieve up to 25% cost savings when pre-completing data, significantly reducing data capture and removing source data verification.

The hope is to provide a more comprehensive view of participant data across studies, with enhanced security, AI support, and operational controls, and also to help customers more precisely assess studies' success by enabling real-time visibility into richer data streams and real-time grades on study performance.

THREAD is working with AWS Professional Services to design advanced machine learning architecture and AI models to automate processes for real-time data capture, auto-populating data workflows and more.

THE LARGER TREND
There's been big momentum for AI and machine learning in clinical trial management, especially since the pandemic, in the U.S. and around the world.

This past October, Cerner launched Enviza, a new operating unit focused on innovating new approaches to automated data management and expanding participation in clinical trials.

That same month, we offered an inside look at how Intel and ConsenSys Health are combining blockchain and AI for clinical trials management.

ON THE RECORD
"The breadth and depth of AWS's machine learning and cloud capabilities will help support THREAD teams as they work to automate processes, reduce inefficiencies, and monitor and support clinical trials," said Dan Sheeran, general manager, healthcare and life sciences at Amazon Web Services, in a statement.

"In collaboration with Amazon Web Services, we are further scaling our DCT platform with next-level automation, AI/ML offerings, and optimized features focused to meet the evolving needs of our customers, research sites, and participants," added THREAD CEO John Reites.

Twitter: @MikeMiliardHITN
Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.

Follow this link:

AWS teams with THREAD for AI-enabled clinical trials management - Healthcare IT News

Intel’s XeSS AI upscaling won’t be available until sometime in ‘early summer’ – The Verge

Posted: at 2:36 am

Intel's first Arc GPUs are launching today, but one of the biggest features for the new discrete graphics cards will be absent when the first Arc-powered laptops arrive: the company's XeSS AI-powered upscaling technology, which Intel says won't be available until sometime in early summer.

XeSS is meant to compete with other AI-based upscaling techniques, like Nvidia's own Deep Learning Super Sampling (DLSS), and promises to offer players better framerates without compromising on quality. (AMD's FidelityFX Super Resolution aims to offer similar results, too, but doesn't use the same super sampling methods as Intel and Nvidia.)

The goal of XeSS is similar to other upscaling techniques, offering players 4K-quality visuals without having to deal with the far more demanding hardware and power requirements for running actual 4K gameplay in real time (something that only the most powerful and pricey graphics cards like the RTX 3090 or the RX 6800 can really achieve right now).

Like Nvidia's DLSS, games have to be specifically optimized to work with XeSS, with Intel touting a list of titles that includes Death Stranding: Director's Cut, Legend of the Tomb Raider, Ghostwire: Tokyo, Chorus, Hitman 3, and more. And while all those games are already out, the fact that XeSS won't arrive until the summer means that you won't get Intel's upscaling benefits on them even if you do pick up an Arc-powered laptop now.

The silver lining is that the bulk of Intel's Arc products (including its more powerful laptop GPUs and its long-awaited desktop cards) also won't be arriving until later in the year. Intel's launch today is just for the company's least powerful Arc 3 discrete graphics for laptops. But hopefully, by the time XeSS does arrive in early summer, Intel will have some more powerful GPUs waiting for it.

See the original post here:

Intel's XeSS AI upscaling won't be available until sometime in 'early summer' - The Verge

For AI assistants to move forward, Siri and Alexa need to die – The Next Web

Posted: at 2:36 am

It's never easy saying goodbye. But it's obvious that the time has come. We need to ditch big tech's virtual assistants and calmly demand a little more autonomy in our AI.

Up front: The dream has always been to make personal assistants accessible to everyone. Since most of us can't afford our own human assistant, big tech decided to combine chatbots and natural language processing (NLP) to create a virtual version of the real thing.

Billions of people use these AI-powered tools every day. Whether it's Siri on iPhone, Google Assistant on Android, or Alexa on Amazon products, there's a good chance at least one of them has become a part of your everyday life.

So why on Earth would anyone want to get rid of them? Because you deserve so much better.

Background: Virtual assistants were supposed to evolve over time. Yet all we've seen in the past five years is fine-tuning and tweaks. Back in 2018, Google Assistant sometimes struggled to understand me. Now it usually catches what I'm saying.

And as nice as it is to use voice control to play music, turn the lights on, and send a text message, AI-powered voice assistants are too busy collecting data and pretending to have agency to do anything truly useful.

The root of the problem is that, unlike a human assistant with an NDA, you can't trust big tech's AI.

The reality: Current virtual assistants all live on servers. The companies that build and train the AI models that power them use your data to make them better. It feels like a win-win because the more you use your virtual assistants, the better they become for everyone.

But Google, Amazon, Apple, and all the others use the data you generate when you use your virtual assistants to train AI models across their respective companies.

The AI models powering virtual assistants aren't designed to provide the best possible assistant experience for users; they're designed to harvest data. They're biased towards features and capabilities that funnel the most useful data upstream. Essentially, they're Candy Crush. But instead of your attention, they want your data.

And, because these AI assistants have to service billions of humans across millions of possible linguistic, cultural, software, hardware, and networking platforms, there's no incentive for big tech to build models that conform to individual users' needs. They do what they do, and if that works for you, great. If not, you can choose not to use them.

But that's not how human assistants work. A good human assistant knows how to focus on a client and adapt to their needs.

The solution: Make virtual assistants personal. There's nothing stopping big tech, or an enterprising startup, from building AI systems that operate completely offline.

When the first generation of modern virtual assistants began showing up on smart speakers and flagship phones, they were relaying most of their processes from the cloud. Now, the majority of these devices have onboard AI chips complementing their processors.

At this point, we could have virtual assistants baked into our devices capable of performing every function they currently can, without the need to send personally identifiable data to a remote server.

Imagine, if you will, an open-source neural network built on self-supervised learning algorithms. Once you installed it on your device, it would essentially function as a medium between you and the digital world.

Assuming you were able to trust the AI, you could essentially give it power of attorney over your digital affairs. All it would take is a private networking protocol running through blockchain-based authentication.

And, most importantly of all, we could ditch the silly human personifications for virtual assistants. Without the need to go through the rigmarole of summoning a specific assistant, you could just talk to your gadgets, software, and web browser like the objects they are. TV on. TV off.
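
The "TV on. TV off." idea, commands interpreted entirely on the device with no personified assistant and no cloud round-trip, can be illustrated with a toy offline intent matcher. The device names and grammar below are hypothetical placeholders, not any shipping product.

```python
# Toy fully offline command handler: map short utterances to device actions
# locally, with no wake word, no cloud service, and no data leaving the device.
from typing import Callable, Dict

def make_device(name: str) -> Dict[str, Callable[[], str]]:
    return {
        "on": lambda: f"{name} switched on",
        "off": lambda: f"{name} switched off",
    }

DEVICES = {name: make_device(name) for name in ("tv", "lights", "speaker")}

def handle(utterance: str) -> str:
    words = utterance.lower().replace(".", "").split()
    device = next((w for w in words if w in DEVICES), None)
    action = next((w for w in words if device and w in DEVICES[device]), None)
    if device and action:
        return DEVICES[device][action]()
    return "command not recognized"

print(handle("TV on."))       # -> "tv switched on"
print(handle("lights off"))   # -> "lights switched off"
```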

As weird as it is to imagine in 2022, the only way for AI assistant technology to move forward is to kill the virtual-person-as-a-service paradigm and replace it with one where assistance is built on privacy and trust.

Original post:

For AI assistants to move forward, Siri and Alexa need to die - The Next Web

Akouos Reports Fourth Quarter and Full Year 2021 Financial Results and Provides Business Highlights – Yahoo Finance

Posted: at 2:36 am

- Advanced toward planned IND submissions for AK-OTOF in the first half of 2022 and AK-antiVEGF in 2022

- Presented data at ARO demonstrating potential of precision genetic medicine platform to address a broad range of inner ear conditions

- Expanded leadership team with appointment of otology leader Aaron Tward, M.D., Ph.D., as chief scientific officer

- Ended 2021 with a strong cash position of $232.5 million

BOSTON, March 29, 2022 (GLOBE NEWSWIRE) -- Akouos, Inc. (Nasdaq: AKUS), a precision genetic medicine company dedicated to developing potential gene therapies for individuals living with disabling hearing loss worldwide, today reports financial results for the fourth quarter and full year ended December 31, 2021 and provides business highlights.

"In 2021, we continued to execute on our core strategic objectives, ended the year with a strong cash position, and remained focused on our IND submission plans for AK-OTOF in the first half of 2022 and for AK-antiVEGF in 2022. In addition to AK-OTOF and AK-antiVEGF, we continue to leverage our genetic medicine platform to expand our pipeline and address a broader range of inner ear conditions, including some of the most common forms of hearing loss," said Manny Simons, Ph.D., M.B.A., co-founder, president, and chief executive officer of Akouos. "We recently presented nonclinical data at ARO that help support these future development plans, and we expanded our world-class team of experts with the appointment of renowned surgeon and scientist Aaron Tward, M.D., Ph.D., as our chief scientific officer. We believe the genetic medicines we are developing have the potential to create a new standard of care for, and to transform the lives of individuals and families with, disabling hearing loss."

Pipeline and Business Highlights

Continued progress toward IND submission readiness for lead programs AK-OTOF and AK-antiVEGF: Akouos continued to advance its lead product candidate, AK-OTOF, a gene therapy intended for the treatment of otoferlin gene (OTOF)-mediated hearing loss, and is on track to submit an investigational new drug (IND) application in the first half of 2022. Additionally, Akouos continues to plan for an IND submission in 2022 for AK-antiVEGF, a gene therapy candidate in preclinical development for the potential treatment of patients with vestibular schwannoma.

Applying genetic medicine platform to expand into broader range of inner ear conditions: Akouos is leveraging its multimodal genetic medicine capabilities to address a broad range of inner ear conditions, including those that are monogenic and those of complex etiology. The company recently presented nonclinical data at the Association for Research in Otolaryngology (ARO) 45th Annual Mid-Winter Meeting that help support future development of gene therapies targeting inner ear conditions.

Two nonclinical studies in non-human primates evaluating protein expression and tolerability support future clinical development of AK-antiVEGF for the treatment of vestibular schwannoma.

AAVAnc80 with a supporting cell selective promoter drives widespread GJB2 expression in supporting cells, while limiting expression in, and loss of, hair cells in mice. We continue to evaluate the most promising product candidate options in mice and non-human primates.

In vitro characterization of AAV-mediated RNA interference gene silencing and CRISPR/Cas9 gene editing methods demonstrates a reduction of protein expression for the gene of interest. We continue to consider targets for autosomal dominant nonsyndromic hearing loss.

We also continue to believe that AAVAnc gene therapy has the potential to restore hearing in individuals with a wide range of environmental hearing loss by regenerating hair cells from neighboring supporting cells. We have identified multiple factors that, when delivered in combination, result in new hair cell formation in neonate mice, and we plan to continue preclinical development work in 2022.

Expanded leadership team with appointment of Aaron Tward, M.D., Ph.D., as chief scientific officer: In March 2022, Akouos announced the appointment of surgeon and scientist Aaron Tward, M.D., Ph.D., as chief scientific officer. Dr. Tward brings deep experience in genetics, genomics, gene delivery, high-throughput sequencing technologies, and the clinical care of patients with conditions of the ear and skull base to Akouos. Dr. Tward had been a member of the Akouos scientific advisory board since 2018. In his new role as chief scientific officer, he will lead the research team and provide strategic scientific expertise to advance the company's precision genetic medicine platform.

Fourth Quarter and Full Year 2021 Financial Results

Cash Position: Cash, cash equivalents, and marketable securities were $232.5 million as of December 31, 2021, as compared to $308.0 million as of December 31, 2020.

Research and Development (R&D) Expenses: R&D expenses were $18.8 million for the fourth quarter of 2021 and $64.6 million for the full year ended December 31, 2021, compared to $8.0 million for the fourth quarter of 2020 and $34.3 million for the full year ended December 31, 2020. The increase was primarily due to the increased efforts in IND-enabling studies and increased manufacturing costs for AK-OTOF and AK-antiVEGF and the growth in the number of R&D employees and their related activities, as well as the expense allocated to R&D related to Akouos's leased facilities.

General and Administrative (G&A) Expenses: G&A expenses were $6.2 million for the fourth quarter of 2021 and $22.2 million for the full year ended December 31, 2021, compared to $4.6 million for the fourth quarter of 2020 and $14.6 million for the full year ended December 31, 2020. The increase was due to the growth in the number of G&A employees and other administrative expenses related to operating as a public company, the expense allocated to G&A related to Akouos's leased facilities, increased patent activities, and increases in professional fees related to legal and accounting services.

Net Loss: Net loss was $24.9 million, or $0.72 per share, for the fourth quarter of 2021 and $86.7 million, or $2.52 per share, for the full year ended December 31, 2021, compared to $12.5 million, or $0.37 per share, for the fourth quarter of 2020 and $48.6 million, or $2.77 per share, for the full year ended December 31, 2020.

About Akouos

Akouos is a precision genetic medicine company dedicated to developing gene therapies with the potential to restore, improve, and preserve high-acuity physiologic hearing for individuals living with disabling hearing loss worldwide. Leveraging its precision genetic medicine platform that incorporates a proprietary adeno-associated viral (AAV) vector library and a novel delivery approach, Akouos is focused on developing precision therapies for forms of sensorineural hearing loss. Headquartered in Boston, Akouos was founded in 2016 by leaders in the fields of neurotology, genetics, inner ear drug delivery, and AAV gene therapy.

Forward-Looking Statements

Statements in this press release about future expectations, plans and prospects, as well as any other statements regarding matters that are not historical facts, may constitute "forward-looking statements" within the meaning of The Private Securities Litigation Reform Act of 1995. These statements include, but are not limited to, statements relating to the initiation, plans, and timing of our future clinical trials and our research and development programs, and the timing of our IND submissions for AK-OTOF and AK-antiVEGF. The words "anticipate," "believe," "continue," "could," "estimate," "expect," "intend," "may," "plan," "potential," "predict," "project," "should," "target," "will," "would," and similar expressions are intended to identify forward-looking statements, although not all forward-looking statements contain these identifying words. Actual results may differ materially from those indicated by such forward-looking statements as a result of various important factors, including: our limited operating history; uncertainties inherent in the development of product candidates, including the initiation and completion of nonclinical studies and clinical trials; whether results from nonclinical studies will be predictive of results or success of clinical trials; the timing of and our ability to submit applications for, and obtain and maintain regulatory approvals for, our product candidates; our expectations regarding our regulatory strategy; our ability to fund our operating expenses and capital expenditure requirements with our cash, cash equivalents, and marketable securities; the potential advantages of our product candidates; the rate and degree of market acceptance and clinical utility of our product candidates; our estimates regarding the potential addressable patient population for our product candidates; our commercialization, marketing, and manufacturing capabilities and strategy; our ability to obtain and maintain intellectual property protection for our product candidates; our ability to identify additional products, product candidates, or technologies with significant commercial potential that are consistent with our commercial objectives; the impact of government laws and regulations and any changes in such laws and regulations; risks related to competitive programs; the potential that our internal manufacturing capabilities and/or external manufacturing supply may experience delays; the impact of the COVID-19 pandemic on our business, results of operations, and financial condition; our ability to maintain and establish collaborations or obtain additional funding; and other factors discussed in the "Risk Factors" included in the Company's Quarterly Report on Form 10-Q for the three months ended September 30, 2021 filed with the Securities and Exchange Commission on November 9, 2021, and in other filings that the Company makes with the Securities and Exchange Commission in the future. Any forward-looking statements contained in this press release speak only as of the date hereof, and the Company expressly disclaims any obligation to update any forward-looking statement, whether as a result of new information, future events or otherwise.

Condensed Consolidated Balance Sheet Data (Unaudited)
(in thousands)

                                                     December 31, 2021    December 31, 2020
Cash, cash equivalents and marketable securities     $ 232,452            $ 308,010
Total assets                                            278,755              333,350
Total liabilities                                        45,105               22,736
Total stockholders' equity                              233,650              310,614

Condensed Consolidated Statements of Operations and Comprehensive Loss (Unaudited)
(in thousands, except share and per share data)

                                   Three Months Ended December 31,    Years Ended December 31,
                                        2021           2020                2021          2020
Operating expenses:
  Research and development        $   18,819     $    7,977          $   64,595    $   34,297
  General and administrative           6,158          4,646              22,226        14,583
Total operating expenses              24,977         12,623              86,821        48,880
Loss from operations                 (24,977)       (12,623)            (86,821)      (48,880)
Other income (expense):
  Interest income                        326            366               1,872           567
  Other expense, net                    (288)          (291)             (1,722)         (287)
Total other income, net                   38             75                 150           280

See the rest here:
Akouos Reports Fourth Quarter and Full Year 2021 Financial Results and Provides Business Highlights - Yahoo Finance
