
Category Archives: Ai

KAID Health Announces Series A to Fuel Growth of its AI-Powered Provider/Payer Whole Chart Analysis Platform – Business Wire

Posted: May 25, 2022 at 3:51 am

BOSTON--(BUSINESS WIRE)--KAID Health, makers of an artificial intelligence-enabled clinical analysis and provider engagement platform, today announced its $4.25 million Series A funding. The financing was led by prominent healthcare IT investors, including Activate Venture Partners and Martinson Ventures. Boston Millenia Partners, Brandon Hull, Howard Landis, and KAID Health's Board of Directors also participated. The new investment brings KAID Health's total capital raised to $6.45 million. John Martinson and Dana Callow, of Martinson Ventures and Boston Millenia Partners, respectively, are now observers to KAID Health's Board of Directors.

KAID Health allows providers to profit from delivering more informed, coordinated care. Its natural language processing-centric Whole Chart Analysis solution integrates with the electronic medical record (EMR) to identify high-value tactical care and coding interventions. The solution pushes identified gaps into the providers' workflow, making it easy for them to intervene appropriately. At the same time, the system can financially reward providers for their timely completion of assigned tasks.

"KAID Health is solving three fundamental problems facing providers and payers today. First, how to make clinicians more efficient. Second, how to translate that efficiency into more cost-effective care. And lastly, how to grow both provider and payer revenues to capture this newly created value," explained Kevin Agatstein, founder and CEO of KAID Health. "By finding the right intervention and simplifying how the work gets done, KAID Health saves clinicians time, reduces waste, and ensures all care is paid for appropriately. Built in conjunction with leading provider organizations, we have already created over 10 times the return on investment for our customers."

Today, KAID Health's technology is used for a wide variety of use cases in an array of clinical settings. For example, KAID Health improves Medicare Advantage coding accuracy and completeness at a large multi-specialty group. The platform helps automate quality metric reporting for primary care providers. The same solution streamlines chart review for prior authorization, and identifies pre-operative risk factors at a large academic medical center.

"KAID Health combines the deep industry expertise and the technology prowess needed to transform how providers and payers can collaborate to improve care," said Todd Pietri, KAID Health's board member and lead investor.

With the funding, KAID Health will bolster its market footprint among providers, while furthering collaboration between providers and health plans, payers, and Accountable Care Organization partners. The company will also expand its Boston, MA office and will hire new team members across a variety of job roles.

"We know there is tremendous value locked in EMRs, including the ability to reduce costs and increase revenues," said John Martinson of Martinson Ventures. "KAID Health has already proven it can translate this data into profitable interventions. We are proud to support its expansion."

About KAID Health

KAID Health makes care delivery more efficient, effective, and profitable for providers and their payers and Accountable Care Organization partners. Its Whole Chart Analysis platform extracts all relevant data from electronic medical records, including structured data and text, using artificial intelligence and natural language processing. The solution identifies the patient care interventions needed for providers to achieve their clinical, financial, or operational objectives. In parallel, KAID Health extends to payers a comprehensive view of members' health by combining claims and EMR data. Today, KAID Health's technology is used by leading providers, health systems, academic medical centers, and payers to automate a variety of workflows, including coding accuracy, quality measurement, prior authorization support, and pre-operative assessment. The company was founded by a veteran team of healthcare information technology and population health innovators. It is based in Boston, MA. To learn more, visit http://www.kaidhealth.com.


Apple Watch ECG Readings Plus AI Detect Weak Hearts – Medical Device and Diagnostics Industry

Posted: at 3:51 am

Apple caused quite a stir in 2018 when it unveiled electrocardiogram (ECG) capability on the Apple Watch Series 4, with some critics pointing out that it wasn't much different from what AliveCor had been doing for years.

Four years later, researchers at the Mayo Clinic in Rochester, MN, have not only accepted the role that the Apple Watch ECG app plays in the market, but they've developed an artificial intelligence (AI) algorithm that's programmed to interpret single-lead ECG tracings from an Apple Watch to more effectively identify patients who are living with a weak heart pump, or left ventricular systolic dysfunction.

A condition that affects 2% to 3% of people worldwide and up to 9% of people over age 60, left ventricular systolic dysfunction sometimes produces no symptoms. "Earlier detection can improve outcomes," says Paul Friedman, MD, chair of the department of cardiovascular medicine at Mayo Clinic.

"On average, the mortality at five years is approximately 5% for stage B, no symptoms, and 25% for stage C, overtly symptomatic heart failure," Friedman said. "What is important is that once we know a weak heart pump is present, there are many life-saving and symptom-preventing treatments available. It is absolutely remarkable that AI transforms a consumer watch ECG signal into a detector of this condition, which would normally require an expensive, sophisticated imaging test, such as an echocardiogram, CT scan, or MRI."

Through this technology, it's possible that the number of patients diagnosed will increase.

"Epidemiologic studies in the United States and Europe have generally shown that approximately half of left ventricular systolic dysfunction has no symptoms, so likely a substantial number of new cases will be uncovered," Friedman said. "But whether the proportion of the total population will change and the extent of regional variation is not clear."

More than 2,400 patients participated in a recent decentralized, prospective study. A standard ECG placed on the chest, arms, and legs was used to create a tracing to evaluate the heart's electrical signals. To interpret signals generated from the single lead on the watch, researchers modified an established 12-lead algorithm for low ventricular ejection fraction (i.e., the weak heart pump). To adapt the 12-lead algorithm to work with a single-lead watch signal, an adaptation technique was created to translate the single-lead readings into understandable signals for the algorithm.

Participants were required to download an app that securely transferred watch ECGs. Participants from 46 states and 11 countries securely transmitted 125,610 ECGs during a six-month period. The average app use was about two times per month and overall participation was high, as 92% of patients used the app more than once. Researchers chose the cleanest ECG readings. Study participation demonstrated the possibility for a scalable tool to be developed to screen and monitor heart patients for this condition wherever they are located, according to researchers.

"Approximately 420 patients had a watch ECG recorded within 30 days of a clinically ordered echocardiogram, said Itzhak Zachi Attia, PhD, lead AI scientist in the department of cardiovascular medicine at Mayo Clinic. We took advantage of those data to see whether we could identify a weak heart pump with AI analysis of the watch ECG. While our data are early, the test had an area under the curve of 0.88, meaning it is as good as or slightly better than a medical treadmill test.

Researchers worked with the Mayo Clinic's Center for Digital Health to develop the app for the study, which securely sent all ECGs as they were recorded by patients to a data platform where they were analyzed.

"Our next steps include global prospective studies to test this prospectively in more diverse populations and demonstrate medical benefit, Friedman said. This is what the transformation of medicine looks like: inexpensively diagnosing serious disease from your sofa."

While the research did not compare the use of the technology by age group, Friedman said the data suggest there could be an increased sense of trust and engagement with digital technology among older generations. The mean age of participants was 53 years, with the oldest patient being 94.


Aidoc Partners with Gleamer to Expand the Use of AI within Health Systems and Improve X-Ray Imaging Efficiency USA – English – USA – English – PR…

Posted: at 3:51 am

Aidoc broadens its X-ray AI offering with Gleamer's BoneView solution, empowering health systems in the U.S. to address high imaging volumes, across multiple imaging modalities, amid the healthcare labor shortage

NEW YORK, May 24, 2022 /PRNewswire/ -- Aidoc, the leading provider of healthcare AI solutions, today announced an agreement with Gleamer, a French medtech company pioneering the use of AI technology in the practice of radiology, to integrate Gleamer's BoneView solution for X-ray. The onboarding of Gleamer's AI BoneView X-ray solution expands Aidoc's venture into the X-ray modality, which began with a recent FDA clearance for triage and notification of pneumothoraces.

"After receiving our FDA clearance in early March and entering the U.S. commercial market, this partnership with Aidoc is an exciting milestone for our company," says Christian Allouche, CEO and co-founder of Gleamer. "Considering the existing fatigue and shortages experienced in the U.S. and that fracture interpretation errors can represent up to 24 percent of harmful diagnostic errors seen in the ER, we anticipate that the integration will benefit health systems looking to improve imaging efficiency across all their facilities."

With the new integration, Aidoc's AI platform now offers clinicians a tool for assisting in the identification and localization of fractures in limbs, pelvis, thoracic and lumbar spine, and rib cage in X-ray images, solidifying Aidoc's foray into the X-ray space and diversifying Aidoc's AI offering. Aidoc's end-to-end AI platform, designed to empower hospitals to seamlessly integrate more AI solutions and orchestrate them across an entire network of facilities, already includes numerous vetted third-party AI vendors, including Imbio, Riverain, Subtle, Icometrix and ScreenPoint.

X-ray accounts for a majority of imaging procedures in hospitals in the U.S., with over 152 million performed annually. Coupled with high imaging volumes, health systems across the U.S. are facing high labor costs due to the "great resignation" of healthcare workers and an on-going shortage of radiologists. Gleamer's AI solutions have been clinically demonstrated to help reduce the reading time of appendicular X-rays, while also increasing the sensitivity and specificity of radiologist interpretations of appendicular fractures by 8.7% and 4.1%, respectively.

"Considering that bone trauma X-rays account for a high percentage of hospital imaging volume, the integration of Gleamer's solution is an important step in our mission to provide comprehensive coverage with AI," says Tom Valent, VP of Business Development at Aidoc. "Powered by the enterprise-grade scalability of our AI platform, Gleamer's solution is another highly valued addition to our suite of industry-leading AI solutions that, combined, empower health systems to improve patient outcomes across multiple modalities and service lines within the hub-and-spoke model."

About Aidoc

Aidoc (aidoc.com) is the leading provider of artificial intelligence healthcare solutions that empower physicians to expedite patient treatment and enhance efficiencies. Aidoc's AI-driven solutions analyze medical images directly after the patient is scanned, suggesting prioritization of time-sensitive pathologies, as well as notifying and activating multidisciplinary teams to reduce turnaround time, shorten length of stay, and improve overall patient outcomes.

Media Contact: Netanya Stein, WestRay Communications, [email protected], +972534506487

About Gleamer

Gleamer's first globally available AI software, BoneView, recently received clearance from the U.S. Food and Drug Administration and CE Mark Class IIa certification in Europe. Studies by world-leading radiologists and academic medical doctors have shown that BoneView improves detection of fractures in X-ray images, providing healthcare professionals with a safe, reliable, time-saving concurrent reading. Gleamer develops a suite of AI solutions for radiology that encapsulate medical-grade expertise. The company wants to support imaging users to secure diagnoses for all patients and at all times, while improving efficiency. Gleamer's AI Companions are directly integrated in the users' usual reading environment and act as an automated and transparent second reading to improve diagnostic accuracy in X-ray imaging. Gleamer's solutions are currently being used across 14 countries in more than 300 institutions.

For more information: http://www.gleamer.ai

Media Contact: Albane Grandjean, [email protected], +33612680126

SOURCE Aidoc


AI’s role is poised to change monumentally in 2022 and beyond – TechCrunch

Posted: May 21, 2022 at 6:48 pm

Shashank Srivastava, Contributor

The latest developments in technology make it clear that we are on the precipice of a monumental shift in how artificial intelligence (AI) is employed in our lives and businesses.

First, let me address the misconception that AI is synonymous with algorithms and automation. This misconception exists because of marketing. Think about it: When was the last time you previewed a new SaaS or tech product that wasn't fueled by AI? This term is becoming something like "all-natural" on food packaging: ever-present and practically meaningless.

Real AI, however, is foundational to supporting the future of how businesses and individuals function in the world, and a huge advance in AI frameworks is accelerating progress.

As a product manager in the deep learning space, I know that current commercial and business uses of AI don't come close to representing its full or future potential. In fact, I contend that we've only scratched the surface.

The next generation of AI products will extend the applications for ambient computing.

We've all grown accustomed to asking Siri for directions or having Alexa manage our calendar notifications, and these systems can also be used to automate tasks or settings. That is probably the most accessible illustration of a form of ambient computing.

Ambient computing involves a device performing tasks without direct commands; hence the "ambient," or the concept of it being in the background. In ambient computing, the gap between human intelligence and artificial intelligence narrows considerably. Some of the technologies used to achieve this include motion tracking, wearables, speech-recognition software and gesture recognition. All of this serves to create an experience in which humans wish and machines execute.

The Internet of Things (IoT) has unlocked continuous connectivity and data transference, meaning devices and systems can communicate with each other. With a network of connected devices, it's easy to envision a future in which human experiences are effortlessly supported by machines at every turn.

But ambient computing is not nearly as useful without AI, which provides the patterning, helping software learn the norms and trends well enough to anticipate our routines and accomplish tasks that support our daily lives.

On an individual level, this is interesting and makes life easier. But as professionals and entrepreneurs, it's important to see the broader market realities of how ambient computing and AI will support future innovation.


AI experts are in short supply. That’s making the skills crisis worse – ZDNet

Posted: at 6:48 pm

Written by Owen Hughes, Senior Editor

Owen is a senior editor at ZDNet. Based in London, UK, Owen covers software development, IT workforce trends and the evolution of tech and work.

IBM is warning about the slow progress being made in some countries' adoption of artificial intelligence (AI) technology, which could prevent them from solving some of society's toughest challenges.

A study by IBM concluded that the UK is falling behind its European neighbours in AI adoption, with employers blaming a lack of skills in areas like software engineering, problem solving, and knowledge of programming languages.

A survey of 7,500 business leaders by IBM found that about a third of UK respondents said their company had accelerated their rollout of AI during the past two years, compared with a European average of 49%.

If UK businesses aren't able to speed up their adoption of AI-like technologies, such as machine learning and automation, companies will find it difficult to achieve their ambitious goals for sustainability, IBM warned.

In a somewhat Inception-style twist, the lack of AI-ready skills also means businesses can't harness AI tech to solve the shortage of labour and skills they are already facing.


Just over 40% of UK companies surveyed by IBM said they plan to use AI to retrain their workforce (the second-highest priority for AI investment after research & development), while 59% plan to use automation tools to reduce manual or repetitive tasks.

The findings come as the UK government pursues its National AI Strategy, launched in September 2021, which aims to nurture the country's AI ecosystem and transition to an AI-enabled economy. The UK is ranked third globally for private investment into AI companies and is home to a third of Europe's total AI businesses.

Yet more than a third (36%) of UK companies surveyed have stalled their AI investments since 2020, versus 27% across Europe. Presumably, the COVID-19 pandemic played a part here, with businesses having to divert resources away from more ambitious projects to focus on essential, day-to-day operations.

But these projects could be critical in enabling UK companies to pursue their environmental, social and governance (ESG) goals in the face of a growing climate emergency.


More than half (58%) of UK companies surveyed are either "executing" (31%) or "planning to apply" AI (27%) to help meet their ESG targets, while 44% are either planning to invest in AI to address their sustainability goals (30%) or say that investing in AI for sustainability is among their top tech priorities for the next one to two years (14%).

The success of this approach falls on businesses being able to source and hire the skills they need to get more ambitious projects moving. "Talent can be found everywhere, but training opportunities unfortunately cannot," Ebru Binboga, director of data, AI and automation, IBM UK & Ireland, told ZDNet, adding that a lack of relevant training opportunities for people of all ages is a key factor behind the shortage of tech skills seen not just in the UK, but globally.

To get more technically skilled people into the workforce, the public and private sectors need to collaborate on providing education and training that keeps pace with market demands, demographic changes, and technological progress, said Binboga.

"As most businesses are still at a very early stage of adopting AI, there will continue to be a huge demand for the skills that are needed to fully integrate AI into an organization from building a modern data platform to developing sophisticated AI models," Binboga said.


AI Weekly: Is AI alien invasion imminent? – VentureBeat

Posted: at 6:48 pm



Is an AI alien invasion headed for Earth? The VentureBeat editorial staff marveled at the possibility this week, thanks to the massive online traffic earned by one Data Decision Makers community article, with its impossible-to-ignore title, "Prepare for arrival: Tech pioneer warns of alien invasion."

The column, written by Louis Rosenberg, founder of Unanimous AI, was certainly buoyed not only by its SEO-friendly title, but its breathless opener: "An alien species is headed for planet Earth and we have no reason to believe it will be friendly. Some experts predict it will get here within 30 years, while others insist it will arrive far sooner. Nobody knows what it will look like, but it will share two key traits with us humans: it will be intelligent and self-aware."

But a fuller read reveals Rosenberg's focus on some of today's hottest AI debates, including the potential for AGI in our lifetimes and why organizations need to prepare with AI ethics: "while there's an earnest effort in the AI community to push for safe technologies, there's also a lack of urgency. That's because too many of us wrongly believe that a sentient AI created by humanity will somehow be a branch of the human tree, like a digital descendant that shares a very human core. This is wishful thinking ... the time to prepare is now."

Coincidentally, this past week was filled with claims, counterclaims and critiques of claims around the potential to realize AGI anytime soon.

Last Friday, Nando De Freitas, a lead researcher at Google's DeepMind AI division, tweeted that "The Game is Over!" in the decades-long quest for AGI, after DeepMind unveiled its new Gato AI, which is capable of complex tasks ranging from stacking blocks to writing poetry.

According to De Freitas, Gato AI simply needs to be scaled up in order to create an AI that rivals human intelligence. Or, as he wrote on Twitter: "It's all about scale now! It's all about making these models bigger, safer, compute efficient, faster at sampling, smarter memory, more modalities, innovative data, on/offline ... Solving these challenges is what will deliver AGI."

Plenty of experts are pushing back on De Freitas' claims and those of others insisting that AGI or its equivalent is at hand.

Yann LeCun, the French computer scientist who is chief AI scientist at Meta, had this to say (on Facebook, of course):

About the raging debate regarding the significance of recent progress in AI, it may be useful to (re)state a few obvious facts:

(0) there is no such thing as AGI. Reaching Human Level AI may be a useful goal, but even humans are specialized.

(1) the research community is making some progress towards HLAI

(2) scaling up helps. It's necessary but not sufficient, because ...

(3) we are still missing some fundamental concepts

(4) some of those new concepts are possibly around the corner (e.g. generalized self-supervised learning)

(5) but we don't know how many such new concepts are needed. We just see the most obvious ones.

(6) hence, we can't predict how long it's going to take to reach HLAI.

Meanwhile, Gary Marcus, founder of Robust.AI and author of Rebooting AI, added to the debate on his new Substack, with its first post dedicated to the discussion of current efforts to develop AGI (including Gato AI), which he calls "alt intelligence":

Right now, the predominant strand of work within Alt Intelligence is the idea of scaling. The notion that the bigger the system, the closer we come to true intelligence, maybe even consciousness.

There is nothing new, per se, about studying Alt Intelligence, but the hubris associated with it is. I've seen signs for a while, in the dismissiveness with which the current AI superstars, and indeed vast segments of the whole field of AI, treat human cognition, ignoring and even ridiculing scholars in such fields as linguistics, cognitive psychology, anthropology and philosophy.

But this morning I woke to a new reification, a Twitter thread that expresses, out loud, the Alt Intelligence creed, from Nando de Freitas, a brilliant high-level executive at DeepMind, Alphabet's rightly-venerated AI wing, in a declaration that AI is "all about scale now."

Marcus closes by saying:

Let us all encourage a field that is open-minded enough to work in multiple directions, without prematurely dismissing ideas that happen to be not yet fully developed. It may just be that the best path to artificial (general) intelligence isn't through Alt Intelligence, after all.

As I have written, I am fine with thinking of Gato as an "Alt Intelligence" (an interesting exploration in alternative ways to build intelligence), but we need to take it in context: it doesn't work like the brain, it doesn't learn like a child, it doesn't understand language, it doesn't align with human values and it can't be trusted with mission-critical tasks.

It may well be better than anything else we currently have, but the fact that it still doesn't really work, even after all the immense investments that have been made in it, should give us pause.

It's nice to know that most experts don't believe the AGI alien invasion will arrive anytime soon.

But the fierce debate around AI and its ability to develop human-level intelligence will certainly continue on social media and off.

Let me know your thoughts!

Sharon Goldman, senior editor and writer

Twitter: @sharongoldman



AI for All: Experts Weigh In on Expanding AI’s Shared Prosperity and Reducing Potential Harms – uschamber.com

Posted: at 6:48 pm

Policymakers, technologists, and business leaders must work together to ensure that the prosperity from artificial intelligence is shared throughout society and the unintended harms are addressed and mitigated, said experts at the U.S. Chamber AI Commission field hearing in Palo Alto, CA.

"We're seeing a growth in AI systems that can function across multiple domains for the last decade ... this can lead to unanticipated and harmful outcomes," said Rep. Anna Eshoo (CA-18), kicking off the hearing with words of caution. "Policymakers, researchers, and leaders in the private sector need to collaborate to address these issues to ensure that AI advancement accrues to the benefit of society, not at the cost of it."

She added, "As AI becomes more powerful, we have to keep refocusing technological development on our values to ensure that technology improves society." Many experts testifying throughout the hearing echoed similar points, advocating for widening the shared prosperity that would result from AI and cautioning the Commission on AI's potential harm to workers and marginalized communities.

Rep. Anna Eshoo (CA-18) and Rep. Ro Khanna (CA-17) provided remarks at the U.S. Chamber AI Commission field hearing in Palo Alto, CA, on May 9, 2022.

Erik Brynjolfsson, Senior Fellow at the Stanford Institute for Human-Centered AI (HAI) and Director of the Stanford Digital Economy Lab, articulated the difference between automation and augmentation when it comes to jobs: "Economists have made a distinction between economic substitute and economic complement," he testified. "Substitutes tend to worsen economic inequality and increase concentration of economic and political power."

Moreover, he stressed that "Most of the progress over time has come not from automating things we are already doing, but from doing new things ... When technology complements humans ... it increases wages and leads to more widely shared prosperity."

Katya Klinova, Head of AI, Labor and the Economy at The Partnership on AI, also advocated for the path of AI "augmenting and complementing the skills of a much broader group of workers, making them more valuable for the labor market, boosting their wages, improving economic inclusion, and ultimately creating a more competitive economy," she said.

"The regular discourse is overwhelmingly focused on how workers should prepare for the age of AI, and how governments and institutions can help them to prepare," Klinova testified. "By putting all the burden of adjustment on the workers and the government, we are forgetting that the technology too can and should adjust to the needs and realities faced by communities and the workforce."

However, "The issue is that in practice, it is often quite difficult to tell apart worker-augmenting technologies from worker-replacing technologies." Because of that, Klinova asserted, any company today that wants to claim their technology augments workers can just do it. "It's a free-for-all claim that is not necessarily substantiated by anything."

Alka Roy, Founder of the Responsible Innovation Project and RI Labs, underscored a trust gap that results from this kind of discrepancy between having best practices, audits, and governance, and how and where they are actually used. "Some reports ... cite that even companies that have AI principles and ethics, only 9% to 20% of them publicly admit to having operationalized these principles," Roy said.

To address these issues, Klinova advocated for "invest[ing] in alternative benchmarks ... and in building institutions that allow for empowered participation of workers in the development and deployment of AI." She added, "Workers are ultimately the best people to tell apart which technologies help them and make their day better, and which ones look good on paper in marketing materials, but in practice enable exploitation or over-surveillance."

In talking about the impact of AI on workers, Doug Bloch, Political Director at Teamsters Joint Council 7, referenced his time serving on Governor Newsom's Future of Work Commission: "I became convinced that all the talk of the robot apocalypse and robots coming to take workers' jobs was a lot of hyperbole. I think the bigger threat to the workers I represent is that robots will come and supervise through algorithms and artificial intelligence."

"We have to empower workers to not only question the role of technology in the workplace, but also to use tools such as collective bargaining and government regulation to make sure that workers also benefit from its deployment," he said.

In his testimony, Bloch emphasized that workers aren't afraid of technology, but they will question its purpose and make sure that it's regulated, and that workers have a voice in the process. "The biggest question for organized labor and worker advocates right now ... is how does all of this technology relate to production standards, to production, and to discipline?"

Bloch referenced an existing contract to show how AI and labor may co-exist. Terms provided a safety net for workers by ensuring that they can't be fired by surveillance technology or an algorithm. A supervisor has to directly observe dishonest behavior to allow a firing. He also underlined the importance of ensuring that the data workers generate, which helps to inform decisions and increase profits for the company, won't be used against them.

Bloch closed by stating, "If the fight of the last century was for workers to have unions and protections like OSHA, I honestly believe that the fight of this century for workers will be around data, and that workers should have a say in what happens with it and to share in the profit with it."

Jacob Snow, Staff Attorney for the Technology and Civil Liberties Program at the ACLU of Northern California, told the Commission that the critical discussions on AI are "not narrow technical questions about how to design a product. They are social questions about what happens when a product is deployed to a society, and the consequences of that deployment on people's lives."

He explained why he believed facial recognition should be on the other side of the technological red line: "There are applications of facial recognition, which I think at least officially seem like they might be valuable: finding a missing person or tracking down a dangerous criminal, for example. But ... any tool that can find a missing person can find a political dissident. Any tool that can pick a criminal out of a crowd can do the same for an undocumented person or a person who has received reproductive healthcare." He cautioned, "We're living in a time when it's not necessary for civil rights and privacy advocates to say 'just imagine if the technology fell into the wrong hands.' It's going directly into the wrong hands after it's been built."

"We can think a little bit more broadly about what constitutes AI regulation: worker protections, housing support, privacy laws. All those frameworks put in place deeper social, health-related, and economic protections that limit the harm of algorithms," Snow testified.

Rep. Ro Khanna (CA-17), who provided concluding remarks, talked about the disparate impacts that AI will have in different communities across the United States. "This challenge is the central challenge for the country: How do we both create economic opportunity in places that have been totally left out, how do we build and revitalize a new middle class, and how do we have the benefits of technology be more widely shared?" In summary, the Congressman stated, "There's going to be 25 million of these new jobs in every field from manufacturing to farming to retail to entertainment. The question is, how do we make sure that they are a possibility for people in every community?"

To continue exploring critical issues around AI, the U.S. Chamber AI Commission will host further field hearings in the U.S. and abroad to hear from experts on a range of topics. The next hearing will be held in London, UK, on June 13. Previous hearings took place in Austin, TX, and Cleveland, OH.

Learn more about the AI Commission here.

Director, Policy, U.S. Chamber of Commerce Technology Engagement Center (C_TEC)


Can Administrators Ensure the Ethical Use of AI in K12 Education? – EdTech Magazine: Focus on K-12

Posted: at 6:48 pm

As with many technologies used in K12 learning environments, school leaders must guarantee that artificial intelligence is safe. In addition to the legal requirements placed on districts, there are ethical issues schools must consider before introducing new tech powered by AI and machine learning (ML).

To do this, there must first be an understanding of what AI looks like in education. "All of us have very different ideas for AI and algorithmic tools and machine learning and what these things are," says sava saheli singh, an eQuality-Scotiabank postdoctoral fellow in AI and surveillance at the University of Ottawa.

The concepts of AI and ML are nebulous, meaning that everyone understands them a bit differently. The internet, and companies selling AI-powered products, can sway people's understanding of this technology.

For school leaders to properly protect students using this tech, they must understand what AI and ML mean in the context of the products they're adopting. IT administrators should also understand how the AI is using data and how the tools affect different student populations.


The first consideration leaders must make is whether they understand the tools in which they're investing. IT departments should know what the tools are doing behind the scenes to produce the results they see.

"There's definitely a lack of understanding about what these systems are, how they're implemented, who they're for and what they're used for," singh says.

In understanding how AI works, IT leaders must remember that these systems rely on data.

"AI only works if you grab data. It only works if you grab data from everywhere you can find it, and the more data the better," says Valerie Steeves, a professor in the department of criminology at the University of Ottawa and principal investigator of the eQuality project.

Because AI and ML tools rely on data, admins must ensure they're building in student data protections to use this technology ethically. "AI that's rolling out now always comes with a price tag, and the price tag is your students' data," Steeves says.


Another ethical consideration is algorithmic bias in AI and ML technologies. When purchasing these solutions, it's important to remember that these programs are operating off data sets that frequently contain bias.

"There's a lot of bias involved in applying some of these tools," singh says. "A lot of these tools are made by specific people and with specific populations in mind. At a basic level, there's racial, gender, sexual orientation differences; there's a lot of different kinds of people, and a lot of these technologies either leave them out or include them in ways that are really harmful."

While the creators of these tools typically aren't intending to cause harm, the biases built into the data sets can discriminate against certain student populations.

"It creates a system where biases can play out in ways that are rampant, and it becomes ever more difficult to pull them back because the bias and the discriminatory use of the data is built into the algorithm," Steeves says.

One way to account for this bias is to acknowledge it. Teaching students to interact with AI should include lessons about how it was created and where these biases may appear.

The necessary ethical considerations shouldn't dissuade school leaders from using AI in education entirely. Teaching students to interact with this technology will set them up for success in higher education and future careers. There are also safe ways to use AI.

"A lot of the context in which algorithms and AI and machine learning are useful is when you're looking at a large corpus of data and trying to make sense of it," singh says. "Using algorithms and AI to answer a specific question can maybe give you a clue as to what the larger context might be. In that educational context, I think it's a useful tool."

When students are inputting data, instead of being the subjects from whom data is extracted, AI can be extremely beneficial.

"AI is most useful when no data is collected from the kids, and the kids are not embedded into some kind of surveillance system," Steeves says. "AI is just helping them facilitate their learning and their modeling of the world around them."



How AI is improving the web for the visually impaired – VentureBeat

Posted: at 6:48 pm


There are almost 350 million people worldwide with blindness or some other form of visual impairment who need to use the internet and mobile apps just like anyone else. Yet, they can only do so if websites and mobile apps are built with accessibility in mind and not as an afterthought.

Consider these two sample buttons that you might find on a web page or mobile app. Each has a simple background, so they seem similar.

In fact, they're a world apart when it comes to accessibility.

It's a question of contrast. The text on the light blue button has low contrast, so for someone with visual impairment like color blindness or Stargardt disease, the word "Hello" could be completely invisible. It turns out that there is a standard mathematical formula that defines the proper relationship between the color of text and its background. Good designers know about this and use online calculators to calculate those ratios for any element in a design.
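For readers curious what those calculators compute, here is a minimal sketch of the WCAG 2.x contrast-ratio formula: the relative luminance of each color, then the ratio of lighter to darker, each offset by 0.05. The example colors are illustrative stand-ins, not the exact colors of the buttons above.

```python
# WCAG 2.x contrast ratio for two 8-bit sRGB colors (illustrative sketch).
def relative_luminance(rgb):
    def linearize(channel):
        c = channel / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# White text on a light blue background vs. a dark blue background
print(round(contrast_ratio((255, 255, 255), (135, 206, 250)), 2))  # ~1.7, fails the 4.5:1 WCAG AA minimum
print(round(contrast_ratio((255, 255, 255), (0, 70, 140)), 2))     # ~9.3, comfortably passes
```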

So far, so good. But when it comes to text on a complex background like an image or a gradient, things start to get complicated and helpful tools are rare. Before today, accessibility testers have had to check these cases manually by sampling the background of the text at certain points and calculating the contrast ratio for each of the samples. Besides being laborious, the measurement is also inherently subjective, since different testers might sample different points inside the same area and come up with different measurements. This problem (laborious, subjective measurements) has been holding back digital accessibility efforts for years.

Artificial intelligence algorithms, it turns out, can be trained to solve problems like this and even to improve automatically as they are exposed to more data.

For example, AI can be trained to do text summarization, which is helpful for users with cognitive impairments; or to do image and facial recognition, which helps those with visual impairments; or real-time captioning, which helps those with hearing impairment. Apple's VoiceOver integration on the iPhone, whose main usage is to pronounce email or text messages, also uses AI to describe app icons and report battery levels.

Wise companies are rushing to comply with the Americans with Disabilities Act (ADA) and give everyone equal access to technology. In our experience, the right technology tools can help make that much easier, even for today's modern websites with their thousands of components. For example, a site's design can be scanned and analyzed via machine learning. It can then improve its accessibility through facial & speech recognition, keyboard navigation, audio translation of descriptions and even dynamic readjustments of image elements.

In our work, we've found three guiding principles that, I believe, are critical for digital accessibility. I'll illustrate them here with reference to how our team, in an effort led by our data science team leader Asya Frumkin, has solved the problem of text on complex backgrounds.

If we look at the text in the image below, we see that there is some kind of legibility problem, but it's hard to quantify overall, looking solely at the whole phrase. On the other hand, if our algorithm examines each of the letters in the phrase separately (for example, the "e" on the left and the "o" on the right), we can more easily tell for each of them whether it is legible or not.

If our algorithm continues to go through all the characters in the text in this way, we can count the number of legible characters in the text and the total number of characters. In our case, there are four legible characters out of eight in total. The ensuing fraction, with the number of legible characters as the numerator, gives us a legibility ratio for the overall text. We can then use an agreed-upon pre-set threshold, for example, 0.6, below which the text is considered unreadable. But the point is we got there by running operations on each piece of the text and then tallying from there.
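As a rough sketch of that tally (the per-character classifier here is a stand-in, not Evinced's trained model), the logic reduces to counting legible characters and comparing the fraction to the agreed threshold:

```python
# Legibility ratio sketch: classify each character crop, then tally.
def text_is_legible(char_crops, classify_char, threshold=0.6):
    """char_crops: iterable of character images; classify_char: returns True if legible."""
    crops = list(char_crops)
    legible = sum(1 for crop in crops if classify_char(crop))
    ratio = legible / len(crops)
    return ratio, ratio >= threshold

# Stand-in classifier: pretend 4 of 8 characters were judged legible
fake_results = [True, False, True, False, True, False, True, False]
ratio, readable = text_is_legible(fake_results, classify_char=lambda x: x)
print(ratio, readable)  # 0.5 False -> below the 0.6 threshold, so the text is flagged as unreadable
```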

We all remember Optical Character Recognition (OCR) from the 1970s and 80s. Those tools had promise but ended up being too complex for their originally intended purpose.

But there was a part of those tools called The CRAFT (Character-Region Awareness For Text) model that held out promise for AI and accessibility. CRAFT maps each pixel in the image to its probability of being in the center of a letter. Based on this calculation, it is possible to produce a heat map in which high probability areas will be painted in red and areas with low probability will be painted in blue. From this heat map, you can calculate the bounding boxes of the characters and cut them out of the image. Using this tool, we can extract individual characters from long text and run a binary classification model (like in #1 above) on each of them.
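To make the idea concrete, here is an illustrative sketch (not the actual CRAFT network, which is a trained deep model) of how a per-pixel character-center probability map can be thresholded into connected regions and turned into character bounding boxes for cropping:

```python
# Toy version of "heat map -> character boxes": threshold the probability map,
# label connected regions, and return their bounding boxes as (x0, y0, x1, y1).
import numpy as np
from scipy import ndimage

def character_boxes(heatmap, prob_threshold=0.5):
    mask = heatmap >= prob_threshold
    labeled, _ = ndimage.label(mask)  # connected components of hot pixels
    return [
        (sl[1].start, sl[0].start, sl[1].stop, sl[0].stop)
        for sl in ndimage.find_objects(labeled)
    ]

# Toy 6x12 "heat map" with two hot regions -> two candidate character boxes
heat = np.zeros((6, 12))
heat[1:4, 1:4] = 0.9
heat[2:5, 7:11] = 0.8
print(character_boxes(heat))  # [(1, 1, 4, 4), (7, 2, 11, 5)]
```

Each box can then be cropped out of the original image and passed to the binary legibility classifier described above.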

The model classifies individual characters in a straightforward binary way, at least in theory. In practice, there will always be challenging real-world examples that are difficult to quantify. What complicates the matter even more is the fact that every person, whether they are visually impaired or not, has a different perception of what is legible.

Here, one solution (and the one we have taken) is to enrich the dataset by adding objective tags to each element. For example, each image can be stamped with a reference piece of text on a fixed background prior to analysis. That way, when the algorithm runs, it will have an objective basis for comparison.
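A hedged sketch of how such a reference stamp might be applied; the patch size, reference string and placement are assumptions for illustration, not Evinced's implementation:

```python
# Paste a reference string on a known high-contrast background into each image
# before analysis, so the legibility model always has a calibration sample.
from PIL import Image, ImageDraw

def stamp_reference(image, text="REF 0123", corner=(0, 0)):
    patch = Image.new("RGB", (120, 24), color=(255, 255, 255))  # fixed white background
    ImageDraw.Draw(patch).text((4, 4), text, fill=(0, 0, 0))     # black reference text, default font
    stamped = image.copy()
    stamped.paste(patch, corner)
    return stamped

screenshot = Image.new("RGB", (640, 480), color=(90, 120, 200))  # stand-in for a real screenshot
stamp_reference(screenshot).save("stamped_example.png")          # hypothetical output path
```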

As the world continues to evolve, every website and mobile application needs to be built with accessibility in mind from the beginning. AI for accessibility is a technological capability, an opportunity to get off the sidelines and engage, and a chance to build a world where people's difficulties are understood and considered. In our view, the solution to inaccessible technology is simply better technology. That way, making websites and apps accessible is part and parcel of making websites and apps that work, but this time, for everybody.

Navin Thadani is cofounder and CEO of Evinced.



Everlaw Offers Half-Day Symposium on AI-Driven Ediscovery in Chicago, NYC and LA – PR Newswire

Posted: at 6:48 pm

Two CLE-Eligible Sessions Offered at In-Person Events from June 1 to 15; Designed to Deliver Tips and Insights for Ediscovery Success in the New AI World

OAKLAND, Calif., May 20, 2022 /PRNewswire/ -- Everlaw, the cloud-native investigation and litigation platform, today announced the kickoff of its Connect events, an engaging, in-person symposium for legal professionals hosted jointly with EDRM. The half-day events pack in networking opportunities, CLE-eligible panels, great food and cocktails, sneak peeks of the latest in industry-leading artificial intelligence (AI) solutions for ediscovery, and a special deep dive on trends in technology assisted review (TAR) with AI and ediscovery expert Dr. Maura R. Grossman.

Register here for Everlaw Connect.

"Today's legal professions who understand and wield the power of machine learning and AI for ediscovery are already reaping a competitive advantage in their cases and careers," said Chuck Kellner, ediscovery pioneer and Everlaw strategy leader. "Fluency in emerging tech is now a must-have for client and professional success. Our Connect events are designed to deliver a quick leg up on the rapidly evolving world of AI discovery with tech demos, tools and advice from both peers and world-leading experts. Bring your growth mindset and curiosity, and you'll be richly rewarded."

The Connect events' half-day sessions run from noon to 6:30 p.m. local time. They are held in Chicago (June 1), New York City (June 8) and Los Angeles (June 15).

Everlaw and EDRM designed this symposium for legal professionals who want to return to valuable in-person peer networking and educational opportunities, with great food, drink and company.

Everlaw will host CLE-eligible panels on:

Special featured speakers include:

Day 2: On June 2 in Chicago, June 9 in NYC and June 16 in LA, the Everlaw User Education team welcomes current customers for an additional half-day interactive training experience. The training will focus on saving users time and money, with guidance on how to incorporate AI into their review process and how to be more efficient throughout the life of a case on Everlaw. Users will have the chance to work together to share insight into how they have tackled common problems, and will receive guidance from the experts on Everlaw's User Education team on how to deliver the best results using the Everlaw platform.

Register here for Customer Education

About Everlaw

Everlaw blends cutting-edge technology with modern design to help government entities, law firms and corporations solve the toughest problems in the legal industry. Everlaw is used by Fortune 100 corporate counsels and household brands like Hilton and Dick's Sporting Goods, 91 of the Am Law 200 and all 50 U.S. state attorneys general. Based in Oakland, California, Everlaw is funded by top-tier investors, including Andreessen Horowitz, CapitalG, H.I.G. Growth Partners, K9 Ventures, Menlo Ventures, and TPG Growth.

Learn more at https://www.everlaw.com.

Media Contact: Colleen Haikes, [email protected]

SOURCE Everlaw

