
Category Archives: Ai

Katia Walsh on ‘permeating Levi’s with the best digital data and AI capabilities’ – Glossy

Posted: September 14, 2022 at 1:11 am

The retail industry is increasingly integrating AI efforts into business strategies, and Katia Walsh, chief global strategy and AI officer at Levi Strauss & Co., is at the forefront of those changes. Walsh has been with Levi's for nearly four years and has led the charge of melding Levi's values with innovative technological capabilities to both drive change and increase revenue.

Walsh has a strong track record. Prior to Levi's, Walsh's work contributed to transforming companies across many industries in 30-plus countries. She holds a Ph.D. in strategic communication with a specialization in quantitative methodology.

"I joined Levi's because of what the brand symbolizes for me. Especially growing up in a communist country, it meant so much more than clothes, and I love the clothes. It's also a symbol of freedom, of democracy, of the unattainable. To this day, if you ask people in Eastern Europe what some of the strongest brands are, Levi's tops that list. I also joined Levi's because of its DNA of innovation. It has always stood up for making an impact on the world," Walsh said on the latest episode of the Glossy Podcast.

The executive credits developing three particular passions as the key to her professional success: data and information, technology's ability to amplify information, and the power of machine learning to analyze it all and drive desired outcomes.

Below are additional highlights from the conversation, which have been lightly edited for clarity.

Driving change
"I cannot drive change alone. A big part of my job and my day, every single day, is spent on education, communication and aligning the goals of the organization with those of my partners and stakeholders across the company. Ultimately, what we do in digital data and AI with the capabilities we're building has to be in service of the business, our commercial markets and the functions. It's my goal to permeate the entirety of Levi Strauss & Company with the best of digital data and AI capabilities."

Corporate rules no longer apply
"Digital transformation is inherently something that breaks down silos. [The breakdown] has to happen because you're talking about overhauling an entire enterprise and changing the culture of a whole company, while building on what has worked so far and respecting what needs to be retained. I actually don't talk about 'my team' [as a structure], and I encourage people not to talk about 'my team.' We are all one team: team Levi Strauss & Company. We are all in service of our consumers and the future of this tremendous brand."

Transforming Levi's operations
"We focus on three key areas where digital data and AI capabilities can be particularly impactful. The first area is about making sure we do what's right for our consumers and customers through an external focus. Within that area, it can be anything from using a combination of digital data and AI capabilities to providing a predictive, proactive and personalized experience. Personalization is a big part of how we create smarter connections with our consumers. The second area I focus on is our internal operations. This is where the combination of digital data and AI capabilities has a tremendous opportunity for impact. That's where we talk about smarter commerce and smarter creation. The third area of focus is on what's next for the company; digital transformation helps us change the ways in which we do business today and also [shapes how Levi's] thinks about what new business models we can explore for the future."

See more here:

Katia Walsh on 'permeating Levi's with the best digital data and AI capabilities' - Glossy

Posted in Ai | Comments Off on Katia Walsh on ‘permeating Levi’s with the best digital data and AI capabilities’ – Glossy

It’s not just floods and fires: This AI forecasts how climate change will impact your city – Fast Company

Posted: at 1:11 am

If you buy a house in Miami or Phoenix in 2022, what will it be worth in five or ten years as climate change progresses? A new tech platform, Climate Alpha, uses a scenario forecaster to help answer that question, and to help predict where it might make sense to invest instead.

"We're blending the entire history of the American modern property market with climate modeling," says Parag Khanna, founder and CEO of Climate Alpha. While other tools exist to look at climate risk by location, the startup saw the need to look at a more complex picture. How are cities adapting and investing in infrastructure to protect against climate impacts? Where are jobs growing? Where are people moving now, despite extreme heat or wildfires or sea level rise?

"People are still moving to Miami, and they don't care about the climate," he says. "So you have to model human behavior, and you have to look at lots of factors beyond climate." It's not as simple as saying that because the sea level is rising, property values will drop; the platform uses machine learning and proprietary algorithms to look at hundreds of variables. It includes products like Climate Price, an estimate of a property's future value; Resilience Index, a dashboard that compares locations by their vulnerability and adaptation; and Alpha Finder, which lets users enter criteria to get an analysis of where to buy.

Khanna started working on the company while writing Move: The Forces Uprooting Us, a book exploring how people will migrate because of climate change, political upheaval, jobs, and other forces. "The point of the book was not to dwell on where people will leave," he says. "I was undertaking what I think of as an ethical challenge: to figure out where the human population can sustainably and equitably reside, as climate change and other conditions deteriorate over the next 10, 20, 30 years, at a scale of billions of people."

The platform is launching with data about the U.S., though it will expand globally. Large real estate investors can use it to plan investments; Lennar, one of the largest home construction companies in the U.S., is an early customer. A tool for individual homebuyers will launch soon. "You could enter your street address and the price that you bought your property at, and that's all you would need to do," he says. "And then we could give you our forecast for the fair market value of your home for any year."

Governments can also use the tools to help plan where to invest for climate migration, and where to encourage people to move. "My belief is that we should use this technology to identify those locations that meet the criteria for livability (affordability, space, lower cost, building zero-energy homes, a high share of renewable energy in the grid, clean air, and proximity to certain infrastructure) and to pre-design the settlement of the future."

Link:

It's not just floods and fires: This AI forecasts how climate change will impact your city - Fast Company

Posted in Ai | Comments Off on It’s not just floods and fires: This AI forecasts how climate change will impact your city – Fast Company

Are You Making These Deadly Mistakes With Your AI Projects? – Forbes

Posted: August 20, 2022 at 2:19 pm

Since data is at the heart of AI, it should come as no surprise that AI and ML systems need enough good quality data to learn. In general, a large volume of good quality data is needed, especially for supervised learning approaches, in order to properly train the AI or ML system. The exact amount of data needed may vary depending on which pattern of AI you're implementing, the algorithm you're using, and other factors such as in-house versus third-party data. For example, neural nets need a lot of data to be trained, while decision trees or Bayesian classifiers don't need as much data to still produce high-quality results.

So you might think more is better, right? Well, think again. Organizations with lots of data, even exabytes, are realizing that having more data is not the solution to their problems as they might expect. Indeed: more data, more problems. The more data you have, the more data you need to clean and prepare, the more data you need to label and manage, and the more data you need to secure, protect, and check for bias. Small projects can rapidly turn into very large projects when you start multiplying the amount of data. In fact, many times, lots of data kills projects.

Clearly the missing step between identifying a business problem and getting the data squared away to solve that problem is determining which data you need and how much of it you really need. You need enough, but not too much. "Goldilocks data" is what people often say: not too much, not too little, but just right. Unfortunately, far too often, organizations jump into AI projects without first developing an understanding of their data. Questions organizations need to answer include where the data is, how much of it they already have, what condition it is in, which features of that data are most important, whether to use internal or external data, data access challenges, requirements to augment existing data, and other crucial factors. Without these questions answered, AI projects can quickly die, drowning in a sea of data.

Getting a better understanding of data

In order to understand just how much data you need, you first need to understand how and where data fits into the structure of AI projects. One visual way of understanding the increasing levels of value we get from data is the DIKUW pyramid (sometimes also referred to as the DIKW pyramid) which shows how a foundation of data helps build greater value with Information, Knowledge, Understanding and Wisdom.

[Image: DIKW pyramid]

With a solid foundation of data, you can gain additional insight at the information layer, which helps you answer basic questions about that data. Once you have made basic connections between data points to gain informational insight, you can find patterns in that information, building knowledge of how the various pieces of information are connected. Building on the knowledge layer, organizations can get even more value at the understanding layer, which explains why those patterns are happening. Finally, the wisdom layer is where you gain the most value, with insight into cause and effect that can inform decision-making.

This latest wave of AI focuses most on the knowledge layer, since machine learning provides the insight on top of the information layer to identify patterns. Unfortunately, machine learning reaches its limits at the understanding layer, since finding patterns isn't sufficient for reasoning. We have machine learning, but not the machine reasoning required to understand why the patterns are happening. You can see this limitation in effect any time you interact with a chatbot. While machine learning-enabled NLP is really good at recognizing your speech and deriving intent, it runs into limitations when trying to understand and reason. For example, if you ask a voice assistant whether you should wear a raincoat tomorrow, it doesn't understand that you're asking about the weather. A human has to provide that insight to the machine, because the voice assistant doesn't know what rain actually is.
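The raincoat limitation can be sketched in a few lines. The toy below is purely hypothetical (no real assistant works this way; production systems use trained NLP models), but it shows the same wall: a pattern-matcher that maps explicit keywords to intents has no way to reason that a raincoat implies rain.

```python
# Toy keyword-based intent matcher -- a hypothetical illustration only.
# It recognizes explicit weather words but cannot infer that "raincoat"
# implies a question about rain; that step requires reasoning, not matching.

INTENT_KEYWORDS = {
    "weather": {"weather", "rain", "temperature", "forecast", "sunny"},
    "music": {"play", "song", "music"},
}

def detect_intent(utterance: str) -> str:
    words = set(utterance.lower().replace("?", "").split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:
            return intent
    return "unknown"  # no keyword matched, and no way to reason to one

print(detect_intent("What is the weather tomorrow?"))       # weather
print(detect_intent("Should I wear a raincoat tomorrow?"))  # unknown
```

Note that "raincoat" never matches the keyword "rain", because the matcher compares whole tokens; the human insight connecting the two is exactly what pattern-finding alone cannot supply.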

Avoiding Failure by Staying Data Aware

Big data has taught us how to deal with large quantities of data. Not just how it's stored, but how to process, manipulate, and analyze all that data. Machine learning has added more value by being able to deal with the wide range of unstructured, semi-structured, and structured data collected by organizations. Indeed, this latest wave of AI is really the big data-powered analytics wave.

But it's exactly for this reason that some organizations are failing so hard at AI. Rather than running AI projects with a data-centric perspective, they are focusing on the functional aspects. To get a handle on their AI projects and avoid deadly mistakes, organizations need a better understanding not only of AI and machine learning but also of the "V's" of big data: it's not just about how much data you have, but also the nature of that data. Those V's include volume, velocity, variety, and veracity, among others.

Organizations that are successful with AI are primarily those that are already successful with big data, drawing on decades of experience managing big data projects. The ones seeing their AI projects die are the ones coming at their AI problems with application-development mindsets.

Too Much of the Wrong Data, and Not Enough of the Right Data is Killing AI Projects

While AI projects may start off on the right foot, a lack of the necessary data, and a failure to understand and then solve real problems, is killing AI projects. Organizations are powering forward without a real understanding of the data they need or the quality of that data. This poses real challenges.

One of the reasons organizations are making this data mistake is that they are running their AI projects without any real approach other than Agile or app-dev methods. Successful organizations, however, have realized that data-centric approaches make data understanding one of the first phases of the project. The CRISP-DM methodology, which has been around for over two decades, specifies data understanding as the very next step once you determine your business needs. Building on CRISP-DM and adding Agile methods, the Cognitive Project Management for AI (CPMAI) methodology requires data understanding in its Phase II. Other successful approaches likewise require data understanding early in the project because, after all, AI projects are data projects. And how can you build a successful project on a foundation of data without running it with an understanding of that data? That's surely a deadly mistake you want to avoid.

Read more from the original source:

Are You Making These Deadly Mistakes With Your AI Projects? - Forbes

Posted in Ai | Comments Off on Are You Making These Deadly Mistakes With Your AI Projects? – Forbes

A critical review of the EU’s ‘Ethics Guidelines for Trustworthy AI’ – TNW

Posted: at 2:19 pm

Europe has some of the most progressive, human-centric artificial intelligence governance policies in the world. Compared to the heavy-handed government oversight in China or the Wild West-style "anything goes" approach in the US, the EU's strategy is designed to stoke academic and corporate innovation while also protecting private citizens from harm and overreach. But that doesn't mean it's perfect.

In 2018, the European Commission began its European AI Alliance initiative. The alliance exists so that various stakeholders can weigh in and be heard as the EU considers its ongoing policies governing the development and deployment of AI technologies.

Since 2018, more than 6,000 stakeholders have participated in the dialogue through various venues, including online forums and in-person events.


The commentary, concerns, and advice provided by those stakeholders have been considered by the EU's High-Level Expert Group on Artificial Intelligence, which ultimately created four key documents that serve as the basis for the EU's policy discussions on AI:

1. Ethics Guidelines for Trustworthy AI

2. Policy and Investment Recommendations for Trustworthy AI

3. Assessment List for Trustworthy AI

4. Sectoral Considerations on the Policy and Investment Recommendations

This article focuses on item number one: the EU's Ethics Guidelines for Trustworthy AI.

Published in 2019, this document lays out the bare-bones ethical concerns and best practices for the EU. While I wouldn't exactly call it a living document, it is supported by a continuously updated reporting system via the European AI Alliance initiative.

The Ethics Guidelines for Trustworthy AI provides a set of seven key requirements that AI systems should meet in order to be deemed trustworthy.

Per the document:

AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms need to be ensured, which can be achieved through human-in-the-loop, human-on-the-loop, and human-in-command approaches.

Neural's rating: poor

Human-in-the-loop, human-on-the-loop, and human-in-command are all wildly subjective approaches to AI governance that almost always rely on marketing strategies, corporate jargon, and disingenuous approaches to discussing how AI models work in order to appear efficacious.

Essentially, the "human-in-the-loop" myth involves the idea that an AI system is safe as long as a human is ultimately responsible for pushing the button or authorizing the execution of a machine learning function that could potentially have an adverse effect on humans.

The problem: Human-in-the-loop relies on competent humans at every level of the decision-making process to ensure fairness. Unfortunately, studies show that humans are easily manipulated by machines.

We're also prone to ignore warnings whenever they become routine.

Think about it: when's the last time you read all the fine print on a website before agreeing to the terms presented? How often do you ignore the check-engine light on your car, or the update alert on software that's still functioning properly?

Automating programs or services that affect human outcomes under the pretense that having a human in the loop is enough to prevent misalignment or misuse is, in this author's opinion, a feckless approach to regulation that gives businesses carte blanche to develop harmful models as long as they tack on a human-in-the-loop requirement for usage.

As an example of what could go wrong, ProPublica's award-winning "Machine Bias" article laid bare the propensity of the human-in-the-loop paradigm to cause additional bias, demonstrating how AI used to recommend criminal sentences can perpetuate and amplify racism.

A solution: the EU should do away with the idea of creating "proper oversight mechanisms" and instead focus on creating policies that regulate the use and deployment of black-box AI systems, preventing them from being deployed in situations where human outcomes might be affected unless there's a human authority who can be held ultimately responsible.

Per the document:

AI systems need to be resilient and secure. They need to be safe, ensuring a fall back plan in case something goes wrong, as well as being accurate, reliable and reproducible. That is the only way to ensure that also unintentional harm can be minimized and prevented.

Neural's rating: needs work.

Without a definition of "safe," the whole statement is fluff. Furthermore, "accuracy" is a malleable term in the AI world that almost always refers to arbitrary benchmarks that do not translate beyond laboratories.

A solution: the EU should set a bare minimum requirement that AI models deployed in Europe with the potential to affect human outcomes must demonstrate equality. An AI model that achieves lower reliability or accuracy on tasks involving minorities should be considered neither safe nor reliable.
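One way to make such an equality requirement testable is a per-group accuracy audit. The sketch below is hypothetical (the group names, data, and 5% margin are all made up for illustration): it flags a model whenever any subgroup's accuracy trails overall accuracy by more than a chosen margin.

```python
# Sketch of a per-group accuracy audit -- hypothetical data and threshold.
# A model is flagged if any group's accuracy falls more than `margin`
# below overall accuracy, one simple way to operationalize "equal reliability".

def accuracy(pairs):
    """Fraction of (prediction, ground_truth) pairs that agree."""
    return sum(pred == true for pred, true in pairs) / len(pairs)

def audit(results_by_group, margin=0.05):
    overall = accuracy([p for pairs in results_by_group.values() for p in pairs])
    flagged = {g: accuracy(pairs) for g, pairs in results_by_group.items()
               if accuracy(pairs) < overall - margin}
    return overall, flagged

# (prediction, ground_truth) pairs per demographic group -- made-up numbers
results = {
    "group_a": [(1, 1)] * 90 + [(0, 1)] * 10,   # 90% accurate
    "group_b": [(1, 1)] * 70 + [(0, 1)] * 30,   # 70% accurate
}

overall, flagged = audit(results)
print(f"overall={overall:.2f}, flagged={list(flagged)}")
```

Here overall accuracy is 0.80, so group_b (0.70) is flagged while group_a passes; under a rule like the one proposed above, this model would be considered neither safe nor reliable.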

Per the document:

Besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data, and ensuring legitimised access to data.

Neural's rating: good, but could be better.

Luckily, the General Data Protection Regulation (GDPR) does most of the heavy lifting here. However, the terms "quality" and "integrity" are highly subjective, as is the term "legitimised access."

A solution: the EU should define a standard under which data must be obtained with consent and verified by humans, to ensure that the databases used to train models contain only data that is properly labeled and used with the permission of the person or group who generated it.

Per the document:

The data, system and AI business models should be transparent. Traceability mechanisms can help achieving this. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned. Humans need to be aware that they are interacting with an AI system, and must be informed of the systems capabilities and limitations.

Neural's rating: this is hot garbage.

Only a small percentage of AI models lend themselves to transparency. The majority of AI models in production today are black box systems that, by the very nature of their architecture, produce outputs using far too many steps of abstraction, deduction, or conflation for a human to parse.

In other words, a given AI system might use billions of different parameters to produce an output. In order to understand why it produced that particular outcome instead of a different one, we'd have to review each of those parameters step-by-step so that we could come to the exact same conclusion as the machine.

A solution: the EU should adopt a strict policy preventing the deployment of opaque or black box artificial intelligence systems that produce outputs that could affect human outcomes unless a designated human authority can be held fully accountable for unintended negative outcomes.

Per the document:

Unfair bias must be avoided, as it could have multiple negative implications, from the marginalization of vulnerable groups, to the exacerbation of prejudice and discrimination. Fostering diversity, AI systems should be accessible to all, regardless of any disability, and involve relevant stakeholders throughout their entire life circle.

Neural's rating: poor.

In order for AI models to "involve relevant stakeholders throughout their entire life circle," they'd need to be trained on data from diverse sources and developed by diverse teams. The reality is that STEM is dominated by white, straight, cis men, and there are myriad peer-reviewed studies demonstrating how that simple, demonstrable fact makes it almost impossible to produce many types of AI models without bias.

A solution: unless the EU has a method by which to solve the lack of minorities in STEM, it should instead focus on creating policies that prevent businesses and individuals from deploying AI models that produce different outcomes for minorities.

Per the document:

AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered.

Neural's rating: great. No notes!

Per the document:

Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes. Auditability, which enables the assessment of algorithms, data and design processes plays a key role therein, especially in critical applications. Moreover, adequate and accessible redress should be ensured.

Neural's rating: good, but could be better.

There's currently no political consensus as to who's responsible when AI goes wrong. If the EU's airport facial recognition systems, for example, mistakenly identify a passenger, and the resulting inquiry causes them financial harm (they miss their flight and any opportunities stemming from their travel) or unnecessary mental anguish, there's nobody who can be held responsible for the mistake.

The employees following procedure based on the AI's flagging of a potential threat are just doing their jobs. And the developers who trained the systems are typically beyond reproach once their models go into production.

A solution: the EU should create a policy that specifically dictates that humans must always be held accountable when an AI system causes an unintended or erroneous outcome for another human. The EU's current policy and strategy encourages a "blame the algorithm" approach that benefits corporate interests more than citizen rights.

While the above commentary may be harsh, I believe the EU's AI strategy is a light leading the way. However, it's obvious that the EU's desire to compete with the Silicon Valley innovation market in the AI sector has pushed the bar for human-centric technology a little further toward corporate interests than the union's other technology policy initiatives have.

The EU wouldn't sign off on an aircraft that was mathematically proven to crash more often when Black people, women, or queer people were passengers than when white men were onboard. It shouldn't allow AI developers to get away with deploying models that function that way either.

Read the original:

A critical review of the EU's 'Ethics Guidelines for Trustworthy AI' - TNW

Posted in Ai | Comments Off on A critical review of the EU’s ‘Ethics Guidelines for Trustworthy AI’ – TNW

Thinking of a career in AI? Make sure you have these 8 skills – TNW

Posted: at 2:19 pm

This article was originally published on .cult by Saudamani Singh. .cult is a Berlin-based community platform for developers. We write about all things career-related, make original documentaries, and share heaps of other untold developer stories from around the world.

If you've ever used Alexa or Siri, sat in a self-driving car, talked to a chatbot, or even watched something recommended to you by Netflix, you've come across artificial intelligence, or AI, as it's commonly known.

AI is a major driving force behind the world's advancement in almost every field of study, including healthcare, finance, entertainment, and transport. Simply put, artificial intelligence is the capability of machines to learn like humans, make problem-solving decisions, and complete tasks that would otherwise require multiple individuals to invest long working hours.

Thinking of a career in this exciting field? We're here to answer all your AI-related questions, so you can get humanity one step closer to planting Elon Musk's Neuralink chips into our brains and curing blindness! Or just make a chatbot.

AI engineers conduct a variety of tasks that would fly right over the layman's head. In fairness, creating and implementing machine learning algorithms sounds like something right out of a sci-fi movie. To be able to do that, though, here are some skills every AI engineer must have:

1. Analytics

To create deep-learning models that analyse patterns, a strong understanding of analytics is a prerequisite. Being well grounded in analytics will help in testing and configuring AI systems.

2. Applied Mathematics

We're guessing that if you have an interest in artificial intelligence engineering, you probably don't hate math, since it is at the core of all things AI. A firm understanding of topics such as gradient descent, quadratic programming, and convex optimization is necessary.
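To see why that math matters in practice, here is plain gradient descent minimizing the convex function f(x) = (x - 3)^2, whose gradient is 2(x - 3). This is a textbook sketch, not tied to any particular AI system, but the same update rule underlies how neural networks are trained.

```python
# Gradient descent on a convex quadratic: f(x) = (x - 3)^2, f'(x) = 2(x - 3).
# Each step moves against the gradient; for a convex function and a suitable
# learning rate, this converges to the global minimum at x = 3.

def grad(x):
    return 2 * (x - 3)

x = 0.0    # starting point
lr = 0.1   # learning rate
for _ in range(100):
    x -= lr * grad(x)

print(round(x, 4))  # converges close to 3.0
```

Each iteration shrinks the distance to the minimum by a constant factor (here 0.8), which is exactly the kind of convergence behavior that convex optimization theory characterizes.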

3. Statistics and algorithms

An adequate understanding of statistics is required while working with algorithms. AI engineers need to be well-versed in topics like standard deviation, probability, and models like Hidden Markov and Naive Bayes.
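For instance, the Naive Bayes model mentioned above reduces to counting: estimate P(class) and P(word | class) from labeled examples, then pick the class that maximizes their product. A minimal sketch with made-up training data (any real system would train on far more text):

```python
# Minimal Naive Bayes text classifier with Laplace smoothing.
# Training data is made up for illustration; the point is the counting.
from collections import Counter, defaultdict
import math

train = [
    ("spam", "win cash now"),
    ("spam", "win a prize now"),
    ("ham", "meeting at noon"),
    ("ham", "lunch meeting today"),
]

class_counts = Counter(label for label, _ in train)
word_counts = defaultdict(Counter)
for label, text in train:
    word_counts[label].update(text.split())
vocab = {w for _, text in train for w in text.split()}

def predict(text):
    def log_score(label):
        total = sum(word_counts[label].values())
        s = math.log(class_counts[label] / len(train))  # log prior
        for w in text.split():
            # Laplace smoothing: every word gets a pseudo-count of 1
            s += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        return s
    return max(class_counts, key=log_score)

print(predict("win cash prize"))  # spam
print(predict("noon meeting"))    # ham
```

Working in log space avoids numeric underflow when multiplying many small probabilities, which is why the "naive" independence assumption turns into a simple sum of logs.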

4. Language fluency

Yep, no surprises here. You'll need to be fluent in a couple of programming languages to be a successful AI engineer. The most popular language amongst artificial intelligence engineers is Python, but it is often not enough on its own. It's important to have proficiency in multiple languages, like C, C++ and Java.

5. Problem-solving and communication skills

AI engineers need to think outside the box. You'll find there's no set of rules or go-to guidelines you can adhere to if you're ever in a pickle. AI often requires innovative use of machine learning models and creative thinking. You'll also need to be able to communicate these ideas to co-workers who may not have deep knowledge of the subject.

6. Neural Network knowledge

Another important skill you're going to need is proficiency with neural networks. A neural network is software that works similarly to a human brain, helping with pattern recognition, solving complex problems, and image classification, which is a massive part of how we use AI. AI engineers often spend a lot of time working with neural networks, so a good understanding of the subject is required.
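At its smallest, a neural network is just layered weighted sums passed through activation functions. The toy below hand-wires (rather than trains) a two-layer network to compute XOR, a pattern a single neuron famously cannot represent, using a step activation:

```python
# A tiny hand-wired neural network computing XOR with step activations.
# Hidden unit h1 fires for "at least one input on" (OR), h2 for "both on"
# (AND); the output fires for "OR but not AND", i.e. exclusive-or.

def step(z):
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    h1 = step(x1 + x2 - 0.5)     # OR gate
    h2 = step(x1 + x2 - 1.5)     # AND gate
    return step(h1 - h2 - 0.5)   # OR and not AND -> XOR

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))  # 0, 1, 1, 0
```

In real networks the weights and thresholds here would be learned by gradient descent over smooth activations instead of being set by hand, but the layered structure is the same.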

7. Team management

You'll likely work independently much of the time. However, some aspects of the job let you communicate with humans, too, instead of just machines. As an AI engineer, you will need to share your ideas with numerous other engineers and developers, so communication and management skills come in handy. While you're solving math equations to prepare for your career, make sure you do it with people around you.

8. Cloud knowledge

Out of the many tricks AI engineers need to have under their belt, a fair idea of what cloud architecture is ranks right up there. Cloud architecture involves much more than just managing storage space, and knowing which secure storage system is best suited to your project will be extremely helpful.

The salary an AI engineer makes depends on experience, certification, and location, but generally, they get paid pretty well. According to Glassdoor, the average salary for AI engineers in the US is $114,121 per year as of 2020. Other sources claim the salary goes as high as $248,625 for experienced AI engineers. It sounds like you'll be able to afford your dream house in Silicon Valley in no time.

As an AI engineer, your job is not monotonous by any means. New challenges and opportunities for innovative implementations of AI technology await every day. The demands and skills needed may seem intimidating, but the reward and compensation make it all worth it.

See the rest here:

Thinking of a career in AI? Make sure you have these 8 skills - TNW

Posted in Ai | Comments Off on Thinking of a career in AI? Make sure you have these 8 skills – TNW

Why Home Prices Must Fall; All In on Retail AI – Bloomberg

Posted: at 2:19 pm

Bloomberg Intelligence Podcast

In this week's Bloomberg podcast, Bloomberg Intelligence analysts discuss the findings and impact of their research:

Unaffordable Homes, Higher Mortgages May Create Price Pressure -- Erica Adelberg lays out why home prices will have to retreat in the face of higher mortgage rates.

Clean-Energy Flows May Not Lift ESG and Sustainability ETFs -- Shaheen Contractor says new US climate spending plans won't fully revive flows into clean-energy ETFs.

All In on AI: Artificial Intelligence Key to E-Commerce Future -- Poonam Goyal lays out the impacts of retailers' embrace of artificial intelligence.

Bonds vs. Recession: Industrials, Retail at Risk, Tech Resilient -- Himanshu Bakshi explains which groups of bonds are most at risk, and most resilient, if the US falls into recession.

FTSE Growth-Value Reversal a Trend? We Think Not; Look to Gilts -- Tim Craighead says the big shift toward growth from value looks transitory.

The Bloomberg Intelligence radio show with Paul Sweeney and Alix Steel podcasts through Apple's iTunes, Spotify and Luminary. It broadcasts on Saturdays and Sundays at noon on Bloomberg's flagship station WBBR (1130 AM) in New York, 106.1 FM/1330 AM in Boston, 99.1 FM in Washington, 960 AM in the San Francisco area, channel 119 on SiriusXM, http://www.bloombergradio.com, and iPhone and Android mobile apps. Bloomberg Intelligence, the research arm of Bloomberg L.P., has more than 400 professionals who provide in-depth analysis on more than 2,000 companies and 135 industries while considering strategic, equity and credit perspectives. BI also provides interactive data from over 500 independent contributors. It is available exclusively to Bloomberg Terminal subscribers. Run BI .

Aug 19, 2022


AI has yet to revolutionize health care – POLITICO

Posted: at 2:19 pm

By BEN LEONARD and RUTH READER

08/17/2022 10:00 AM EDT

Updated 08/17/2022 03:43 PM EDT

Investors have homed in on artificial intelligence as the next big thing in health care, with billions flowing into AI-enabled digital health startups in recent years.

But the technology has yet to transform medicine in the way many predicted, Ben and Ruth report.

"Companies come in promising the world and often don't deliver," Bob Wachter, head of the department of medicine at the University of California, San Francisco, told Future Pulse. "When I look for examples of true AI and machine learning that's really making a difference, they're pretty few and far between. It's pretty underwhelming."

Administrators say that algorithms from third-party firms often don't work seamlessly because every health system has its own tech system, so hospitals are developing their own in-house AI. But it's moving slowly: research on job postings shows health care lagging every industry but construction.

The FDA is working on a model to regulate AI, but it's still nascent.

"There's an inherent mismatch between the pace of software development and government regulation of medical devices," said Kristin Zielinski Duggan, a partner at Hogan Lovells.

Questions remain about how regulators can rein in AI's shortcomings, including bias that threatens to exacerbate health inequities. For example, a 2019 study found that a common hospital algorithm directed white patients to programs providing more personalized care more often than it did Black patients.

And when providers build their own AI systems, they typically aren't vetted the way commercial software is, potentially allowing flaws to go unfixed for longer than they would otherwise. Furthermore, with data often siloed between health systems, a lack of quality data to power algorithms is another barrier.

But AI has shown promise in a number of medical specialties, particularly radiology. NYU Langone Health has worked with Facebook's AI Research group (FAIR) to develop AI that allows an MRI to take 15 minutes instead of an hour.

"We've taken 80 percent of the human effort out of it," said John D. Halamka, president of Mayo Clinic Platform, which has an algorithm in a clinical trial that seeks to reduce the lengthy process of mapping out a surgery plan for removing complex tumors.

And in another success story, Louisianas Ochsner Health developed AI that detects early signs of sepsis, a life-threatening infection.

Micky Tripathi, HHS national coordinator of health information technology, says AI could resemble sports broadcast systems that spit out a team's chance of winning at any given point in the game. In health care, an electronic health records system could give doctors a patient's risk profile and the steps they might need to take.

"This will be deemed as one of the most important, if not the most important, transformative phases of medicine," said Eric Topol, founder of the Scripps Research Translational Institute. "A lot of heavy lifting is left to be done."

Welcome back to Future Pulse, where we explore the convergence of health care and technology. Tiny blood draws from infants, used to screen for disease, are now also being used in criminal investigations to indict their parents! What a world.

Share your news, tips and feedback with Ben at [emailprotected] or Ruth at [emailprotected] and follow us on Twitter for the latest @_BenLeonard_ and @RuthReader. Send tips securely through SecureDrop, Signal, Telegram or WhatsApp here.

ABORTION RULING FALLOUT More Democrats are backing companion legislation by Rep. Sara Jacobs (D-Calif.) and Sen. Mazie Hirono (D-Hawaii) that would make it harder for online firms to share personal data.

Jacobs is touting her bill, the My Body My Data Act, in response to Nebraska police's seizure of Facebook messages between a woman and her daughter that allegedly revealed a plan to induce an illegal abortion outside the state's 20-week limit.

The similar bills from Jacobs and Hirono would limit the data that firms can collect, protect personal health information not currently covered by the health privacy law HIPAA and give the FTC power to enforce the act alongside a private right of action.

Ninety-three representatives have signed on, along with 13 senators. Without GOP support, the bill cannot pass the Senate, but Jacobs encourages state lawmakers to follow her lead in their capitals.

BIDEN SIGNS HEALTH CARE, CLIMATE BILL President Joe Biden signed legislation Tuesday that will allow Medicare to negotiate drug prices in an attempt to cut costs.

Beginning in 2026, the legislation enables Medicare to negotiate with manufacturers on 10 pricey drugs, expanding as the decade goes on. Cancer, HIV and diabetes drug costs could be considered during negotiations, according to SVB Securities.

Biden's signature caps the largest victory for Democrats since taking control of both chambers of Congress and the White House in January 2021, POLITICO's Sarah Ferris and Jordain Carney report.


The drug negotiation portions came despite fierce opposition from the pharmaceutical industry, which argued the legislation would curb innovation.

Something to watch: Steve Ubl, the president of PhRMA, the drug industry lobbying group, said that members supporting the bill won't get a free pass and that one of PhRMA's member companies would nix 15 drugs if the bill became law.

BRUTAL CYBERSECURITY NUMBERS Close to 6 in 10 hospital and health system leaders said their organizations had at least one cyberattack in the past two years, according to new data from the cybersecurity company Cynerio and the cybersecurity research center Ponemon Institute.

The report also notes that the attacks, which cost an average of nearly $10 million, often recur: among the victims, 82 percent had been hit by four or more attacks during the timeframe.

And the damage isn't just financial: about 1 in 4 cyberattacks resulted in increased mortality by leading to delayed care, according to the report.

UNITEDHEALTHCARE TELEHEALTH SURGE The nation's largest health insurer has seen patients gravitate toward telemedicine in a plan called Surest, which gives enrollees prices upfront.

Presented with the costs, enrollees choose telemedicine visits 10 times more often than people in typical plans and go to the emergency room or undergo surgery less frequently.

"When a consumer goes out there and looks for [care] ..., we're able to say, 'Hey, did you know that virtual visit offering is a zero co-pay?'" Alison Richards, CEO of Surest, told Future Pulse. "That's where we're seeing that increase in virtual care."

Many other major insurers offer virtual-first plans that push patients toward telemedicine before in-person care.

RACIAL DISPARITIES IN HOSPITAL PROFITS Revenue and profit per patient are lower at hospitals serving the highest percentage of Black Medicare patients.

U.S. hospital financing "effectively assigns a lower dollar value to the care of Black patients," a study published in the Journal of General Internal Medicine found.

Researchers from UCLA, Johns Hopkins and Harvard Medical School examined profits at 574 hospitals serving high rates of Black patients. Profits were on average $111 lower per patient day at the hospitals, and revenues were $283 lower.

"Equalizing reimbursement levels would have required $14 billion in additional payments to Black-serving hospitals in 2018, a mean of approximately $26 million per Black-serving hospital," the researchers found. "Health financing reforms should eliminate the underpayment of hospitals serving a large share of Black patients."
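The study's headline figures can be sanity-checked with simple arithmetic. The sketch below assumes the $14 billion gap is spread evenly over the 574 hospitals examined; the small difference versus the cited ~$26 million mean likely reflects how the study defined the denominator, so treat this as a rough check rather than a reproduction of the study's method.

```python
# Rough consistency check on the study's figures. The dollar amount and
# hospital count come from the article; everything else is arithmetic.
total_gap = 14_000_000_000   # additional 2018 payments the study estimates
hospitals = 574              # hospitals serving high rates of Black patients

mean_gap = total_gap / hospitals
print(f"${mean_gap / 1e6:.1f}M per hospital")  # ~$24.4M, near the cited ~$26M
```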

TELEREHABILITATION CAN IMPROVE BACK PAIN Good news for the quarter of Americans who suffer from acute lower back pain: It can be treated remotely.

A recent study found that a 12-week telehealth program from a New York company called Sword Health significantly reduced pain, depression and anxiety, and improved productivity, among those who completed it.

The program, overseen by a physical therapist, involves a combination of exercise, education and psychotherapy.

Of the roughly 338 people who completed the study, more than half reported a significant reduction in their disability and 61 percent experienced decreased pain.

The rub is that there is no rub: the program doesn't work for everyone. Some patients with acute lower back pain need more hands-on help, like massage or spinal manipulation, and digital rehab can't do that.

OTC HEARING AIDS GET FDA NOD The FDA has created regulatory guidance for hearing aids that manufacturers can sell over the counter, reports POLITICO's David Lim. Until now, patients have needed a prescription.

The change could bring new competition to the hearing aid market. On average, one prescription hearing aid costs approximately $2,300, though some run as much as $6,000, according to Consumer Affairs.

The rule will go into effect in 60 days. Manufacturers that want to sell existing products over the counter will have 240 days to comply with technical requirements and rules that aim to ensure the OTC hearing aids are easy to use without a doctor's help.

An eye implant engineered from proteins in pigskin restored sight in 14 blind people NBC News

Parents and clinicians say private equitys profit fixation is short-changing kids with autism STAT

Google Maps regularly misleads people searching for abortion clinics Bloomberg

CORRECTION: An earlier version of Future Pulse misstated where Sword Health is based. The company is based in New York.


Lu brings the power of AI to the hospital – The Source – Washington University in St. Louis

Posted: at 2:19 pm

Chenyang Lu, the Fullgraf Professor of computer science and engineering in the Washington University in St. Louis McKelvey School of Engineering, is combining artificial intelligence with data to improve patient care and outcomes.

But he isn't only concerned with patients; he is also developing technology to help monitor doctors' health and well-being.

The Lu lab presented two papers at this year's ACM SIGKDD Conference on Knowledge Discovery and Data Mining, both of which outline novel methods his team has developed with collaborators from Washington University School of Medicine to improve health outcomes by bringing deep learning into clinical care.

For caregivers, Lu looked at burnout and how to predict it before it even arises. Activity logs of how clinicians interact with electronic health records provided researchers with massive amounts of data. They fed this data into a machine learning framework developed by Lu and his team, Hierarchical burnout Prediction based on Activity Logs (HiPAL), which was able to extrapolate meaningful patterns of workload and predict burnout in an unobtrusive and automated manner.
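To make the pipeline concrete, here is a minimal, hypothetical sketch of the same idea: aggregating raw activity-log events into workload features and mapping them to a burnout-risk score. HiPAL itself is a hierarchical deep-learning model; the event schema, feature names, weights and logistic scorer below are illustrative stand-ins, not the published method.

```python
import math
from collections import defaultdict

def workload_features(events):
    """Aggregate raw EHR activity-log events into per-clinician features.

    `events` is a list of (clinician_id, hour_of_day, action) tuples --
    a hypothetical simplification of real activity-log schemas.
    """
    feats = defaultdict(lambda: {"total": 0, "after_hours": 0})
    for who, hour, _action in events:
        f = feats[who]
        f["total"] += 1
        if hour < 7 or hour >= 19:  # crude proxy for after-hours work
            f["after_hours"] += 1
    return feats

def burnout_risk(f, w_total=0.002, w_after=0.02, bias=-3.0):
    """Map workload features to a 0-1 risk score via a logistic link.

    The weights here are made up for illustration, not fitted to data.
    """
    z = bias + w_total * f["total"] + w_after * f["after_hours"]
    return 1.0 / (1.0 + math.exp(-z))

# A clinician logging many late-night events scores higher than one
# working a light daytime load.
events = [("dr_a", 22, "note"), ("dr_a", 23, "order")] * 80 \
       + [("dr_b", 10, "note")] * 20
feats = workload_features(events)
assert burnout_risk(feats["dr_a"]) > burnout_risk(feats["dr_b"])
```

In a real system, features like these would be learned end-to-end from the logs rather than hand-crafted and hand-weighted.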

Learn more about the team's work on the engineering website.

When it comes to patient care, physicians in the operating room collect substantial amounts of data about their patients, both during preoperative care and during surgery. Lu and collaborators thought they could put that data to good use with Lu's deep-learning approach: the Clinical Variational Autoencoder (cVAE).

Using novel algorithms designed by the Lu lab, they were able to predict who would be in surgery for longer and who was more likely to develop delirium after surgery. The model transformed hundreds of clinical variables into just 10, which it then used to make accurate and interpretable predictions about outcomes, with results superior to current methods.
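As a loose illustration of that dimensionality-reduction step, and not the actual cVAE (which learns a probabilistic encoder from data), the sketch below compresses a couple hundred clinical variables into 10 latent values with a fixed linear projection and feeds them to a toy downstream classifier. All names, sizes and thresholds are hypothetical.

```python
import random

random.seed(0)

N_VARS, N_LATENT = 200, 10  # hundreds of inputs -> 10 latent features

# Fixed random projection standing in for the learned encoder; the real
# cVAE learns this mapping (plus a probabilistic latent space) from data.
W = [[random.gauss(0, 1 / N_VARS ** 0.5) for _ in range(N_VARS)]
     for _ in range(N_LATENT)]

def encode(x):
    """Compress one patient's clinical variables into 10 latent values."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def predict_long_surgery(z, threshold=0.0):
    """Toy downstream classifier reading only the 10 latent values."""
    return sum(z) > threshold

patient = [random.random() for _ in range(N_VARS)]  # dummy patient record
z = encode(patient)
flag = predict_long_surgery(z)  # prediction made from 10 values, not 200
```

The point of the design is interpretability: with only 10 latent values, clinicians can inspect which compressed factors drive each prediction.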

Learn more about the team's findings on the engineering website.

Lu and his interdisciplinary collaborators will continue to validate both models, hopeful that both will bring the power of AI into hospital settings.

The McKelvey School of Engineering at Washington University in St. Louis promotes independent inquiry and education with an emphasis on scientific excellence, innovation and collaboration without boundaries. McKelvey Engineering has top-ranked research and graduate programs across departments, particularly in biomedical engineering, environmental engineering and computing, and has one of the most selective undergraduate programs in the country. With 140 full-time faculty, 1,387 undergraduate students, 1,448 graduate students and 21,000 living alumni, we are working to solve some of society's greatest challenges; to prepare students to become leaders and innovate throughout their careers; and to be a catalyst of economic development for the St. Louis region and beyond.


Video: Sneak Preview of the AI Hardware Summit – HPCwire

Posted: at 2:19 pm

Next month the AI Hardware Summit returns to the Bay Area, bringing AI technologists and end users together to share ideas and get up to speed on all the latest AI hardware developments. The event, which takes place September 13-15, 2022, at the Santa Clara Marriott in Santa Clara, Calif., will be co-located with the Edge AI Summit. Both events are organized by Kisaco Research, which launched the inaugural AI Hardware Summit in 2018.

One of the participants who has been there from the beginning is Karl Freund, founder and principal analyst at Cambrian AI Research. In an interview with HPCwire and EnterpriseAI, Freund provides a preview of what attendees can expect and offers advance highlights of his scheduled talk, The Landscape of AI Acceleration: Annual Survey of the Last Year of Innovations from the World of Semiconductors, Systems and Design Tools.

"To me, if you're developing AI, a machine learning platform, you want to make sure you're using the best hardware you can get your hands on. This is the best place to go to find out what's coming and to find out what other people are doing. And learn from them, learn from what's working and perhaps what's not," Freund shared.

A number of other returning speakers will also be giving talks this year: Lip-Bu Tan, executive chairman at Cadence; Kunle Olukotun, co-founder and chief technologist at SambaNova Systems; Andrew Feldman, founder and CEO of Cerebras Systems; and many more.

On Wednesday, the luminary keynote will be presented by Meta engineers Alexis Black Bjorlin, vice president, infrastructure hardware, and Lin Qiao, senior director, engineering. Their session is titled, Co-Designing an Accelerated Compute Platform at Scale for AI Research and Infrastructure.

On Thursday, Rashid Attar, head of engineering, cloud/edge AI inference accelerators at Qualcomm, opens the day with his keynote, which will cover chip design and AI at the edge.

The event's closing keynote will be delivered by Sassine Ghazi, president and COO of Synopsys. In his presentation, Enter the Era of Autonomous Design: Personalizing Chips for 1,000X More Powerful AI Compute, Ghazi will discuss strategies for using machine learning techniques to reduce design time and design risk.

In addition to participation from Meta, Qualcomm, Cadence and Synopsys, there will be talks from Alibaba, AMD, Atos, Graphcore, HuggingFace, MemVerge, Microsoft, SambaNova, Siemens, and many others.

A Meet & Greet takes place Tuesday from 4-7 p.m., during which DeepMind Chief Business Officer Colin Murdoch will be interviewed by Cade Metz of The New York Times. AI Hardware Summit and Edge AI Summit attendees are invited to reconnect with peers, make new acquaintances and discuss the state of machine learning in the datacenter and at the edge. A guest speaker announcement is forthcoming.

Freund maintains that AI will be pervasive. "It will be in every electronic product we buy and sell and use, from toys to automobiles," he said. "And the ability to make AI useful depends on really good engineers, really good AI modelers, data scientists, as well as the hardware. But all that will be for naught if you're not running it on efficient hardware, because you can't afford it. So this is all about making AI pervasive and affordable, and people overuse the term, but democratizing AI is still a goal. It's not where we are, but it's gonna take a lot of work to get there."


EnergX Announces World-First Use of AI in the Field of Employee Retention – AccessWire

Posted: at 2:19 pm

SYDNEY, NEW SOUTH WALES / ACCESSWIRE / August 19, 2022 / With the cost of replacing employees at 50% of a junior salary and up to 250% of a senior salary, according to the Society of HR Management, EnergX has announced a world-first AI coach with a focus on employee retention.

The scalable technology is designed to help businesses retain staff and to get a healthy ROI within the existing flow of work.

Named Franky to reflect its straightforward approach, the coach chooses from over 5.7 million personalized curriculum options to connect employees to the intrinsic drivers of engagement that maintain their motivation and improve the quality of their work.

Franky "lives" in existing platforms like Teams, Slack, Webex and Workplace to enable organizations to drive behavior change without the need for extensive IT installation.

The technology is endorsed by University of Sydney Professor of Psychology David Alais, who has noted that results from the new AI have significantly helped with factors contributing to employee anxiety levels, stress management and overall employee happiness.

"As a psychologist and neuroscientist, I admire how EnergX has built on evidence from both fields to design a remarkably effective approach that overcomes burnout in short time frames."

With burned-out employees 2.6x more likely to leave, this approach demonstrates a strong link between improving employee health and a corresponding reduction in the risk factors associated with retention.

The AI curriculum was complemented with team learning experiences designed to create a sense of belonging, along with leadership development and coaching. Upon completion, participants were reassessed for retention risk and also against the World Health Organization Wellbeing Index.

At this point EnergX found that employees in very good or excellent health were 4.1x more likely to have zero retention risk factors when compared with employees in poor health.

This impact was made more significant by tripling the number of leaders and overall employees in very good to excellent health in just 100 days.

"At the end of the day, achieving competitive advantage requires you to have the fittest team on the field," says EnergX CEO Sean Hall.

He added, "The first step to achieving this, and your best retention strategy right now, is to help your people overcome burnout. This doesn't happen overnight, and definitely not with generic masterclasses, but it can happen much quicker than you think."

About EnergX

The team at EnergX focuses on certain behaviors and processes that cause burnout, crush creativity and stifle inclusion to significantly improve belonging, engagement and retention.

CONTACT:
Sean Hall
EnergX
Email: [emailprotected]
Website: energx.com.au

SOURCE: Energx

