
Category Archives: Artificial Intelligence

Artificial Intelligence as Evidence on Everyday Law – Maryland-Law.com

Posted: August 30, 2022 at 11:21 pm

The latest episode of Everyday Law focused on Artificial Intelligence as evidence in court proceedings.

Host Bob Clark spoke to Judge Paul Grimm of the United States District Court for the District of Maryland and Professor Maura Grossman of the University of Waterloo in Ontario, Canada, who together had previously authored an authoritative article in the Northwestern School of Law's Journal of Technology entitled "Artificial Intelligence as Evidence".

Both Judge Grimm and Professor Grossman took somewhat atypical paths to their legal careers. Judge Grimm started his legal career in the military, and Professor Grossman earned a PhD in psychology, actively practicing in that field for a number of years before she concluded the law was her future.

They are each now academics, with Judge Grimm helming Duke University Law School's Bolch Judicial Institute and Professor Grossman serving as a research professor in the School of Computer Science at the University of Waterloo as well as an adjunct professor at Osgoode Hall Law School.

The origin of Professor Grossman and Judge Grimm's work together was in the context of issues arising in electronic discovery. Professor Grossman and her husband, Gordon Cormack, have been instrumental in setting legal standards for dealing with e-discovery, and Judge Grimm was at the forefront of judicial efforts to formulate rules regarding the admissibility of such evidence.

The episode is an hour in length and, after charting the guests' fascinating career paths, turns to the fundamental questions of what artificial intelligence is and what is problematic about its use in court.

Artificial intelligence is computers performing cognitive tasks. That these processes are often opaque is beyond dispute. The fundamental question for admissibility concerns what validation was done to ensure that the algorithms consistently and accurately produce their results.

Judge Grimm discussed the continuing evolution of evidentiary standards, noting that blood-spatter and hair-fiber analysis, as well as eyewitness identification, have been increasingly subject to skeptical court scrutiny.

He indicated that judges need a set of tools and that approaches to A.I. have been derivative of Daubert and the changes it gave rise to in the Federal Rules of Evidence.

Considerations for admissibility include relevance, error rate, and the prejudice associated with wrongful admission. Judge Grimm suggested that asking fundamental questions, such as what the A.I. was designed to do, whether it has been peer reviewed, and whether its process can be explained, is important.

A recurrent stumbling block concerns "trade secrets," which are subject to a qualified privilege but may disqualify admission of an A.I. function where the progenitor of the A.I. refuses to explain how it works for fear of disclosing the secret sauce that distinguishes its product from a competitor's.

As with all evolving evidentiary issues, it is likely that the usefulness of the technology must be balanced against the prejudice its use entails, and the party adversely affected must be afforded the opportunity to explore the possibility that its output is inaccurate.

For more, go to: https://everydaylaw.podbean.com/e/artificial-intelligence/


Artemis: Artificial intelligence shines a light on the Moon's permanently shadowed regions – Express

Posted: at 11:21 pm

The dark regions of craters and mountainous terrain near the Moon's south pole are key targets for future lunar missions like Artemis III. According to NASA, these dark sites have the potential to harbour coveted water ice, which could be broken down into its oxygen and hydrogen components to provide both life-sustaining air and potential fuel. This is because the shadowed regions are incredibly cold, with temperatures as low as -274 to -400 F, which traps the ice by stopping it from sublimating into a gas.

In their study, glaciologist Dr Valentin Bickel of ETH Zürich and his colleagues worked with images taken by NASA's Lunar Reconnaissance Orbiter, which has been documenting the surface of the Moon for more than a decade.

The spacecraft's camera, the team explained, captures photons (particles of light) that are bounced into the shadowed regions of the lunar surface from adjacent mountains and crater walls.

With the help of AI, the team have been able to make such efficient use of the data captured by the orbiter that even the darkest regions of the Moon have become visible.

Crucially, their analysis has revealed that no water ice is visible on the surface of the Moon's shadowed areas, even though ice has been detected in these regions by other instruments.

Dr Bickel said: "There is no evidence of pure surface ice within the shadowed areas."

This, he added, implies that any ice must be mixed with lunar soil or lie underneath the surface.

The new study is part of a larger investigation of potential landing sites and exploration options on the lunar surface being conducted by the Lunar and Planetary Institute (LPI) and the Johnson Space Center's (JSC) Center for Lunar Science and Exploration.

To date, the researchers said, they have examined more than half a dozen potential landing sites on the Moon.


Looking further into the future, the team explained, the findings will help NASA precisely map out safe routes into and through the Moon's permanently shadowed regions for the Artemis programme.

This will greatly reduce the risk of misfortune for astronauts and robotic explorers traversing the lunar surface in the future.

Furthermore, the new images of the Moon will help target specific locations for sample collection to best assess the distribution of water ice on the Moon.

The full findings of the study were published in the journal Geophysical Research Letters.


SCOPA: Intersection of artificial intelligence and telemedicine – Optometry Times

Posted: at 11:21 pm

Optometry Times' Alex Delaney-Gesing speaks with Leo P. Semes, OD, FAAO, professor emeritus of optometry at the University of Alabama at Birmingham, on the highlights and key takeaways from his discussion titled "Artificial intelligence and telemedicine," presented during the 115th annual South Carolina Optometric Physicians Association (SCOPA) meeting in Hilton Head, South Carolina.

Editor's note: This transcript has been lightly edited for clarity.

Could you share a highlights version of your presentation?

Artificial Intelligence (AI) is a topic that I've been following for probably 5 or so years. And as I dug into the history, it's quite interesting; it really began back in the 1930s. So it has quite a long history. It's based on algorithms, whether that algorithm is something as simple as how you do addition of big numbers or long division…

The algorithm for looking at, for example, a patient with diabetic retinopathy, is specifying the severity of that, and then using that as a determination for treatment. And then if the patient is treated, following that patient to see if there is stagnation, stability of the diabetic retinopathy, or regression, which is what we're hoping for.

And some of the AI paradigms now demonstrate that there is the possibility of regression of diabetic retinopathy, from a physical standpoint, of how the retina looks, and also in terms of visual performance. And that's what to me is probably the most exciting aspect of what we can do with AI: to say, okay, this is a patient who's got a certain level of diabetic retinopathy, the patient qualifies for treatment; then 3 months following treatment, yes, the retina looks better, but they have improvement in visual performance.

So, visual acuity: quantitatively, the numbers look better. And as a consequence of that, patients could enjoy a better lifestyle.

Why would you say this is such an important topic of discussion?

Well, one of the reasons is that, aside from age-related macular degeneration (AMD), one of the major causes of vision loss, especially among the working-age population, is secondary to diabetic retinopathy (DR). And it's estimated that there's a segment of the population, perhaps as high as 25%, who have pre-diabetes. So patients presenting for a vision exam, or vision irregularities, or even a periodic examination, might be found to have certain changes that relate to DR. And then a diagnosis is made and the patient can be managed systemically, as well as ocularly.

What are the key takeaways you'd like attendees to learn from this?

Probably the biggest thing is going to be the new staging paradigms for DR and how those relate to when a patient is going to need treatment. And if the patient is not at high risk and not a candidate for treatment, then emphasizing to the patient the importance of maintaining systemic management strategies and regular ophthalmic exams.


Artificial intelligence and policing: it's a matter of trust | The Strategist – The Strategist

Posted: at 11:21 pm

From Robocop to Minority Report, the intersection between policing and artificial intelligence has long captured attention in the realm of high-concept science fiction. However, only over the past decade or so has academic research and government policy begun to focus on it.

Teagan Westendorf's ASPI report, Artificial intelligence and policing in Australia, is one recent example. Westendorf argues that Australian government policy and regulatory frameworks don't sufficiently capture the current limitations of AI technology, and that these limitations may compromise "[the] principles of ethical, safe and explainable AI" in the context of policing.

My aim in this article is to expand on Westendorf's analysis of the potential challenges in policing's use of AI and offer some solutions.

Westendorf focuses primarily on a particular kind of policing use of AI, namely statistical inferencing used to make (or inform) decisions; in other words, technology that falls broadly into the category of predictive policing.

While predictive policing applications pose the thorniest ethical and legal questions and therefore warrant serious consideration, it's important to also highlight other applications of AI in policing. For example, AI can assist investigations by expediting the transcription of interviews and the analysis of CCTV footage. Image-recognition algorithms can also help detect and process child-exploitation material, helping to limit human exposure. Drawing attention to these applications can help prevent the conversation from becoming too focused on a small but controversial set of uses. Such a focus could risk poisoning the well for the application of AI technology to the sometimes dull and difficult (but equally important) areas of day-to-day police work.

That said, Westendorf's main concerns are well reasoned and worth discussing. They can be summarised as the problem of bias and the problem of transparency (and its corollary, explainability).

Like all humans, police officers can have both conscious and unconscious biases that may influence decision-making and policing outcomes. Predictive policing algorithms often need to be trained on datasets capturing those outcomes. Yet, if algorithms are trained on historical datasets that include the results of biased decision-making, it can result in unintentional replication (and in some cases amplification) of the original biases. Efforts to ensure systems are free of bias can also be hampered by tech-washing, where AI outputs are portrayed (and perceived) as based solely on science and mathematics and therefore inherently free of bias.
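The replication effect described above can be illustrated with a minimal sketch (the district names, record counts and rates below are invented for illustration): a naive frequency model "trained" on records skewed by uneven patrol coverage reproduces that skew as predicted risk.

```python
# Hypothetical historical records: (district, incident_recorded).
# District B is patrolled twice as intensively, so more incidents are
# *recorded* there even if underlying behaviour is identical.
records = ([("A", 1)] * 10 + [("A", 0)] * 90
           + [("B", 1)] * 20 + [("B", 0)] * 80)

def predicted_risk(district, data):
    """Naive frequency model fitted to the historical records."""
    hits = sum(1 for d, y in data if d == district and y == 1)
    total = sum(1 for d, y in data if d == district)
    return hits / total

# The model faithfully reproduces the patrol-driven skew in its inputs.
print(predicted_risk("A", records))  # 0.1
print(predicted_risk("B", records))  # 0.2 -- the recording bias, not crime
```

Directing more patrols to district B on the basis of this output would generate still more recorded incidents there, which is the amplification loop the report warns about.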

Related to these concerns is the problem of transparency and explainability. Some AI systems lack transparency because their algorithms are closed-source proprietary software. But it can be difficult to render even open-source algorithms explainable (particularly those used in machine learning) due to their complexity. After all, a key benefit of AI lies in its ability to analyse large datasets and detect relationships that are too subtle for the human mind to identify. Making models more comprehensible by simplifying them may require trade-offs in sensitivity, and therefore also in accuracy. Together these concerns are often referred to as the AI "black box" (inputs and outputs are known, but not what goes on in the middle).

In short, a lack of transparency and explainability makes the detection of bias and discriminatory outputs more difficult. This is both an ethical concern and a legal one when justice systems require that charging decisions be understood by all parties to avoid discriminatory practices. Indeed, research suggests that when individuals trust the process of decision-making, they are more likely to trust the outcomes in justice settings, even if those outcomes are unfavourable. Explainability and transparency can therefore be important considerations when seeking to enhance public accountability and trust in these systems.

As Westendorf points out, steps can be taken to mitigate bias, such as pre-emptively coding against foreseeable biases and involving human analysts in the processes of building and leveraging AI systems. With these sorts of safeguards in place (as well as deployment reviews and evaluations), use of AI may have the upshot of establishing built-in objectivity for policing decisions by reducing reliance on heuristics and other subjective decision-making practices. Over time, AI use may assist in debiasing policing outcomes.

While there's no silver bullet for enhancing explainability, there are plenty of suggestions, particularly when it comes to developing AI solutions that enhance AI explainability. Transparency challenges generated by proprietary systems can also be alleviated when AI systems are owned by police and designed in-house.

Yet the need for explainability is only one consideration for enhancing accountability and public trust in the use of AI systems by police, particularly when it comes to predictive policing. Recent research has found that people's level of trust in the police (which is relatively high in Australia) correlates with their level of acceptance of changes in the tools and technology used by police. In another study, participants exposed to purportedly successful policing applications of AI technology were more likely to support wider police use of such technologies than those exposed to unsuccessful uses, or not exposed to examples of AI application at all. In fact, participants exposed to purportedly successful applications even judged the decision-making process involved to be trustworthy.

This suggests that focusing on broader public trust in policing will be vital in sustaining public trust and confidence in the use of AI in policing, regardless of the degree of algorithmic transparency and explainability. The goal of transparent and explainable AI shouldnt neglect this broader context.


The future of AI in music is now. Artificial Intelligence was in the music industry long before FN Meka. – Grid

Posted: at 11:21 pm

Music has forever been moved by technology: from the invention of the phonograph, to Bob Dylan pivoting from acoustic to electric guitar, to the ubiquity of streaming platforms and, most recently, an ambitious attempt at crossing AI with commercial music.

FN Meka, introduced in 2021 as a virtual rapper whose lyrics and beats were constructed with proprietary AI technology, had a promising rise.

But just days after he signed with Capitol Records (the label that carried The Beatles, Nat King Cole and The Beach Boys) and released his debut track "Florida Water," the record company dropped him. His pink slip was a response in part to fans and activists widely criticizing his image (a digital avatar with face tattoos, green braids and a golden grill) and decrying his blend of stereotypes and slur-infused lyrics.

The AI artist, voiced by a real person and created by a company called Factory New, was not, technologically, a groundbreaking experiment. But it was a needle-mover for a discussion that is imminent within the industry: How AI will continue to shape how we experience music.

In 1984, classical trombonist George Lewis used three Apple II computers to program Yamaha digital synthesizers to improvise along with a live quartet. The resulting record, a syrupy and spacey co-creation of computer and human musicians, was titled Rainbow Family, and is considered by many to be the first instance of artificially intelligent music.

In the years since, advances in mixing boards popularized the practice of sampling and interpolation, igniting debates about remixing old songs to make new ones (art form or cheap trick?), and Auto-Tune became a central tool in singers' recorded and onstage performances.

FN Meka isn't the only AI artist out there. Some have been introduced, and lasted, with less commercial backing. YONA, a virtual singer-songwriter and AI poet made by Ash Koosha, has performed live at music festivals around the globe, including MUTEK in Montreal, Rewire in the Netherlands and Barbican in the U.K.

In fact, the most crucial and successful partnerships between AI and music have been under the hood, said Patricia Alessandrini, a composer, sound artist and researcher at Stanford University's Center for Computer Research in Music and Acoustics.

During the pandemic, the music world leaned heavily on digital tools to overcome challenges of sharing and playing music while remote, Alessandrini said. JackTrip Virtual Studio, for example, was an online platform used to teach university music lessons while students were remote. It minimized time delay, making audiovisual synchronicity much easier, and was born from machine learning sound research.

And for producers who deal with large music files and digital compression, AI can play a role in signal processing, Alessandrini said. This is important for sound engineers and musicians alike, saving time and helping them more smoothly create, or export, big records.

"There are beneficial applications for technology and music to intersect when it comes to accessibility," she said. Instruments have been made using AI to require less strength or pressure to generate sound, for example, allowing those with injuries or disabilities to play with eye movements alone.

Alessandrini's own projects include the Piano Machine, which uses computers and voltages as "fingers" to create new sounds, and Harp Fingers, a technology that allows users to play a harp without physically touching it.

On a meta level, algorithms are the ubiquitous drivers of online streaming platforms: Spotify, Apple Music, SoundCloud, YouTube and others are constantly using machine learning, in less transparent ways, to personalize playlists, releases, lists of nearby concerts and music recommendations.

Less agreed upon is the concept of an AI artist itself. Reactions have been split among those loyal to the humanity of art; some who argued that if certain artists were indistinguishable from AI, then they deserved to be replaced; others who invited the newness; and many whose feelings fall somewhere in between.

"With any cultural form, part of what you're dealing with are people's expectations for what things sound like or what an artist looks like," Oliver Wang, a music writer and sociology professor at California State University, Long Beach, told Grid.

Some experts argue that those questions leave out a critical point: Whatever the technology, there is always a human behind the work and that should count.

"Sometimes people don't know or see how much human work is behind artificial intelligence," said Adriana Amaral, a professor at UNISINOS in Brazil and an expert in pop culture, influencers and fan studies. "It's a team of people: developers, programmers, designers, people from production and marketing."

But this misunderstanding isn't always the fault of the public, said Alessandrini. It often comes down to marketing. "It's more exciting to say that something's made entirely by AI," Alessandrini said. This was how FN Meka was marketed and promoted online: as an AI artist. But while his lyrics, sound and beats were AI-generated, they were then performed by a human and animated, cartoon-style.

If it sounds strange that one would become a dedicated fan of a virtual persona, it shouldn't, Amaral said. The world of competitive video gaming, which is nothing without its on-screen characters, is a multibillion-dollar industry that sells out arenas worldwide.

Still, music purists and audiophiles, and any person who appreciates music as an experience rather than just entertainment, may very well resist AI musicians. In particular, Alessandrini said, AI is better at generating content faster and copying genres, though unable to innovate new ones, a result of training its computing models largely on existing music.

"When a rap artist has these different influences and their own specific cultural experience, then that's the kind of magical thing that they use to create," Alessandrini said. "You can say that Bobby Shmurda is one of the first Brooklyn drill artists because of a particular song. So that's a [distinctly] human capacity, compared to AI."

Alessandrini likens this artistic experience to the advancements of AI in medicine: the applications of robotic technologies used during surgeries that are more efficient and mitigate the risk of human error. But, she said, there are some things that humans do better, such as caring for a patient and understanding their suffering.

Its hard to imagine AI vocals ever reaching the emotional and beautifully human depths, say, of a Nina Simone or Ann Peebles; or channeling the authentic camaraderie and bounce of a group like OutKast.

In 2017, the French government commissioned mathematician and politician Cédric Villani to lay ambitious groundwork for the country's artificially intelligent (AI) future.

His strategy, one that considered economics, ethics and education, foremost straddled the thinning line between creation and consumption.

"The division between the noncreative machine and the creative human is ever less clear-cut," he wrote. Creativity, he went on to say, was no longer just an artist's skill; it was a necessary tool for a world of co-inhabitance, machine and human together.

Is that what is happening?

One can't talk about music on grand scales without also talking about money. Though FN Meka was a failure, AI has strong ties to the music sphere that won't be broken because one AI rapper got cut from a label. And it feels inevitable that another big record company or music festival will give it a go.

Why? It might all come down to cost, say experts and music listeners who run the cynicism gamut.

Wang said he has a sneaking suspicion that record companies and executives see AI musicians as a way to save money on royalty payments and travel costs moving forward.

Beyond the money-hungry music industry, there is also room for a lot of good moving forward with AI, said Amaral. She hopes FN Meka's image, and how he was received, was a wake-up call for whatever AI artist inevitably comes next. She also mentioned YONA, which she saw in concert in Japan, as a thin, white, able-bodied pop star not unlike many who dominate the music scene today.

"We have all the technological tools to make someone who could be green, or fat, or any way we like, and we still are stuck on these patterns," she said.

"What will the landscape look like five or 10 or 15 years from now?" Wang asks. Pop music, despite people's cynicism, rarely stays static. It's constantly changing, and perhaps these computer-based attempts at creating artists will be part of that change.

Thanks to Dave Tepps for copy editing this article.


The Innovative Role of Artificial Intelligence in the Online Entertainment Industry – Varsity Online

Posted: at 11:21 pm

Image: Pexels.com

There is no doubt that artificial intelligence (AI) is rapidly changing the online entertainment landscape. With the help of AI, we are now able to personalize our entertainment experiences, with online casinos such as the betvictor sign up offer, in ways that were once impossible.

For example, Netflix uses AI to recommend TV shows and movies that you might like based on your watching history. Amazon's Prime Video service uses AI to suggest videos that you might be interested in.

But this is just the beginning. In the future, AI will become even more involved in our online entertainment experiences. Let's explore how!

The online entertainment industry has been booming in recent years, with a growing number of people streaming movies, TV shows, and other forms of content on a variety of platforms. This trend is only expected to continue in the coming years, as more and more people turn to the internet for their entertainment needs.

One of the key factors driving this growth is the increasing availability of artificial intelligence (AI) technology. AI can be used to improve the user experience on entertainment platforms by providing recommendations for what to watch next, helping to curate personalized content feeds, and even creating new forms of content through machine learning.

AI can be used to create more personalized experiences; for instance, social media platforms use AI to show users the content most relevant to them. This ensures that users see content they are interested in and are less likely to get bored with the platform.

AI is also being used to create more immersive experiences. For example, gaming companies are using AI-powered chatbots to provide players with in-game support and advice. These chatbots can understand natural language and provide helpful information when needed.

Here are the top three ways that AI will shape the future of online entertainment:

1. More personalized content recommendations

As AI gets better at understanding our individual preferences, we will see more personalized content recommendations. This means that we will see less of the "one size fits all" approach to content recommendation and more tailored suggestions based on our specific interests.

2. Improved search and discovery features

AI will also help improve search and discovery features on entertainment platforms. For example, if you're looking for a particular type of movie or TV show, AI will be able to provide more accurate and relevant results.

3. Enhanced content creation and curation

Finally, AI will also play a role in enhancing content creation and curation. With the help of AI, content creators will be able to produce higher-quality content faster and more efficiently. Additionally, AI can also be used to curate existing content so that it is more relevant and engaging for users.
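As a rough illustration of the personalization idea running through the points above, here is a minimal content-based recommender sketch. The catalogue titles, genre tags and viewer profile are all invented: it simply ranks titles by cosine similarity between a viewer's taste profile and each title's genre weights. Production systems use far richer signals, but the principle is the same.

```python
import math

# Invented catalogue: each title tagged with simple genre weights.
catalogue = {
    "Space Docs":   {"science": 1.0, "drama": 0.2},
    "Crime Nights": {"crime": 1.0, "drama": 0.8},
    "Star Drama":   {"science": 0.6, "drama": 1.0},
}

def cosine(a, b):
    """Cosine similarity between two sparse tag-weight vectors."""
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def recommend(profile, catalogue):
    """Return the title whose tags best match the viewer's profile."""
    return max(catalogue, key=lambda title: cosine(profile, catalogue[title]))

viewer = {"science": 0.9, "drama": 0.4}  # e.g. built from watch history
print(recommend(viewer, catalogue))  # "Space Docs"
```

Feeding watch history back into the profile is what makes the suggestions feel increasingly tailored over time.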

Please gamble responsibly.


Artificial Intelligence in Food and Beverage Market to Record a CAGR of 44.5%, North America to Contribute Majority of Industry Growth: Market.us -…

Posted: at 11:21 pm

The Artificial Intelligence (AI) in food and beverage market size was USD 3.34 Billion in 2020 and is expected to register a CAGR of 44.5% during the forecast period.

Scope of the report @ https://market.us/report/artificial-intelligence-in-food-and-beverage-market/

Introduction: what is AI and its potential in the food and beverage industry

The potential for artificial intelligence (AI) in the food and beverage industry is significant. By automating repetitive and time-consuming tasks, AI can help food and beverage companies improve efficiency and quality while reducing costs.

For example, AI can be used to automate the inspection of food products for defects. Currently, inspectors must manually inspect each item on a production line, which is slow and prone to human error. However, by using computer vision to automatically detect defects, AI can speed up the inspection process while also reducing mistakes.

Get Sample Copy of the Report for more Industry Insights @ CLICK HERE NOW: https://market.us/report/artificial-intelligence-in-food-and-beverage-market/request-sample/

In addition, AI can be used to optimize production schedules and ingredient lists. For example, by analyzing past sales data, AI can predict future demand for certain products and adjust production accordingly. AI can also suggest recipes based on available ingredients, helping companies reduce waste and save money.
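A toy version of the demand-forecasting idea above (the weekly sales figures are invented): a simple moving average over recent weeks is one of the most basic baselines a production-planning model might start from, before graduating to seasonal or machine-learned forecasts.

```python
# Invented weekly unit sales for a single product.
weekly_sales = [120, 130, 125, 140, 150, 145, 160, 170]

def moving_average_forecast(history, window=4):
    """Forecast the next period as the mean of the last `window` periods."""
    recent = history[-window:]
    return sum(recent) / len(recent)

# Production schedules could then be nudged toward the forecast demand.
print(moving_average_forecast(weekly_sales))  # 156.25
```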

Artificial intelligence is a new phenomenon that has attracted many major players to the food and beverage industry. The global artificial intelligence market in food and beverage is highly competitive. However, the market is dominated by a few market players in terms of market share. Major players focus on expanding their reach into unexplored areas and expanding their customer base abroad.

Here is a list of the key players covered in the Artificial Intelligence in Food and Beverage Market report:

Aboard Software, Analytical Flavor Systems, Deepnify, ImpactVision, IntelligentX Brewing, NotCo, Sight Machine

Planning to lay down a future strategy? Perfect your plan with our report brochure: https://market.us/report/artificial-intelligence-in-food-and-beverage-market/request-sample/

Artificial Intelligence in Food and Beverage Market Segmentation:

The Artificial Intelligence in Food and Beverage Market is segmented into various types and applications according to product type and category. In terms of value and volume, market growth is calculated by providing a CAGR for the forecast period 2022 to 2032.

Most Important Types of Artificial Intelligence in Food and Beverage Market are covered in this Report:

Hardware, Software, Services

Artificial Intelligence in Food and Beverage Market Product Applications:

Transportation and Logistics, Quality Control, Production Planning

Top countries data covered in this report:


Original post:

Artificial Intelligence in Food and Beverage Market to Record a CAGR of 44.5%, North America to Contribute Majority of Industry Growth: Market.us -...


Headroom Solves Virtual Meeting Fatigue with Artificial Intelligence that Eliminates Wasted Time and Reveals Essential Highlights – Business Wire

Posted: at 11:21 pm

SAN FRANCISCO--(BUSINESS WIRE)--Headroom, a meeting platform leveraging artificial intelligence to improve communications and productivity, today announced a $9 million investment led by Equal Opportunity Ventures with participation from Gradient Ventures, LDV Capital, AME Cloud Ventures and Morado Ventures. The capital brings total funding to date to $14 million and will be used to expand Headroom's team, product development and mobile offering. The company also recently added new Shareable Automatic Summaries to its suite of tools for remote and hybrid meetings, furthering its mission to support balanced, entertaining, productive and memorable meetings.

Virtual meetings have become the de facto method for gathering, connection and collaboration. According to Fortune Business Insights, the meeting collaboration market is expected to exceed $41 billion by 2029. Gartner predicts that by 2025, 75% of conversations at work will be recorded and analyzed, enabling the discovery of added organizational value or risk. Yet despite the increase in meetings, productivity and engagement rates are down. Even before the start of the pandemic, a Harvard Business Review survey revealed 65% of senior managers felt meetings kept them from completing their own work and 64% said meetings come at the expense of deep thinking. Smarter meetings may be the biggest opportunity for improved work productivity and satisfaction.

The more meetings held, the more time wasted, with too many people spending time in redundant meetings. "Headroom is leveraging AI to help companies do more with less, enabling individual workers to be more productive, choose which meetings to attend and which to watch later, or just quickly get the key pieces of information discussed," said Julian Green, CEO and Co-Founder of Headroom. "Particularly in this environment, where for startups every dollar and every meeting minute counts, those that can move faster and stay better connected with people wherever they are, in real time and asynchronously, will win."

Headroom is self-learning; its relevance and impact on productivity improves with use. Headroom data shows 90% of every meeting lacks useful information. To maximize the 10% meeting content that is helpful, the company developed Shareable Automatic Summaries which auto-generate highlight reels that provide key moments, shared notes and action items, and enable easy sharing with others. Additional platform functionality that maximizes synchronous and asynchronous communication includes:

"Hybrid work is here to stay and virtual meetings are the norm, but they allow for a wide margin of distraction," said Roland Fryer, Founder and Managing Partner at Equal Opportunity Ventures and newly appointed Headroom Board Member. "Headroom at its core is an engagement and productivity platform - streamlining collaboration and information sharing, without a heavy lift. It saves time in scheduling, reporting and collaborating."

"Simply put: meetings should be better. Unlike any other video communication and collaboration platform, Headroom is stateful. Meeting information is generated during live conversations, and can be augmented and accessed forever after. Participants are free to act naturally and engage with the information without being restricted by the actual meeting slot," said Andrew Rabinovich, CTO and Co-Founder of Headroom. "Those who didn't attend the meeting itself have all the details readily available to them. With Headroom, this is automated and highlights go to non-attendee stakeholders who can replay key decisions. Our customers are also using it as an information resource they can search for key information later."

Headroom was co-founded by Julian Green and Andrew Rabinovich in 2020. The company's executive team experience spans founding and leadership roles at GoogleX, Houzz, Magic Leap, Patreon and Square. Headroom's platform currently serves more than 5,000 customers spanning technology and online education startups, as well as marketing, design, consulting and recruiting agencies. It is free with no usage caps or storage limits, and is available on Google Chrome with no download or app required. Users have full control over sharing of meeting information. Get started at https://www.goheadroom.com/.

ABOUT HEADROOM

Headroom, founded in 2020, is improving communication in meetings by augmenting meeting intelligence. Automated virtual meetings in Headroom allow attendees to act naturally, replay key decisions, build smart summaries and search everything later. Headroom is brought to you by an experienced team that has created and managed AI products used by billions of people at tech startups and large companies including Google and Magic Leap. The founders helped create the world's leading Computer Vision, Augmented Reality and Virtual Reality products, started Unicorns, and have won a Webby. To get started with Headroom visit https://www.goheadroom.com/.

Read the original post:

Headroom Solves Virtual Meeting Fatigue with Artificial Intelligence that Eliminates Wasted Time and Reveals Essential Highlights - Business Wire


Artificial intelligence was supposed to transform health care. It hasn't. – POLITICO

Posted: August 15, 2022 at 6:18 pm

"Companies come in promising the world and often don't deliver," said Bob Wachter, head of the department of medicine at the University of California, San Francisco. "When I look for examples of true AI and machine learning that's really making a difference, they're pretty few and far between. It's pretty underwhelming."

Administrators say algorithms (the software that processes data) from outside companies don't always work as advertised because each health system has its own technological framework. So hospitals are building out engineering teams and developing artificial intelligence and other technology tailored to their own needs.

But it's slow going. Research based on job postings shows health care behind every industry except construction in adopting AI.

The Food and Drug Administration has taken steps to develop a model for evaluating AI, but it is still in its early days. There are questions about how regulators can monitor algorithms as they evolve and rein in the technology's detrimental aspects, such as bias that threatens to exacerbate health care inequities.

"Sometimes there's an assumption that AI is working, and it's just a matter of adopting it, which is not necessarily true," said Florenta Teodoridis, a professor at the University of Southern California's business school whose research focuses on AI. She added that being unable to understand why an algorithm came to a certain result is fine for things like predicting the weather. But in health care, its impact is potentially life-changing.

Despite the obstacles, the tech industry is still enthusiastic about AI's potential to transform health care.

"The transition is slightly slower than I hoped but well on track for AI to be better than most radiologists at interpreting many different types of medical images by 2026," Hinton told POLITICO via email. He said he never suggested that we should get rid of radiologists, but that we should let AI read scans for them.

If he's right, artificial intelligence will start taking on more of the rote tasks in medicine, giving doctors more time to spend with patients to reach the right diagnosis or develop a comprehensive treatment plan.

"I see us moving as a medical community to a better understanding of what it can and cannot do," said Lara Jehi, chief research information officer for the Cleveland Clinic. "It is not going to replace radiologists, and it shouldn't replace radiologists."

Radiology is one of the most promising use cases for AI. The Mayo Clinic has a clinical trial evaluating an algorithm that aims to reduce the hours-long process oncologists and physicists undertake to map out a surgical plan for removing complicated head and neck tumors.

An algorithm can do the job in an hour, said John D. Halamka, president of Mayo Clinic Platform: "We've taken 80 percent of the human effort out of it." The technology gives doctors a blueprint they can review and tweak without having to do the basic physics themselves, he said.

NYU Langone Health has also experimented with using AI in radiology. The health system has collaborated with Facebook's Artificial Intelligence Research group to reduce the time it takes to get an MRI from one hour to 15 minutes. Daniel Sodickson, a radiological imaging expert at NYU Langone who worked on the research, sees opportunity in AI's ability to downsize the amount of data doctors need to review.


Covid has accelerated AI's development. Throughout the pandemic, health providers and researchers shared data on the disease and anonymized patient data to crowdsource treatments.

Microsoft and Adaptive Biotechnologies, which partner on machine learning to better understand the immune system, put their technology to work on patient data to see how the virus affected the immune system.

"The amount of knowledge that's been obtained and the amount of progress has just been really exciting," said Peter Lee, corporate vice president of research and incubations at Microsoft.

There are other success stories. For example, Ochsner Health in Louisiana built an AI model for detecting early signs of sepsis, a life-threatening response to infection. To convince nurses to adopt it, the health system created a response team to monitor the technology for alerts and take action when needed.

"I'm calling it our care traffic control," said Denise Basow, chief digital officer at Ochsner Health. Since implementation, she said, death from sepsis is declining.

The biggest barrier to the use of artificial intelligence in health care has to do with infrastructure.

Health systems need to enable algorithms to access patient data. Over the last several years, large, well-funded systems have invested in moving their data into the cloud, creating vast data lakes ready to be consumed by artificial intelligence. But that's not as easy for smaller players.

Another problem is that every health system is unique in its technology and the way it treats patients. That means an algorithm may not work as well everywhere.

Over the last year, an independent study on a widely used sepsis detection algorithm from EHR giant Epic showed poor results in real-world settings, suggesting where and how hospitals used the AI mattered.

This quandary has led top health systems to build out their own engineering teams and develop AI in-house.

That could create complications down the road. Unless health systems sell their technology, it's unlikely to undergo the type of vetting that commercial software would. That could allow flaws to go unfixed for longer than they might otherwise. It's not just that the health systems are implementing AI while no one's looking. It's also that the stakeholders in artificial intelligence, in health care, technology and government, haven't agreed upon standards.

A lack of quality data which gives algorithms material to work with is another significant barrier in rolling out the technology in health care settings.


Much data comes from electronic health records but is often siloed among health care systems, making it more difficult to gather sizable data sets. For example, a hospital may have complete data on one visit, but the rest of a patient's medical history is kept elsewhere, making it harder to draw inferences about how to proceed in caring for the patient.

"We have pieces and parts, but not the whole," said Aneesh Chopra, who served as the government's chief technology officer under former President Barack Obama and is now president of data company CareJourney.

While some health systems have invested in pulling data from a variety of sources into a single repository, not all hospitals have the resources to do that.

Health care also has strong privacy protections that limit the amount and type of data tech companies can collect, leaving the sector behind others in terms of algorithmic horsepower.

Importantly, not enough strong data on health outcomes is available, making it more difficult for providers to use AI to improve how they treat patients.

That may be changing. A recent series of studies on a sepsis algorithm included copious details on how to use the technology in practice and documented physician adoption rates. Experts have hailed the studies as a good template for how future AI studies should be conducted.

But working with health care data is also more difficult than in other sectors because it is highly individualized.

"We found that even internally across our different locations and sites, these models don't have a uniform performance," said Jehi of the Cleveland Clinic.

And the stakes are high if things go wrong. "The number of paths that patients can take are very different than the number of paths that I can take when I'm on Amazon trying to order a product," Wachter said.

Health experts also worry that algorithms could amplify bias and health care disparities.

For example, a 2019 study found that a hospital algorithm more often pushed white patients toward programs aiming to provide better care than Black patients, even while controlling for the level of sickness.
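The kind of audit behind such findings can be sketched as follows. This is a hypothetical illustration, not the cited study's actual methodology: compare program-referral rates across groups among patients at the same measured sickness level, a crude control for severity. All field names and data are invented.

```python
# Hedged sketch of a disparity audit: within one sickness level,
# compute the fraction of each group referred to the care program.
# Group labels, sickness scores, and records are hypothetical.
from collections import defaultdict

def referral_rates_by_group(patients, sickness_level):
    """Fraction of patients in each group referred to the care program,
    restricted to one sickness level (a crude control for severity)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [referred, total]
    for p in patients:
        if p["sickness"] == sickness_level:
            counts[p["group"]][0] += p["referred"]
            counts[p["group"]][1] += 1
    return {g: referred / total for g, (referred, total) in counts.items()}

patients = [
    {"group": "A", "sickness": 3, "referred": 1},
    {"group": "A", "sickness": 3, "referred": 1},
    {"group": "B", "sickness": 3, "referred": 1},
    {"group": "B", "sickness": 3, "referred": 0},
]
print(referral_rates_by_group(patients, sickness_level=3))  # {'A': 1.0, 'B': 0.5}
```

A gap between groups at equal severity, as in this toy output, is the sort of signal that prompts a closer look at what the algorithm actually optimizes.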

Last year, the FDA published a set of guidelines for using AI as a medical device, calling for the establishment of good machine learning practices, oversight of how algorithms behave in real-world scenarios and development of research methods for rooting out bias.

The agency subsequently published more specific guidelines on machine learning in radiological devices, requiring companies to outline how the technology is supposed to perform and provide evidence that it works as intended. The FDA has cleared more than 300 AI-enabled devices, largely in radiology, since 1997.

Regulating algorithms is a challenge, particularly given how quickly the technology advances. The FDA is attempting to head that off by requiring companies to institute real-time monitoring and submit plans on future changes.
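What such real-time monitoring might look like can be sketched minimally. This is an assumption-laden illustration, not an FDA-prescribed method: flag when a deployed model's rolling accuracy drifts below the baseline measured at validation time.

```python
# Illustrative drift monitor: alert when recent accuracy falls more
# than `tolerance` below the validation-time baseline. Thresholds and
# the outcome encoding are hypothetical choices, not a standard.
def drift_alert(outcomes, baseline_accuracy, window=100, tolerance=0.05):
    """outcomes: list of 1 (correct prediction) / 0 (incorrect).
    Returns True when rolling accuracy over the last `window`
    predictions drops more than `tolerance` below the baseline."""
    recent = outcomes[-window:]
    accuracy = sum(recent) / len(recent)
    return accuracy < baseline_accuracy - tolerance

# 90 correct, 10 incorrect in the last window vs. a 0.97 baseline
print(drift_alert([1] * 90 + [0] * 10, baseline_accuracy=0.97))  # True
```

In practice an alert like this would trigger human review of the model and its input data rather than any automatic action.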

But in-house AI isn't subject to FDA oversight. Bakul Patel, former head of the FDA's Center for Devices and Radiological Health and now Google's senior director for global digital health strategy and regulatory affairs, said that the FDA is thinking about how it might regulate noncommercial artificial intelligence inside of health systems, but he adds, "there's no easy answer."

The FDA has to thread the needle between taking enough action to mitigate flaws in algorithms while also not stifling AI's potential, he said.

Some argue that public-private standards for AI would help advance the technology. Groups, including the Coalition for Health AI, whose members include major health systems and universities as well as Google and Microsoft, are working on this approach.

But the standards they envision would be voluntary, which could blunt their impact if not widely adopted.

Link:

Artificial intelligence was supposed to transform health care. It hasn't. - POLITICO


Artificial intelligence can now make convincing images of buildings. Is that a good thing? – Archinect

Posted: at 6:18 pm


Excerpt from:

Artificial intelligence can now make convincing images of buildings. Is that a good thing? - Archinect

