
Category Archives: Ai

Turing launches government-backed AI standards information hub – ComputerWeekly.com

Posted: October 13, 2022 at 1:33 pm

The Alan Turing Institute has announced the formal launch of an AI Standards Hub that the government trialed in January 2022.

The institute has teamed up with the British Standards Institution (BSI) and the National Physical Laboratory (NPL) to form the hub, which is also supported by the Department for Digital, Culture, Media and Sport (DCMS) and the government's Office for Artificial Intelligence (AI).

The present government was formed on 6 September 2022, so the launch of the hub is among the first initiatives it has publicly backed.

It is billed as part of the government's 10-year national AI strategy, launched in September 2021.

The minister for technology and the digital economy, Damian Collins, who took up his position in August 2022 as part of outgoing prime minister Boris Johnson's interim administration and supported Liz Truss to be leader of the Conservative Party in its leadership election, said: "Our National AI Strategy builds on the UK's position at the forefront of artificial intelligence to fuel innovation and strengthen trust in this transformative technology.

"The hub's launch sets the bar for the responsible creation, development and use of AI to unlock its full potential and drive growth across the country."

Also from the government, its chief scientific adviser and national technology adviser, Patrick Vallance, said: "The UK's new AI Standards Hub should help create the conditions needed to develop a thriving AI industry and promote innovation."

Adrian Smith, director and chief executive of the Alan Turing Institute, said: "As artificial intelligence technologies play an increasingly crucial role across all sectors, it's vital that the development and use of these technologies adheres to commonly agreed and ethically sound standards. This is why our new initiative is so important: it will support innovation and ensure that organisations and people are using AI responsibly."

The Hub has an online platform with information about existing AI standardisation efforts and related policy developments, along with some capacity for users to engage with one another through a range of community features.

The Hub will also, said the ATI in a statement, seek to build a community around AI trustworthiness and the role of standards within it. It will offer training in the skills needed to get involved in standardisation efforts, as well as publish research and analysis about the topic.

Scott Steedman, director-general of standards at BSI, added: "As the UK's National Standards Body, BSI is delighted to be playing a central role in the AI Standards Hub, a world-leading initiative to increase understanding of the standards that are supporting the deployment of AI technologies, and to inform the development of new standards. One of the most important of these for businesses of all sizes will be the AI management standard ISO/IEC 42001, which we will be championing as a British Standard in the UK, and which will help companies take advantage of AI technologies in a responsible way."

For the National Physical Laboratory, Sundeep Bhandari, strategy manager for the digital sector, said: "The launch of the AI Standards Hub signifies a coordinated UK effort to strengthen the UK's contribution to the development of global AI technical standards. The hub provides an environment for our world-leading scientific researchers to take their work through from the lab to market and enables innovators to access, collaborate on and create global standards."


Microsoft Teams gains animated avatars and AI-powered recaps – TechCrunch

Posted: at 1:33 pm

At its Ignite conference this week, Microsoft announced updates heading to Teams, its ever-evolving videoconferencing and workplace collaboration app. New avatars are available, and more details were announced around Teams Premium, a paid set of Teams features including AI-generated tasks and meeting guides, which is set to arrive in December in preview.

Teams Premium is an effort to simplify Teams pricing, which was previously spread disparately across several tiers. Microsoft says it expects it to cost $10 per user per month, with official pricing to come once Teams Premium is generally available. That's higher than the lowest-cost Google Workspace plan, which costs $6 per user per month, but less expensive than Zoom Pro ($15 per user per month).

The aforementioned avatars, part of Microsoft's Mesh platform, allow users to choose customized, animated versions of themselves to show up in Teams meetings, a bit like Zoom's virtual avatars. Through the Avatars app in the Microsoft Teams app store, users can design up to three avatars to use in a Teams meeting, with gestures to react to topics.

Microsoft's CVP of modern work, Jared Spataro, pitches avatars as a way to take a break from the camera but still have a physical presence in Teams meetings. "Our data shows that 51% of Gen Z envisions working in the metaverse in the next two years," he wrote in a blog post, a percentage that seems optimistically high if we're talking about VR and AR headsets, but depends on how one defines "metaverse." He continued: "You can create custom avatars to represent yourself."

Avatars are perhaps also a small play, albeit an unspoken one, at revitalizing a platform that's stagnated over the past year. Microsoft says that more than 270 million people actively use Teams monthly today, a number that hasn't budged since January as workers increasingly return to the office.

Avatars are available in standard Teams for private preview customers, while organizations interested in trying them out can sign up for updates on the Teams website if they're not already part of the Teams Technical Access Program, Microsoft says.

On the Teams Premium side, customers are getting meeting guides designed to help them pick the right meeting experience (e.g. a client call, brainstorming session or help desk support), with options that can be customized and managed by an IT team. Teams Premium users will also be able to brand the meeting experience with bespoke logos for the lobby and brand-specific backgrounds at the organization level.

The forthcoming Intelligent Recap feature in Microsoft Teams Premium, powered by machine learning. Image Credits: Microsoft

Among the more interesting new Teams Premium-specific additions are several that leverage AI. For example, there's Intelligent Recap, which attempts to capture highlights from Teams meetings, and an Intelligent Playback feature that automatically generates chapters for Teams meeting recordings. Personalized Insights highlights key moments in recordings, like when a person's name was mentioned, while Intelligent Search aims to make searching transcripts easier with suggested speakers.

Beyond all this, Teams Premium will deliver real-time translations for 40 spoken languages and the above-mentioned AI-generated tasks, which are automatically assigned to meeting attendees.

AI aside, Teams Premium will soon offer what Microsoft's calling Advanced Meeting Protection, a set of features to safeguard confidential meetings such as board meetings and undisclosed product launches. These span watermarking, limits on recording and sensitivity labels that automatically apply protections to meetings. Relatedly, new Advanced Webinars in Teams Premium provide options for a registration waitlist and manual approvals, automated reminder emails, a virtual green room for hosts and presenters and the ability to manage what attendees see.

Teams Premium will also introduce advanced virtual appointments, which are designed to help manage the end-to-end appointment experience for direct-to-consumer brands with pre-appointment text reminders, a branded lobby and post-appointment follow-ups. Organizations get both scheduled and on-demand appointments, a simplified view of all virtual appointments and pre-appointment chat capabilities to communicate with their customers.

On the backend, customers can view analytics like usage trends, a history of virtual appointments and no-shows and wait times with specific staff and departments.

Microsoft says that Teams Premium features will begin rolling out in December 2022 as part of a preview, with general availability coming in February 2023. The AI capabilities, including Intelligent Playback and Intelligent Recap, will arrive in the first half of 2023.


Pony.ai to test its autonomous vehicles in Tucson – The Arizona Republic

Posted: at 1:33 pm

A global autonomous driving technology company will open its first Arizona location in Tucson to test its vehicles.

Pony.ai, founded in Fremont, California, in 2016, announced last month its partnership with Pima Community College. The company will base its operations at the college's new Automotive Technology and Innovation Center on its downtown campus.

"Pony.ai's decision to select Tucson for their new autonomous passenger vehicle testing operations is further validation that Southern Arizona is an emerging player in the autonomous vehicle industry," Joe Snell, president and CEO of Sun Corridor Inc., said in a press release. Sun Corridor is an economic development agency in Tucson.

He also said Pony.ai joins a growing list of companies developing autonomous technologies that are looking to the region to expand or base their operations.

According to the press release, Pony.ai is the first company in Southern Arizona to launch passenger AV testing.


A spokesperson for Pony.ai told The Arizona Republic that people will see the company's cars driving around Tucson as part of its testing. The on-road testing of autonomous vehicles in Tucson will always have trained safety drivers behind the wheel.

Pony.ai said it has driven over 9.3 million real-world autonomous miles and is proud of its safety record.

The company said Tucson was chosen for its operations because of its growing importance for tech startups and smart city technology, and because the company already has strong partnerships with the City of Tucson and Pima Community College's new Automotive Technology & Innovation Center.

In the press release, Pima County Supervisor Sharon Bronson said that with Pony.ai's vision, under-resourced populations will have access to more reliable transportation for persons with disabilities.

According to a 2017 report published by the Ruderman Family Foundation, self-driving cars offer potential for reducing transportation obstacles for people with disabilities.

The report points to government transportation survey data from 2003 that found six million people with disabilities have difficulties getting the transportation they need.

A Pony.ai spokesperson said the company believes in a future where "autonomous vehicles not only make driving inherently safer, but also open up a world of safe and reliable transportation access and more mobility options for people with disabilities."

Coverage of southern Arizona on azcentral.com and in The Arizona Republic is funded by the nonprofit Report for America in association with The Republic.

Reach the reporter at sarah.lapidus@gannett.com.


Blueprint for an AI Bill of Rights – The White House

Posted: October 4, 2022 at 1:21 pm


Among the great challenges posed to democracy today is the use of technology, data, and automated systems in ways that threaten the rights of the American public. Too often, these tools are used to limit our opportunities and prevent our access to critical resources or services. These problems are well documented. In America and around the world, systems supposed to help with patient care have proven unsafe, ineffective, or biased. Algorithms used in hiring and credit decisions have been found to reflect and reproduce existing unwanted inequities or embed new harmful bias and discrimination. Unchecked social media data collection has been used to threaten people's opportunities, undermine their privacy, or pervasively track their activity, often without their knowledge or consent.

These outcomes are deeply harmful, but they are not inevitable. Automated systems have brought about extraordinary benefits, from technology that helps farmers grow food more efficiently and computers that predict storm paths, to algorithms that can identify diseases in patients. These tools now drive important decisions across sectors, while data is helping to revolutionize global industries. Fueled by the power of American innovation, these tools hold the potential to redefine every part of our society and make life better for everyone.

This important progress must not come at the price of civil rights or democratic values, foundational American principles that President Biden has affirmed as a cornerstone of his Administration. On his first day in office, the President ordered the full Federal government to work to root out inequity, embed fairness in decision-making processes, and affirmatively advance civil rights, equal opportunity, and racial justice in America.[i] The President has spoken forcefully about the urgent challenges posed to democracy today and has regularly called on people of conscience to act to preserve civil rights, including the right to privacy, which he has called "the basis for so many more rights that we have come to take for granted that are ingrained in the fabric of this country."[ii]

To advance President Biden's vision, the White House Office of Science and Technology Policy has identified five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats and uses technologies in ways that reinforce our highest values. Responding to the experiences of the American public, and informed by insights from researchers, technologists, advocates, journalists, and policymakers, this framework is accompanied by From Principles to Practice, a handbook for anyone seeking to incorporate these protections into policy and practice, including detailed steps toward actualizing these principles in the technological design process. These principles help provide guidance whenever automated systems can meaningfully impact the public's rights, opportunities, or access to critical needs.

Safe and Effective Systems

You should be protected from unsafe or ineffective systems. Automated systems should be developed with consultation from diverse communities, stakeholders, and domain experts to identify concerns, risks, and potential impacts of the system. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use, mitigation of unsafe outcomes including those beyond the intended use, and adherence to domain-specific standards. Outcomes of these protective measures should include the possibility of not deploying the system or removing a system from use. Automated systems should not be designed with an intent or reasonably foreseeable possibility of endangering your safety or the safety of your community. They should be designed to proactively protect you from harms stemming from unintended, yet foreseeable, uses or impacts of automated systems. You should be protected from inappropriate or irrelevant data use in the design, development, and deployment of automated systems, and from the compounded harm of its reuse. Independent evaluation and reporting that confirms that the system is safe and effective, including reporting of steps taken to mitigate potential harms, should be performed and the results made public whenever possible.

From Principles to Practice: Safe and Effective Systems

Algorithmic Discrimination Protections

You should not face discrimination by algorithms and systems should be used and designed in an equitable way. Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections. Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way. This protection should include proactive equity assessments as part of the system design, use of representative data and protection against proxies for demographic features, ensuring accessibility for people with disabilities in design and development, pre-deployment and ongoing disparity testing and mitigation, and clear organizational oversight. Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.

From Principles to Practice: Algorithmic Discrimination Protections

Data Privacy

You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used. You should be protected from violations of privacy through design choices that ensure such protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected. Designers, developers, and deployers of automated systems should seek your permission and respect your decisions regarding collection, use, access, transfer, and deletion of your data in appropriate ways and to the greatest extent possible; where not possible, alternative privacy by design safeguards should be used. Systems should not employ user experience and design decisions that obfuscate user choice or burden users with defaults that are privacy invasive. Consent should only be used to justify collection of data in cases where it can be appropriately and meaningfully given. Any consent requests should be brief, be understandable in plain language, and give you agency over data collection and the specific context of use; current hard-to-understand notice-and-choice practices for broad uses of data should be changed. Enhanced protections and restrictions for data and inferences related to sensitive domains, including health, work, education, criminal justice, and finance, and for data pertaining to youth should put you first. In sensitive domains, your data and related inferences should only be used for necessary functions, and you should be protected by ethical review and use prohibitions. You and your communities should be free from unchecked surveillance; surveillance technologies should be subject to heightened oversight that includes at least pre-deployment assessment of their potential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to limit rights, opportunities, or access. Whenever possible, you should have access to reporting that confirms your data decisions have been respected and provides an assessment of the potential impact of surveillance technologies on your rights, opportunities, or access.

From Principles to Practice: Data Privacy

Notice and Explanation

You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you. Designers, developers, and deployers of automated systems should provide generally accessible plain language documentation including clear descriptions of the overall system functioning and the role automation plays, notice that such systems are in use, the individual or organization responsible for the system, and explanations of outcomes that are clear, timely, and accessible. Such notice should be kept up-to-date and people impacted by the system should be notified of significant use case or key functionality changes. You should know how and why an outcome impacting you was determined by an automated system, including when the automated system is not the sole input determining the outcome. Automated systems should provide explanations that are technically valid, meaningful and useful to you and to any operators or others who need to understand the system, and calibrated to the level of risk based on the context. Reporting that includes summary information about these automated systems in plain language and assessments of the clarity and quality of the notice and explanations should be made public whenever possible.

From Principles to Practice: Notice and Explanation

Human Alternatives, Consideration, and Fallback

You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. You should be able to opt out from automated systems in favor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable expectations in a given context and with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts. In some cases, a human or other alternative may be required by law. You should have access to timely human consideration and remedy by a fallback and escalation process if an automated system fails, it produces an error, or you would like to appeal or contest its impacts on you. Human consideration and fallback should be accessible, equitable, effective, maintained, accompanied by appropriate operator training, and should not impose an unreasonable burden on the public. Automated systems with an intended use within sensitive domains, including, but not limited to, criminal justice, employment, education, and health, should additionally be tailored to the purpose, provide meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions. Reporting that includes a description of these human governance processes and assessment of their timeliness, accessibility, outcomes, and effectiveness should be made public whenever possible.

From Principles to Practice: Human Alternatives, Consideration, and Fallback

While many of the concerns addressed in this framework derive from the use of AI, the technical capabilities and specific definitions of such systems change with the speed of innovation, and the potential harms of their use occur even with less technologically sophisticated tools.

Thus, this framework uses a two-part test to determine what systems are in scope. This framework applies to (1) automated systems that (2) have the potential to meaningfully impact the American public's rights, opportunities, or access to critical resources or services. These rights, opportunities, and access to critical resources or services should be enjoyed equally and be fully protected, regardless of the changing role that automated systems may play in our lives.

This framework describes protections that should be applied with respect to all automated systems that have the potential to meaningfully impact individuals' or communities' exercise of:

Rights, Opportunities, or Access

Civil rights, civil liberties, and privacy, including freedom of speech, voting, and protections from discrimination, excessive punishment, unlawful surveillance, and violations of privacy and other freedoms in both public and private sector contexts;

Equal opportunities, including equitable access to education, housing, credit, employment, and other programs; or,

Access to critical resources or services, such as healthcare, financial services, safety, social services, non-deceptive information about goods and services, and government benefits.

A list of examples of automated systems for which these principles should be considered is provided in the Appendix. The Technical Companion, which follows, offers supportive guidance for any person or entity that creates, deploys, or oversees automated systems.

Considered together, the five principles and associated practices of the Blueprint for an AI Bill of Rights form an overlapping set of backstops against potential harms. This purposefully overlapping framework, when taken as a whole, forms a blueprint to help protect the public from harm. The measures taken to realize the vision set forward in this framework should be proportionate with the extent and nature of the harm, or risk of harm, to people's rights, opportunities, and access.

[i] The Executive Order on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. https://www.whitehouse.gov/briefing-room/presidential-actions/2021/01/20/executive-order-advancing-racial-equity-and-support-for-underserved-communities-through-the-federal-government/

[ii] The White House. Remarks by President Biden on the Supreme Court Decision to Overturn Roe v. Wade. Jun. 24, 2022. https://www.whitehouse.gov/briefing-room/speeches-remarks/2022/06/24/remarks-by-president-biden-on-the-supreme-court-decision-to-overturn-roe-v-wade/


Get ready for the next generation of AI – MIT Technology Review

Posted: at 1:21 pm

Researchers from Google also submitted a paper to the conference about their new model called DreamFusion, which generates 3D images based on text prompts. The 3D models can be viewed from any angle, the lighting can be changed, and the model can be plonked into any 3D environment.

Don't expect that you'll get to play with these models anytime soon. Meta isn't releasing Make-A-Video to the public yet. That's a good thing. Meta's model is trained using the same open-source image data set that was behind Stable Diffusion. The company says it filtered out toxic language and NSFW images, but that's no guarantee that it will have caught all the nuances of human unpleasantness when data sets consist of millions and millions of samples. And the company doesn't exactly have a stellar track record when it comes to curbing the harm caused by the systems it builds, to put it lightly.

The creators of Phenaki write in their paper that while the videos their model produces are not yet indistinguishable in quality from real ones, "it is within the realm of possibility, even today." The model's creators say that before releasing their model, they want to get a better understanding of data, prompts, and filtering outputs, and measure biases, in order to mitigate harms.

It's only going to become harder and harder to know what's real online, and video AI opens up a slew of unique dangers that audio and images don't, such as the prospect of turbo-charged deepfakes. Platforms like TikTok and Instagram are already warping our sense of reality through augmented facial filters. AI-generated video could be a powerful tool for misinformation, because people have a greater tendency to believe and share fake videos than fake audio and text versions of the same content, according to researchers at Penn State University.

In conclusion, we haven't come even close to figuring out what to do about the toxic elements of language models. We've only just started examining the harms around text-to-image AI systems. Video? Good luck with that.

The EU wants to put companies on the hook for harmful AI

The EU is creating new rules to make it easier to sue AI companies for harm. A new bill published last week, which is likely to become law in a couple of years, is part of a push from Europe to force AI developers not to release dangerous systems.

The bill, called the AI Liability Directive, will add teeth to the EU's AI Act, which is set to become law around a similar time. The AI Act would require extra checks for "high risk" uses of AI that have the most potential to harm people. This could include AI systems used for policing, recruitment, or health care.


To Improve AI Outcomes, Think About the Entire System – HBR.org Daily

Posted: at 1:21 pm

CURT NICKISCH: Welcome to the HBR IdeaCast from Harvard Business Review. I'm Curt Nickisch.

A shiny new piece of technology is not good enough on its own. It needs to be implemented at the right time, used in the right context, accepted in the right culture, and applied in the right way. In short, it needs to be part of the right system. And that's true for artificial intelligence too. AI can help individuals and teams make better predictions; combine that with judgment and you get better decisions. But those decisions have ripple effects on other parts of the system, ripple effects that can undermine the very prediction that was made.

Our guest today says that if organizations want to take artificial intelligence to the next level, they need to get better at coordinating optimal decisions over a wider network. Joshua Gans is a strategy professor at the University of Toronto's Rotman School of Management. He co-wrote the HBR article "From Prediction to Transformation," as well as the new book Power and Prediction: The Disruptive Economics of Artificial Intelligence.

Hi, Joshua.

JOSHUA GANS: Hi.

CURT NICKISCH: One big argument is that artificial intelligence has to do more for companies than just give data and insights. Is that like a big misconception that people have about AI? They're going to get insights out of their data and that's just not enough?

JOSHUA GANS: Yeah, so I think what happened some years ago, relatively recently, I guess, is that, of course, we started the hype about artificial intelligence. Businesses who are attuned to technological developments started asking whether this was going to be something that should concern them or that they could take advantage of. And the one thing that artificial intelligence, in its recent incarnation, required was data. Artificial intelligence, machine learning, and deep learning (the more recent stuff, not the stuff that you might see in movies) is really an advance in the statistics of prediction.

It allows you to get much more accurate predictions for dramatically lower cost. But in order to generate those predictions, you do need to have data of differing kinds. And I think one of the things that businesses asked themselves was, "Well, we have a lot of data. We've been collecting data on so and so for years; maybe actually we're well positioned to have a critical input into this new technology." That led to more investment to clean up that data and make use of it. But I think there were some challenges.

CURT NICKISCH: An example of this is, I don't know, a retailer who is trying to manage their inventory better so that they don't have as much in stock, but have just enough that when somebody orders it, they have it close by or in the right location.

JOSHUA GANS: Yes. So prediction of demand is a common one, and it was one that we would've thought would've been very, very ubiquitous. What we found is that even for things like inventory, when you try to predict demand better, you have to say, "Well, what am I going to do with that prediction?" What I'm going to do with that prediction is, if I anticipate there is going to be a surge in demand for one of my products, what I need to do is make sure I've got that product on hand. Well, that's easier said than done. In this world of supply constraints, there may not be simple ways of doing that. We have very tight supply chains.

And so what might happen is, you might want to adopt AI for this thing, prediction, where there's some clear uncertainty, but instead you realize that you can't fully take advantage of it, because that requires coordinating everything all the way down the line. It's something that we sometimes refer to as the AI bullwhip effect: employing AI somewhere has reverberations down the line. And if you can't actually get the rest of the system to come along with you, you might not be getting much value out of AI in the first place.
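
To make the bullwhip point concrete, here is a toy sketch in Python (all numbers invented, not from the interview): a sharper demand forecast at the retail end recovers only part of the lost sales when an upstream supply cap keeps the rest of the system from acting on it.

```python
# Toy illustration of the "AI bullwhip" idea: a better demand forecast
# creates limited value if upstream supply cannot respond. Numbers invented.

def lost_sales(forecast: float, actual_demand: float, supply_cap: float) -> float:
    """Units of demand that go unmet when orders follow the forecast
    but deliveries are limited by an upstream supply cap."""
    order = min(forecast, supply_cap)  # can't stock more than supply allows
    return max(actual_demand - order, 0)

actual = 120       # true demand for the period
supply_cap = 90    # upstream constraint

print(lost_sales(80, actual, supply_cap))   # naive rule of thumb: 40 units lost
print(lost_sales(118, actual, supply_cap))  # near-perfect AI forecast: still 30 lost
```

Without the supply cap, the near-perfect forecast would cut lost sales to almost zero; with it, most of the prediction's value never materializes.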

CURT NICKISCH: Have you seen a lot of places disappointed in what they've implemented?

JOSHUA GANS: I think relative to our expectations in 2018, the adoption of artificial intelligence beyond the biggest tech firms has been pretty slow. There was a lot of optimism that it could be used to make certain tasks more efficient, and I think it has to some degree. But in terms of its true transformational impact, well, it turns out that just adding a dollop of AI isn't going to do it for you. AI is something that gets leveraged within the context of a system. Yes, in some systems you can just improve your prediction and that system operates better.

In other systems, things are sort of divided, a bit modular with one another. So you can put in some AI in one part of the organization, and that part does better and the rest of it goes merrily along. But we suspect that, in fact, the biggest transformations from AI are going to require a system-wide adjustment to exploit it. And you're not going to exploit that just for a trial. You're going to exploit that when you really think AI is going to give you that leverage, because there's a lot of work involved, obviously.

CURT NICKISCH: In your article you wrote about sailing teams in the America's Cup, what they typically need to do to win, and you tell a story of how one team used AI to really excel.

JOSHUA GANS: So the America's Cup is deep in my bones, because I'm an Australian and I grew up in the 1980s, and it was a significant affair. And what I always reflect on about that, when Australia II managed to win the America's Cup, the first non-American team to do so in over 100 years: it was because of a more radical boat design, the so-called winged keel. And there was a lot of discussion of that, which was a very interesting way for Australians to win a sporting event, which is technological innovation as opposed to better training and other things that we were used to.

So that heralded an era where the America's Cup, and ocean racing like this, started to have greater technological inputs. So when it came to the application of artificial intelligence: artificial intelligence has the ability to look at conditions, look at behaviors, and predict better performance, and then to say, "Well, if we change the design this way, or that way, or something completely different, what might be the likely change in performance?" Because it can basically handle all those weird edge cases that people don't normally think of.

And moreover, it can do it in the context of providing simulations. Now initially, the production process for coming up with new designs was to put in a new design and then put it into a simulation where you had people operate the sailing boat as they would. One problem with that is, of course, that every iteration takes time. Every iteration takes somebody going out and running several hours, or maybe even more, of simulation.

CURT NICKISCH: Can't do that at night, yeah.

JOSHUA GANS: Yeah, you can't do it. So what was interesting there: one of the things that had happened about the same time was that we'd had these advances in the playing of games like Go. Artificial intelligence initially became the world champion at Go by looking at all the games that everybody had ever played, and using that to predict moves and predict winning strategies, and come up with better winning strategies. Soon after that, they wondered, "Well, what if we forget the people altogether and just have these AIs play even more games against each other, and not be limited to the total corpus of recorded games?"

So Team New Zealand saw that and said, "Well, maybe we can just program in responses of people, automate them, not try to get too fancy about it, not think about what they're seeing, et cetera, and run a heap more simulations." As a result, they could iterate even faster. That may lead to a system where you think, "Oh, well, then you're not going to need a person to run that sailing boat at all." But actually, this is typical. The one place where systems seem to be starting to really work in AI is in innovation itself.

CURT NICKISCH: Well, sorting through complicated problems, thinking about how one decision will affect another, that sounds a lot like what people are supposed to do at work, but a lot of organizations have this capability for AI in one place or with one data science team. Where does this need to change?

JOSHUA GANS: Think about what happens when you are dealing with a lot of uncertainty and you can't predict it. One of our favorite examples is just the decision whether you carry an umbrella or not. If you have no forecast of the weather, well, it depends on your own preferences. How much do you mind getting wet versus how much you'd mind carrying an umbrella? It's essentially the choice. And so you'll have a rule for it. Even if I give you a prediction of the rain, that might only slightly modify your rules. Let's say, if there's zero chance of rain, sure, I wouldn't have taken an umbrella, but I personally think even 20% would be worrisome, so I would. So people have some specific rules. So you think about that now in the context of business, when we're not dealing just with whether it rains or not, but a whole heap of uncertainty.
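
As a worked version of that umbrella rule, here is a minimal Python sketch; the costs, and the implied 20% break-even threshold, are illustrative assumptions rather than figures from the interview.

```python
# Minimal decision-rule sketch: carry the umbrella when the expected cost
# of getting wet exceeds the certain cost of carrying it. Costs are invented.

def carry_umbrella(p_rain: float, cost_wet: float = 10.0, cost_carry: float = 2.0) -> bool:
    return p_rain * cost_wet > cost_carry

print(carry_umbrella(0.0))   # False: zero chance of rain, leave it at home
print(carry_umbrella(0.25))  # True: 0.25 * 10 = 2.5 > 2, worth carrying
```

With these defaults the break-even probability is cost_carry / cost_wet = 20%, which is where a personal rule like "even 20% would be worrisome" comes from.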

Well, if you've been unable to predict that uncertainty, you've been unable to adjust your organization to it, at least before the fact. And so a good way to deal with that is to do what we do: develop rules. So we take what might have been a decision, something where we'd say, "Oh, if we think this is going to happen, do X. If we think something else is going to happen, do Y." And we say, "Well, we don't know what's going to happen, so we just choose X or Y." And we put that into our organization. Sometimes we come up with whole standard operating procedures where people have thought very deeply about what the best guide is for people when you just can't react to everything that's going on.

And so those are the successful organizations. Now, I am an AI salesman coming in and saying, "Oh, you need to adopt AI." Well, what does that mean? What that means is that we're going to give you some predictions of some of these things you're missing, and now you can make a decision; you can react to it. That's got to be a better thing to do. And you're saying to yourself, "I don't have any decisions. We've got all these rules." And a few years pass, and with some employee turnover, you might not even know. You might know the rules work, but you might not realize that the rules were a reaction to uncertainty and were made so you didn't have to think about the uncertainty.

So it's really hard for you to sell something to an organization when the organization has constructed itself not to realize that it needs it or could use it, or even worse, to realize that and realize it's going to have to change everything, which is a little scary. So that's where we see disruption coming in: artificial intelligence requiring organizations to break out of things that they were doing. Now, some large organizations can realize that, or maybe they've got more flexibility and they can integrate it, but typically, that is a recipe for new entrants, who are not starting off with "Oh, we are doing these rules," but starting off with some other basis, and are going to use AI to come in and do things better.

CURT NICKISCH: When you say new entrants, are you talking about competitors?

JOSHUA GANS: Yeah, competitors, startups, things like that. Whereas, if you get a startup firm, well, they don't have any legacy. They don't have to change how they're doing things. They're not doing anything. So they're building right from the start, from a green field essentially. And for these sorts of innovations that require a brand new system, it's easier to start from scratch in some regards. So that's where that competitive tension comes in.

CURT NICKISCH: Yeah. So you're saying that it needs to be done differently going forward, but a lot of organizations just aren't prepared to do that?

JOSHUA GANS: Yeah, I mean, I think there is an appetite. I mean, there's enough business school and HBR reading to know that these things are an issue. So CEOs will look at it and say, "This has the potential to disrupt. Maybe I should do something."

CURT NICKISCH: And I'm just curious, what should that CEO do? If they're at an organization and you've got some AI technology, but it's in a team, in one place, or in a silo, and they're improving decisions iteratively but not really realizing the full system power of implementing AI, what should that CEO do?

JOSHUA GANS: This is where they earn their money. This is a nasty, nasty problem. All the things are pushing towards doing nothing, or waiting and seeing, but changing your system is going to take time. Really, what you want to have in an organization is, you want to make it easier on yourself by having organizational memory of why you are where you are. So remember I said before: you have rules and then you forget why you had the rule.

That's going to be a problem. So what you want to do is design an organization such that that memory is being filtered through quite regularly. And moreover, people have some flexibility, so that when you come to do these organizational redesigns, it's not as painful for everybody. But that's a tension, because you are betting on the long term, on something potential, and you're going to sacrifice something now by preparing for it. That's essentially the real dilemma of disruption.

CURT NICKISCH: You've given executives and leaders an out here, to know that this is hard. But what are some of the systems that companies should think about changing to prepare for this?

JOSHUA GANS: So one of the things we try to encourage organizations to do when we sit down to talk about that is to go through an exercise where they can identify the big uncertainty they are facing, and what that is. So we might sit down with a hospital, and not even a hospital system, just a hospital: "What are the big uncertainties?" And there's all sorts of things. What are the new techniques we're going to put in? What are the costs of getting doctors, nurses, and so on? Oh yeah, how are we going to manage capacity? How are we going to make sure we've got enough beds for the demands in the local community? I'm like, "Well, that's an interesting one. Okay, why are you having trouble with that?"

Well, one reason is things like COVID. Okay, of course. But the other is that the population changes, and we build a hospital but we can't change it very often. And I'm saying, "Well, that's interesting. You're talking about it in terms of people and you're uncertain about the number of people. What about the length of their stay?" And they come back and say, "Oh, no, no, that's standard. If you have this procedure, you'll be there for that long, et cetera, et cetera." I'm like, "Ah, there's a word I'm going to clue in on: standard. Why is that standard?" Well, if somebody gets appendicitis, you need to keep them there a few days to make sure that they don't get an infection and secondary things, and other things like that. And so it's a standard thing.

Or some other procedures, which might be five days, or a week, or what have you, because we've got to keep them under observation; complications occur. And I'm like, "Oh, so they're sitting in the bed, waiting for that information, and you are waiting for that information. What if you actually, at the time of the operation, had enough information that you could make a really great prediction about how out of the woods somebody is or not?" Ah, that changes everything. And now we go through the full experiment: "Well, let's just go to the extreme and imagine you could perfectly predict that." And all of a sudden it'd be, "Wow, we'd have a lot more hospital space all of a sudden. In fact, most of our people are sitting in bed waiting for stuff, if only we had this knowledge."

And I said, "Well, what if you had some of this knowledge before they came into the hospital? What if you were collecting data on patients in the population before, so that when they get to the hospital, you're not reacting and trying to work out what they have, but you have a good idea about what's going on?" I'm like, "Again." And that gets you down to this one variable, which is major, which is capacity in the hospital. And it's telling you that all of a sudden your issue is not that you're going to be running up against capacity constraints; it's that if you got this AI magically tomorrow, you'd have a lot of spare capacity.
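
The back-of-the-envelope arithmetic behind that thought experiment is simple. Here is a toy Python sketch with invented numbers (the interview does not quantify the hospital):

```python
# Toy capacity arithmetic: if part of each stay is spent purely waiting
# for information about complications, perfect prediction shortens stays
# and frees beds. All numbers are invented for illustration.
beds = 300
avg_stay_days = 5.0        # assumed current average length of stay
observation_days = 2.0     # assumed portion spent purely under observation

admissions_per_day = beds / avg_stay_days    # 60 patients/day at full occupancy
new_stay = avg_stay_days - observation_days  # 3 days with perfect prediction
beds_needed = admissions_per_day * new_stay  # 180 beds for the same patient flow

print(f"Spare beds freed: {beds - beds_needed:.0f}")  # 120 of 300 beds
```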

CURT NICKISCH: Right. But it sounds complicated.

JOSHUA GANS: It is complicated, and I've glossed over a lot to get there, but that's the sort of thing a CEO's going to have to go through: thinking through those sorts of scenarios and really trying to understand one or two things that, if they could develop AI for them, would change everything. Because that's the real worry. Developing AI for some of the other things on the fringes is not going to be an existential threat or even a major opportunity, but developing AI that's going to turn a business around, change how you think about major decisions like capital, or expansions, and stuff like that, that's a whole other matter.

CURT NICKISCH: System change takes time. Is it a danger if the ability to change the system takes longer than it does for the technology to improve?

JOSHUA GANS: Is it a danger? I don't know. I think the technology will reach technical improvements at a much faster rate than the systems will change for it. This is not unprecedented. In electricity, Thomas Edison lit up the streets of a suburb of New York, and it was 40 years before more than half the country had electricity going to their factories and to their houses. This stuff takes time. Even with electricity, it did lead to a transformation of manufacturing and other businesses, and that didn't happen for decades.

CURT NICKISCH: What's your recommendation for a manager or somebody in a company who feels like they need to be doing more, but the organization isn't, and they want to spur some change?

JOSHUA GANS: One of the things that we've found with thinking about changes in systems is that they very rarely occur without changes in power. There are winners and there are losers. We saw this in the taxi cab industry when ride-sharing came into play, and ride-sharing came into play because people had mobile devices, and so any driver would have the locational and navigational ability of the most experienced taxi driver. And the power change that occurred there was power to individual drivers and power away from taxi drivers who previously had something that was more unique. This sort of thing is likely to occur within organizations as well.

Now, sometimes we talk a lot about automation just replacing jobs and things like that; all these changes tend to be a bit more subtle. But one of the challenges of managing that change is understanding where power is changing and where you're going to get resistance from. Broadly speaking, that just means being a good manager. That means understanding people's perspectives and points of view; with transformational things, that is just as important as with day-to-day things, and you just have to have a plan for it. Some of that plan may be that you decide to ignore or cast off some of the more resistant parts, but obviously the potential opportunity is to see if you can co-opt them.

CURT NICKISCH: Joshua, thanks so much for coming on the show to talk about this.

JOSHUA GANS: Thank you.

CURT NICKISCH: That's Joshua Gans. He is a professor at the University of Toronto's Rotman School of Management, the chief economist at the Creative Destruction Lab, and a co-author of the HBR article "From Prediction to Transformation." He also co-wrote the new book Power and Prediction.

If you got something from today's episode, we have more podcasts to help you manage your team, manage organizations, and manage your career. Find them at hbr.org/podcast or search HBR in Apple Podcasts, Spotify, or wherever you listen.

This episode was produced by Mary Dooe. We get technical help from Rob Eckhardt. Our audio product manager is Ian Fox, and Hannah Bates is our audio production assistant. Thanks for listening to the HBR IdeaCast. I'm Curt Nickisch.


WHO and partners launch world’s most extensive freely accessible AI health worker – World Health Organization

Posted: at 1:21 pm

The World Health Organization, with support from the Qatar Ministry of Health, today launched the AI-powered WHO Digital Health Worker, Florence version 2.0, offering an innovative and interactive platform for sharing information on a myriad of health topics in seven languages, at the World Innovation Summit for Health (WISH) in Qatar.

Florence can share advice on mental health, give tips to destress, provide guidance on how to eat right, be more active, and quit tobacco and e-cigarettes. She can also offer information on COVID-19 vaccines and more. Florence 2.0 is now available in English with Arabic, French, Spanish, Chinese, Hindi and Russian to follow.

Florence has helped fight misinformation around COVID-19 since the beginning of the pandemic. The pandemic has had a significant effect on mental health; it is estimated that 1 in every 8 people in the world lives with a mental disorder. Topics she covers, such as tobacco use and unhealthy diet, kill 16 million people every year, while physical inactivity kills an estimated 830 000. These deaths are due to diseases like cancer, heart disease, lung disease, and diabetes that can be prevented and controlled with the right support.

"Digital technology plays a critical role in helping people worldwide lead healthier lives," said Andy Pattison, WHO's Team Lead for Digital Channels. "The AI health worker Florence is a shining example of the potential to harness technology to promote and protect people's physical and mental health. At WISH, we aim to meet with visionary partners to continue to improve this cutting-edge technology. AI can help fill gaps in health information that exist in many communities around the world."

"We are pleased to partner with the WHO for the development of Florence and are very excited about the opportunities this technology can offer to raise awareness of key health issues," said Dr Yousuf Al Maslamani, Official Healthcare Spokesperson for the FIFA World Cup Qatar 2022, Ministry of Public Health.

"We know that providing advice on Florence's key health topics, including mental health, nutrition and tobacco cessation, is an important tool in our commitment to support people to make healthy lifestyle choices," added Dr. Al Maslamani.

At the WISH conference, WHO released the beta version of Florence 2.0 to interact with scientists, public health organizations, entrepreneurs, and policy-makers and plans to continue to develop the digital health worker to help meet major health issues facing the world today.

"We are pleased to have Florence 2.0 launched at the WISH conference. This is a place for global actors to come together to find solutions for public health. WHO is demonstrating incredible innovation leadership by using groundbreaking empathetic AI," said Nick Bradshaw, Director of Partnerships and Outreach at WISH.

The digital health worker is a prominent feature of the Sport for Health partnership between WHO and the Qatar Ministry of Public Health, which has been established to help make this year's FIFA World Cup Qatar 2022 a beacon for health and safety.

The project is supported by technology company Soul Machines, which brings avatars to life in the form of autonomously animated Digital People. "Through this collaboration, we have created a personality for the frontline responder that is empathetic, informative, and understanding," says Greg Cross, CEO and Co-Founder of Soul Machines. "Our Digital People operate and respond in real time, providing users with a unique and emotionally engaging experience. We look forward to continuing our work on Florence as we aim to positively reshape and transform the health-care industry."


Google Will Let Healthcare Organizations Use Its AI To Analyze And Store X-Rays – Forbes

Posted: at 1:21 pm

New tools from Google's cloud unit will help healthcare organizations analyze and store medical images.

Google on Tuesday announced a new set of artificial intelligence tools aimed at letting healthcare organizations use the search giant's software and servers to read, store and label X-rays, MRIs and other medical imaging.

The tools, from Google's cloud unit, allow hospitals and medical companies to search through imaging metadata or develop software to quickly analyze images for diagnoses. Called the Medical Imaging Suite, the tools can also help healthcare professionals automatically annotate medical images and build machine learning models for research.
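
The article does not include implementation details, but DICOM stores in Google's Cloud Healthcare API are exposed through the standard DICOMweb interface, so a metadata search might look roughly like the following sketch. The project, location, dataset and store names are placeholders, and the query parameters are illustrative assumptions.

```python
# Hypothetical sketch: searching study metadata in a Cloud Healthcare API
# DICOM store via the standard DICOMweb (QIDO-RS) search endpoint.
import google.auth
import google.auth.transport.requests
import requests

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(google.auth.transport.requests.Request())

base = ("https://healthcare.googleapis.com/v1/projects/my-project/"
        "locations/us-central1/datasets/my-dataset/"
        "dicomStores/my-store/dicomWeb")

# Find studies whose modality is CR (computed radiography, i.e. X-ray)
resp = requests.get(
    f"{base}/studies",
    headers={"Authorization": f"Bearer {credentials.token}",
             "Accept": "application/dicom+json"},
    params={"ModalitiesInStudy": "CR"},
)
resp.raise_for_status()
for study in resp.json():
    print(study["0020000D"]["Value"][0])  # 0020000D = Study Instance UID tag
```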

"With the advancements in medical imaging technology, there's been an increase in the size and complexity of these images," Alissa Hsu Lynch, Google Cloud's global lead for health tech strategy and solutions, said in an interview. "We know that AI can enable faster, more accurate diagnosis and therefore help improve productivity for healthcare workers."

Based on Google's other forays into healthcare, privacy advocates may raise concerns that the tech giant, which makes the majority of its $257 billion annual revenue from personalized ads based on user data, would use patient information to feed its vast advertising machine.

Lynch says Google doesn't have any access to patients' protected health information, and none of the data from the service would be used for the company's advertising efforts. Google claims the service is compliant with the Health Insurance Portability and Accountability Act, or HIPAA, a federal law that regulates the use of patient data.

The tech giant is working with a handful of medical organizations as early partners for the imaging software. One partner, a company called Hologic, is using the Google suite for cloud storage, as well as developing tech to help improve cervical cancer diagnostics. Another partner called Hackensack Meridian Health, a network of healthcare providers in New Jersey, is using the tools to scrub identifying information from millions of gigabytes of X-rays. The company will also use the software to help build an algorithm for predicting the metastasis of prostate cancer.

Google's software tools will help healthcare organizations to view and search through imaging data.

The new tools come as Google and its parent company Alphabet invest more heavily in health-related initiatives. In the early days of the pandemic, Alphabet's Verily unit, which focuses on life sciences and med tech, partnered with the Trump administration to provide online screening for Covid tests. Google also partnered with Apple to create a system for contact tracing on smartphones. Last year the company dissolved its Google Health unit, restructuring its health efforts so they weren't housed in one central division.

Google has stirred controversy in the past for its healthcare efforts. In 2019, Google drew blowback for an initiative called Project Nightingale, in which the company partnered with Ascension, the second-largest healthcare system in the country, to collect the personal health information of millions of people. The data included lab results, diagnoses and hospitalization records, including names and birthdays, according to the Wall Street Journal, though Google at the time said the project complied with federal law. Google had reportedly been using the data in part to design new software.

Two years earlier, the tech giant partnered with the National Institutes of Health to publicly post more than 100,000 images of human chest X-rays. The goal there was to showcase the company's cloud storage capabilities and make the data available to researchers. But two days before the images were to be posted, the NIH told Google its software had not properly removed data from the X-rays that could identify patients, according to The Washington Post, which would potentially violate federal law. In response, Google canceled its project with the NIH.

Asked about Google's past fumble with de-identifying information, Sameer Sethi, SVP and chief data and analytics officer at Hackensack Meridian Health, says the company has safeguards in place to prevent such mishaps.

"You never actually trust the tool," he told Forbes. He adds that Hackensack Meridian Health works with a third-party company to certify that the images are de-identified, even after using Google's tools. "We will not bring anything to use without expert determination."

Read the original here:

Google Will Let Healthcare Organizations Use Its AI To Analyze And Store X-Rays - Forbes

Posted in Ai | Comments Off on Google Will Let Healthcare Organizations Use Its AI To Analyze And Store X-Rays – Forbes

Trustworthy AI is now within reach – VentureBeat

Posted: at 1:21 pm


The artificial intelligence (AI) boom began in earnest in 2012 when Alex Krizhevsky, in collaboration with Ilya Sutskever and Geoffrey Hinton (who was Krizhevsky's Ph.D. advisor), created AlexNet, which then won the ImageNet Large Scale Visual Recognition Challenge. The goal of that annual competition, first run in 2010, was to classify the 1.3 million high-resolution photographs in the ImageNet training set into 1,000 different classes. In other words, to correctly tell, say, a dog from a cat.

AlexNet consisted of a deep convolutional neural network and was the first entrant to break 75% top-5 accuracy in the competition. Perhaps more impressively, it cut the previous best error rate on ImageNet visual recognition by around ten percentage points, to 15.3%. It also established, arguably for the first time, that deep learning had substantive real-world capabilities. Among other applications, this paved the way for the visual recognition systems used across industries from agriculture to manufacturing.
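
For readers who want to see what that kind of classification looks like in practice, here is a minimal inference sketch using the modern torchvision port of AlexNet; the image filename is a placeholder.

```python
import torch
from PIL import Image
from torchvision import models

# Load AlexNet with ImageNet-pretrained weights (torchvision >= 0.13 API).
weights = models.AlexNet_Weights.IMAGENET1K_V1
model = models.alexnet(weights=weights).eval()
preprocess = weights.transforms()   # the resize/crop/normalise used in training

img = Image.open("dog.jpg")                 # placeholder filename
batch = preprocess(img).unsqueeze(0)        # shape: [1, 3, 224, 224]

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

top5 = probs.topk(5)
for p, idx in zip(top5.values[0], top5.indices[0]):
    print(weights.meta["categories"][int(idx)], f"{float(p):.3f}")
```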

This deep learning breakthrough triggered accelerated use of AI. But beyond the unquestioned genius of these and other early practitioners of deep learning, it was the confluence of several major technology trends that boosted AI. The internet, mobile phones and social media led to a data explosion, which is the fuel for AI. Computing continued its metronome-like Moore's Law advance of doubling performance about every 18 months, enabling the processing of vast amounts of data. The cloud provided ready access to data from anywhere and lowered the cost of large-scale computing. Software advances, largely open source, led to a flourishing of AI code libraries available to anyone.

All of this led to an exponential increase in AI adoption and a gold rush mentality. Research from management consulting firm PwC shows global GDP could be up to 14% higher in 2030 as a result of AI, the equivalent of an additional $15.7 trillion, making it the biggest commercial opportunity in today's economy. According to Statista, global AI startup company funding has grown exponentially from $670 million in 2011 to $36 billion in 2020. Tortoise Intelligence reported that this more than doubled to $77 billion in 2021. In the past year alone, there have been over 50 million online mentions of AI in news and social media.


All of that is indicative of the groundswell of AI development and implementation. Already present in many consumer applications, AI is now gaining broad adoption in the enterprise. According to Gartner, 75% of businesses are expected to shift from piloting to operationalizing AI by 2024.

It is not only deep learning that is driving this. Deep learning is a subset of machine learning (ML), some of which has existed for several decades. There are a large variety of ML algorithms in use, for everything from email spam filters to predictive maintenance for industrial and military equipment. ML has benefitted from the same technology trends that are driving AI development and adoption.

With a rush to adoption have come some notable missteps. AI systems are essentially pattern recognition technologies that scour existing data, most of which has been collected over many years. If the datasets upon which AI acts contain biased data, the output from the algorithms can reflect that bias. As a consequence, there have been chatbots that have gone terribly awry, hiring systems that reinforce gender stereotypes, inaccurate and possibly biased facial recognition systems that lead to wrongful arrests, and historical bias that leads to loan rejections.

These and other problems have prompted legitimate concerns and led to the field of AI Ethics. There is a clear need for Responsible AI, which is essentially a quest to do no harm with AI algorithms. To do this requires that bias be eliminated from the datasets or otherwise mitigated. It is also possible that bias is unconsciously introduced into the algorithms themselves by those who develop them and needs to be identified and countered. And it requires that the operation of AI systems be explainable so that there is transparency in how the insights and decisions are reached.
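
As a concrete illustration of what "identifying bias" can mean in code, here is a minimal sketch of one common first check, the demographic parity gap; the data and names are invented for illustration.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in favourable-outcome rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Toy hiring data: 1 = approved. Group 0 is approved far more often.
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 0, 0]
groups    = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(demographic_parity_gap(decisions, groups))  # 0.8 -> a large gap
```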

The goal of these endeavors is to ensure that AI systems not only do no specific harm but are trustworthy. As Forrester Research notes in a recent blog, this is critical for business, as it cannot afford to ignore the ethical debt that AI technology has accrued.

Responsible AI is not easy, but it is critically important to the future of the AI industry. New applications where this could be an issue are coming online all the time, such as determining which U.S. Army candidates deserve promotion. Recognizing that the problem exists has focused considerable effort over the last few years on developing corrective measures.

There is good news on this front, as techniques and tools have been developed to mitigate algorithm bias and other problems at different points in AI development and implementation, whether in the original design, in deployment or after it is in production. These capabilities are leading to the emerging field of algorithmic auditing and assurance, which will build trust in AI systems.

Besides bias, there are other issues in building Trustworthy AI, including the ability to explain how an algorithm reaches its recommendations, whether the results are replicable and accurate, ensuring privacy and data protection, and securing against adversarial attack. The auditing and assurance field will address all of these issues, as found in research done by Infosys and University College London. The purpose is to provide standards, practical codes and regulations to assure users of the safety and legality of algorithmic systems.

There are four primary activities involved; a short sketch of how they might be recorded in code follows the list.

Development: An audit will have to account for the process of development and documentation of an algorithmic system.

Assessment: An audit will have to evaluate an algorithmic systems behaviors and capacities.

Mitigation: An audit will have to recommend service and improvement processes for addressing high-risk features of algorithmic systems.

Assurance: An audit will be aimed at providing a formal declaration that an algorithmic system conforms to a defined set of standards, codes of practice or regulations.
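
As promised above, here is a minimal sketch of how an auditor might record those four activities as a data structure; the field names are my own assumptions, not a standard schema.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class AlgorithmAudit:
    """One audit record, mirroring the four activities above."""
    system_name: str
    development_docs_reviewed: bool = False                           # Development
    behaviours_assessed: list[str] = field(default_factory=list)      # Assessment
    mitigations_recommended: list[str] = field(default_factory=list)  # Mitigation
    assurance_statement: str | None = None                            # Assurance

audit = AlgorithmAudit("loan-approval-model-v3")
audit.development_docs_reviewed = True
audit.behaviours_assessed.append("approval rates differ sharply by postcode")
audit.mitigations_recommended.append("reweight training data and re-test quarterly")
audit.assurance_statement = "Conforms to internal responsible-AI code v1.2"
```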

Ideally, a business would include these concepts from the beginning of an AI project to protect itself and its customers. If this approach is widely implemented, the result would be an ecosystem of Trustworthy AI and Responsible AI, in which algorithmic systems are properly appraised, all plausible measures for reducing or eliminating risk are undertaken, and users, providers and third parties are assured of the systems' safety.

Only a decade ago, AI was practiced mostly by a small group of academics. The development and adoption of these technologies has since expanded dramatically. For all the considerable advances, there have been shortcomings. Many of these can be addressed and resolved with algorithmic auditing and assurance. Given the wild ride of AI over the last 10 years, that would be no small accomplishment.

Bali (Balakrishna) DR is senior vice president and service offering head for ECS, AI and automation at Infosys.


Continued here:

Trustworthy AI is now within reach - VentureBeat

Posted in Ai | Comments Off on Trustworthy AI is now within reach – VentureBeat

How AI can lower costs and increase efficiency in content creation – IBC365

Posted: at 1:21 pm

"Video content is a very powerful source of information, essential for analysis," said TVCONAL founder Masoumeh Izadi.

Her company has developed an AI and machine learning powered platform that rapidly analyses sports footage, with a current focus on cricket.

"Data plays a central role in sports," she said. "Put together to create game semantics and metadata labels, it would make your content searchable and customisable, and you can extract value and monetise it."


TVCONAL has analysed 168 cricket matches over an eight-month period. The end result in each case is a content platform that can be searched for game events such as specific batting or bowling techniques.

"Each match needs to be sliced into what we call units of analysis, which is different in every sport," said Izadi. "For example, this could be shots or pitches, or a delivery or a stroke, depending on the sport… At the heart of this learning model is learning to slice."

The platform uses machine learning and computer vision algorithms to recognise these game events based solely on the content in the video itself.

"The solution we are proposing is to use video analytics, which at the moment is very, very advanced, to the point you can understand and discover what is in the content. In sport content, this would mean identifying and locating different kinds of objects, being able to track those objects, detect players, the type of player, track their movements, etc, whether that's their position or the key points of their body, just from the content of the video."

The AI recognises batting techniques based on the movements of the player, or a six by analysing when the ball crosses the boundary, with upwards of 95% accuracy. TVCONAL has developed the system on recordings of top-tier multi-cam cricket productions, basic three-camera shoots and single-camera recordings, where accuracy with a more limited analysis model can reach above 99%.
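
The article doesn't publish TVCONAL's pipeline, but a heavily simplified sketch of the slice-then-classify loop might look like the following; the fixed-length windows and the stub classifier are stand-ins for the learned components described above.

```python
import cv2

def iter_clips(video_path: str, clip_len: int = 90):
    """Slice a match video into fixed-length candidate units of analysis.
    The real system *learns* where to slice; fixed windows are a
    simplification for illustration."""
    cap = cv2.VideoCapture(video_path)
    clip = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        clip.append(frame)
        if len(clip) == clip_len:
            yield clip
            clip = []
    cap.release()

def classify_unit(clip):
    """Stub standing in for a trained video classifier; a real model would
    run object detection, player tracking and pose estimation on the frames."""
    return {"event": "delivery", "shot": "cover drive", "confidence": 0.95}

for clip in iter_clips("match.mp4"):            # placeholder filename
    label = classify_unit(clip)
    if label["confidence"] >= 0.9:
        print(label["event"], "-", label["shot"])
```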

This form of AI-powered video analysis takes much of the cost and effort out of content tagging and categorisation.

"Even in top-tier productions this is very time-consuming, labour-intensive, and prone to human error. Plus, for archive content accumulated over the years, it is a nightmare to go through," said Izadi.

There are numerous uses for this form of AI content analysis, and TVCONAL is currently focused on applications within the sport itself. It is in discussions with six pro cricket teams in SE Asia and 10 cricket centres of excellence in Malaysia about the use of its platform as a training tool.

Izadi calls it a move to democratise sport in the digital age.

"[AI technology can] give any sport team the privilege the big sport teams have. Saving cost and saving time from productions, empowering the production team in their operation, and unlocking their ability to produce more engaging, more interesting sport content."

TVCONAL used more than 20,000 samples to train its cricket machine learning algorithms, and is planning to branch out into other sports in future. "We're looking into racquet sports, tennis and table tennis," said Izadi.

She also demonstrated experimental auto-generated commentary during IBC2022, building upon game event analysis with automatically generated sports presenter-style speech to match the on-screen action.

However, the true cutting edge of speech synthesis was demonstrated by Kiyoshi Kurihara from NHK, the Japan Broadcasting Corporation.

NHK currently uses text-to-speech generation to accompany news reports on the NHK 1 TV channel, live sports commentary online and for weather forecasts on local radio stations. It provides a professional line read, through an AI anchor, but the actual input is typed or automatically generated rather than spoken by an actual person.

Kiyoshi Kurihara explained the process as breaking down the written words into graphemes, recognising their respective sounds, or phonemes, and then converting these phonemes into a waveform: the audio clip of generated speech that can be broadcast over the airwaves or synced to a video.
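
To make that grapheme-phoneme-waveform chain concrete, here is a toy, self-contained sketch in which a lookup table stands in for a learned grapheme-to-phoneme model and sine tones stand in for a neural waveform model; every mapping here is invented for illustration.

```python
import wave
import numpy as np

# Toy grapheme-to-phoneme table; real systems use a learned G2P model.
G2P = {"h": "HH", "i": "IY"}
# Toy phoneme-to-pitch table standing in for a neural waveform model.
PHONEME_HZ = {"HH": 220.0, "IY": 440.0}

def synthesise(text: str, rate: int = 16000, dur: float = 0.2) -> np.ndarray:
    """graphemes -> phonemes -> waveform (sine tones as a stand-in)."""
    phonemes = [G2P[ch] for ch in text.lower() if ch in G2P]
    t = np.arange(int(rate * dur)) / rate
    chunks = [np.sin(2 * np.pi * PHONEME_HZ[p] * t) for p in phonemes]
    return (np.concatenate(chunks) * 32767 * 0.8).astype(np.int16)

pcm = synthesise("hi")
with wave.open("hi.wav", "wb") as f:
    f.setnchannels(1)      # mono
    f.setsampwidth(2)      # 16-bit samples
    f.setframerate(16000)
    f.writeframes(pcm.tobytes())
```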

The AI model that can produce realistic line readings requires 20 hours of speech data per person, according to Kurihara. "It's very hard work," he added.

This is particularly true using the more traditional method. "Training is difficult for two reasons. First, text-to-speech requires high-quality speech [recordings], and this requires four people: an anchor, an engineer, a director and a data annotator. Second, regarding quality, it is important to produce high-quality speech synthesis, because noise will also be re-generated," he explained.

The leading-edge component of NHK's process removes much of this heavy workload. "This manual process can be eliminated," said Kurihara, "using a supervised learning approach."


NHK's AI Anchor model is created using real speech from radio already broadcast. One difficulty here is that a radio programme may feature a mix of music and speech and, naturally, only the speech can be used to build the text-to-speech profile.

"We have developed a method to automatically retrieve one-sentence clips using state-of-the-art speech processing," said Kurihara. The broadcast is broken down into small segments, cutting out music and other extraneous sections, which become the units used to train the AI voice synthesiser.
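
The article doesn't specify the segmentation method, but a crude, energy-based stand-in for that clip-retrieval step could look like this; the thresholds and frame sizes are arbitrary illustrative choices.

```python
import numpy as np

def split_on_silence(samples: np.ndarray, rate: int, frame_ms: int = 30,
                     silence_db: float = -40.0, min_clip_s: float = 1.0):
    """Split audio (floats in [-1, 1]) into clips at silent gaps.
    A crude stand-in for NHK's state-of-the-art speech processing."""
    frame = int(rate * frame_ms / 1000)
    n_frames = len(samples) // frame
    # Mean absolute amplitude per frame, converted to decibels.
    energy_db = np.array([
        20 * np.log10(np.abs(samples[i * frame:(i + 1) * frame]).mean() + 1e-9)
        for i in range(n_frames)
    ])
    voiced = energy_db > silence_db
    clips, start = [], None
    for i, v in enumerate(voiced):
        if v and start is None:
            start = i                       # a voiced run begins
        elif not v and start is not None:
            if (i - start) * frame / rate >= min_clip_s:
                clips.append(samples[start * frame:i * frame])
            start = None                    # the run has ended
    if start is not None:                   # close a run at end of audio
        clips.append(samples[start * frame:])
    return clips
```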

"Waveform synthesis uses deep learning and converts phonemes to waveforms," explained Kurihara. By automating some of the most challenging parts of the process, NHK is able to affordably and efficiently develop virtual radio and TV presenters.

Local radio provides a good example of how this can not just reduce costs but increase the usefulness of the broadcast. "There are 54 [radio] stations in the local area and this was costly for them to provide local weather," said Kurihara. NHK automatically generates scripts using weather report information for each local station, and then employs its TTS (text-to-speech) system to create ready-to-broadcast bespoke weather report audio.
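
A minimal sketch of that script-generation step, filling a template from structured weather data; the field names and wording are invented, not NHK's actual feed format.

```python
# Field names and template wording are invented for illustration.
FORECASTS = [
    {"station": "Sendai", "sky": "cloudy", "high_c": 18, "rain_pct": 40},
    {"station": "Naha",   "sky": "sunny",  "high_c": 29, "rain_pct": 10},
]

TEMPLATE = ("Today in {station}: {sky} skies, a high of {high_c} degrees, "
            "and a {rain_pct} percent chance of rain.")

for forecast in FORECASTS:
    script = TEMPLATE.format(**forecast)
    print(script)   # in production, this string would be passed to the TTS system
```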

NHK has used similar techniques for mass sporting events too. "In 2018 and 2021 we provided live sports commentary using TTS during the Olympic and Paralympic Games and distributed them over the internet," said Kurihara. The team used metadata from the official Olympic Data Feed to auto-generate a script for each feed.

This ties into the metadata creation in TVCONAL's cricket footage analysis, demonstrating how AI technologies can often work hand in hand in this field.

Kiyoshi Kurihara of NHK and TVCONAL founder Masoumeh Izadi were speaking at an IBC2022 session titled Technical Papers: How AI is Advancing Media Production. It was moderated by Nick Lodge, director of Logical Media. For more content from the IBC2022 Show, check out the latest IBC2022 video and the full coverage on 365.

Here is the original post:

How AI can lower costs and increase efficiency in content creation - IBC365

Posted in Ai | Comments Off on How AI can lower costs and increase efficiency in content creation – IBC365
