Category Archives: Ai

The 3 Steps To Building An AI-Powered Organization – Forbes

Posted: October 26, 2021 at 5:16 pm

Strategizing and innovating with AI using the 3-box solution

The COVID-19 crisis led to fundamental changes in how we conduct business. The big question today is whether these innovations will be sustained and built upon after the crisis.

How can leaders incorporate pandemic-induced innovations to augment and strengthen their core business models? How can companies tackle internal resistance and build a future-ready business around artificial intelligence (AI)?

Professor Vijay Govindarajan

I spoke to Professor Vijay Govindarajan, a renowned authority in strategy and innovation and a distinguished professor at the Tuck School of Business at Dartmouth College. He shares insights from his recent book, The Three-Box Solution, and we discuss how leaders can build an AI-powered organization.

"I was always fascinated when I used to drive by McDonald's," says Govindarajan. "They used to have a sign which said '5 million burgers sold' or '1 billion burgers sold.' Tracking the burgers you produce and sell is a typical industrial-era way of keeping score," he adds.

In contrast, digital companies do not keep score that way: they want to know who is eating the burger. By learning more about the person buying their burger, they can solve more of the customer's problems.

During the pandemic, most companies opened internet channels to sell to customers. Many of them falsely equated this change to a digital strategy. "No, all you have is another way your customers can access your product," says Govindarajan. "You are still selling the same burger through yet another channel without really knowing who's eating it."

The real difference comes by shifting from selling products to solving customer problems. For example, insurance companies with multiple product offerings such as life, auto, or property may have a fragmented customer view. Even when the same customer buys several products from the company, the individual departments lack insight into these choices.

It is critical to link divisions, centralize the data, and get an integrated view of the customer to understand who is buying the products and why. By leveraging AI at the core of your operating model, you can reinvent the business today and into the future.

This is real digital transformation.

Data is the lifeblood of a transformed organization, and AI-powered insights drive this intelligent, efficient, and future-ready enterprise - one that knows not just who is eating the burger but when and what the customer will crave next.

Today, there is greater awareness of AI's potential. However, many leaders squander the opportunity due to three challenges. They lack clarity on how artificial intelligence can optimize their existing business. They fail to tackle the cultural resistance that inhibits every AI-driven initiative. Finally, leaders don't leverage the innovative power of advanced analytics to create the business of the future.

Not surprisingly, close to 80% of AI initiatives don't deliver a return on investment. About 76% of leaders admit their inability to build a data-driven organization.

To build successful organizations, leaders need to balance the past, present, and future simultaneously. Govindarajan shares the three-box framework to address these challenges.

Leaders must manage the present, or box one, by keeping their core business running at optimal efficiency and effectiveness. They must continuously review historical practices to prune non-constructive behavior and selectively forget the past, or box two. Finally, they must create the future, or box three, through constant experimentation to innovate and pivot the business.

Let's examine these three challenges and look at how each of these boxes can help address them to build an AI-powered organization.

Box 1: Manage the present

Common challenges

Rewiring an existing business around data and machine learning needs careful design. Organizations may suffer from a suboptimal strategy, poor execution, or both. Due to a lack of clarity on end outcomes, leaders often pick initiatives that don't align with their strategy. Often, artificial intelligence is applied in a piecemeal fashion or with a big-bang approach that has low chances of succeeding.

Analytics projects don't get the right executive inputs or establish clear accountability to achieve business outcomes. Additionally, execution suffers due to a variety of factors, such as inadequate team skills, poor collaboration, suboptimal processes, inappropriate tools, and low user adoption.

Here are four best practices to help tackle these challenges and optimize the performance of an existing business:

Let's take the example of General Motors (GM). "Today, 99% of GM's revenue comes from gasoline-powered automobiles," says Govindarajan. This is the foundation for the business, or its box one. This core business has ample opportunities for improvement in efficiency and effectiveness using AI, which GM has been methodically pursuing.

"As we look at the opportunity we got with a fresh start back in 2010, we're really trying to figure out how do we change the game, how do we get ahead, not how do we stay in business," said Randy Mott, CIO of GM in an earlier interview. This leadership vision was translated into a series of organization-wide information technology (IT) transformation initiatives.

GM built an award-winning data and analytics platform, Maxis, that generates self-service insights across its business lines. Once the data and analytics foundation was in place, teams began applying AI to optimize the business across areas such as design prototyping in research and development, computer vision-driven predictive maintenance in manufacturing, and conversational AI chatbots to improve customer experience. The company's vehicle intelligence platform, launched in 2019, helps process up to 4.5 terabytes of data per hour - a foundational capability crucial to building next-generation vehicles.

Box 2: Forget the past

The biggest challenges for success with data science often have nothing to do with tools or technology: it's people issues that need attention. Organizational culture is routinely cited as one of the top roadblocks to adopting AI within organizations, and leaders often do a poor job of selling the vision for AI.

AI disrupts peoples roles, responsibilities, and daily decision-making. Naturally, most humans resist change and try extending the status quo as long as they can. As a result, data science initiatives face massive internal resistance and user adoption challenges.

Dominant logic refers to the business models, practices, and skill sets that helped a company become profitable in the past. While dominant logic helps keep box one functional, it inhibits the creation of the future. Building a business around data science requires entirely new ideas, mindsets, and skills.

"To create the future, you must selectively forget the past. If you can't forget, you can't learn," says Govindarajan. "Many books have been written on learning organizations, but not a single one on forgetting organizations," he quips. There are two suggestions for this box:

Ten years after its 2009 bankruptcy, GM transformed from a company needing a government bailout to one of the top auto companies. This is an excellent example of how an organization can script a turnaround if it's willing to forget its past selectively.

While filing for bankruptcy, GM had several legacy issues to tackle. For example, the company had to be reorganized for collaboration rather than conflict. The product portfolio had to be reviewed. Its complex financial operations needed simplification, and the IT strategy required an overhaul to lay a robust technology foundation.

Most critically, GM had to unlearn the negative traits in its culture: its famously stagnant bureaucracy, a siloed mentality, a lack of urgency, and poor accountability. Not surprisingly, this change started at the top. GM's CEO, Mary Barra, emphasized the need for accountability and a problem-solving mindset. "Don't tell me why you couldn't do it in 1984. Tell me what it takes to get it done now," said Barra. Under her leadership, GM reinvented its culture by rejecting complacency and striving for continuous progress.

Box 3: Create the future

Leaders rarely tap into the full potential of AI. While some leverage AI to optimize their existing business or box one initiatives, most fail to reimagine their business model. Their pilots aren't bold enough or aren't planned to test emerging trends that could influence the business.

Even when they plan the pilots well, organizations often struggle to execute them. The pilots don't get the proper business support, fail to secure the funds needed to onboard the right resources, or aren't integrated into the core business. As a result, most artificial intelligence experiments fail to achieve any long-term impact.

"If you want to future-proof your business, the journey begins today," says Govindarajan. "Strategy is not what you do in the future. It is about what you do today to create the future." In this vein, there are four steps to build a thriving box three:

"At General Motors, the creation of a self-driving car couldn't have happened inside the traditional automobile business unit," observes Govindarajan. Their box two problems would hold them back. GM set up an autonomous vehicle unit in 2016 and soon after bought out Cruise Automation to pursue self-driving vehicles. Ever since, this unit has been operating autonomously in Silicon Valley, away from GM's Detroit headquarters.

As this box three unit gained traction, GM appointed Dan Ammann, then president of GM, as the CEO of Cruise Automation. By placing a top executive in the new unit, the company signaled the strategic relevance of this effort for its future. Not surprisingly, the unit's processes were different from those of box one. "When GM recruits an AI expert for its self-driving car unit, it's not competing with Ford but with Google. So, the pay scales and perks need to be very different from those in Detroit," clarifies Govindarajan.

Over the past five years, Cruise Automation has been progressing with experiments in autonomous driving. Recently, GM announced the development of Ultra Cruise, an all-new advanced driver assistance technology that would enable hands-free driving in 95% of scenarios. Significantly, this technology, developed by GM's box three unit, will be rolled out in several car models produced by its core business units in box one.

This is an excellent example of the three boxes working together by tapping into AI-driven innovations, facilitating prosperity in the present while creating a secure future.

Summary of the 3-Box Solution

"The idea of the three-box solution has its roots in Hindu spirituality," explains Govindarajan. The ancient scriptures portray life as a continuous cycle of preservation, destruction, and creation. Every entity in the universe invariably passes through these three phases.

We've seen how the principles of the three-box solution, inspired by 5,000-year-old texts, are relevant for companies today. "To build immortal companies, you must master this preservation, destruction, and creation cycle. It's a mission that's never fully accomplished because change is the only constant," concludes Govindarajan.

You can watch my full interview with Professor Vijay Govindarajan on how the three-box solution helps address the biggest challenges in building an AI-powered organization.

Original post:

The 3 Steps To Building An AI-Powered Organization - Forbes

Strengthening international cooperation on AI – Brookings Institution

Posted: at 5:16 pm

Executive Summary: International cooperation on artificial intelligence - why, what, and how

Since 2017, when Canada became the first country to adopt a national AI strategy, at least 60 countries have adopted some form of policy for artificial intelligence (AI). The prospect of an estimated boost of 16 percent, or US$13 trillion, to global output by 2030 has led to an unprecedented race to promote AI uptake across industry, consumer markets, and government services. Global corporate investment in AI has reportedly reached US$60 billion in 2020 and is projected to more than double by 2025.

At the same time, the work on developing global standards for AI has led to significant developments in various international bodies. These encompass both technical aspects of AI (in standards development organizations (SDOs) such as the International Organization for Standardization (ISO), the International Electrotechnical Commission (IEC), and the Institute of Electrical and Electronics Engineers (IEEE) among others) and the ethical and policy dimensions of responsible AI. In addition, in 2018 the G-7 agreed to establish the Global Partnership on AI, a multistakeholder initiative working on projects to explore regulatory issues and opportunities for AI development. The Organization for Economic Cooperation and Development (OECD) launched the AI Policy Observatory to support and inform AI policy development. Several other international organizations have become active in developing proposed frameworks for responsible AI development.

In addition, there has been a proliferation of declarations and frameworks from public and private organizations aimed at guiding the development of responsible AI. While many of these focus on general principles, the past two years have seen efforts to put principles into operation through fully fledged policy frameworks. Canada's directive on the use of AI in government, Singapore's Model AI Governance Framework, Japan's Social Principles of Human-Centric AI, and the U.K. guidance on understanding AI ethics and safety have been frontrunners in this sense; they were followed by the U.S. guidance to federal agencies on regulation of AI and an executive order on how these agencies should use AI. Most recently, the EU proposal for adoption of a regulation on AI has marked the first attempt to introduce a comprehensive legislative scheme governing AI.

In exploring how to align these various policymaking efforts, we focus on the most compelling reasons for stepping up international cooperation (the why); the issues and policy domains that appear most ready for enhanced collaboration (the what); and the instruments and forums that could be leveraged to achieve meaningful results in advancing international AI standards, regulatory cooperation, and joint R&D projects to tackle global challenges (the how). At the end of this report, we list the topics that we propose to explore in our forthcoming group discussions.

Even more than many domains of science and engineering in the 21st century, the international AI landscape is deeply collaborative, especially when it comes to research, innovation, and standardization. There are several reasons to sustain and enhance international cooperation.

The fact that international cooperation is an element of most governments' AI strategies indicates that governments appreciate the connection between AI development and collaboration across borders. This report is about concrete ways to realize this connection.

At the same time, international cooperation should not be interpreted as complete global harmonization: countries legitimately differ in national strategic priorities, legal traditions, economic structures, demography, and geography. International collaboration can nonetheless create the level playing field that would enable countries to engage in fruitful co-opetition in AI: agreeing on basic principles and when possible seeking joint outcomes, but also competing for the best solutions to be scaled up at the global level. Robust cooperation based on common principles and values is a foundation for successful national development of AI.

Our exploration of international AI governance through roundtables, other discussions, and research led us to identify three main areas where enhanced collaboration would prove fruitful: regulatory policies, standard-setting, and joint research and development (R&D) projects. Below, we summarize ways in which cooperation may unfold in each of these areas, as well as the extent of collaboration conceivable in the short term and in the longer term.

Cooperation on regulatory policy

International regulatory cooperation has the potential to reduce regulatory burdens and barriers to trade, incentivize AI development and use, and increase market competition at the global level. That said, countries differ in legal tradition, economic structure, comparative advantage in AI, weighing of civil and fundamental rights, and balance between ex ante regulation and ex post enforcement and litigation systems. Such differences will make it difficult to achieve complete regulatory convergence. Indeed, national AI strategies and policies reflect differences in countries willingness to move towards a comprehensive regulatory framework for AI. Despite these differences, AI policy development is in the relatively early stages in all countries, and so timely and focused international cooperation can help align AI policies and regulations.

Against this backdrop, it is reasonable to assume that AI policy development is less embedded in pre-existing legal tradition or frameworks at this stage, and thus that international cooperation in this field can achieve higher levels of integration. The following areas for cooperation emerged from the FCAI dialogues and our other explorations.

Cooperation on sharing data across borders

Data governance is a focal area for international cooperation on AI because of the importance of data as an input for AI R&D and because of the added complexity of regulatory regimes already in place that restrict certain information flows, including data protection and intellectual property laws. Effective international cooperation on AI needs a robust and coherent framework for data protection and data sharing. There are a variety of channels addressing these issues including the Asia-Pacific Economic Cooperation group, the working group on data governance of the Global Partnership on AI, and bilateral discussions between the EU and U.S. Nonetheless, the potential impact of such laws on data available for AI-driven medical and scientific research requires specific focus as the EU both reviews its General Data Protection Regulation and considers new legislation on private and public sector data sharing.

There are other significant data governance issues that may benefit from pooled efforts across borders and that, by and large, are the subject of international cooperation. Key areas in this respect include opening government data (including international data sharing), improving data interoperability, and promoting technologies for trustworthy data sharing.

Cooperation on international standards for AI

As countries move from developing frameworks and policies to more concrete efforts to regulate AI, demand for AI standards will grow. These include standards for risk management, data governance, and technical documentation that can establish compliance with emerging legal requirements. International AI standards will also be needed to develop commonly accepted labeling practices that can facilitate business-to-business (B2B) contracting and to demonstrate conformity with AI regulations; address the ethics of AI systems (transparency, neutrality/lack of bias, etc.); and maximize the harmonization and interoperability for AI systems globally. International standards from standards development organizations like the ISO/IEC and IEEE can help ensure that global AI systems are ethically sound, robust, and trustworthy, that opportunities from AI are widely distributed, and that standards are technically sound and research-driven regardless of sector or application.

The governments participating in the FCAI recognize and support industry-led standards setting. While there are differences in how the FCAI participants engage with industry-led standards bodies, a common element is support for the central role of the private sector in driving standards. That said, there is a range of steps that FCAI participants can take to strengthen international cooperation in AI standards. The approach of FCAI participants that emphasizes an industry-led approach to developing international AI standards contrasts with the overall approach of other countries, such as China, where the state is at the center of standards making activities. The more direct involvement by the Chinese government in setting standards, driving the standards agenda, and aligning these with broader Chinese government priorities requires attention by all FCAI participants with the aim of encouraging Chinese engagement in international AI standard-setting consistent with outcomes that are technically robust and industry driven.

Sound AI standards can also support international trade and investment in AI, expanding AI opportunity globally and increasing returns to investment in AI R&D. The World Trade Organization (WTO) Technical Barriers to Trade (TBT) Agreement's relevance to AI standards is limited by its application only to goods, whereas many AI standards will apply to services. Recent trade agreements have started to address AI issues, including support for AI standards, but more is needed. An effective international AI standards development process is also needed to avoid bifurcated AI standards - centered around China on the one hand and the West on the other. Which outcome prevails will to some extent depend on progress in effective international AI standards development.

R&D cooperation: Selecting international AI projects

Productive discussion of AI ethics, regulation, risks, and benefits requires use cases because the issues are highly contextual. As a result, AI policy development has tended to move from broad principles to specific sectors or use cases. Considering this need, we suggest that developing international cooperation on AI would benefit from putting cooperation into operation with specific use cases. To this end, we propose that FCAI participants expand efforts to deploy AI on important global problems collectively by working toward agreement on joint research aimed at a specific development project (or projects). Such an effort could stimulate development of AI for social benefit and also provide a forcing function for overcoming differences in approaches to AI policy and regulation.

Criteria for the kinds of goals or projects to consider include the following:

This proposal could be modeled on several large-scale international scientific collaborations: CERN, the Human Genome Project, or the International Space Station. It would also build on numerous initiatives toward collaborative research and development on AI. Similar global collaboration will be more difficult in a world of increased geopolitical and economic competition, nationalism, nativism, and protectionism among governments that have been key players in these efforts.

Below, we present recommendations for developing international cooperation on AI based on our discussions and work to date.

R1. Commit to considering international cooperation in drafting and implementing national AI policies.

This recommendation could be implemented within a relatively short timeframe and initially would take the form of firm declarations by individual countries. Ultimately this could lead to a joint declaration with clear commitments on the part of the governments involved.

R2. Refine a common approach to responsible AI development.

This type of recommendation requires enhanced cooperation between FCAI governments, which can then provide a good basis for incremental forms of cooperation.

R3. Agree on a common, technology-neutral definition of AI systems.

FCAI governments should work on a common definition of AI that is technology-neutral and broad. This recommendation can be implemented in a relatively short term and requires joint action by FCAI governments. The time to act is short, as the rather broad definition given in the EU AI Act is still undergoing the legislative process in the EU and many other countries are still shaping their AI policy frameworks.

R4. Agree on the contours of a risk-based approach.

Alignment on this key element of AI policy would be an important step towards an interoperable system of responsible AI. It would also facilitate cooperation among FCAI governments, industry, and civil society working on AI standards in international SDOs. General agreement on a risk-based approach could be achieved in the short term; developing the contours of a risk-based classification system would probably take more time and require deeper cooperation among FCAI governments as well as stakeholders.

R5. Establish redlines in developing and deploying AI.

This may entail an iterative process. FCAI governments could agree on an initial, limited list of redlines such as certain AI uses for generalized social scoring by governments; and then gradually expand the list over time to include emerging AI uses on which there is substantial agreement on the need to prohibit use.

R6. Strengthen sectoral cooperation, starting with more developed policy domains.

Sectoral cooperation can be organized on relatively short timeframes starting from sectors that have well-developed regulatory systems and present higher risks, such as health care, transport and finance, in which sectoral regulation already exists, and its adaptation to AI could be achieved relatively swiftly.

R7. Create a joint platform for regulatory learning and experiments.

A joint repository could stimulate dialogue on how to design and implement sandboxes and secure sound governance, transparency, and reproducibility of results, and aid their transferability across jurisdictions and categories of users. This recommended action is independent of others and is feasible in the short term. It requires soft cooperation, in the form of a structured exchange of good practices. Over time, the repository should become richer in terms of content, and therefore more useful.

R8. Step up cooperation and exchange of practices on the use of AI in government.

FCAI governments could set up, either as a stand-alone initiative or in the context of a broader framework for cooperation, a structured exchange on government uses of AI. The dialogue may involve AI applications to improve the functioning of public administration such as the administration of public benefits or health care; AI-enabled regulation and regulatory governance practices; or other decision-making and standards and procedures for AI procurement. This recommended action could be implemented in the short term, although collecting all experiences and setting the stage for further cooperation would require more time.

R9. Step up cooperation on accountability.

FCAI governments could profit from enhanced cooperation on accountability, whether through market oversight and enforcement, auditing requirements, or otherwise. This could combine with sectoral cooperation and possibly also with standards development for auditing AI systems.

R10. Assess the impact of AI on international data governance.

There is a need for a common understanding of how data governance rules affect AI R&D in areas such as health research and other scientific research, and whether they inhibit the exploration that is an essential part of both scientific discovery and machine learning. There is also need for a critical look at R&D methods to develop a deeper understanding of appropriate boundaries on use of personal data or other protected information. In turn, there is also a need to expand R&D and understanding in privacy-protecting technologies that can enable exploration and discovery while protecting personal information.

R11. Adopt a stepwise, inclusive approach to international AI standardization.

A stepwise approach to standards development is needed to allow time for technology development and experimentation and to gather the data and use cases to support robust standards. It also would ensure that discussions at the international level happen once technology has reached a certain level of maturity or where a regulatory environment is adopted. To support such an approach, it would be helpful to establish a comprehensive database of AI standards under development at national and international levels.

R12. Develop a coordinated approach to AI standards development that encourages Chinese participation consistent with an industry-led, research-driven approach.

There is currently a risk of disconnect between growing concern among governments and national security officials alarmed by Chinese engagement in the standards process on the one hand, and industry participants' perceptions of the impact of Chinese participation in SDOs on the other. To encourage constructive involvement and discourage self-serving standards, FCAI participants (and likeminded countries) should encourage Chinese engagement in international standards setting while also agreeing on costs for actions that use SDOs strategically to slow down or stall standards making. This can be accomplished through trade and other measures but will require cooperation among FCAI participants to be effective.

R13. Expand trade rules for AI standards.

The rules governing use of international standards in the WTO TBT Agreement and free trade agreements are limited to goods only, whereas AI standards will apply mainly to services. New trade rules are needed that extend rules on international standards to services. As a starting point, such rules should be developed in the context of bilateral free trade agreements or plurilateral agreements, with the aim to make them multilateral in the WTO. Trade rules are also needed to support data free flow with trust and to reduce barriers and costs to AI infrastructure. Consideration also should be given to linking participation in the development of AI standards in bodies such as ISO/IEC, with broader trade policy goals and compliance with core WTO commitments.

R14. Increase funding for participation in SDOs.

Funding should be earmarked for academics and industry participation in SDOs, as well as for SDO meetings in FCAI countries and more broadly in less developed countries. Broadened participation is important to democratize the standards making process and strengthen the legitimacy and adoption of the resulting standards. Hosting meetings of standards bodies in diverse countries can broaden exposure to standards-setting processes around AI and critical technology.

R15. Develop common criteria and governance arrangements for international large-scale R&D projects.

Joint research and development applying to large-scale global problems such as climate change or disease prevention and treatment can have two valuable effects: It can bring additional resources to the solution of pressing global challenges, and the collaboration can help to find common ground in addressing differences in approaches to AI. FCAI will seek to incubate a concrete roadmap on such R&D for adoption by FCAI participants as well as other governments and international organizations. Using collaboration on R&D as a mechanism to work through matters that affect international cooperation on AI policy means that this recommendation should play out in the near term.

Topics proposed for the forthcoming group discussions:
- Scaling R&D cooperation on AI projects
- China and AI: what are the risks, opportunities, and ways forward?
- Government use of AI: developing common approaches
- Regulatory cooperation and harmonization: issues and mechanisms
- A suitable international framework for data governance
- Standards development
- An AI trade agreement: partners, content, and strategy

Here is the original post:

Strengthening international cooperation on AI - Brookings Institution

Adobe continues its AI push in Creative Cloud – TechCrunch

Posted: at 5:16 pm

Over the course of the last few years, Adobe has gone all-in on AI. At its MAX conference this year, that's once again on full display across updates for virtually all of its products, powered by its Sensei AI platform. Those range from smarter masking tools and preset recommendations in Lightroom to the ability to transfer colors between images in Photoshop, all the way to a body tracker in Character Animator.

If you've ever worked with Photoshop, you know the pain of trying to precisely select an object in order to then manipulate it. Using the magic wand, after all, often felt anything but magical. Last year, Adobe added the object selection tool, which uses AI to help you with that. Now, with the latest update, Adobe is also introducing auto-masking, which takes this one step further by automatically recognizing the different objects in an image. Adobe is quite open about the fact that it won't detect everything just yet, but the company also notes that this feature will improve over time.
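
Adobe has not published the mechanics of auto-masking, but features like this are typically built on semantic segmentation models that assign a class label to every pixel. As a rough illustration of the underlying idea (not Adobe's implementation), here is a minimal sketch that uses a pretrained DeepLabV3 model from torchvision to turn one recognized object into a grayscale mask; the input file name and chosen class are placeholders.

```python
import torch
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50
from PIL import Image

# Pretrained semantic segmentation model (21 Pascal VOC classes).
# Older torchvision versions use `pretrained=True` instead of `weights=`.
model = deeplabv3_resnet50(weights="DEFAULT").eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("photo.jpg").convert("RGB")      # placeholder input file
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    logits = model(batch)["out"][0]               # (num_classes, H, W)

labels = logits.argmax(dim=0)                     # per-pixel class index
mask = (labels == 12).to(torch.uint8) * 255       # class 12 = "dog" in the VOC label set
Image.fromarray(mask.cpu().numpy()).save("object_mask.png")
```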

Similarly, last year, Adobe introduced what it calls neural filters. These added the ability to colorize an old black and white image, improve portraits, add depth blur or zoom into an image, with the neural network automatically trying to recreate all of the details.

This year, it is introducing a feature called landscape mixer. By moving a few sliders, you can make your photo look like it was taken in the fall or winter, for example, either using a set of presets or by creating your own custom ones. Say there's a bit of a drab foreground but you want a green image; find yourself an image with a verdant green landscape to transfer that style and you're good to go.

Also, depth blur, which was previously available, now lets you change your focal distance after the fact, which looks quite a bit more professional than the previous filter, which mostly focused on blurring everything around an object in an image.

Meanwhile, in Lightroom, photo editors can now use a new feature to automatically select the sky (and invert that to select everything else, too). Also new in Lightroom, though not AI-related, is a new Remix tab in the Discover section that allows photographers to share their work and lets others see the edits they made and maybe make some changes to it.

For videographers, Adobe is adding a new AI feature to Premiere Pro that can automatically adjust the length of a music clip to the length of a video sequence. First introduced in Adobe Audition, the audio editor in the Creative Cloud suite, this new feature (somewhat confusingly also called Remix) makes sure that you don't just fade out in the middle of a song when a sequence is over. It automatically cuts the audio so the end of the song is still there at the end of the sequence when you shorten a music clip.
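
Adobe has not detailed how Remix picks its edit points, but the core idea - shorten a track by removing material from the middle so the real ending still lands at the end of the sequence - can be sketched in a few lines. The following toy example (my own, operating on a mono numpy array rather than a real audio file) keeps the opening, crossfades into the final section, and hits an exact target length.

```python
import numpy as np

def shorten_keep_ending(samples: np.ndarray, sr: int,
                        target_seconds: float, crossfade_seconds: float = 2.0) -> np.ndarray:
    """Toy 'remix': trim a mono track to a target length while preserving its ending.

    Keeps the opening, drops material from the middle, and crossfades into the
    final section so the track still finishes on its real ending instead of a fade-out.
    """
    target = int(target_seconds * sr)
    fade = int(crossfade_seconds * sr)
    if len(samples) <= target:
        return samples                          # already short enough

    head_len = target // 2                      # opening we keep
    tail_len = target - head_len + fade         # ending we keep (overlaps by `fade`)
    head, tail = samples[:head_len], samples[-tail_len:]

    ramp = np.linspace(1.0, 0.0, fade)          # linear crossfade over the overlap
    mixed = head[-fade:] * ramp + tail[:fade] * ramp[::-1]
    return np.concatenate([head[:-fade], mixed, tail[fade:]])

# Usage: a 3-minute stand-in "song" at 44.1 kHz cut to fit a 60-second sequence.
sr = 44100
song = np.sin(2 * np.pi * 220 * np.arange(180 * sr) / sr)
cut = shorten_keep_ending(song, sr, target_seconds=60)
print(len(cut) / sr)                            # ~60.0
```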

Among some of the other updates to Creative Cloud is Creative Cloud Web, a new hub to access, organize and share files and libraries on the web. It's still in private beta and will only be available in Fresco, Illustrator, XD and Photoshop. It'll feature a real-time collaboration space where teams can add text, stickers and images to assets. It's worth noting that this isn't Photoshop or XD on the web. It's merely a place to discuss projects and assets.

But don't despair, because Photoshop and Illustrator on the web are also getting support for some basic editing tools in the browser.

As usual, there is also a slew of other updates to all of the Creative Cloud tools. What's clear, though, is that Adobe is betting big on AI to make life easier for creative professionals and hobbyists. In some respects, it is catching up to competitors like Skylum's Luminar AI, which have made AI the central focus of their applications. Adobe's advantage is the breadth of its feature set, though, which is going to be hard for any newcomer to replicate.

See the rest here:

Adobe continues its AI push in Creative Cloud - TechCrunch

How AI is reinventing what computers are – MIT Technology Review

Posted: at 5:16 pm

Fall 2021: the season of pumpkins, pecan pies, and peachy new phones. Every year, right on cue, Apple, Samsung, Google, and others drop their latest releases. These fixtures in the consumer tech calendar no longer inspire the surprise and wonder of those heady early days. But behind all the marketing glitz, there's something remarkable going on.

Google's latest offering, the Pixel 6, is the first phone to have a separate chip dedicated to AI that sits alongside its standard processor. And the chip that runs the iPhone has for the last couple of years contained what Apple calls a neural engine, also dedicated to AI. Both chips are better suited to the types of computations involved in training and running machine-learning models on our devices, such as the AI that powers your camera. Almost without our noticing, AI has become part of our day-to-day lives. And it's changing how we think about computing.

What does that mean? Well, computers haven't changed much in 40 or 50 years. They're smaller and faster, but they're still boxes with processors that run instructions from humans. AI changes that on at least three fronts: how computers are made, how they're programmed, and how they're used. Ultimately, it will change what they are for.

"The core of computing is changing from number-crunching to decision-making," says Pradeep Dubey, director of the parallel computing lab at Intel. Or, as MIT CSAIL director Daniela Rus puts it, AI is freeing computers from their boxes.

The first change concerns how computers - and the chips that control them - are made. Traditional computing gains came as machines got faster at carrying out one calculation after another. For decades the world benefited from chip speed-ups that came with metronomic regularity as chipmakers kept up with Moore's Law.

But the deep-learning models that make current AI applications work require a different approach: they need vast numbers of less precise calculations to be carried out all at the same time. That means a new type of chip is required: one that can move data around as quickly as possible, making sure it's available when and where it's needed. When deep learning exploded onto the scene a decade or so ago, there were already specialty computer chips available that were pretty good at this: graphics processing units, or GPUs, which were designed to display an entire screenful of pixels dozens of times a second.
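
To make "vast numbers of less precise calculations" concrete: a single dense neural-network layer is one bulk operation made of hundreds of millions of independent multiply-adds, and it tolerates half-precision arithmetic reasonably well - exactly the workload GPUs and TPUs are built to run in parallel. A small illustrative sketch (the sizes and data are arbitrary):

```python
import numpy as np

# One dense neural-network layer is a single bulk operation: here,
# 256 x 1024 x 1024 ≈ 268 million multiply-adds that are independent of each
# other - the kind of work GPUs and TPUs execute in parallel.
batch, d_in, d_out = 256, 1024, 1024
x32 = np.random.randn(batch, d_in).astype(np.float32)
w32 = np.random.randn(d_in, d_out).astype(np.float32)

y32 = x32 @ w32                                   # full-precision reference

# The same computation in half precision: half the memory traffic, and a
# modest numerical difference that neural networks typically tolerate.
x16, w16 = x32.astype(np.float16), w32.astype(np.float16)
y16 = (x16 @ w16).astype(np.float32)

print("multiply-adds per layer:", batch * d_in * d_out)
print("weight memory fp32:", w32.nbytes // 1024, "KiB")
print("weight memory fp16:", w16.nbytes // 1024, "KiB")
rel_err = np.abs(y16 - y32).max() / np.abs(y32).max()
print(f"max relative difference: {rel_err:.3%}")
```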

Anything can become a computer. Indeed, most household objects, from toothbrushes to light switches to doorbells, already come in a smart version.

Now chipmakers like Intel and Arm and Nvidia, which supplied many of the first GPUs, are pivoting to make hardware tailored specifically for AI. Google and Facebook are also forcing their way into this industry for the first time, in a race to find an AI edge through hardware.

For example, the chip inside the Pixel 6 is a new mobile version of Google's tensor processing unit, or TPU. Unlike traditional chips, which are geared toward ultrafast, precise calculations, TPUs are designed for the high-volume but low-precision calculations required by neural networks. Google has used these chips in-house since 2015: they process people's photos and natural-language search queries. Google's sister company DeepMind uses them to train its AIs.

In the last couple of years, Google has made TPUs available to other companies, and these chips - as well as similar ones being developed by others - are becoming the default inside the world's data centers.

AI is even helping to design its own computing infrastructure. In 2020, Google used a reinforcement-learning algorithm - a type of AI that learns how to solve a task through trial and error - to design the layout of a new TPU. The AI eventually came up with strange new designs that no human would think of - but they worked. This kind of AI could one day develop better, more efficient chips.
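
Google's floorplanning work used deep reinforcement learning; the core trial-and-error loop can be illustrated with something far smaller. The toy sketch below (entirely made up, not Google's method) has an epsilon-greedy agent repeatedly evaluate candidate "layouts" against a noisy score and gradually learn which one is best:

```python
import random

# Toy trial-and-error learning: the agent tries candidate "layouts", observes a
# noisy reward (a made-up stand-in for wirelength/power measurements), and
# keeps a running estimate of each layout's value.
random.seed(0)

true_quality = [0.2, 0.5, 0.35, 0.8, 0.6]       # hidden quality of 5 candidate layouts
n_actions = len(true_quality)
value_est = [0.0] * n_actions                    # agent's learned estimate per layout
counts = [0] * n_actions
epsilon, episodes = 0.1, 2000

def try_layout(a: int) -> float:
    """Simulated evaluation: true quality plus measurement noise."""
    return true_quality[a] + random.gauss(0, 0.1)

for _ in range(episodes):
    if random.random() < epsilon:                # explore: try something random
        a = random.randrange(n_actions)
    else:                                        # exploit: use the current best guess
        a = max(range(n_actions), key=lambda i: value_est[i])
    r = try_layout(a)
    counts[a] += 1
    value_est[a] += (r - value_est[a]) / counts[a]   # incremental mean update

best = max(range(n_actions), key=lambda i: value_est[i])
print("learned estimates:", [round(v, 2) for v in value_est])
print("agent's preferred layout:", best)         # typically index 3, the best candidate
```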

The second change concerns how computers are told what to do. "For the past 40 years we have been programming computers; for the next 40 we will be training them," says Chris Bishop, head of Microsoft Research in the UK.

Traditionally, to get a computer to do something like recognize speech or identify objects in an image, programmers first had to come up with rules for the computer.

With machine learning, programmers no longer write rules. Instead, they create a neural network that learns those rules for itself. It's a fundamentally different way of thinking.
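
A toy contrast makes the shift concrete: below, a hand-written rule labels points inside a circle, and a small neural network - given only labeled examples, never the rule itself - learns roughly the same boundary on its own. (This is an illustrative sketch using scikit-learn, not any particular product's code.)

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# The "programming" approach: a human writes the rule explicitly.
def rule_based(point):
    x, y = point
    return 1 if x * x + y * y <= 1.0 else 0      # inside the unit circle?

# The "training" approach: the network sees only labeled examples.
rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, size=(5000, 2))
y_train = np.array([rule_based(p) for p in X_train])   # labels from the hand-written rule

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

tests = np.array([[0.1, 0.2], [1.5, 1.5], [0.9, 0.3]])
print("rule-based:", [rule_based(p) for p in tests])
print("learned   :", model.predict(tests).tolist())     # usually matches the rule

X_test = rng.uniform(-2, 2, size=(1000, 2))
y_test = np.array([rule_based(p) for p in X_test])
print("held-out accuracy:", model.score(X_test, y_test))
```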

Continue reading here:

How AI is reinventing what computers are - MIT Technology Review

Five Companies Offering Conversational AI Solutions Named IDC Innovators – Business Wire

Posted: at 5:16 pm

NEEDHAM, Mass.--(BUSINESS WIRE)--International Data Corporation (IDC) today published an IDC Innovators report profiling five companies offering conversational artificial intelligence (AI) solutions that assist enterprises in developing and deploying conversational applications, such as chatbots or voice assistants, which users can talk to via a text- and/or voice-based interface. The five companies are: Aisera, DRUID, Openstream, SoundHound, and Uniphore.

Conversational AI has seen a recent acceleration in adoption thanks to improvements in AI technology, as well as an increase in remote and distributed work, commerce, and education models. As organizations look to provide customers and employees with self-service options to answer questions and complete tasks, as well as help employees be more efficient and effective at their jobs, conversational AI is providing significant ROI across a wide variety of industries and use cases.

"The conversational AI market is crowded and competitive right now, with the past two years characterized by an increasing number of startups, open-source innovations, and acquisitions by major technology vendors," said Hayley Sutherland, senior research analyst for Conversational AI and Intelligent Knowledge Discovery at IDC. "Technology buyers considering conversational AI should familiarize themselves with the more innovative startups to get the full picture of what is available in addition to what they may be seeing from technology giants."

The report, IDC Innovators: Worldwide Conversational Artificial Intelligence (IDC #US47355221), features five companies offering conversational artificial intelligence solutions that address the growing needs of organizations to scale employee and customer self-service through conversational interfaces. The five companies are Aisera, DRUID, Openstream, SoundHound, and Uniphore.

About IDC Innovators

IDC Innovators reports present a set of vendors under $100 million in revenue at time of selection chosen by an IDC analyst within a specific market that offer an innovative new technology, a groundbreaking approach to an existing issue, and/or an interesting new business model. It is not an exhaustive evaluation of all companies in a segment or a comparative ranking of the companies. Vendors in the process of being acquired by a larger company may be included in the report provided the acquisition is not finalized at the time of publication of the report. Vendors funded by venture capital firms may also be included in the report even if the venture capital firm has a financial stake in the vendor's company. IDC INNOVATOR and IDC INNOVATORS are trademarks of International Data Group, Inc.

For more information about IDC Innovators research, please contact Jen Melker at jmelker@idc.com.

About IDC

International Data Corporation (IDC) is the premier global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets. With more than 1,100 analysts worldwide, IDC offers global, regional, and local expertise on technology, IT benchmarking and sourcing, and industry opportunities and trends in over 110 countries. IDC's analysis and insight helps IT professionals, business executives, and the investment community to make fact-based technology decisions and to achieve their key business objectives. Founded in 1964, IDC is a wholly owned subsidiary of International Data Group (IDG), the world's leading tech media, data, and marketing services company. To learn more about IDC, please visit http://www.idc.com. Follow IDC on Twitter at @IDC and LinkedIn. Subscribe to the IDC Blog for industry news and insights.

See the original post:

Five Companies Offering Conversational AI Solutions Named IDC Innovators - Business Wire

Why Many People Are Turning Toward Robots And AI To Help Support Their Mental Health And Careers – Forbes

Posted: at 5:16 pm

The pandemic was more than a devastating health crisis. The ripple effects changed the way we work and live our lives. While this may seem insensitive, the Covid-19 outbreak led to some positive changes to our culture. For the first time, corporate leadership is paying close attention to the mental health and well-being of their employees.

We spent nearly two years working from home, feeling isolated, suffering from anxiety, afraid of catching or spreading the virus and worrying about our job security and livelihoods, while millions of Americans were fired or furloughed.

These - and other related mental health issues - were widespread across all types of demographics. There's no discrimination when it relates to suffering from mental health matters. It impacts gig-economy workers and CEOs alike.

I interviewed Yvette Cameron, Oracle Cloud HCM senior vice-president, to learn more about how technology, artificial intelligence and robots can help workers with their mental health matters and careers.

She is positive about the future of work, stating, "The last year set a new course for the future of work." Cameron continued, "Surprisingly, amongst the stress, anxiety and loneliness of the global pandemic, employees found their voice, became more empowered and are now speaking up for what they want."

She added, "The evolving nature of the workplace shifted the way people think about success and reset people's expectations for how organizations can best support them. To attract and retain talent, businesses will need to help employees develop new skills and career paths so they can feel in control of their careers again."

Seemingly overnight, the economy snapped back. The stock market went parabolic. Real estate values hit record highs. Recent accounts of government data reflect there are over 10 million jobs open. Businesses of all sizes are facing the twin trends of a war for talent and the Great Resignation. It's become incredibly hard to attract and retain workers.

Cameron points out that this new mindset is attributed, in large part, to the realization by many people that life is short, and you need to make the best of it. If you're not appreciated at work and are paid poorly, it's time to leave and find a place that treats you with respect and empathy. She referred to this trend as the Great Awakening.

To better understand how workers feel and what they're going through, Oracle, the large technology company, conducted a survey of 14,600 employees, managers, human resources leaders and C-level executives across 13 countries to find out how the pandemic affects the mental health of workers around the world.

The study, also involving Workplace Intelligence, an HR research and advisory firm, explored how the pandemic has affected different jobs, generations and geographies around the world. Some of the findings may surprise you, including that executives have struggled the most to adapt to remote work. Evidently, while 45% of employees said that their mental health has suffered during the pandemic, 53% of C-level executives and 52% of HR leaders said that they struggle with ongoing mental health issues in the workplace.

This was largely attributed to the stress of leading dispersed teams, especially teams that are not used to working remotely. This type of leadership requires a different skill set than in-person leadership. This change may have caused some of the distress executives experienced.

Cameron maintains that artificial intelligence and robots can help people with their mental health issues. At first blush, this seems counterintuitive. We've all been conditioned to think of lying on a couch and speaking with a therapist as the go-to thing to do when you're coping with a crisis.

If you start to think about it more deeply, most of us, particularly those in the older Gen-X and Baby Boomer demographics, didn't grow up talking about and sharing our feelings and emotions. It just wasn't a thing back then. Now, it's different. We've evolved as a society and have learned to become more empathetic, caring and understanding of what others are going through.

Nevertheless, it's still hard for some people to change. An employee may reasonably be concerned that if they speak with human resources about their mental health issues, they may get stigmatized. Word may spread, and they'll worry that people are whispering behind their back. They may also feel awkward sharing their innermost feelings and anxieties with the boss. They'll feel it's not worth the risk.

The alternative, according to Cameron, is to engage with software specifically made for this purpose. You could follow a program that prompts you to share what you are going through. The AI will then offer suggestions of what to do next and direct you to appropriate healthcare providers, experts and benefits offered by your company. "It's private, discreet and no one else will know about it," she says.
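
Commercial tools like the one Cameron describes are far more sophisticated, but the basic prompt-suggestion-referral flow can be sketched as a simple routing function. Everything below - the keywords, resources and wording - is invented purely for illustration:

```python
# A deliberately simplistic sketch of the "prompt -> suggestion -> referral" flow
# described above. Real tools use trained intent models, clinical review and
# escalation paths; the keywords and resources here are made up.
RESOURCES = {
    "stress":    "Suggestion: try the company's guided-breathing module; see the EAP stress program.",
    "burnout":   "Suggestion: review workload with a coach; the EAP offers confidential counseling.",
    "isolation": "Suggestion: peer-support groups meet weekly; benefits include virtual social hours.",
}
CRISIS_TERMS = ("hurt myself", "hopeless", "can't go on")

def triage(message: str) -> str:
    text = message.lower()
    if any(term in text for term in CRISIS_TERMS):       # escalate immediately
        return "Please contact a crisis line or emergency services right away."
    for keyword, suggestion in RESOURCES.items():         # route to a matching resource
        if keyword in text:
            return suggestion
    return "Thanks for sharing. Would you like to see the full list of well-being benefits?"

print(triage("I've been feeling a lot of stress about deadlines"))
```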

The survey shows that in addition to being impacted by mental health issues, C-level executives are the most open to mental health support from AI. Globally, 62% of the workforce reported that they would prefer to speak to a robot rather than to their manager about mental health. That number substantially increased for executives and HR leaders (73% and 69%, respectively). Comparatively, only 65% of employees reported that AI has been helpful in this regard.

There could be another reason why executives turn to AI, as opposed to speaking with someone: executives are more apt to feel that disclosing mental health issues can be seen as a sign of weakness. This could be perceived as detrimental to their stewardship of the organization. If leaked, it could be twisted and become a public relations nightmare. This is one of many reasons why talking about one's feelings and challenges should be normalized within companies.

Continued, unrelenting uncertainty, due to the pandemic, has left many workers in emotional turmoil, feeling like their lives and careers are out of control.

Despite struggles over the last year, people around the world are eager to make changes in their professional lives.

To retain and grow top talent amidst changing workplace dynamics, employers need to pay attention to employee needs more than ever before and leverage technology to provide better support.

Dan Schawbel, managing partner at Workplace Intelligence, said about the study, "The past year has changed how we work, including where we work and, for a lot of people, who we work for." Schawbel added, "While there have been a lot of challenges for both employees and employers, this has been an opportunity to change the workplace for the better."

The data clearly shows that investment in skills and career development is now a key differentiator for employers, as it plays a significant role in employees feeling like they have control over their personal and professional lives. He points out, "Businesses that invest in their employees and help them find opportunities will reap the benefits of a productive, engaged workforce."

View original post here:

Why Many People Are Turning Toward Robots And AI To Help Support Their Mental Health And Careers - Forbes

We need to pay attention to AI bias before it’s too late – TechRepublic

Posted: at 5:16 pm

Cognitive bias leads to AI bias, and the garbage-in/garbage-out axiom applies. Experts offer advice on how to limit the fallout from AI bias.

Artificial intelligence (AI) is the ability of computer systems to simulate human intelligence. It has not taken long for AI to become indispensable in most facets of human life, with the realm of cybersecurity being one of the beneficiaries.

AI can predict cyberattacks, help create improved security processes to reduce the likelihood of cyberattacks, and mitigate their impact on IT infrastructure. AI can also free up cybersecurity professionals to focus on more critical tasks in the organization.

However, along with the advantages, AI-powered solutions - for cybersecurity and other technologies - also present drawbacks and challenges. One such concern is AI bias.

SEE: Digital transformation: A CXO's guide (free PDF) (TechRepublic)

AI bias directly results from human cognitive bias. So, let's look at that first.

Cognitive bias is an evolutionary decision-making system in the mind that is intuitive, fast and automatic. "The problem comes when we allow our fast, intuitive system to make decisions that we really should pass over to our slow, logical system," writes Toby Macdonald in the BBC article How do we really make decisions? "This is where the mistakes creep in."

Human cognitive bias can color decision making. And, equally problematic, machine learning-based models can inherit human-created data tainted with cognitive biases. That's where AI bias enters the picture.

Cem Dilmegani, in his AIMultiple article Bias in AI: What it is, Types & Examples of Bias & Tools to fix it, defines AI bias as the following: "AI bias is an anomaly in the output of machine learning algorithms. These could be due to the discriminatory assumptions made during the algorithm development process or prejudices in the training data."

SEE: AI can be unintentionally biased: Data cleaning and awareness can help prevent the problem (TechRepublic)

Where AI bias comes into play most often is in the historical data being used. "If the historical data is based on prejudiced past human decisions, this can have a negative influence on the resulting models," suggested Dr. Shay Hershkovitz, GM & VP at SparkBeyond, an AI-powered problem-solving company, during an email conversation with TechRepublic. "A classic example of this is using machine-learning models to predict which job candidates will succeed in a role. If the data used for past hiring and promotion decisions is biased - or the algorithm is designed in a way that reflects bias - then the future hiring decision will be biased."
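
Hershkovitz's hiring example is easy to reproduce with synthetic data: if the historical labels encode a penalty against one group, a model trained on them reproduces that penalty for otherwise identical candidates. A minimal sketch (all data invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy demonstration of bias inherited from historical decisions (synthetic data).
rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(0, 1, n)                  # genuinely job-relevant signal
group = rng.integers(0, 2, n)                # protected attribute: 0 or 1

# Historical hiring decisions: driven by skill, but group 1 was systematically
# penalized - the prejudice about to be baked into the training labels.
hired = (skill - 1.0 * group + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Two identical candidates who differ only in group membership:
candidates = np.array([[0.5, 0], [0.5, 1]])
probs = model.predict_proba(candidates)[:, 1]
print(f"P(hire | skill=0.5, group 0): {probs[0]:.2f}")
print(f"P(hire | skill=0.5, group 1): {probs[1]:.2f}")   # noticeably lower
```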

Unfortunately, Dilmegani also said that AI is not expected to become unbiased anytime soon. "After all, humans are creating the biased data while humans and human-made algorithms are checking the data to identify and remove biases."

To reduce the impact of AI bias, Hershkovitz suggests:

The above solutions, when considered, point out that humans must play a significant role in reducing AI bias. As to how that is accomplished, Hershkovitz suggests the following:

Hershkovitz's concern about AI bias does not mean he is anti-AI. In fact, he cautions we need to acknowledge that cognitive bias is often helpful. It represents relevant knowledge and experience, but only when it is based on facts, reason and widely accepted values, such as equality and parity.

He concluded, "In this day and age, where smart machines, powered by powerful algorithms, determine so many aspects of human existence, our role is to make sure AI systems do not lose their pragmatic and moral values."


Follow this link:

We need to pay attention to AI bias before it's too late - TechRepublic

Posted in Ai | Comments Off on We need to pay attention to AI bias before it’s too late – TechRepublic

Phenom AI Day Registration Opens; Limited to 5000 AI and HR Practitioners – Business Wire

Posted: at 5:16 pm

PHILADELPHIA--(BUSINESS WIRE)--Phenom has announced it will host a special event broadcast live from its world headquarters at 11 a.m. Eastern Standard Time on Dec. 9. Phenom AI Day will showcase the practicalities and impact of artificial intelligence in HR right now. Field experts working with and within the world's most prominent enterprises will demonstrate how HR professionals are using systems of intelligence to hire, develop and retain both knowledge and hourly workers. Phenom will make three industry-changing commitments that will immediately accelerate long-term user adoption and outcomes driven by AI.

Registration for the free event begins today and will be limited to 5,000. Phenom AI Day will prepare attendees to:

The broadcast will also cover how Phenom's global customers use its AI-powered Talent Experience Management (TXM) platform within manufacturing, healthcare, retail, transportation, financial services, pharmaceuticals, technology, food services and other industries.

"There's no doubt that enterprises around the world must build the next generation of talent with the advantages of AI," said Mahe Bayireddi, co-founder and CEO of Phenom. "During Phenom AI Day, we will evangelize the value that large-scale employers are already gaining from AI-powered talent experiences. Companies that delay adoption will struggle to compete for talent at global and local levels."

Secure your spot now at ai.phenom.com.

About Phenom

Phenom is a global HR technology company with a purpose to help a billion people find the right job. With expertise in building AI-powered, scalable solutions, Phenom Talent Experience Management (TXM) personalizes and automates the talent journey for candidates, recruiters, employees and managers with its Career Site, Chatbot, CRM, CMS, SMS and Email Campaigns, University Recruiting, Internal Mobility, Career Pathing, Diversity & Inclusion, Talent Marketplace, Gigs, Referrals, Hiring Manager and Analytics. As a result, employers improve their talent acquisition and talent management efforts by helping candidates find the right job, employees grow and evolve, recruiters discover top talent, and managers build teams faster. Phenom was ranked among the fastest-growing private companies in the 2021 Inc. 5000 and was a winner in the Business Intelligence Group's 2021 Artificial Intelligence Excellence Awards program for its sophisticated machine learning capabilities.

Headquartered in Greater Philadelphia, Phenom also has offices in India, Israel, the Netherlands, Germany and the United Kingdom.

For more information, please visit http://www.phenom.com. Connect with Phenom on LinkedIn, Twitter, Facebook, YouTube and Instagram.

More:

Phenom AI Day Registration Opens; Limited to 5000 AI and HR Practitioners - Business Wire

Posted in Ai | Comments Off on Phenom AI Day Registration Opens; Limited to 5000 AI and HR Practitioners – Business Wire

Notified bodies join chorus of criticism of proposed European AI regs – MedTech Dive

Posted: at 5:16 pm

Dive Brief:

The EU acknowledged the potential for the Artificial Intelligence Act (AIA) to overlap with existing regulatory frameworks when it proposed the regulation earlier this year. However, the EU sought to mitigate the problem by using existing conformity assessment procedures to check AI requirements, creating a model that a leading official said "would come to complement or to stand next to the MDR" to ensure that devices are "secure and trustworthy and so on."

Outsiders immediately pushed back against the proposals, with a computational statistician's early comment that the EU's AI definition "feels hopelessly vague" setting the tone for feedback from the medical device industry.

Team-NB picked up on some of the same points as other commenters, for example by highlighting what it sees as an overly broad definition of AI, but framed its feedback in the context of the impact of the proposals on notified bodies.

The association provided a lengthy opinion on the question, left unanswered in the AIA, of whether notified bodies will need to be accredited against the AI regulation to take part in conformity assessment procedures. Team-NB is against a separate AIA accreditation.

"Additional accreditation of notified bodies against AIA would not bring more expertise, but just increase the administrative burden and by this reduce the already limited number of notified bodies and their capacity," the association wrote.

Instead, Team-NB wants the EU to expand the existing authorization framework, so a notified body designated under MDR or IVDR "would need to show competency for assessing AI related aspects." Team-NB wants the EU to avoid "having the AI part evaluated by an AI notified-body and the medical device part by an MDR/IVDR notified body ... to ensure that the special characteristic of medical devices and the general safety and performance requirements of a medical device are considered during the AI assessment."

Whatever the system, Team-NB warns attracting the AI experts needed for conformity assessments will be "the major challenge for all stakeholders." As Team-NB sees it, a "strenuous effort" is needed to build and attract experts, leading it to recommend a European AI initiative.

Read more:

Notified bodies join chorus of criticism of proposed European AI regs - MedTech Dive

Posted in Ai | Comments Off on Notified bodies join chorus of criticism of proposed European AI regs – MedTech Dive

Keeping An AI On Fintech: AI-Based Use Cases Poised To Take Financial Services To The Next Level – Forbes

Posted: at 5:16 pm


Over the past few years, the business world has increasingly turned towards intelligent solutions to help cope with the changing digital landscape. Artificial intelligence (AI) enables devices and things to perceive, reason and act intuitively, mimicking the human brain without being hindered by human subjectivity, ego and routine interruptions. The technology has the potential to greatly expand our capabilities, bringing added speed, efficiency and precision for tasks both complex and mundane.

To get a picture of the momentum behind AI, consider that the global artificial intelligence market was valued at $62.35 billion in 2020 and is expected to expand at a compound annual growth rate (CAGR) of 40.2% from 2021 to 2028. Given this projection, it's not surprising that tech giants such as AWS, IBM, Google and Qualcomm have all made significant investments into AI research, development, disparate impact testing and auditing.
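For readers who want to check the arithmetic, compounding the cited 2020 base at the cited rate is a two-line calculation. The figure it produces is only an illustration of how CAGR compounding works; the underlying market forecast may start from a different base year, so treat the output as a sketch rather than a restatement of that report.

```python
# Sketch of the CAGR arithmetic implied by the cited figures.
base_2020 = 62.35        # market size in $ billions (cited)
cagr = 0.402             # 40.2% compound annual growth rate (cited)
years = 2028 - 2020      # compounding 2021 through 2028

projected_2028 = base_2020 * (1 + cagr) ** years
print(f"Naive 2028 projection: ~${projected_2028:,.0f}B")  # roughly $930B
```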

My coverage area of expertise, fintech (financial technology), is no exception to this trend. The AI market for fintech alone is valued at an estimated $8 billion and is projected to reach upwards of $27 billion in the next five years. AI and machine learning (ML) have penetrated almost every facet of the space, from customer-facing functions to back-end processes. Let's take a closer look at these changing dynamics.

What AI and ML use cases are applicable to fintech?

Financial services have their own set of common AI and ML use cases. These include, but are not limited to, cost reduction, process automation, spend reconciliation, data analysis and improved customer experiences. According to a Bain report, the pandemic has widened the gap between top performers and other companies in three main productivity drivers: people's time, talent and energy. These AI use cases stand to help address the business challenges associated with this gap.

Process automation, one of the industry's most common use cases, reduces the amount of manual work done by employees by automating repetitive tasks and processes. Not only does this reduce tedium, but it also gives workers time and energy to focus on innovation and other value-adding activities, which can also boost morale.

The processes of spend reconciliation and payment authorization are also highly labor-intensive and time-consuming for accounting departments. AI and ML can provide automated, three-way matching of incoming invoices from suppliers for approval. Intelligent systems can also learn the approval process for more complex and fragmented areas of expense spend, such as the miscellaneous travel, goods and service expenses of today's increasingly distributed workforce.
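As a rough illustration of what three-way matching means in practice, the sketch below compares an invoice against its purchase order and goods receipt before approving it. The field names, the 2% price tolerance, and the data shapes are assumptions made for the example; an AI-driven system would layer learned document extraction and approval routing on top of a rule like this rather than replace it.

```python
# Minimal sketch of three-way matching: invoice vs. purchase order vs.
# goods receipt. Field names and the 2% price tolerance are assumptions.
from dataclasses import dataclass

@dataclass
class Document:
    po_number: str
    quantity: int
    unit_price: float

def three_way_match(invoice: Document, purchase_order: Document,
                    receipt: Document, price_tol: float = 0.02) -> bool:
    """Approve the invoice only if all three documents agree."""
    same_po = invoice.po_number == purchase_order.po_number == receipt.po_number
    qty_ok = invoice.quantity == receipt.quantity <= purchase_order.quantity
    price_ok = (abs(invoice.unit_price - purchase_order.unit_price)
                <= price_tol * purchase_order.unit_price)
    return same_po and qty_ok and price_ok

po = Document("PO-1001", quantity=100, unit_price=9.50)
receipt = Document("PO-1001", quantity=100, unit_price=9.50)
invoice = Document("PO-1001", quantity=100, unit_price=9.55)

print("Approved" if three_way_match(invoice, po, receipt) else "Route to review")
```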

For example, Beanworks, an accounts payable (AP) automation provider, recently introduced SmartCapture, an AI-driven data capture functionality that promises to drastically increase customers' data-entry speed and accuracy. The company claims its offering provides over 99% accuracy while completing AP processes in minutes. Combined with its SmartCoding technology, it claims to reduce the time accounting teams spend on data entry by more than 80%.

According to Beanworks, traditional, manual AP data entry is inefficient, error-prone and consumes considerable time and resources, costing anywhere from $12 to $20 per invoice to process. An AI solution such as SmartCapture, according to Beanworks President and COO Karim Ben-Jaafar, gets smarter with every invoice as it learns how to correctly interpret an organization's accounts payable documents and coding. This approach, said Ben-Jaafar, frees up accountants' time and energy to focus on other, more strategic tasks.

Other applications of AI and ML help consumers better manage their finances with hyper-tailored offers that fit their current financial situation, including credit on-demand and at a lower cost.

In a recent column on the pandemic's impact on credit trends, I quickly touched on Upstart, an AI lending platform that is reimagining creditworthiness. It's worth taking a deeper look at the company's efforts to overhaul our outdated credit-scoring system, which, as it stands, makes it difficult for lenders to accurately assess which borrowers are likely to default. This uncertainty often results in consumers being denied access to credit or being overcharged for it. Meanwhile, Upstart's system leverages AI to approve more than two-thirds of its loans instantly.

AI credit-scoring software can reduce non-performing loans while boosting returns through better loan decisioning: companies lend to less risky clients, and customers get personalized, almost instant loan decisions.
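To show what AI-based credit decisioning looks like at its simplest, here is a sketch using synthetic data, hypothetical feature names and an off-the-shelf scikit-learn logistic regression standing in for whatever proprietary models lenders such as Upstart actually use: train on repayment outcomes, then approve applicants whose predicted default probability falls below a chosen threshold.

```python
# Illustrative credit-scoring sketch: synthetic data and a simple model,
# not a description of any lender's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 10_000

# Hypothetical features: income ($k), debt-to-income ratio, years employed.
X = np.column_stack([
    rng.normal(60, 20, n),
    rng.uniform(0.05, 0.6, n),
    rng.integers(0, 30, n),
])
# Synthetic default labels loosely tied to the features.
logit = -2.0 + 4.0 * X[:, 1] - 0.01 * X[:, 0] - 0.03 * X[:, 2]
defaulted = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, defaulted, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Decision rule: approve when predicted default probability is low.
default_prob = model.predict_proba(X_test)[:, 1]
approved = default_prob < 0.15
print(f"Approval rate: {approved.mean():.0%}, "
      f"default rate among approved: {y_test[approved].mean():.1%}")
```

The threshold is the business lever: lowering it reduces non-performing loans at the cost of approving fewer applicants, which is the trade-off the paragraph above describes.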

In keeping with the finance-on-demand theme, two of the most sought-after AI/ML solutions are personalized portfolio management and product recommendations. Their popularity is growing, as is their refinement. Investment platforms such as Betterment recommend investment opportunities to clients based on income, current investment habits, risk appetite and more. The wine investment platform Vinovest uses custom machine learning algorithms developed with world-class sommeliers to curate a wine portfolio that acts as both an alternative investment that can outperform other asset classes and a liquid asset, should the user actually decide to drink their wine.

In the coming years, we are likely to see refinement in all of these technologies. I believe we'll also see increasing amounts of automation in customer support, report generation and data analytics.

Can it be trusted?

As financial services providers continue on their digital transformation, we will see an increase in AI and ML security solutions over the next year. For example, I expect to see more regulatory (RegTech) solutions that analyze documents for account registration, detect anomalies in patterns within accounts and more. AI and ML moved from a curiosity to a necessity and priority during the pandemic, particularly for financial services.

AI needs auditing; we can't assume that what the machine is learning will always be correct. As long as the proper audits are in place, though, AI can bring security to financial services. Granted, any technology used in modern banking demands a strong regulatory lens. Creators of AI in financial services have to prioritize the audit trail while constructing their solutions.
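What prioritizing the audit trail can look like in code: a minimal sketch, using only Python's standard library, that records the model version, inputs, score and decision for every call so a reviewer can later reconstruct why a decision was made. The field set, file name and placeholder scoring rule are assumptions for illustration, not an industry standard.

```python
# Minimal audit-trail sketch: log every scoring decision as structured JSON.
# Field names and the scoring function are placeholders, not a standard.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("model_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("decisions.log"))

MODEL_VERSION = "credit-scorer-2021.10"   # hypothetical identifier

def score_and_log(application_id: str, features: dict) -> bool:
    # Placeholder score; a real system would call the deployed model here.
    score = min(1.0, 0.2 + 0.5 * features.get("debt_to_income", 0.0))
    approved = score < 0.5
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        "application_id": application_id,
        "features": features,
        "score": round(score, 4),
        "decision": "approved" if approved else "declined",
    }))
    return approved

score_and_log("app-001", {"debt_to_income": 0.35, "income_k": 72})
```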

Gene Ludwig, founder and former CEO of Promontory Financial Group, an IBM-owned financial services consulting company, summarized this well during a Reinventing Financial Services Digital Forum podcast: "AI, properly documented, is actually better than the human brain because you can't get inside the human brain and document the decisioning pattern that's used. It's actually potentially much better, not worse than people in terms of audit trails."

A gratifying result

What we've learned from the rise of fintech apps is that customers crave instant gratification, whether in response time or in the personalization of their experience. Refinement in AI and ML will give the leaders in the space those precious minutes or even seconds that it takes to compete for someone's business while delivering a personalized product to them. From customer-facing functions to back-end processes, the possibilities are endless.

Disclosure: My firm, Moor Insights & Strategy, like all research and analyst firms, provides, or has provided research, analysis, advising, and/or consulting to many high-tech companies and consortia in the industry. I do not hold any equity positions with any companies or organizations cited in this column.

Read the rest here:

Keeping An AI On Fintech: AI-Based Use Cases Poised To Take Financial Services To The Next Level - Forbes

Posted in Ai | Comments Off on Keeping An AI On Fintech: AI-Based Use Cases Poised To Take Financial Services To The Next Level – Forbes
