Investors Pour $27.1 Billion Into A.I. Start-Ups, Defying a Downturn – The New York Times

For two years, many unprofitable tech start-ups have cut costs, sold themselves or gone out of business. But the ones focused on artificial intelligence have been thriving.

Now the A.I. boom that started in late 2022 has become the strongest counterpoint to the broader start-up downturn.

Investors poured $27.1 billion into A.I. start-ups in the United States from April to June, accounting for nearly half of all U.S. start-up funding in that period, according to PitchBook, which tracks start-ups. In total, U.S. start-ups raised $56 billion, up 57 percent from a year earlier and the highest three-month haul in two years.

A.I. companies are attracting huge rounds of funding reminiscent of 2021, when low interest rates and pandemic growth pushed investors to take risks on tech investments.

In May, CoreWeave, a provider of cloud computing services for A.I. companies, raised $1.1 billion, followed by $7.5 billion in debt, valuing it at $19 billion. Scale AI, a provider of data for A.I. companies, raised $1 billion, valuing it at $13.8 billion. And xAI, founded by Elon Musk, raised $6 billion, valuing it at $24 billion.

Such financing rounds have boosted the industry's overall deal-making by dollar amount and number of deals, said Kyle Stanford, a research analyst at PitchBook.


Read the original here:

Investors Pour $27.1 Billion Into A.I. Start-Ups, Defying a Downturn - The New York Times

What happened to the artificial-intelligence revolution? – The Economist

Move to San Francisco and it is hard not to be swept up by mania over artificial intelligence (AI). Advertisements tell you how the tech will revolutionise your workplace. In bars people speculate about when the world will get AGI, or when machines will become more advanced than humans. The five big tech firms (Alphabet, Amazon, Apple, Meta and Microsoft, all of which have either headquarters or outposts nearby) are investing vast sums. This year they are budgeting an estimated $400bn for capital expenditures, mostly on AI-related hardware, and for research and development.

In the world's tech capital it is taken as read that AI will transform the global economy. But for AI to fulfil its potential, firms everywhere need to buy the technology, shape it to their needs and become more productive as a result. Investors have added more than $2trn to the market value of the five big tech firms in the past year, in effect projecting an extra $300bn-400bn in annual revenues according to our rough estimates, about the same as another Apple's worth of sales. For now, though, the tech titans are miles from such results. Even bullish analysts think Microsoft will make only about $10bn from generative-AI-related sales this year. Beyond America's west coast, there is little sign AI is having much of an effect on anything.
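The back-of-envelope logic behind that rough estimate can be made explicit: dividing the added market value by an assumed price-to-sales multiple gives the extra annual revenue investors are implicitly pricing in. The multiples below are illustrative assumptions, not figures from the article:

```python
# Sketch of the implied-revenue arithmetic. A $2trn jump in market value,
# valued at a typical tech price-to-sales multiple, implies a band of
# extra annual revenue. The multiples here are illustrative guesses.
def implied_annual_revenue(market_value_added_bn: float, price_to_sales: float) -> float:
    """Extra annual revenue implied by an increase in market value."""
    return market_value_added_bn / price_to_sales

added_value_bn = 2_000  # ~$2trn added to the five big tech firms
low = implied_annual_revenue(added_value_bn, 6.5)   # richer multiple
high = implied_annual_revenue(added_value_bn, 5.0)  # cheaper multiple
print(f"Implied extra revenue: ${low:.0f}bn-${high:.0f}bn per year")
```

With multiples between 5x and 6.5x sales, the implied band is roughly $300bn-400bn a year, matching the article's estimate.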

See more here:

What happened to the artificial-intelligence revolution? - The Economist

Regulating artificial intelligence doesn’t have to be complicated, some experts say – STAT

Artificial intelligence has the potential to revolutionize how drugs are discovered and change how hospitals deliver care to patients. But AI also comes with the risk of irreparable harm and perpetuating historic inequities.

Would-be health care AI regulators have been spinning in circles trying to figure out how to use AI safely. Industry bodies, investors, Congress, and federal agencies are unable to agree on which voluntary AI validation frameworks will help ensure that patients are safe. These questions have pitted lawmakers against the FDA and venture capitalists against the Coalition for Health AI (CHAI) and its Big Tech partners.

The National Academies on Tuesday zoomed out, discussing how to manage AI risk across all industries. At the event, one in a series of workshops building on the National Institute of Standards and Technology (NIST) AI Risk Management Framework, speakers largely rejected the notion that AI is a beast so different from other technologies that it needs totally new approaches.


See original here:

Regulating artificial intelligence doesn't have to be complicated, some experts say - STAT

Artificial intelligence to affect broad range of public services – MDJOnline.com


Excerpt from:

Artificial intelligence to affect broad range of public services - MDJOnline.com

Artificial intelligence degree programs to be available at Oklahoma universities Oklahoma Voice – Oklahoma Voice

OKLAHOMA CITY - Students at some of Oklahoma's public colleges and universities will soon be able to pursue undergraduate degrees in artificial intelligence.

The Oklahoma State Regents for Higher Education approved artificial intelligence degree programs at Rose State College, Southwestern Oklahoma State University and the University of Oklahoma on June 4.

While some universities have offered courses in artificial intelligence, these are the first degree programs in the state.

Trisha Wald, dean of the Dobson College of Business and Technology at Southwestern Oklahoma State University, worked to start up the program at the university. Representatives at Rose State College and the University of Oklahoma were not available for comment.

While the degree program can begin in the fall for Southwestern Oklahoma State University, Wald said the late approval means some of the new AI classes may not be able to start until the spring.

Wald said she looked at similar programs in other states to create the proposed curriculum for this new program. While Wald said there are "not as many programs as you would think," she was able to use their programs to determine what Southwestern Oklahoma State University's program needed.

"It's a multidisciplinary program, so it's not just computer science courses," Wald said. "We've got higher-level math, psychology and philosophy courses, specifically on ethics. So it's going to help us have more well-rounded individuals."

Wald said the approval process took months, and the proposal had to demonstrate workforce demand to the Regents.

Over 19,000 jobs in Oklahoma currently require AI skills, officials said. This number is expected to increase by 21% in the next decade.

"AI is rapidly emerging as a vital employment sector," said State Regents for Higher Education Chair Jack Sherry in a statement. "New career opportunities in areas like machine learning, data science, robotics and AI ethics are driving demand for AI expertise, and Oklahoma's state system colleges and universities are answering the call."

Gov. Kevin Stitt said the new degree programs will allow Oklahoma's students to be at the forefront of the AI industry.

"These degree programs are a great leap forward in our commitment to innovation in education and will position Oklahoma to be a leader in AI," said Gov. Kevin Stitt in a statement. "AI is reshaping every aspect of our lives, especially academics. I'm proud of the Board of Regents for ensuring Oklahoma's higher ed students do more than just keep pace; they'll lead the AI revolution."


Read more from the original source:

Artificial intelligence degree programs to be available at Oklahoma universities Oklahoma Voice - Oklahoma Voice

How GenAI is Reshaping eCommerce and Consumer Trust – PYMNTS.com

In the bustling digital marketplace of 2024, a new currency is emerging: content.

However, as a recent study by Google researchers warns, this currency may face rapid devaluation due to an influx of artificial intelligence (AI)-generated material flooding the internet. The implications for eCommerce, digital marketing and consumer behavior are profound, potentially reshaping the online business landscape in both productive and concerning ways.

The study, currently awaiting peer review, paints a stark picture of the current state of online content. According to its findings, most generative AI (GenAI) users employ the technology to create and disseminate artificial content across the web. This includes everything from product images and reviews to marketing campaigns and social media personas.

This trend presents a double-edged sword for businesses operating in the digital sphere. On one hand, GenAI offers unprecedented opportunities for content creation and customer engagement. Small enterprises can now produce professional-quality marketing materials at a fraction of the traditional cost, potentially leveling the playing field with larger competitors.

However, the proliferation of AI-generated content also poses significant challenges. As consumers become increasingly skeptical of the authenticity of online information, businesses may find it harder to build trust and credibility with their target audience. This erosion of faith could have far-reaching consequences for eCommerce, potentially impacting conversion rates and customer loyalty.

The researchers highlighted several key areas where GenAI is used to blur the lines between authenticity and deception. These include creating fake product reviews, manipulating images to misrepresent goods or services, and generating misleading or fabricated news articles to influence consumer opinion.

Perhaps most concerning for the eCommerce sector is the study's finding that a significant portion of GenAI content is being deployed "with a discernible intent to influence public opinion, enable scam or fraudulent activities, or to generate profit." This suggests unscrupulous people are leveraging GenAI to gain unfair advantages in the digital marketplace.

The accessibility of these powerful AI tools is exacerbating the problem. As the researchers noted, many GenAI systems now require minimal technical expertise to operate, democratizing the ability to create convincing fake content. This ease of use has led to a surge in AI-generated material across various online platforms, from social media to eCommerce sites.

This new reality presents a complex challenge for established online retailers and digital brands.

How can retailers maintain consumer trust and differentiate their authentic offerings from a sea of potentially artificial competitors? Some companies are turning to blockchain technology and other verification methods to prove the authenticity of their products and content. Others are doubling down on personalized, human-centric marketing approaches that AI struggles to replicate convincingly.
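One basic building block behind such verification schemes, whether or not the record is anchored to a blockchain, is a cryptographic hash that fingerprints a piece of content so later copies can be checked against it. The sketch below is a minimal illustration of that idea, not any particular vendor's system; the byte strings are stand-ins for real image or review data:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 fingerprint of a piece of content (image, review, etc.)."""
    return hashlib.sha256(content).hexdigest()

def is_authentic(content: bytes, registered_fingerprint: str) -> bool:
    """Check whether a circulating copy matches the registered original."""
    return fingerprint(content) == registered_fingerprint

# A seller registers the fingerprint of an authentic product photo,
# e.g. on a ledger or in a signed catalogue.
registered = fingerprint(b"original product photo bytes")

print(is_authentic(b"original product photo bytes", registered))    # True
print(is_authentic(b"AI-altered product photo bytes", registered))  # False
```

Any pixel-level alteration changes the hash, so a match is strong evidence the content is the registered original; the hard part in practice is distributing and trusting the registered fingerprints, which is where ledgers and signatures come in.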

The impact on consumer behavior is equally significant. The study suggests that the proliferation of AI-generated content is testing people's capacity to discern fake from real. This growing skepticism could change how consumers interact with online content and make purchasing decisions. Some industry analysts predict a shift toward more reliance on trusted influencers and personal networks for product recommendations, potentially disrupting current digital marketing strategies.

Moreover, the researchers warn of a potential "skepticism overload," where consumers become overwhelmed by the need to verify the authenticity of online information constantly. This could lead to a paradoxical situation where some users simply disengage from critical evaluation altogether, potentially making them more vulnerable to misinformation and scams.

The eCommerce industry is already responding to these challenges.

Major platforms are investing heavily in AI detection tools and content moderation systems. Some are exploring using watermarking techniques for AI-generated content, allowing users to identify synthetic material easily. However, as AI technology advances, many worry that detection methods will struggle to keep pace.
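Production watermarking schemes for AI-generated content work statistically inside the model's generation process, but the basic embed-and-detect idea can be illustrated with a toy scheme that hides a tag in invisible zero-width Unicode characters. This is purely illustrative (such marks are trivially stripped, so no real platform would rely on them):

```python
# Toy text watermark: encode a tag's bits as invisible zero-width characters
# appended to the text. Illustrative only; not a robust watermarking scheme.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def embed_watermark(text: str, tag: str) -> str:
    """Append the tag's bits as invisible zero-width characters."""
    bits = "".join(f"{byte:08b}" for byte in tag.encode())
    return text + "".join(ZW1 if b == "1" else ZW0 for b in bits)

def extract_watermark(text: str) -> str:
    """Recover the hidden tag from the zero-width characters, if any."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode(errors="ignore")

marked = embed_watermark("A glowing five-star review.", "AI")
print(marked == "A glowing five-star review.")  # False: invisible chars were added
print(extract_watermark(marked))                # AI
```

Real systems instead bias the model's token-sampling choices so the mark survives paraphrase-free copying and can be detected statistically, which is exactly why the article notes detection must race against advancing generation.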

The study also raises important questions about the role of Big Tech companies in this evolving landscape. While not explicitly named in the paper, industry giants like Google have been at the forefront of developing and deploying GenAI technologies. These companies now face the complex task of balancing innovation with responsibility as they grapple with the unintended consequences of the tools they've helped create.

GenAI will play an increasingly central role as the digital economy evolves. Businesses, consumers and regulators will face the challenge of harnessing the technology's potential while mitigating its risks. In this new era of digital commerce, the ability to navigate the blurred lines between authentic and artificial intelligence may well become a key determinant of success.

Read more here:

How GenAI is Reshaping eCommerce and Consumer Trust - PYMNTS.com

Humanoid robots powered by AI turn heads at the World Artificial Intelligence Conference – Lufkin Daily News


The rest is here:

Humanoid robots powered by AI turn heads at the World Artificial Intelligence Conference - Lufkin Daily News

Top 3 Artificial Intelligence (AI) Coins of the First Week of July 2024 – BeInCrypto

The Artificial Intelligence token space has surged considerably this year, and the beginning of Q3 witnessed a major event in this industry.

BeInCrypto has put together a list of tokens that not only performed well during this bearish week but also deserve investors' attention.

Zignaly's 2.5% price rise was the best performance by an AI token this week. The crypto asset's early rise during the previous weekend countered the decline over the last four days. ZIG is thus currently changing hands at $0.105.

It is attempting to close above the crucial support at $0.105, one that has been tested as resistance in the past. Securing it as support would enable a recovery toward $0.112.

Read More: How Will Artificial Intelligence (AI) Transform Crypto?

However, failure to do so would mean further losses for the token. A drop to $0.093 is likely, which would invalidate the bullish thesis.

Although Aethir is a newly launched token, it has shown strength and has not conceded to the broader market trends. Trading at $0.066, ATH's price points to consolidation rather than decline.

This consolidation range spans between $0.077 and $0.063. The altcoin, still a newcomer in the space, aims to facilitate DePIN's role as a GPU cloud computing aggregator. Given DePIN's considerable demand over the past few days, ATH has been spared from losses.

Read More: How To Invest in Artificial Intelligence (AI) Cryptocurrencies?

Nevertheless, if ATH's price were to drop below the support at $0.063 again, a drawdown to $0.057 is likely. Losing that level would invalidate the consolidation thesis.

The formation of the Artificial Superintelligence Alliance is one of the biggest events in the AI sector this quarter. As Ocean Protocol (OCEAN) and SingularityNET (AGIX) merged into Fetch.ai (FET), their collective identity has been converted into ASI.

They began the token merger this week and expect to complete it by mid-July. Until then, the Artificial Superintelligence Alliance will trade under the FET ticker. This altcoin, which now holds a market capitalization of almost $3 billion, has become the second-biggest asset in the AI token market.

However, FET remained consolidated between $1.7 and $1.0 as the merger's bullishness encountered resistance from the broader market decline.

Read More: Top 9 Artificial Intelligence (AI) Cryptocurrencies in 2024

Therefore, this consolidation is likely to continue in the coming days until the merger is completed; the resulting bullishness could then help FET break out of it.

Disclaimer

In line with the Trust Project guidelines, this price analysis article is for informational purposes only and should not be considered financial or investment advice. BeInCrypto is committed to accurate, unbiased reporting, but market conditions are subject to change without notice. Always conduct your own research and consult with a professional before making any financial decisions. Please note that our Terms and Conditions, Privacy Policy, and Disclaimers have been updated.

Originally posted here:

Top 3 Artificial Intelligence (AI) Coins of the First Week of July 2024 - BeInCrypto

Everything You Need to Know About How Artificial Intelligence Transforming Travel and Tourism Industry – Travel And Tour World


Saturday, July 6, 2024

The travel and tourism industry stands on the brink of a technological revolution driven by artificial intelligence (AI). Over the next decade, AI is set to enhance personalisation for travellers and significantly improve industry-wide efficiency.

From streamlining booking processes to revolutionizing customer service, AI's potential is vast and varied. This article explores how AI technology is becoming integral to hotels, travel booking systems, and the broader hotel industry. Jason Bradbury, the renowned British television presenter and technology expert, shared his insights at the Travel Tech Show in London, United Kingdom, on how artificial intelligence (AI) is set to revolutionize the travel and tourism industry.

AI's ability to analyze vast amounts of data in real time is transforming how travellers plan their journeys. By leveraging AI algorithms, travel companies can offer highly personalised experiences tailored to individual preferences. These smart algorithms consider past travel behaviour, preferences, and real-time data to suggest destinations, activities, and accommodations that align with a traveller's unique tastes.

For example, when booking a trip, AI can analyze a customer's previous travel history and current interests to recommend suitable destinations and activities. This level of personalisation not only enhances the travel experience but also increases customer satisfaction and loyalty.

The traditional travel booking process often involves numerous steps, including searching for flights, hotels, and rental cars, as well as filling out various forms and making payments. AI is poised to simplify this process dramatically. AI-powered chatbots and virtual assistants can handle complex queries and transactions, guiding customers through each step of the booking process seamlessly.

These AI systems can quickly search through vast databases of flights, hotels, and car rentals to find the best options based on the traveller's preferences and budget. Additionally, AI can manage the tedious task of form-filling, ensuring that all necessary information is accurately and efficiently entered. This automation reduces the likelihood of errors and significantly speeds up the booking process.
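The "best options for a traveller's preferences and budget" step amounts to a filter-and-rank over available inventory. A drastically simplified sketch of that idea follows; the scoring weights, hotel names, and prices are invented for illustration, and real booking engines use far richer signals:

```python
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    price: float
    rating: float  # guest rating, 0-5

def rank_options(options, budget, rating_weight=0.8):
    """Filter out over-budget options, then score by rating and cheapness."""
    affordable = [o for o in options if o.price <= budget]
    def score(o):
        cheapness = 1 - o.price / budget  # 0 at the budget limit, 1 if free
        return rating_weight * (o.rating / 5) + (1 - rating_weight) * cheapness
    return sorted(affordable, key=score, reverse=True)

hotels = [
    Option("Harbour View", 180.0, 4.6),
    Option("Budget Inn", 70.0, 3.2),
    Option("Grand Palace", 320.0, 4.9),  # over budget, filtered out
]
best = rank_options(hotels, budget=200.0)
print([o.name for o in best])  # ['Harbour View', 'Budget Inn']
```

Weighting rating heavily surfaces the well-reviewed in-budget hotel first; shifting weight toward cheapness would instead favour the cheapest option, which is the kind of per-customer tuning the article describes.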

One of the most impactful applications of AI in the travel industry is in customer service. AI-powered chatbots and virtual assistants are available 24/7 to handle customer inquiries and issues. These systems can understand and respond to natural language queries, providing accurate and timely information to customers.

For instance, if a traveller encounters a problem with their booking or needs information about local attractions, an AI-powered chatbot can provide immediate assistance. This not only improves the customer experience but also reduces the workload on human customer service agents, allowing them to focus on more complex issues that require a personal touch.

Several luxury hotels are at the forefront of integrating AI into their services to enhance the guest experience. For instance, the Hilton hotel chain uses AI-powered concierge services through its Connie robot, which provides guests with information about hotel amenities and local attractions. The Wynn Las Vegas has incorporated Amazon Echo devices in all its rooms, allowing guests to control room features and make service requests via voice commands. The Henn-na Hotel in Japan is known for its extensive use of robots and AI, from check-in to room service. These examples highlight how luxury hotels are leveraging AI to provide exceptional service and convenience to their guests.

AI is also revolutionizing the hotel check-in and check-out processes. Traditional check-in procedures often involve long queues and paperwork. With AI, hotels can offer a seamless and efficient check-in experience. Guests can check in online or through a mobile app, receive a digital room key, and go straight to their room upon arrival. Similarly, the check-out process can be automated, allowing guests to settle their bills and receive receipts electronically.

This automation not only enhances the guest experience but also allows hotel staff to focus on providing personalized services to guests. By reducing the time spent on administrative tasks, staff can engage more with guests, improving overall service quality.

AI's predictive capabilities are also transforming hotel operations. By analyzing data from various sensors and systems, AI can predict when equipment is likely to fail and schedule maintenance before a breakdown occurs. This predictive maintenance helps hotels avoid costly downtime and ensures that facilities are always in optimal condition.

Furthermore, AI can optimize energy usage within hotels by analyzing patterns in occupancy and usage. For instance, AI can adjust heating, cooling, and lighting based on real-time data, reducing energy consumption and operational costs.
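As a toy illustration of the occupancy-based adjustment described above: a simple controller keeps a comfort setpoint while a room is occupied and lets the temperature drift toward outside conditions when it is empty. The setpoints and the 4 °C drift cap are invented for illustration, not taken from any real building-management system.

```python
# Illustrative rule: relax the HVAC setpoint when the room is empty,
# letting it drift toward the outside temperature within a cap.

def hvac_setpoint_c(occupied: bool, outside_temp_c: float) -> float:
    """Pick a target room temperature in Celsius."""
    comfort = 21.0  # occupied comfort target (assumed)
    if occupied:
        return comfort
    # Unoccupied: drift toward outside temperature, capped at +/-4 C,
    # so reheating/recooling on return stays cheap.
    drift = max(-4.0, min(4.0, outside_temp_c - comfort))
    return comfort + drift

print(hvac_setpoint_c(True, 30.0))   # -> 21.0
print(hvac_setpoint_c(False, 30.0))  # -> 25.0
print(hvac_setpoint_c(False, 10.0))  # -> 17.0
```

The "AI" in a real deployment lies in predicting occupancy from booking and sensor data; the actuation logic stays this simple.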

While the benefits of AI in the travel and tourism industry are clear, there are also challenges to consider. Data privacy and security are paramount, as the increased use of AI involves handling large amounts of personal information. Travel companies must ensure that they have robust data protection measures in place to safeguard customer information.

Additionally, the integration of AI technology requires significant investment in infrastructure and training. Companies must be willing to invest in the necessary technology and ensure that their staff are adequately trained to work alongside AI systems.

AI is poised to revolutionize the travel and tourism industry over the next decade. By enhancing personalisation, streamlining booking processes, improving customer service, and optimizing operations, AI will significantly improve the travel experience for customers and drive efficiency across the industry. As travel companies continue to adopt and integrate AI technology, they will be better positioned to meet the evolving needs and expectations of modern travellers.


AI is a Problem For Black Folks, Here’s How We Can Improve It – The Root

How many of you reading this use AI daily? If you're thinking, "Not me," think again... just about all of us do. From unlocking our phones with facial recognition to scrolling through social media, artificial intelligence is everywhere and only becoming more pervasive. It's like having an invisible assistant that you didn't ask for but can't live without.

AI is also rapidly becoming part of our entertainment landscape, used in ways that at once surprise and deceive us. Kendrick Lamar might've been the first rapper to become the face of AI, literally and figuratively: in his 2022 music video "The Heart Part 5," Lamar and director Dave Lee used AI as part of the song's artistic statement.

Earlier this year, amid the Kendrick-Drake rap beef that had everyone in a chokehold, an AI-generated diss track using Lamar's voice fooled many into believing it was the real thing, raising concerns about the darker side of AI and ethics in music production.

Beyond the use (or misuse) of AI in pop culture, there are a multitude of real-world problems with artificial intelligence. And if we don't pay attention, we'll continue to be the victims of societal biases. To start, generative AI is expected to exacerbate the racial wealth gap because Black workers are overrepresented in roles that AI is likely to replace. Facial recognition technology is far less accurate for Black faces, especially Black women, and this unreliability goes far beyond a technical glitch: it can lead to wrongful arrests and other serious consequences. AI is also transforming the hiring process by deciding which resumes to read and share. Because these systems reflect systemic biases, they have led to the exclusion of qualified Black candidates.

AI systems can misrepresent our culture and spread misinformation about our history. This isn't a matter of computers occasionally getting it wrong; it's about stopping the perpetuation of harmful stereotypes that erase our identity.

The companies involved in training large language models (LLMs), like OpenAI's GPT-4, also bear significant responsibility. One Black employee who formerly worked with AI training company Data Annotation Tech found themselves booted from the platform after frequently calling out racial bias. The worker also confirmed that all of their Black referrals have been fully ignored. It's like the adage "If you see something, say something," except in this case, they were kicked out for it.

In a cruel and abusive irony, OpenAI abused Kenyan workers in what it called an effort to make ChatGPT "less toxic," while paying them only $2 per hour. We're certain these companies could say it's just a coincidence or point to other reasons for leaving us out of the process, but the fact remains: we're being excluded.

Like most technological advances, AI could be a game-changer for us if we play it right. It could improve healthcare and education, and even begin fixing systemic biases in banking. But for this to happen, we have to stay informed and get proactive. If not, we risk AI becoming the high-tech version of a nosy neighbor who's always in our business but never quite gets the story right.

To start, we need to demand better representation in the tech industry. When we're involved in developing and implementing AI, we can help ensure these systems actually work for the betterment of our culture. AI is clearly here to stay, so it's up to our community to ensure it works for us, not against us.

By staying informed, getting involved, and being proactive, we can help ensure that AI technologies are developed with our needs and perspectives in mind, rather than perpetuating existing biases and inequalities.


China-led resolution on artificial intelligence passes in United Nations – South China Morning Post

In a diplomatic win for Beijing on Monday, the United Nations General Assembly unanimously adopted a China-led resolution that urges the international community to create a "free, open, inclusive and non-discriminatory" business environment among wealthy and developing nations for artificial intelligence development.

More than 140 nations, including the United States, co-sponsored the non-binding resolution affirming that all nations should enjoy equal opportunities in the non-military domain, calling for global cooperation to assist developing countries facing unique challenges and ensure they will not be further left behind.

Fu Cong, China's permanent representative to the United Nations, said after the assembly session that "a fragmented approach toward AI, toward the digital technology, is not going to benefit anybody."

He added that the resolution was proposed to emphasize the important role that the UN could play on AI governance as the most inclusive organization.

Describing the significance of the measure as "great and far-reaching," the envoy noted that AI technology was advancing quickly and "the gap between the North and South, especially between the developing countries and the developed countries, is also widening."

Ambassador Fu, who served as the director-general of the department of arms control at the Chinese foreign affairs ministry from 2018 to 2022, also said that China was "very thankful, and we're very appreciative of the positive role that the US has played in this whole process."

He added that the issue of AI had been discussed "at a very senior level, at the foreign ministers' level, and also even at the head-of-state level."

"So we do look forward to intensifying our cooperation with the United States, and, for that matter, with all countries in the world on this issue," he said.

Beijings initiative also follows the assemblys adoption of the first global resolution on AI in March. Proposed by Washington and co-sponsored by China and over 120 nations, the measure encouraged countries to safeguard human rights, protect personal data and monitor AI for potential risks.

A senior official from US President Joe Bidens administration later said that consensus had been achieved after intense discussions among countries with differing views.

On Monday, Ambassador Fu called the two resolutions complementary, saying the earlier one was more general and the Chinese one more focused on capacity building.

Beijing has sought to incorporate voices from the developing world into discussions on managing AI. In October, China released its Global AI Governance Initiative, saying that "all countries, regardless of their size, strength, or social system, should have equal rights to develop and use AI."

Beijing is seen as trying to ensure that the US does not single-handedly dominate the discourse on setting global standards for AI.

The US and China also remain locked in a competition to advance in the hi-tech fields of AI and semiconductors.

In March, Washington revised regulations further limiting Chinas access to US-made AI chips and chip-making tools. The export controls were initially introduced in October 2022 to prevent Beijing from leveraging American technology for military modernization. They were updated a year later to eliminate loopholes.

In another push to hobble Beijings ability to gain cutting-edge technologies like semiconductors, quantum computing and AI, Biden signed an executive order in August 2023 banning US individuals and companies from investing in sensitive sectors in China.

The US Treasury Department, which is defining the restrictions in the measure, said last week that they would focus on "the next generation of military, intelligence, surveillance or cyber-enabled capabilities that pose national security risks to the United States."

On Monday, Ambassador Fu called on the US to lift the sanctions in line with the newly adopted resolution.

"If people are true to the content of this resolution, it says that it is important to foster inclusive business environment. We don't think that the US actions [are] along that line," he said.


The US intelligence community is embracing generative AI – Nextgov/FCW

The normally secretive U.S. intelligence community is as enthralled with generative artificial intelligence as the rest of the world, and perhaps growing bolder in discussing publicly how they're using the nascent technology to improve intelligence operations.

"We were captured by the generative AI zeitgeist just like the entire world was a couple of years back," Lakshmi Raman, the CIA's director of Artificial Intelligence Innovation, said last week at the Amazon Web Services Summit in Washington, D.C. Raman was among the keynote speakers for the event, which had a reported attendance of 24,000-plus.

Raman said U.S. intelligence analysts currently use generative AI in classified settings for search and discovery assistance, writing assistance, ideation, brainstorming and helping generate counter arguments. These novel uses of generative AI build on existing capabilities within intelligence agencies that date back more than a decade, including human language translation and transcription and data processing.

As the functional manager for the intelligence community's open-source data collection, Raman said the CIA is turning to generative AI to keep pace with, for example, all of the news stories that come in every minute of every day from around the world. AI, Raman said, helps intelligence analysts comb through vast amounts of data to pull out insights that can inform policymakers. In a giant haystack, AI helps pinpoint the needle.

"In our open-source space, we've also had a lot of success with generative AI, and we have leveraged generative AI to help us classify and triage open-source events to help us search and discover and do levels of natural language query on that data," Raman said.

A thoughtful approach to AI

Economists believe generative AI could add trillions of dollars in benefits to the global economy annually, but the technology is not without risks. Countless reports showcase so-called hallucinations, or inaccurate answers spit out by generative AI software. In national security settings, AI hallucinations could have catastrophic consequences. Senior intelligence officials recognize the technology's potential but must responsibly weigh its risks.

"We're excited to see about the opportunity that [generative AI] has," Intelligence Community Chief Information Officer Adele Merritt told Nextgov/FCW in an April interview. "And we want to make sure that we are being thoughtful about how we leverage this new technology."

Merritt oversees information technology strategy efforts across the 18 agencies that comprise the intelligence community. She meets regularly with other top intelligence officials, including Intelligence Community Chief Data Officer Lori Wade, newly appointed Intelligence Community Chief Artificial Intelligence Officer John Beieler and Rebecca Richards, who heads the Office of the Director of National Intelligence's Civil Liberties, Privacy and Transparency Office, to discuss and ensure AI efforts are safe, secure and adhere to privacy standards and other policies.

"We also acknowledge that there's an immense amount of technical potential that we still have to kind of get our arms around, making sure that we're looking past the hype and understanding what's happening, and how we can bring this into our networks," Merritt said.

At the CIA, Raman said her office works in concert with the Office of General Counsel and Office of Privacy and Civil Liberties to address risks inherent to generative AI.

"We think about risks quite a bit, and one of the risks we really think about are, how will our users be able to use these technologies in a safe, secure and trusted way?" Raman said. "So that's about making sure that they're able to look at the output and validate it for accuracy."

Because security requirements are so rigorous within the intelligence community, far fewer generative AI tools are secure enough to be used across its enterprise than in the commercial space. Intelligence analysts can't, for example, access a commercial generative AI tool like ChatGPT in a sensitive compartmented information facility (pronounced "skiff"), where some of their most sensitive work is performed.

Yet a growing number of generative AI tools have met those standards and are already impacting missions.

In March, Gary Novotny, chief of the ServiceNow Program Management Office at CIA, explained how at least one generative AI tool was helping reduce the time it took for analysts to run intelligence queries. His remarks followed a 2023 report that the CIA was building its own large language model.

In May, Microsoft announced the availability of GPT-4 for users of its Azure Government Top Secret cloud, which includes defense and intelligence customers. Through the air-gapped solution, customers in the classified space can make use of a tool very similar to what's used in the commercial space. Microsoft officials noted security accreditation took 18 months, indicative of how complex software security vetting at the highest levels can be, even for tech giants.

Each of the large commercial cloud providers is making similar commitments. Google Cloud is bringing many of its commercial AI offerings to some secure government workloads, including its popular Vertex AI development platform. Similarly, Oracle's cloud infrastructure and associated AI tools are now available in its U.S. government cloud.

Meanwhile AWS, the first commercial cloud service provider to serve the intelligence community, is looking to leverage its market-leading position in cloud computing to better serve growing customer demands for generative AI.

"The reality of generative AI is you've got to have a foundation of cloud computing," AWS Vice President of Worldwide Public Sector Dave Levy told Nextgov/FCW in a June 26 interview at the AWS Summit. "You've got to get your data in a place where you can actually do something with it."

At the summit, Levy announced the AWS Public Sector Generative AI Impact Initiative, a two-year, $50 million investment aimed at helping government and education customers address generative AI challenges, including training and tech support.

"The imperative for us is helping customers understand that journey," Levy said.

On June 26, AI firm Anthropic's chief executive officer Dario Amodei and Levy jointly announced the availability of Anthropic's Claude 3 Sonnet and Claude 3 Haiku AI models to U.S. intelligence agencies. The commercially popular generative AI tools are now available through the AWS Marketplace for the U.S. Intelligence Community, which is essentially a classified version of its commercial cloud marketplace.

Amodei said that while Anthropic is responsible for the security of the large language model, it partnered with AWS because of its superior cloud security standards and reputation as a public sector leader in the cloud computing space. Amodei said the classified marketplace, which allows government customers to spin up and try software before they buy it, also simplifies procurement for the government. And, he said, it gives intelligence agencies the means to use the same tools available to adversaries.

"The [Intelligence Community Marketplace] makes it easier, because AWS has worked with this many times, and so we don't have to reinvent the wheel," Amodei said. "AI needs to empower democracies and allow them to function better and remain competitive on the global stage."


Star Trek open thread: A long way away from true artificial intelligence – Daily Kos

The term "artificial intelligence" is thrown around too loosely these days. For example, I was looking at an old student project on GitHub, a tic-tac-toe program. The programmer, who back then was a student, described his program as an artificial intelligence.

It's not artificial intelligence. The game engine is simply an algorithm that scans the available spaces and determines if placing an X or an O on that square wins the game. There's no consciousness, no judgement based on the program's past experience.

Even with chess, claims of artificial intelligence are frequently exaggerated. Chess programs, along with many other programs, have gotten so fast that it is quite easy for a chess program to simply consider all possible game branches to a given depth (e.g., three moves ahead) and choose the most advantageous path.

Maybe for the endgame a chess program switches to evaluating positions according to a specialized endgames database. Even so, this doesn't have to be artificial intelligence. There might be some simple threshold, such as switching to endgame mode when fewer than nine pieces remain on the board.
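The fixed-depth game-tree search described above can be sketched as textbook minimax, shown here for tic-tac-toe rather than chess to keep it short. This is illustrative code, not the student project under discussion: 'X' maximizes, 'O' minimizes, and the board is a 9-element list with None for empty squares.

```python
# Depth-limited minimax: exhaustively try every move sequence down to
# `depth` plies and back up the best score for the side to move.

LINES = [(0,1,2), (3,4,5), (6,7,8),   # rows
         (0,3,6), (1,4,7), (2,5,8),   # columns
         (0,4,8), (2,4,6)]            # diagonals

def winner(b):
    for i, j, k in LINES:
        if b[i] is not None and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player, depth):
    w = winner(b)
    if w == "X": return 1
    if w == "O": return -1
    moves = [i for i, s in enumerate(b) if s is None]
    if not moves or depth == 0:
        return 0  # draw, or depth cutoff scored as neutral
    scores = []
    for m in moves:
        b[m] = player                  # make the move
        scores.append(minimax(b, "O" if player == "X" else "X", depth - 1))
        b[m] = None                    # undo it
    return max(scores) if player == "X" else min(scores)

def best_move(b, player, depth=9):
    def value(m):
        b[m] = player
        v = minimax(b, "O" if player == "X" else "X", depth - 1)
        b[m] = None
        return v
    moves = [i for i, s in enumerate(b) if s is None]
    return (max if player == "X" else min)(moves, key=value)

board = ["X", "X", None,
         "O", "O", None,
         None, None, None]
print(best_move(board, "X"))  # -> 2 (X completes the top row)
```

Chess engines layer evaluation heuristics, pruning and opening/endgame tables on top of this skeleton, but the point stands: it is exhaustive search, not judgement.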

Despite how good computers have gotten at chess, they have not put professional chess players out of work. The only way I'm attending a tournament of computers playing chess is if my own chess program is competing, and I'm moving very slowly on that one.

On the other hand, there are so few professional chess players in the world that putting them all out of work would not be as impactful as, for example, putting all marketing copywriters out of work, or putting all graphic designers out of work.

Despite the obvious shortcomings of the artificial intelligence that is available today, corporate executives are still eager to use it to replace human writers, artists and anyone else they can think of.

A couple of weeks ago, I asked SDXL to generate for me a few images of William Shakespeare in a coffee shop writing a play on a laptop computer. All of the results were bad in varying degrees. The best image, in my opinion, the only one I thought worth posting here, still has some obvious flaws that you will notice even if you know very little about Shakespeare.

Now suppose that in an episode of a Star Trek show one of the main characters goes to a holodeck and asks the computer to render Shakespeare. The holodeck Shakespeare will almost certainly have two hands with five fingers each, and on each hand each finger will have the expected number of joints (one fewer for the thumb than the other four).

Of course that's because, from a real-world viewpoint, the producers of the show will hire a human actor to portray Shakespeare. In the story, it's because the holodeck computer understands how humans are put together and how they move.

Looking at Star Trek as a promise for the future, it's clear that A.I. has a very long way to go.

In much of Star Trek (the original series), computers store and retrieve lots of information, and automate many repetitive processes, but they don't really show creativity.

However, the original series episode "The Ultimate Computer" is rather prescient of the fears many have about A.I. today. The titular computer is the new M-5, which has just been installed aboard the Enterprise. Roger Thompson for CounterPunch.org:

In the early scenes [of the episode], Captain Kirk [(William Shatner)] expresses concerns that he might be replaced by the machine, a fear that is now common in many quarters.

There is a demonstration of the M-5's capabilities early on in the episode that does nothing to assuage Kirk's fears. The powerful computer is tasked with naming crew members for a landing party to go down to the planet the Enterprise is currently orbiting. The M-5 makes the same choices as Captain Kirk, with one glaring and galling exception: the M-5 doesn't think Kirk needs to go down to the planet.

But even ChatGPT would be able to put together a landing party roster. War games will be the true test of the M-5.

When the M-5 begins its rampage during the war games, Kirk convinces the machine's creator, Dr. Daystrom [(William Marshall)], to talk to it and try to make it stop, but Daystrom suffers a nervous breakdown before he can get the M-5 to discontinue the attack. Kirk then proves the M-5 is guilty of murder, and the computer shuts itself off and leaves the Enterprise unable to defend herself from attack from the surviving ships in Commodore Wesley's attack force. Fortunately, Wesley [(Barry Russo)] decides not to destroy the Enterprise, and Kirk comments that he knew that Wesley would act with compassion. Dr. McCoy [(DeForest Kelley)], always true to character, then remarks: "Compassion. That's the one thing no machine ever had. Maybe it's the one thing that keeps men ahead of them."

Not sure I agree with the good doctor on this one. If the M-5 can feel guilt, can't it also feel compassion? But if it couldn't feel guilt, wouldn't it just have gone ahead and destroyed all the ships in the war games?

How exactly the M-5 works is left very vague. But with Lt. Commander Data (Brent Spiner) on Star Trek: The Next Generation, the super-strong and super-smart android who is second in line to command the Enterprise-D should something unfortunate happen to Captain Picard, we get a much clearer idea of what the android's intelligence is based on. Data's brain essentially has an LLM.

I quote now from the page about violinists at Memory Alpha. I like Memory Alpha, despite the annoying tendency to use past tense for absolutely everything.

In 2366, Data combined the differing styles of violinists Jascha Heifetz and Trenka Bron-Ken expertly. Jean-Luc Picard convinced him that having done so evidenced that he had not merely imitated their techniques, but created something new from them. (TNG: "The Ensigns of Command")

Later that year, Data asked Perrin whose style, of the over three hundred concert violinists that he had been programmed with, she fancied. Among the many were Heifetz, Yehudi Menuhin, Grak-tay, and Tataglia. She chose Tataglia. (TNG: "Sarek")

I know who Jascha Heifetz and Yehudi Menuhin are; I have recordings by them in my iTunes collection. I'd be hard-pressed to identify in-universe what stylistic details Data will take from them, though of course from a production point of view I strongly doubt the recording violinists were given the direction to mix the styles of Heifetz and Menuhin with fictional violinists of their own imagining.

Data surely has books like The Art of String Quartet Playing by Mary D. Herter Norton in his memory banks and can quote them at will. But he also has the experience of handling an actual violin and playing it in an ensemble with human players whose intonation and rhythm might not be quite as precise as his.

Unlike today's LLMs, Data understands that he can get things wrong sometimes. In the episode "Cause and Effect," he realizes he got it wrong multiple times when it's too late, but he will still be able to send a message to a moment when it's too early for anyone to understand what's going on.

Before anyone complains, I should say something about spoilers. So far I've only mentioned episodes that first aired more than thirty years ago. If you've read this far, you either have watched these episodes many times and know them by heart, or you haven't watched them but know so much about more recent episodes, movies and series that it's not a spoiler to tell you that the Enterprise gets destroyed and undestroyed several times.

At the crucial moment when everything is about to go wrong, Captain Picard (Patrick Stewart) listens to his senior officers for ideas on how to avert annihilation, and decides to do what Lt. Commander Data suggests.

It is during the explosion that Data realizes that the right thing to do is what Commander Riker (Jonathan Frakes) suggests. The Enterprise blows up and the time loop is reset. As the time loop starts a new iteration, Data's strange message from the future becomes more insistent: the number 3, corresponding to the three pips on Commander Riker's uniform.

For all Data's knowledge and ability, Starfleet still considers Commander Riker to be more qualified than Data to command a starship.

In the second part of the "Redemption" two-parter, Picard is trying to set up a detection grid to catch Romulans supplying weapons to one side in a Klingon civil war. Given the short notice, Picard can only assemble a small complement of random undermanned ships. Picard sends some of his senior officers to captain some of the ships.

First time I watched the episode, I was skeptical of the detection grid idea, but mostly because the diagram we see on the screen suggests a two-dimensional grid. If the Romulans can come all the way from Romulus, surely they can go around a detection grid that doesn't surround the whole planet. But let's put that aside; let's just say that either I misunderstood the diagram or the graphics department messed up the diagram.

Picard assigns Data to command the USS Sutherland. Lieutenant Hobson (Timothy Carhart) decides he's going to be the shadow captain of the Sutherland. No, actually, "shadow captain" is the wrong term; it implies that Hobson will treat the nominal captain with a bare minimum of respect and deference.

But from the moment Data comes aboard, Hobson openly disrespects the android, who has earned the same rank in Starfleet from years of experience, and Data's experience includes almost five years aboard the flagship of the fleet.

We may doubt that Data bases his violin playing on Heifetz or Menuhin, but it's clear that he bases his leadership style on Picard's example, calmly listening to his subordinates and treating them as professionals rather than recruits at boot camp.

But that style won't quite work with Hobson, who is always ready to substitute his own judgement for that of an artificial intelligence he does not respect.

Once again at a crucial moment, Data realizes that what needs to be done is not the obvious thing everyone assumes. The Romulans notice a hole in the detection grid and go to it. Data decides that he needs to make the hole bigger and fire a shot in the dark to illuminate the cloaked Romulan ship.

With Captain Picard saying the gap needs to be closed, Hobson obviously doesn't want to carry out the crazy idea that the android has just come up with. So Data feigns anger, as if he's going to punch Hobson in the face if Hobson doesn't do what Data orders.

And so, the Romulan ship is detected, and the Romulans decide to abandon their favored side in the Klingon civil war. I don't think ChatGPT would come up with that idea.

The open thread question: Assuming the continuation of human civilization to the 24th Century, how do you think artificial intelligence will progress?

Feel free to mention pertinent examples from newer shows like Star Trek: Discovery and Star Trek: Picard. But please, no bashing of those shows just to bash them.


OPCW launches €260,000 Artificial Intelligence Research Challenge | OPCW – Organisation for the Prohibition of Chemical Weapons

THE HAGUE, Netherlands, 2 July 2024: The Organisation for the Prohibition of Chemical Weapons (OPCW) launched a crowdsourcing challenge for researchers and scientists from all OPCW Member States to propose innovative artificial intelligence (AI) systems and approaches that could be used by the Organisation to enhance its capabilities and support adaptation towards future challenges. The deadline for submissions is 16:00 CET on 9 August 2024.

The OPCW and its Scientific Advisory Board (SAB) have been closely monitoring recent developments in AI and considering both the risks they may pose and the opportunities they could offer. The SAB recognises that AI could offer many benefits to the work of the Organisation, helping it achieve its mission to rid the world of chemical weapons. This Challenge aims to leverage AI technology to establish new capabilities within the OPCW or to further develop existing ones, ensuring the Organisation is best equipped and prepared to address current and future threats. The Challenge is seeking AI solutions to build capabilities specifically relating to implementation of the Chemical Weapons Convention, and not to the OPCW's business processes. Examples include document analysis to identify emerging threats or trends, data mining in chemical forensics, medical countermeasure design, and open-source data analysis to corroborate reports of chemical weapons use. Proposals from research teams in all OPCW Member States are strongly encouraged.

Following the review of all submissions by the Technical Evaluation Team, consisting of members of the SAB and qualified OPCW Technical Secretariat staff, a total of four proposals will each be awarded up to €65,000 for the purpose of developing the project over the course of one year. The AI Challenge is funded by the European Union and the United Kingdom of Great Britain and Northern Ireland.

All questions and submissions should be sent to OPCW Procurement.

The OPCW SAB comprises 25 independent experts from OPCW Member States. Its role is to provide advice to the Director-General relating to developments in scientific and technological fields that are relevant to the Chemical Weapons Convention. On request, the SAB also provides advice to the OPCW Technical Secretariat on technical matters related to the implementation of the Convention, including on cooperation and assistance.

As the implementing body for the Chemical Weapons Convention, the OPCW, with its 193 Member States, oversees the global endeavour to permanently eliminate chemical weapons. Since the Convention's entry into force in 1997, it has become the most successful disarmament treaty, eliminating an entire class of weapons of mass destruction.

In 2023, the OPCW verified that all chemical weapons stockpiles declared by the 193 States Parties to the Chemical Weapons Convention since 1997, totalling 72,304 metric tonnes of chemical agents, have been irreversibly destroyed under the OPCW's strict verification regime. For its extensive efforts in eliminating chemical weapons, the OPCW received the 2013 Nobel Peace Prize.


Artificial Intelligence in Medicine Market is set to Fly High Growth in Years to Come| Alphabet, Berg Health, BioXcel … – openPR


Get free access to Sample Report in PDF Version along with Graphs and Figures @ https://www.advancemarketanalytics.com/sample-report/4225-global-artificial-intelligence-in-medicine-market?utm_source=OpenPR/utm_medium=Rahul

Some of the key players profiled in the study are: Alphabet Inc. (Google Inc.) (United States), Berg Health (United Kingdom), BioXcel Corporation (United States), Enlitic Inc. (United States), Intel Corporation (United States), General Vision (United States), IBM Corporation (United States), Microsoft Corporation (United States), Nvidia Corporation (United States), and Welltok Inc. (United States).

Artificial intelligence (AI) is a field of computer science that can analyse large amounts of medical data. In many clinical settings, its ability to exploit significant relationships within a data collection may be employed in diagnosis, treatment, and the prediction of results. Artificial intelligence in medicine can be defined as a scientific discipline concerned with research studies, projects, and applications aimed at assisting decision-based medical tasks through knowledge- and/or data-intensive computer-based solutions that ultimately support and improve a healthcare professional's performance. The lack of qualified healthcare workers and the growth in the processing capacity of AI systems are two significant factors driving the expansion of the AI in medicine market, which is expected to help enhance the efficiency of drug discovery and clinical trial management.

On 12 April 2021, Microsoft announced its acquisition of Nuance Communications (a leading speech-to-text solution provider), valued at USD 19.7 billion. The deal is aimed at Microsoft's strategic expansion across the healthcare vertical, and was followed by the launch of Microsoft Cloud for Healthcare.

Keep yourself up to date with the latest market trends and the changing dynamics caused by the COVID impact and the global economic slowdown. Maintain a competitive edge by sizing up the available business opportunities across the Artificial Intelligence in Medicine Market's various segments and emerging territories.

Influencing Market Trend: Technical Progress in the Medical Industry
Market Drivers: Digitization Across Different Industry Verticals; High Investment in Medical R&D
Opportunities: Growth in Healthcare Infrastructure Across Emerging Regions
Challenges: Fierce Competitive Pressure

Analysis by Type (Hardware, Software, Service), Application (Drug Discovery, Clinical Research Trial, Personalized Medicine, Others), Technology (Deep Learning, Querying Method, Natural Language Processing, Context Aware Processing)

Have Any Questions Regarding Global Artificial Intelligence in Medicine Market Report, Ask Our Experts@ https://www.advancemarketanalytics.com/enquiry-before-buy/4225-global-artificial-intelligence-in-medicine-market?utm_source=OpenPR/utm_medium=Rahul

The regional analysis of the Global Artificial Intelligence in Medicine Market covers key regions such as Asia Pacific, North America, Europe, Latin America and the Rest of the World. North America is the leading region across the world, while, owing to the rising number of research activities in countries such as China, India, and Japan, the Asia Pacific region is expected to exhibit a higher growth rate over the forecast period 2024-2030.

Table of Content
Chapter One: Industry Overview
Chapter Two: Major Segmentation (Classification, Application, etc.) Analysis
Chapter Three: Production Market Analysis
Chapter Four: Sales Market Analysis
Chapter Five: Consumption Market Analysis
Chapter Six: Production, Sales and Consumption Market Comparison Analysis
Chapter Seven: Major Manufacturers Production and Sales Market Comparison Analysis
Chapter Eight: Competition Analysis by Players
Chapter Nine: Marketing Channel Analysis
Chapter Ten: New Project Investment Feasibility Analysis
Chapter Eleven: Manufacturing Cost Analysis
Chapter Twelve: Industrial Chain, Sourcing Strategy and Downstream Buyers

Read Executive Summary and Detailed Index of full Research Study @ https://www.advancemarketanalytics.com/reports/4225-global-artificial-intelligence-in-medicine-market?utm_source=OpenPR/utm_medium=Rahul

Highlights of the Report:
- The future prospects of the global Artificial Intelligence in Medicine market during the forecast period 2024-2030 are given in the report.
- The major developmental strategies integrated by the leading players to sustain a competitive market position are included.
- The emerging technologies that are driving the growth of the market are highlighted.
- The market value of the segments leading the market and of their sub-segments is mentioned.
- The report studies the leading manufacturers and other players entering the global Artificial Intelligence in Medicine market.

Contact Us: Craig Francis (PR & Marketing Manager) AMA Research & Media LLP Unit No. 429, Parsonage Road Edison, NJ New Jersey USA - 08837 Phone: +1(201) 7937323, +1(201) 7937193 sales@advancemarketanalytics.com

About Author: Advance Market Analytics is a global leader in the market research industry, providing quantified B2B research to Fortune 500 companies on high-growth emerging opportunities that will impact more than 80% of worldwide companies' revenues. Our analysts track high-growth studies with detailed statistical and in-depth analysis of market trends and dynamics that provide a complete overview of the industry. We follow an extensive research methodology coupled with critical insights into related industry factors and market forces to generate the best value for our clients. We provide reliable primary and secondary data sources, and our analysts and consultants derive informative and usable data suited to our clients' business needs. The research study enables clients to meet varied market objectives, from global footprint expansion to supply chain optimization and from competitor profiling to M&As.

This release was published on openPR.


This is How You Can Imagine and Plan Trip with Your Tour Planner for Your Dream Destination with the Use of Artificial … – Travel And Tour World


Friday, July 5, 2024

As technology continues to advance, the travel industry is set to undergo a significant transformation. Mixed reality (MR) and artificial intelligence (AI) are at the forefront of this revolution, promising to change how travel companies help customers imagine their experiences and promote sustainable travel practices.

In an exclusive interview with Travel And Tour World, Jason Bradbury, the British television presenter, shares his views on artificial intelligence and emerging technologies. Artificial intelligence (AI) and emerging technologies are revolutionizing various industries by enhancing efficiency, accuracy, and innovation. From healthcare to finance, AI-driven solutions and advancements in robotics, blockchain, and IoT are transforming traditional practices, enabling smarter decision-making, and creating new opportunities for growth and development in a rapidly evolving digital landscape.

Imagine walking into a travel agency and putting on a pair of VR glasses. Instantly, your living room becomes the setting of an Egyptian sunset or the interior of a luxurious hotel you are considering. This immersive experience allows customers to virtually explore destinations and accommodations before committing to a booking. By providing a realistic preview, travel companies can enhance customer satisfaction and reduce the likelihood of disappointment upon arrival.

While mixed reality transforms how we visualize travel, AI is revolutionizing various aspects of the travel industry, including sustainability. The negative impact of travel on local ecosystems is well-documented. However, AI promises significant innovations in resource management and materials science, positively affecting how travelers interact with the environment.

AI can optimize resource management in the travel industry, from energy consumption in hotels to water usage in resorts. Smart systems can analyze usage patterns and adjust settings in real-time to reduce waste and conserve resources. For instance, AI can manage lighting, heating, and cooling systems in hotels, ensuring they operate efficiently only when needed.

AI is also driving advancements in less polluting fuels and energy production. Machine learning algorithms can optimize fuel usage in transportation, reducing emissions and promoting cleaner travel options. Additionally, AI can enhance the efficiency of renewable energy sources, such as solar and wind power, making them more viable for use in the travel industry.

One of the significant environmental challenges in travel is plastic waste. AI can play a crucial role in waste clean-up and recycling efforts. For example, AI-powered robots can identify and sort recyclable materials more efficiently than human workers, reducing the amount of waste that ends up in landfills. Additionally, AI can help develop sustainable alternatives to traditional plastics, further minimizing the environmental footprint of travel.

As technology continues to evolve, the travel industry is poised for a significant transformation. One of the most exciting developments is the Apple Vision Pro, a cutting-edge device that combines mixed reality and advanced artificial intelligence (AI) to revolutionize how travelers experience and plan their journeys. This article explores the role of Apple Vision Pro in the travel industry and its potential to enhance customer experiences and promote sustainable travel practices.

Beond, heralded as the world's first premium leisure airline, has taken a pioneering step by announcing its plan to offer the innovative Apple Vision Pro to select passengers on its flights to the Maldives, starting in July 2024.

This move marks the airline as the first to introduce such an advanced inflight entertainment option, promising an unparalleled immersive experience that blends Beond's exclusive onboard content with the enchanting allure of the Maldives.

Under the leadership of Chairman and CEO Tero Taskila, Beond is set to revolutionize the way inflight entertainment is perceived and experienced. By integrating the Apple Vision Pro into its service, Beond aims to elevate passengers' journeys, offering them a glimpse into the stunning resort destinations and activities awaiting them in the Maldives. This innovative approach not only enhances the flight experience but also builds anticipation and excitement among passengers, setting the stage for their upcoming vacation.

Beond's commitment to providing a premium travel experience is evident in its careful selection of destinations and the continuous expansion of its inflight content library, which includes movies, games, and now captivating visual showcases of the Maldives. The introduction of the Apple Vision Pro is a testament to the airline's dedication to leveraging technology to create memorable and luxurious experiences for its customers.

Mixed reality, which combines elements of virtual reality (VR) and augmented reality (AR), allows users to interact with both the physical and digital worlds. Although VR is still in its infancy, it is rapidly evolving; within a decade, it may be commonplace for people to wear VR glasses, transforming their smartphones into immersive devices.

The Apple Vision Pro is designed to offer an unparalleled mixed reality experience, blending the physical and digital worlds seamlessly. This device allows users to visualize and interact with virtual environments in ways previously unimaginable. For the travel industry, this means a new era of immersive travel planning.

Imagine being able to explore your dream destination from the comfort of your home. With Apple Vision Pro, travelers can virtually visit exotic locations, walk through hotel rooms, and experience local attractions before making any bookings. This immersive preview helps customers make informed decisions, ensuring that their expectations align with reality. Travel agencies and tour operators can leverage this technology to showcase their offerings in a captivating and engaging manner, enhancing customer satisfaction and confidence.

Travel companies can use Apple Vision Pro to create interactive and personalized travel experiences. By integrating AI, the device can analyze user preferences and suggest tailored itineraries, accommodations, and activities. For example, if a user frequently searches for beach destinations, the device can recommend the best beach resorts and activities based on their interests. This level of personalization enhances customer engagement and loyalty.

Beyond sustainability, AI is enhancing customer experiences in numerous ways. Here are a few examples:

AI can analyze vast amounts of data to provide personalized travel recommendations based on individual preferences and past behavior. This ensures that customers receive tailored suggestions that match their interests, making their travel experiences more enjoyable.
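The recommendation idea described above can be sketched in a few lines: score candidate destinations by how often their descriptive tags appear in a traveler's past searches. All names and data below are invented for illustration; real systems use far richer signals and models.

```python
from collections import Counter

# Hypothetical past searches for one traveler (invented data).
past_searches = ["beach resort", "snorkeling trip", "beach villa", "island beach"]

# Candidate destinations tagged with descriptive keywords (also invented).
destinations = {
    "Maldives":   {"beach", "island", "snorkeling", "luxury"},
    "Swiss Alps": {"mountain", "ski", "hiking"},
    "Rome":       {"history", "museum", "food"},
}

def recommend(searches, catalog):
    """Rank destinations by overlap between search terms and tags."""
    interest = Counter(word for query in searches for word in query.split())
    scores = {
        name: sum(interest[tag] for tag in tags)
        for name, tags in catalog.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

print(recommend(past_searches, destinations))  # "Maldives" ranks first
```

The same counting trick scales up naturally: swap the hand-written tags for learned embeddings and the overlap score for a similarity measure, and you have the skeleton of a personalization engine.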

AI-powered chatbots and virtual assistants can handle complex booking queries, providing real-time assistance and support. These systems can streamline the booking process, reducing wait times and ensuring that customers find the best options for their needs.

AI systems are equipped to handle a wide range of customer inquiries and issues. From resolving booking problems to providing information about local attractions, AI-powered assistants offer immediate support, enhancing customer satisfaction.

As mixed reality and AI technologies continue to evolve, their impact on the travel industry will only grow. The combination of immersive experiences and intelligent systems promises to make travel more engaging, efficient, and sustainable.

While the widespread adoption of these technologies may still be a few years away, it is clear that they will play a pivotal role in shaping the future of travel. By embracing mixed reality and AI, travel companies can provide exceptional customer experiences, reduce their environmental impact, and stay ahead of the competition.

The travel industry is on the brink of a technological revolution, driven by mixed reality and AI. These innovations are set to transform how customers imagine and book their travel experiences while promoting sustainable practices. As travel companies continue to adopt these technologies, they will be better equipped to meet the evolving needs and expectations of modern travelers, ensuring a brighter and more sustainable future for the industry.



How Banks and Big Tech Use AI to Modernize Workflows – PYMNTS.com

Unlocking operational leverage is the name of the game for today's businesses.

In the face of ongoing macroeconomic uncertainty, controlling for what is controllable while making the most of available resources is emerging as a way to capture growth.

For instance, Amazon is equipping its finance teams with generative artificial intelligence (GenAI) tools designed to support and evolve legacy workflows across areas such as fraud detection, contract review, financial forecasting, personal productivity, interpretation of rules and regulations and tax-related work, per a Wall Street Journal (WSJ) report.

The tech giant is just one firm of many that are leveraging AI to transform back-office operations.

After all, it makes sense that the Big Tech firms responsible for developing AI would want to streamline their internal workflows and embrace improved efficiency and enhanced decision-making with the same AI innovations they are investing into and partnering with.

But that doesn't mean the Magnificent Seven are the only enterprises able to tap AI to give existing, traditional processes and business functions a shot in the arm.

No matter where the world may fall on the AI hype cycle, as the technology continues to evolve and access further democratizes, its integration into internal workflows will likely become even more sophisticated and widespread as companies look to focus on areas like improving productivity, automating processes and modernizing the customer experience.

Read more: 5 Trends These AI Experts Think Could Change Payments and Commerce

From streamlining back-office processes to enhancing decision-making capabilities, AI is unlocking operational leverage as companies harness the technology to create value by transforming internal workflows.

"The ChatGPT light bulb went off in everybody's head, and it brought artificial intelligence and state-of-the-art deep learning into the public discourse," Andy Hock, senior vice president of product and strategy at Cerebras, told PYMNTS. "And from an enterprise standpoint, a light bulb went off in the heads of many Fortune 1000 CIOs and CTOs, too."

One of the most immediate benefits of AI in enterprise settings is the automation of repetitive and mundane tasks. Robotic Process Automation (RPA), combined with AI, enables organizations to handle high-volume, repetitive tasks more efficiently and accurately than human labor. These tasks include data entry, invoice processing, payroll management and routine administrative duties.
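As a rough illustration of the field-capture work such bots automate, here is a minimal sketch that pulls structured fields out of an unstructured invoice. The invoice text and patterns are invented for illustration; real RPA platforms handle far messier inputs.

```python
import re

# A toy unstructured invoice, as might arrive by email (invented data).
invoice_text = """
Invoice No: INV-2024-0387
Vendor: Acme Office Supplies
Total Due: $1,249.50
"""

# Simple pattern-based extraction: the kind of repetitive field capture
# that RPA bots perform at high volume without manual data entry.
fields = {
    "number": re.search(r"Invoice No:\s*(\S+)", invoice_text).group(1),
    "vendor": re.search(r"Vendor:\s*(.+)", invoice_text).group(1).strip(),
    "total": float(
        re.search(r"Total Due:\s*\$([\d,.]+)", invoice_text)
        .group(1)
        .replace(",", "")
    ),
}

print(fields)  # {'number': 'INV-2024-0387', 'vendor': 'Acme Office Supplies', 'total': 1249.5}
```

In practice the AI layer sits on top of this: a model classifies the document and locates the fields, and deterministic code like the above validates and posts the result.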

New PYMNTS Intelligence in the June report "SMBs Race to Critical Mass on AI Usage" found that 96% of small- to medium-sized businesses (SMBs) that have tried AI tools see them as an effective method to streamline tasks.

And a new report from venture capital firm Andreessen Horowitz finds that the use of AI within accounting can revolutionize traditionally tedious tasks like bookkeeping, tax preparation and auditing.

It isn't just accounting where AI can shine; marketing functions are also getting a lift from the innovation. Additional PYMNTS Intelligence data in "The 2024 CAIO Report: Are CMOs Missing GenAI's Potential?" reveals that nearly four in five chief marketing officers (CMOs) consider GenAI to be very or extremely important to providing a positive customer experience.

Three-quarters of CMOs also consider GenAI very important for conducting market research, indicating a strong focus on understanding consumer behavior. Half of surveyed CMOs already use GenAI for routine tasks like drafting emails and visualizing data.

Read more: AI's Essential Use Cases Across B2B Operations

The integration of AI into enterprise workflows is not just a technological advancement but a strategic imperative for modern businesses.

"I've been in the artificial intelligence and machine learning (ML) space for more than 20 years now," Yoav Amiel, chief information officer at freight brokerage platform and third-party logistics company RXO, told PYMNTS. "When we build technology, we're not building it just for its own sake; we build technology to help the business."

By doing things like automating repetitive tasks, enhancing data analytics, improving customer service, streamlining HR processes, strengthening financial management, optimizing supply chains and fortifying cybersecurity, AI is enabling corporations to capture significant value creation.

For example, with the integration of OpenAI's ChatGPT, Apple products will soon be able to handle customer inquiries, process orders and even provide product recommendations, while at JPMorgan Chase, getting trained in AI is now part of being hired.

Morgan Stanley said this past September that it was launching an AI-powered assistant for financial advisers and their support staff, and Salesforce has said that its AI + Data + CRM platform has been instrumental in much of its recent growth.

PYMNTS Intelligence finds that the corporate treasury function is another area where AI can shine.

"Companies need to adopt new technology," Claudia Villasis-Wallraff, head of data-driven treasury at Deutsche Bank, told PYMNTS. "And with this, I not only mean adopting API connectivity, but also cloud functions and artificial intelligence."


Voyager Space and Palantir Announce Strategic Partnership Leveraging Artificial Intelligence to Drive Innovation in … – PR Newswire

DENVER, June 27, 2024 /PRNewswire/ -- Voyager Space ("Voyager"), a global leader in space exploration, today announced a strategic partnership with Palantir Technologies, Inc. (NYSE: PLTR), a leading builder of artificial intelligence (AI) systems for the modern enterprise. Together, Voyager and Palantir will rapidly advance the space and defense technology sectors by integrating Palantir's cutting-edge AI tools across the Voyager enterprise.

This partnership solidifies Voyager's commitment to leading the space industry in AI-driven innovation, ensuring robust and agile solutions for defense and commercial applications. Expanding on a previous Memorandum of Understanding (MOU) announced earlier this year, Voyager will now fully integrate Palantir's AI capabilities into their defense solutions, benefiting from Palantir's deep expertise delivering for the Department of Defense (DoD). This collaboration enhances communications, military research and development, as well as intelligence and space research, making space more accessible to the defense community and vice versa.

"We are thrilled to partner with Palantir, a company that shares our vision for leveraging technology to drive transformative change," said Matt Kuta, President, Voyager Space. "By embracing Palantir's game-changing AI across our operations, we not only enhance Voyager's defense-tech capabilities, but also set a new standard for the broader aerospace industry. This collaboration will enable us to deliver unprecedented value and innovation to our customers and stakeholders."

Voyager will leverage Palantir Foundry and the Artificial Intelligence Platform (AIP) to drive value in its in-house payload management system for International Space Station customers today, as well as onboard the Starlab commercial space station in the future. It is also building a prototype "Customer Hub" for its customers to submit payload requests via Palantir's software.

The partnership also bolsters Voyager's defense segment, offering the opportunity to use AI to process and optimize flight and testing data on solid-fuel thrusters to ensure smooth flight. Palantir's software can also help power increased real-time signal data processing and more precise targeting for Voyager's optical communications systems for DoD customers.

"We look forward to deepening our collaboration with Voyager Space," said Shyam Sankar, CTO of Palantir. "Palantir is committed to building transformative AI solutions across every domain. Our work with Voyager enables us to continue expanding the boundaries of these capabilities to better meet the context of our customer's mission. Together, we will drive the innovation our nation needs to create resilient infrastructure, scale production, and uphold national security."

Today's news builds on the recently announced partnership between Starlab Space and Palantir. As Voyager continues to embrace the benefits of AI alongside Palantir, both companies are poised to lead the defense and space industry into a new era of innovation.

About Voyager Space
Voyager Space is dedicated to building a better future for humanity in space and on Earth. With over 35 years of spaceflight heritage and over 2,000 successful missions, Voyager is powering the commercial space revolution. Voyager delivers exploration, technology, and defense solutions to a global customer base that includes civil and national security agencies, commercial companies, academic and research institutions, and more.

About Palantir Technologies Inc.
Palantir builds category-leading software that empowers organizations to create and govern artificial intelligence across public and private networks. Since 2003, we have helped some of the world's most important organizations solve their most difficult problems. Foundational Software of Tomorrow. Delivered Today. Additional information is available at https://www.palantir.com.

Cautionary Statement Concerning Forward-Looking Statements
This press release contains "forward-looking statements." All statements, other than statements of historical fact, including those with respect to Voyager Space, Inc.'s (the "Company's") mission statement and growth strategy, are "forward-looking statements." Although the Company's management believes that such forward-looking statements are reasonable, it cannot guarantee that such expectations are, or will be, correct. These forward-looking statements involve many risks and uncertainties, which could cause the Company's future results to differ materially from those anticipated. Potential risks and uncertainties include, among others, general economic conditions and conditions affecting the industries in which the Company operates; the uncertainty of regulatory requirements and approvals; and the ability to obtain necessary financing on acceptable terms or at all. Readers should not place any undue reliance on forward-looking statements since they involve these known and unknown uncertainties and other factors which are, in some cases, beyond the Company's control and which could, and likely will, materially affect actual results, levels of activity, performance or achievements. Any forward-looking statement reflects the Company's current views with respect to future events and is subject to these and other risks, uncertainties and assumptions relating to operations, results of operations, growth strategy and liquidity. The Company assumes no obligation to publicly update or revise these forward-looking statements for any reason, or to update the reasons actual results could differ materially from those anticipated in these forward-looking statements, even if new information becomes available in the future.

SOURCE Voyager Space


This Is My Top Artificial Intelligence (AI) ETF to Buy Right Now – 24/7 Wall St.


Published: July 1, 2024 2:20 pm

Picking the individual winners of the burgeoning artificial intelligence (AI) race is no simple task. But one exchange-traded fund (ETF) that provides investors with exposure to a basket of AI stocks could be the solution.

While many investors struck gold by purchasing shares of NVIDIA (NASDAQ: NVDA) before the chipmaker's stock took off last year, the AI-adjacent company is more of the exception than the rule.

Pure play AI companies, on the other hand, have had less predictable successes, with some like C3.ai (NYSE: AI) seeing precipitous rises and falls. C3.ai, which produces AI applications for other enterprises, saw its stock surge to $161 per share by late 2020. At the time of writing, shares of the company are now trading for $28.96.

Forecasts suggest that the global AI market could increase exponentially by the early part of the next decade. By some analysts' estimates, the market could grow to more than 30 times its 2022 valuation of $39 billion, translating to an astounding $1.3 trillion by 2032.

But how do investors identify the likely winners? Rather than picking one or two companies operating in the AI space and simply wishing for the best, ETFs with holdings spread across all facets of the AI industry allow investors to gain exposure to the trend without overexposing themselves to any individual holding.

In this way, not only are these ETFs providing broad exposure to AI with companies offering varying levels of involvement to the technology, but in doing so, these funds are simultaneously reducing overall risk exposure.
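The risk-reduction argument can be made concrete with a toy calculation: blending two volatile return streams whose swings partially offset produces a far less volatile combination. The monthly return figures below are invented for illustration and do not describe any real fund.

```python
import statistics

# Hypothetical monthly returns (%) for two AI stocks (invented numbers
# chosen so their swings partially offset each other).
stock_a = [12, -8, 15, -10, 9, -4]
stock_b = [-6, 10, -9, 14, -3, 8]

# A 50/50 "fund" holding both stocks.
fund = [(a + b) / 2 for a, b in zip(stock_a, stock_b)]

# Standard deviation of returns as a rough proxy for risk.
risk_a = statistics.stdev(stock_a)
risk_b = statistics.stdev(stock_b)
risk_fund = statistics.stdev(fund)

# The blended portfolio is far less volatile than either holding alone.
print(round(risk_a, 1), round(risk_b, 1), round(risk_fund, 1))  # 10.9 9.5 0.8
```

Real holdings are never this perfectly offsetting, so the effect is smaller in practice, but the direction is the same: imperfectly correlated holdings lower portfolio-level volatility.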

And as ETFs go, the options leveraged to the AI industry are bountiful. However, just like the stocks they hold, not all ETFs are created equal.

There are no fewer than 38 AI-themed ETFs currently trading on the major exchanges in the U.S. Some offer equal weighting, some prefer heavier allocations to the Magnificent Seven stocks. Some are actively managed with portfolio positions constantly shuffled.

They vary considerably by size, too, with some having assets under management (AUM) as low as $532,360 and others reaching as high as $2.72 billion.

But when it comes to finding a fund with the best combination of high growth potential, Big Tech names, diverse AI industry exposure, significant AUM coupled with a modest expense ratio, one ETF in particular takes the cake.

Enter the Global X Artificial Intelligence & Technology ETF (NASDAQ: AIQ), which has posted an eye-catching 138% gain since its inception in May 2018 and has gained over 17% so far in 2024. According to Global X's website, the ETF has net assets of $2.08 billion and a total expense ratio of 0.68%.

And while its size and per-share appreciation have been impressive so far, it is the fund's holdings that should garner the most attention. By industry, AIQ spans packaged software, semiconductors, internet software and services, information technology services, telecommunications equipment, internet retail, and industrial conglomerates.

That breadth is expansive, but looking at the names among its top weighted holdings provides more insight into why this ETF is an AI powerhouse:

Of course, those are not all of AIQ's holdings, but they are the big names with some of the heaviest weightings. And looking at that list, you can see why the AI ETF has been capable of producing such enormous gains for shareholders since it debuted in 2018.

As AI expands out of its earliest phase, when it was constricted to pure play stocks, cloud services, and data centers, the technology is now finding its way into streaming services (Netflix), e-commerce (Alibaba), customer relationship management (Salesforce), and numerous other facets of the economy.

Rather than hoping any one of the aforementioned companies emerges as the biggest winner of the next phase of AI implementation, investing in a fund like the Global X Artificial Intelligence & Technology ETF can provide investors with the best of broad exposure and reduced risk.


Visit link:

This Is My Top Artificial Intelligence (AI) ETF to Buy Right Now - 24/7 Wall St.

III. Artificial intelligence and the economy: implications for central banks – bis.org

Key takeaways

The advent of large language models (LLMs) has catapulted generative artificial intelligence (gen AI) into popular discourse. LLMs have transformed the way people interact with computers away from code and programming interfaces to ordinary text and speech. This ability to converse through ordinary language as well as gen AI's human-like capabilities in creating content have captured our collective imagination.

Below the surface, the underlying mathematics of the latest AI models follow basic principles that would be familiar to earlier generations of computer scientists. Words or sentences are converted into arrays of numbers, making them amenable to arithmetic operations and geometric manipulations that computers excel at.

What is new is the ability to bring mathematical order at scale to everyday unstructured data, whether they be text, images, videos or music. Recent AI developments have been enabled by two factors. First is the accumulation of vast reservoirs of data. The latest LLMs draw on the totality of textual and audiovisual information available on the internet. Second is the massive computing power of the latest generation of hardware. These elements turn AI models into highly refined prediction machines, possessing a remarkable ability to detect patterns in data and fill in gaps.

There is an active debate on whether enhanced pattern recognition is sufficient to approximate "artificial general intelligence" (AGI), endowing AI with full human-like cognitive capabilities. Irrespective of whether AGI can be attained, the ability to impose structure on unstructured data has already unlocked new capabilities in many tasks that eluded earlier generations of AI tools.1 The new generation of AI models could be a game changer for many activities and have a profound impact on the broader economy and the financial system. Not least, these same capabilities can be harnessed by central banks in pursuit of their policy objectives, potentially transforming key areas of their operations.

The economic potential of AI has set off a gold rush across the economy. The adoption of LLMs and gen AI tools is proceeding at such breathtaking speed that it easily outpaces previous waves of technology adoption (Graph 1.A). For example, ChatGPT alone reached one million users in less than a week and nearly half of US households have used gen AI tools in the past 12 months. Mirroring rapid adoption by users, firms are already integrating AI in their daily operations: global survey evidence suggests firms in all industries use gen AI tools (Graph 1.B). To do so, they are investing heavily in AI technology to tailor it to their specific needs and have embarked on a hiring spree of workers with AI-related skills (Graph 1.C). Most firms expect these trends to only accelerate.2

This chapter lays out the implications of these developments for central banks, which impinge on them in two important ways.

First, AI will influence central banks' core activities as stewards of the economy. Central bank mandates revolve around price and financial stability. AI will affect financial systems as well as productivity, consumption, investment and labour markets, which themselves have direct effects on price and financial stability. Widespread adoption of AI could also enhance firms' ability to quickly adjust prices in response to macroeconomic changes, with repercussions for inflation dynamics. These developments are therefore of paramount concern to central banks.

Second, the use of AI will have a direct bearing on the operations of central banks through its impact on the financial system. For one, financial institutions such as commercial banks increasingly employ AI tools, which will change how they interact with and are supervised by central banks. Moreover, central banks and other authorities are likely to increasingly use AI in pursuing their missions in monetary policy, supervision and financial stability.

Overall, the rapid and widespread adoption of AI implies that there is an urgent need for central banks to raise their game. To address the new challenges, central banks need to upgrade their capabilities both as informed observers of the effects of technological advancements as well as users of the technology itself. As observers, central banks need to stay ahead of the impact of AI on economic activity through its effects on aggregate supply and demand. As users, they need to build expertise in incorporating AI and non-traditional data in their own analytical tools. Central banks will face important trade-offs in using external vs internal AI models, as well as in collecting and providing in-house data vs purchasing them from external providers. Together with the centrality of data, the rise of AI will require a rethink of central banks' traditional roles as compilers, users and providers of data. To harness the benefits of AI, collaboration and the sharing of experiences emerge as key avenues for central banks to mitigate these trade-offs, in particular by reducing the demands on information technology (IT) infrastructure and human capital. Central banks need to come together to form a "community of practice" to share knowledge, data, best practices and AI tools.

The chapter starts with an overview of developments in AI, providing a deep dive into the underlying technology. It then examines the implications of the rise of AI for the financial sector. The discussion includes current use cases of AI by financial institutions and implications for financial stability. It also outlines the emerging opportunities and challenges and the implications for central banks, including how they can harness AI to fulfil their policy objectives. The chapter then discusses how AI affects firms' productive capacity and investment, as well as labour markets and household consumption, and how these changes in aggregate demand and supply affect inflation dynamics. The chapter concludes by examining the trade-offs arising from the use of AI and the centrality of data for central banks and regulatory authorities. In doing so, it highlights the urgent need for central banks to cooperate.

Artificial intelligence is a broad term, referring to computer systems performing tasks that require human-like intelligence. While the roots of AI can be traced back to the late 1950s, the advances in the field of machine learning in the 1990s laid the foundations of the current generation of AI models. Machine learning is a collective term referring to techniques designed to detect patterns in the data and use them in prediction or to aid decision-making.3

The development of deep learning in the 2010s constituted the next big leap. Deep learning uses neural networks, perhaps the most important technique in machine learning, underpinning everyday applications such as facial recognition or voice assistants. The main building block of neural networks is artificial neurons, which take multiple input values and transform them to output as a set of numbers that can be readily analysed. The artificial neurons are organised to form a sequence of layers that can be stacked: the neurons of the first layer take the input data and output an activation value. Subsequent layers then take the output of the previous layer as input, transform it and output another value, and so forth. A network's depth refers to the number of layers. More layers allow neural networks to capture increasingly complex relationships in the data. The weights determining the strength of connections between different neurons and layers are collectively called parameters, which are improved (known as learning) iteratively during training. Deeper networks with more parameters require more training data but predict more accurately.
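The layer-stacking mechanics described above can be sketched in a few lines. This is a minimal illustration only, with made-up random weights and a ReLU activation (an assumption; the chapter does not name a specific activation function), not a trained model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # activation: zero for negative inputs, identity otherwise
    return np.maximum(0.0, x)

# a tiny two-layer network: 4 inputs -> 3 hidden neurons -> 1 output
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)   # parameters of layer 1
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)   # parameters of layer 2

def forward(x):
    h = relu(W1 @ x + b1)   # first layer outputs activation values
    return W2 @ h + b2      # next layer takes the previous output as input

x = np.array([1.0, -0.5, 0.3, 2.0])
y = forward(x)              # a single predicted value
print(y.shape)              # (1,)
```

In training, the parameters W1, b1, W2, b2 would be adjusted iteratively to reduce prediction error; here they are fixed random draws.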

A key advantage of deep learning models is their ability to work with unstructured data. They achieve this by "embedding" qualitative, categorical or visual data, such as words, sentences, proteins or images, into arrays of numbers – an approach pioneered at scale by the Word2Vec model (see Box A). These arrays of numbers (ie vectors) are interpreted as points in a vector space. The distance between vectors conveys some dimension of similarity, enabling algebraic manipulations on what is originally qualitative data. For example, the vector linking the embeddings of the words "big" and "biggest" is very similar to that between "small" and "smallest". Word2Vec predicts a word based on the surrounding words in a sentence. The body of text used for the embedding exercise is drawn from the open internet through the "common crawl" database. The concept of embedding can be taken further into mapping the space of economic ideas, uncovering latent viewpoints or methodological approaches of individual economists or institutions ("personas"). The space of ideas can be linked to concrete policy actions, including monetary policy decisions.4
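The "big"/"biggest" offset example can be made concrete with toy vectors. The embeddings below are hand-crafted for illustration only; real Word2Vec vectors are learned from text and have hundreds of dimensions:

```python
import numpy as np

# toy, hand-crafted 2-D embeddings (illustrative only)
emb = {
    "big":      np.array([1.0, 0.2]),
    "biggest":  np.array([1.0, 1.2]),
    "small":    np.array([-1.0, 0.2]),
    "smallest": np.array([-1.0, 1.2]),
}

def cosine(u, v):
    # cosine similarity: 1.0 means the vectors point the same way
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# the vector linking "big" -> "biggest" ...
offset_big = emb["biggest"] - emb["big"]
# ... closely matches the one linking "small" -> "smallest"
offset_small = emb["smallest"] - emb["small"]

print(cosine(offset_big, offset_small))  # 1.0 for these toy vectors
```

It is this kind of algebra on embedding offsets that turns originally qualitative data into something computers can manipulate.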

The advent of LLMs allows neural networks to access the whole context of a word rather than just its neighbour in the sentence. Unlike Word2Vec, LLMs can now capture the nuances of translating uncommon languages, answer ambiguous questions or analyse the sentiment of texts. LLMs are based on the transformer model (see Box B). Transformers rely on "multi-headed attention" and "positional encoding" mechanisms to efficiently evaluate the context of any word in the document. The context influences how words with multiple meanings map into arrays of numbers. For example, "bond" could refer to a fixed income security, a connection or link, or a famous espionage character. Depending on the context, the "bond" embedding vector lies geometrically closer to words such as "treasury", "unconventional" and "policy"; to "family" and "cultural"; or to "spy" and "martini". These developments have enabled AI to move from narrow systems that solve one specific task to more general systems that deal with a wide range of tasks.
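At the core of the transformer is a weighted mixing of token representations according to how relevant each token is to each other token. The sketch below implements single-head scaled dot-product attention on random toy data; it omits the multi-headed attention and positional-encoding machinery mentioned above, so it is a simplification, not the full model:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # scaled dot-product attention: each position's output is a
    # context-weighted mixture of the value vectors of all positions
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise relevance of tokens
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

rng = np.random.default_rng(1)
n_tokens, d = 4, 8                       # eg a four-word sentence
Q = rng.normal(size=(n_tokens, d))       # queries
K = rng.normal(size=(n_tokens, d))       # keys
V = rng.normal(size=(n_tokens, d))       # values

out, w = attention(Q, K, V)
print(out.shape, w.shape)                # (4, 8) (4, 4)
```

Because every token attends to every other token, the representation of an ambiguous word like "bond" ends up shaped by its whole context rather than just its immediate neighbours.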

LLMs are a leading example of gen AI applications because of their capacity to understand and generate accurate responses with minimal or even no prior examples (so-called few-shot or zero-shot learning abilities). Gen AI refers to AIs capable of generating content, including text, images or music, from a natural language prompt. The prompts contain instructions in plain language or examples of what users want from the model. Before LLMs, machine learning models were trained to solve one task (eg image classification, sentiment analysis or translating from French to English). It required the user to code, train and roll out the model into production after acquiring sufficient training data. This procedure was possible for only selected companies with researchers and engineers with specific skills. An LLM has few-shot learning abilities in that it can be given a task in plain language. There is no need for coding, training or acquiring training data. Moreover, it displays considerable versatility in the range of tasks it can take on. It can be used to first classify an image, then analyse the sentiment of a paragraph and finally translate it into any language. Therefore, LLMs and gen AI have enabled people using ordinary language to automate tasks that were previously performed by highly specialised models.

The capabilities of the most recent crop of AI models are underpinned by advances in data and computing power. The increasing availability of data plays a key role in training and improving models. The more data a model is trained on, the more capable it usually becomes. Furthermore, machine learning models with more parameters improve predictions when trained with sufficient data. In contrast to the previous conventional wisdom that "over-parameterisation" degrades the forecasting ability of models, more recent evidence points to a remarkable resilience of machine learning models to over-parameterisation. As a consequence, LLMs with well designed learning mechanisms can provide more accurate predictions than traditional parametric models in diverse scenarios such as computer vision, signal processing and natural language processing (NLP).5

An implication is that more capable models tend to be larger models that need more data. Bigger models and larger data sets therefore go together and increase computational demands. The use of advanced techniques on vast troves of data would not have been possible without substantial increases in computing power – in particular, the computational resources employed by AI systems, which have been doubling every six months.6 The interplay between large amounts of data and computational resources implies that just a handful of companies provide cutting-edge LLMs, an issue revisited later in the chapter.

Some commentators have argued that AI has the potential to become the next general-purpose technology, profoundly impacting the economy and society. General-purpose technologies, like electricity or the internet, eventually achieve widespread usage, give rise to versatile applications and generate spillover effects that can improve other technologies. The adoption pattern of general-purpose technologies typically follows a J-curve: it is slow at first, but eventually accelerates. Over time, the pace of technology adoption has been speeding up. While it took electricity or the telephone decades to reach widespread adoption, smartphones accomplished the same in less than a decade. AI features two distinct characteristics that suggest an even steeper J-curve. First is its remarkable speed of adoption, reflecting ease of use and negligible cost for users. Second is its widespread use at an early stage by households as well as firms in all industries.

Of course, there is substantial uncertainty about the long-term capabilities of gen AI. Current LLMs can fail elementary logical reasoning tasks and struggle with counterfactual reasoning, as illustrated in recent BIS work.7 For example, when posed with a logical puzzle that demands reasoning about the knowledge of others and about counterfactuals, LLMs display a distinctive pattern of failure. They perform flawlessly when presented with the original wording of a puzzle, which they have likely seen during their training. They falter when the same problem is presented with small changes of innocuous details such as names and dates, suggesting a lack of true understanding of the underlying logic of statements. Ultimately, current LLMs do not know what they do not know. LLMs also suffer from the hallucination problem: they can present a factually incorrect answer as if it were correct, and even invent secondary sources to back up their fake claims. Unfortunately, hallucinations are a feature rather than a bug in these models. LLMs hallucinate because they are trained to predict the statistically plausible word based on some input. But they cannot distinguish what is linguistically probable from what is factually correct.

Do these problems merely reflect the limits posed by the size of the training data set and the number of model parameters? Or do they reflect more fundamental limits to knowledge that is acquired through language alone? Optimists acknowledge current limitations but emphasise the potential of LLMs to exceed human performance in certain domains. In particular, they argue that terms such as "reason", "knowledge" and "learning" rightly apply to such models. Sceptics point out the limitations of LLMs in reasoning and planning. They argue that the main limitation of LLMs derives from their exclusive reliance on language as the medium of knowledge. As LLMs are confined to interacting with the world purely through language, they lack the tacit non-linguistic, shared understanding that can be acquired only through active engagement with the real world.8

Whether AI will eventually be able to perform tasks that require deep logical reasoning has implications for its long-run economic impact. Assessing which tasks will be impacted by AI depends on the specific cognitive abilities required in those tasks. The discussion above suggests that, at least in the near term, AI faces challenges in reaching human-like performance. While it may be able to perform tasks that require moderate cognitive abilities and even develop "emergent" capabilities, it is not yet able to perform tasks that require logical reasoning and judgment.

The financial sector is among those facing the greatest opportunities and risks from the rise of AI, due to its high share of cognitively demanding tasks and data-intensive nature.9 Table 1 illustrates the impact of AI in four key areas: payments, lending, insurance and asset management.

Across all four areas, AI can substantially enhance efficiency and lower costs in back-end processing, regulatory compliance, fraud detection and customer service. These activities give full play to the ability of AI models to identify patterns of interest in seemingly unstructured data. Indeed, "finding a needle in the haystack" is an activity that plays to the greatest strength of machine learning models. A striking example is the improvement of know-your-customer (KYC) processes through quicker data processing and the enhanced ability to detect fraud, allowing financial institutions to ensure better compliance with regulations while lowering costs.10 LLMs are also increasingly being deployed for customer service operations through AI chatbots and co-pilots.

In payments, the abundance of transaction-level data enables AI models to overcome long-standing pain points. A prime example comes from correspondent banking, which has become a high-risk, low-margin activity. Correspondent banks played a key role in the expansion of cross-border payment activity by enabling transaction settlement, cheque clearance and foreign exchange operations. Facing heightened customer verification and anti-money laundering (AML) requirements, banks have systematically retreated from the business (Graphs 2.A and 2.B). Such retreat fragments the global payment system by leaving some regions less connected (Graph 2.C), handicapping their connectivity with the rest of the financial system. The decline in correspondent banking is part of a general de-risking trend, with returns from processing transactions being small compared with the risks of penalties from breaching AML, KYC and countering the financing of terrorism (CFT) requirements.11

A key use case of AI models is to improve KYC and AML processes by enhancing (i) the ability to understand the compliance and reputational risks that clients might carry, (ii) due diligence on the counterparties of a transaction and (iii) the analysis of payment patterns and anomaly detection. By bringing down costs and reducing risks through greater speed and automation, AI holds the promise to reverse the decline in correspondent banking.

The ability of AI models to detect patterns in the data is helping financial institutions address many of these challenges. For example, financial institutions are using AI tools to enhance fraud detection and to identify security vulnerabilities. At the global level, surveys indicate that around 70% of all financial services firms are using AI to enhance cash flow predictions and improve liquidity management, fine-tune credit scores and improve fraud detection.12

In credit assessment and lending, banks have used machine learning for many years, but AI can bring further capabilities. For one, AI could greatly enhance credit scoring by making use of unstructured data. In deciding whether to grant a loan, lenders traditionally rely on standardised credit scores, at times combined with easily accessible variables such as loan-to-value or debt-to-income ratios. AI-based tools enable lenders to assess individuals' creditworthiness with alternative data. These can include consumers' bank account transactions or their rental, utility and telecommunications payments data. But they can also be of a non-financial nature, for example applicants' educational history or online shopping habits. The use of non-traditional data can significantly improve default prediction, especially among underserved groups for whom traditional credit scores provide an imprecise signal about default probability. By being better able to spot patterns in unstructured data and detect "invisible primes", ie borrowers that are of high quality even if their credit scores indicate low quality, AI can enhance financial inclusion.13
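The intuition that alternative data can sharpen default prediction can be illustrated with a toy logistic regression. All data below are synthetic and the feature names (credit score, utility payment history) are stand-ins for illustration, not drawn from any cited study:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000

# synthetic borrowers (all values invented for illustration)
credit_score = rng.normal(0.0, 1.0, n)       # traditional signal
utility_payments = rng.normal(0.0, 1.0, n)   # "alternative" signal
# assume true default risk depends on both signals
logit = -0.5 - 1.0 * credit_score - 1.5 * utility_payments
default = rng.random(n) < 1 / (1 + np.exp(-logit))

def fit_logistic(X, y, lr=0.1, steps=2000):
    # plain gradient descent on the logistic-regression loss
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(X, y, w):
    return float(np.mean((1 / (1 + np.exp(-X @ w)) > 0.5) == y))

ones = np.ones(n)
X_trad = np.column_stack([ones, credit_score])
X_alt = np.column_stack([ones, credit_score, utility_payments])

acc_trad = accuracy(X_trad, default, fit_logistic(X_trad, default))
acc_alt = accuracy(X_alt, default, fit_logistic(X_alt, default))
print(acc_trad, acc_alt)  # the richer feature set should score higher
```

In this stylised setting the model with the non-traditional feature classifies defaults more accurately, mirroring the chapter's point that alternative data add signal where traditional scores alone are imprecise.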

AI has numerous applications in insurance, particularly in risk assessment and pricing. For example, companies use AI to automatically analyse images and videos to assess property damage due to natural disasters or, in the context of compliance, whether claims of damages correspond to actual damages. Underwriters, actuaries or claims adjusters further stand to benefit from AI summarising and synthesising data gathered during a claim's life cycle, such as call transcripts and notes, as well as legal and medical paperwork. More generally, AI is bound to play an increasingly important role in assessing different types of risks. For example, some insurance companies are experimenting with AI methods to assess climate risks by identifying and quantifying emissions based on aerial images of pollution. However, to the extent that AI is better at analysing or inferring individual-level characteristics in risk assessments, including those whose use is prohibited by regulation, existing inequalities could be exacerbated – an issue revisited in the discussion on the macroeconomic impact of AI.

In asset management, AI models are used to predict returns, evaluate risk-return trade-offs and optimise portfolio allocation. Just as LLMs assign different characteristics to each word they process, they can be used to elicit unobservable features of financial data (so-called asset embeddings). This allows market participants to extract information (such as firm quality or investor preferences) that is difficult to discern from existing data. In this way, AI models can provide a better understanding of the risk-return properties of portfolios. Models that use asset embeddings can outperform traditional models that rely only on observable characteristics of financial data. Separately, AI models are useful in algorithmic trading, owing to their ability to analyse large volumes of data quickly. As a result, investors benefit from quicker and more precise information as well as lower management fees.14

The widespread use of AI applications in the financial sector, however, brings new challenges. These pertain to cyber security and operational resilience as well as financial stability.

The reliance on AI heightens concerns about cyber attacks, which regularly feature among the top worries in the financial industry. Traditionally, phishing emails have been used to trick a user to run a malicious code (malware) to take over the user's device. Credential phishing is the practice of stealing a user's login and password combination by masquerading as a reputable or known entity in an email, instant message or another communication channel. Attackers then use the victim's credentials to carry out attacks on additional targets and gain further access.15 Gen AI could vastly expand hackers' ability to write credible phishing emails or to write malware and use it to steal valuable information or encrypt a company's files for ransom. Moreover, gen AI allows hackers to imitate the writing style or voice of individuals, or even create fake avatars, which could lead to a dramatic rise in phishing attacks. These developments expose financial institutions and their customers to a greater risk of fraud.

But AI also introduces altogether new sources of cyber risk. Prompt injection attacks, one of the most widely reported weaknesses in LLMs, refer to an attacker creating an input to make the model behave in an unintended way. For example, LLMs are usually instructed not to provide dangerous information, such as how to manufacture napalm. However, in the infamous grandma jailbreak, where the prompter asked ChatGPT to pretend to be their deceased grandmother telling a bedtime story about the steps to produce napalm, the chatbot did reveal this information. While this vulnerability has been fixed, others remain. Data poisoning attacks refer to malicious tampering with the data an AI model is trained on. For example, an attacker could adjust input data so that the AI model fails to detect phishing emails. Model poisoning attacks deliberately introduce malware, manipulating the training process of an AI system to compromise its integrity or functionality. This attack aims to alter the model behaviour to serve the attacker's purposes.16 As more applications use data created by LLMs themselves, such attacks could have increasingly severe consequences, leading to heightened operational risks among financial institutions.

Greater use of AI raises issues of bias and discrimination. Two examples stand out. The first relates to consumer protection and fair lending practices. As with traditional models, AI models can reflect biases and inaccuracies in the data they are trained on, posing risks of unjust decisions, excluding some groups from socially desirable insurance markets and perpetuating disparities in access to credit through algorithmic discrimination.17 Consumers care about these risks: recent evidence from a representative survey of US households suggests a lower level of trust in gen AI than in human-operated services, especially in high-stakes areas such as banking and public policy (Graph 3.A) and when AI tools are provided by big techs (Graph 3.B).18 The second example relates to the challenge of ensuring data privacy and confidentiality when dealing with growing volumes of data, another key concern for users (Graph 3.C). In the light of the high privacy standards that financial institutions need to adhere to, this heightens legal risks. The lack of explainability of AI models (ie their black box nature) as well as their tendency to hallucinate amplify these risks.

Another operational risk arises from relying on just a few providers of AI models, which increases third-party dependency risks. Market concentration arises from the centrality of data and the vast costs of developing and implementing data-hungry models. Heavy up-front investment is required to build data storage facilities, hire and train staff, gather and clean data and develop or refine algorithms. However, once the infrastructure is in place, the cost of adding each extra unit of data is negligible. This centrality leads to so-called data gravity: companies that already have an edge in collecting, storing and analysing data can provide better-trained AI tools, whose use creates ever more data over time. The consequence of data gravity is that only a few companies provide cutting-edge LLMs. Any failure among or cyber attack on these providers, or their models, poses risks to financial institutions relying on them.

The reliance of market participants on the same handful of algorithms could lead to financial stability risks. These could arise from AI's ubiquitous adoption throughout the financial system and its growing capability to make decisions independently and without human intervention ("automaticity") at a speed far beyond human capacity. The behaviour of financial institutions using the same algorithms could amplify procyclicality and market volatility by exacerbating herding, liquidity hoarding, runs and fire sales. Using similar algorithms trained on the same data can also lead to coordinated recommendations or outright collusive outcomes that run afoul of regulations against market manipulation, even if algorithms are not trained or instructed to collude.19 In addition, AI may hasten the development and introduction of new products, potentially leading to new and little understood risks.

Central banks stand at the intersection of the monetary and financial systems. As stewards of the economy through their monetary policy mandate, they play a pivotal role in maintaining economic stability, with a primary objective of ensuring price stability. Another essential role is to safeguard financial stability and the payment system. Many central banks also have a role in supervising and regulating commercial banks and other participants of the financial system.

Central banks are not simply passive observers in monitoring the impact of AI on the economy and the financial system. They can harness AI tools themselves in pursuit of their policy objectives and in addressing emerging challenges. In particular, the use of LLMs and AI can support central banks' key tasks of information collection and statistical compilation, macroeconomic and financial analysis to support monetary policy, supervision, oversight of payment systems and ensuring financial stability. As early adopters of machine learning methods, central banks are well positioned to reap the benefits of AI tools.20

Data are the major resource that stands to become more valuable due to the advent of AI. A particularly rich source of data is the payment system. Such data present an enormous amount of information on economic transactions, which naturally lends itself to the powers of AI to detect patterns.21 Dealing with such data necessitates adequate privacy-preserving techniques and appropriate data governance frameworks.

The BIS Innovation Hub's Project Aurora explores some of these issues. Using a synthetic data set emulating money laundering activities, it compares various machine learning models, taking into account payment relationships as input. The comparison occurs under three scenarios: transaction data that are siloed at the bank level, national-level pooling of data and cross-border pooling. The models undergo training with known simulated money laundering transactions and subsequently predict the likelihood of money laundering in unseen synthetic data.

The project offers two key insights. First, machine learning models outperform the traditional rule-based methods prevalent in most jurisdictions. Graph neural networks, in particular, demonstrate superior performance, effectively leveraging comprehensive payment relationships available in pooled data to more accurately identify suspect transaction networks. And second, machine learning models are particularly effective when data from different institutions in one or multiple jurisdictions are pooled, underscoring a premium on cross-border coordination in AML efforts (Graph 4).

The benefits of coordination are further illustrated by Project Agorá. This project gathers seven central banks and private sector participants to bring tokenised central bank money and tokenised deposits together on the same programmable platform.

The tokenisation built into Agorá would allow the platform to harness three capabilities: (i) combining messaging and account updates as a single operation; (ii) executing payments atomically rather than as a series of sequential updates; and (iii) drawing on privacy-preserving platform resources for KYC/AML compliance. In traditional correspondent banking, information checks and account updates are made sequentially and independently, with significant duplication of effort (Graph 5.A). In contrast, in Agorá the contingent performance of actions enabled by tokenisation allows for the combination of assets, information, messaging and clearing into a single atomic operation, eliminating the risk of reversals (Graph 5.B). In turn, privacy-enhancing data-sharing techniques can significantly simplify compliance checks, while all existing rules and regulations are adhered to as part of the pre-screening process.22

In the development of a new payment infrastructure like Agorá, great care must be taken to ensure potential gains are not lost due to fragmentation. This can be done via access policies to the infrastructure or via interoperability, as advocated in the idea of the Finternet. This refers to multiple interconnected financial ecosystems, much like the internet, designed to empower individuals and businesses by placing them at the centre of their financial lives. The Finternet leverages innovative technologies such as tokenisation and unified ledgers, underpinned by a robust economic and regulatory framework, to expand the range and quality of savings and financial services. Starting with assets that can be easily tokenised holds the greatest promise in the near term.23

Central banks also see great benefits in using gen AI to improve cyber security. In a recent BIS survey of central bank cyber experts, a majority deem gen AI to offer more benefits than risks (Graph 6.A) and think it can outperform traditional methods in enhancing cyber security management.24 Benefits are largely expected in areas such as the automation of routine tasks, which can reduce the costs of time-consuming activities traditionally performed by humans (Graph 6.B). But human expertise will remain important. In particular, data scientists and cyber security experts are expected to play an increasingly important role. Additional cyber-related benefits from AI include the enhancement of threat detection, faster response times to cyber attacks and the learning of new trends, anomalies or correlations that might not be obvious to human analysts. In addition, by leveraging AI, central banks can now craft and deploy highly convincing phishing attacks as part of their cyber security training. Project Raven of the BIS Innovation Hub is one example of the use of AI to enhance cyber resilience (see Box C).

The challenge for central banks in using AI tools comes in two parts. The first is the availability of timely data, which is a necessary condition for any machine learning application. Assuming this issue is solved, the second challenge is to structure the data in a way that yields insights. This second challenge is where machine learning tools, and in particular LLMs, excel. They can transform unstructured data from a variety of sources into structured form in real time. Moreover, by converting time series data into tokens resembling textual sequences, LLMs can be applied to a wide array of time series forecasting tasks. Just as LLMs are trained to guess the next word in a sentence using a vast database of textual information, LLM-based forecasting models use similar techniques to estimate the next numerical observation in a statistical series.

These capabilities are particularly promising for nowcasting. Nowcasting is a technique that uses real-time data to provide timely insights. This method can significantly improve the accuracy and timeliness of economic predictions, particularly during periods of heightened market volatility. However, it currently faces two important challenges, namely the limited usability of timely data and the necessity to pre-specify and train models for concrete tasks.25 LLMs and gen AI hold promise to overcome both bottlenecks (see Box D). For example, an LLM fine-tuned with financial news can readily extract information from social media posts or non-financial firms' and banks' financial statements or transcripts of earnings reports and create a sentiment index. The index can then be used to nowcast financial conditions, monitor the build-up of risks or predict the probability of recessions.26 Moreover, by categorising texts into specific economic topics (eg consumer demand and credit conditions), the model can pinpoint the source of changes in sentiment (eg consumer sentiment or credit risk). Such data are particularly relevant early in the forecasting process when traditional hard data are scarce.

Beyond financial applications, AI-based nowcasting can also be useful to understand real-economy developments. For example, transaction-level data on household-to-firm or firm-to-firm payments, together with machine learning models, can improve nowcasting of consumption and investment. Another use case is measuring supply chain bottlenecks with NLP, eg based on text in the so-called Beige Book. After classifying sentences related to supply chains, a deep learning algorithm classifies the sentiment of each sentence and provides an index that offers a real-time view of supply chain bottlenecks. Such an index can be used to predict inflationary pressures. Many more examples exist, ranging from nowcasting world trade to climate risks.27

Access to granular data can also enhance central banks' ability to track developments across different industries and regions. For example, with the help of AI, data from job postings or online retailers can be used to track wage developments and employment dynamics across occupations, tasks and industries. Such a real-time and detailed view of labour market developments can help central banks understand the extent of technology-induced job displacements, how quickly workers find new jobs and attendant wage dynamics. Similarly, satellite data on aerial pollution or nighttime lights can be used to predict short-term economic activity, while data on electricity consumption can shed light on industrial production in different regions and industries.28 Central banks can thereby obtain a more nuanced picture of firms' capital expenditure and production, and how the supply of and demand for goods and services are changing.

Central banks can also use AI, together with human expertise, to better understand factors that contribute to inflation. Neural networks can handle more input variables compared with traditional econometric models, making it possible to work with detailed data sets rather than relying solely on aggregated data. They can further reflect intricate non-linear relationships, offering valuable insights during periods of rapidly changing inflation dynamics. If AI's impact varies by industry but materialises rapidly, such advantages are particularly beneficial for assessing inflationary dynamics.

Recent work in this area decomposes aggregate inflation into various sub-components.29 In a first step, economic theory is used to pre-specify four factors shaping aggregate inflation: past inflation patterns, inflation expectations, the output gap and international prices. A neural network then uses aggregate series (eg the unemployment rate or total services inflation) and disaggregate series (eg two-digit industry output) to estimate the contribution of each of the four subcomponents to overall inflation, accounting for possible non-linearities.

The use of AI could play an important role in supporting financial stability analysis. The strongest suit of machine learning and AI methodologies is identifying patterns in a cross-section. As such, they can be particularly useful to identify and enhance the understanding of risks in a large sample of observations, helping identify the cross-section of risk across financial and non-financial firms. Again, availability of timely data is key. For example, during increasingly frequent periods of low liquidity and market dysfunction, AI could improve prediction through better monitoring of anomalies across markets.30

Finally, pairing AI-based insights with human judgment could help support macroprudential regulation. Systemic risks often result from the slow build-up of imbalances and vulnerabilities, materialising in infrequent but very costly stress events. The scarcity of data on such events and the uniqueness of financial crises limit the stand-alone use of data-intensive AI models in macroprudential regulation.31 However, together with human expertise and informed economic reasoning to see through the cycle, gen AI tools could yield large benefits to regulators and supervisors. When combined with rich data sets that provide sufficient scope to find patterns in the data, AI could help in building early warning indicators that alert supervisors to emerging pressure points known to be associated with system-wide risks.

In sum, with sufficient data, AI tools offer central banks an opportunity to get a much better understanding of economic developments. They enable central banks to draw on a richer set of structured and unstructured data, and complementarily, speed up data collection and analysis. In this way, the use of AI enables the analysis of economic activity in real time at a granular level. Such enhanced capabilities are all the more important in the light of AI's potential impact on employment, output and inflation, as discussed in the next section.

AI is poised to increase productivity growth. For workers, recent evidence suggests that AI directly raises productivity in tasks that require cognitive skills (Graph 7.A). The use of generative AI-based tools has had a sizeable and rapid positive effect on the productivity of customer support agents and of college-educated professionals solving writing tasks. Software developers who used LLMs through GitHub Copilot could code more than twice as many projects per week. A recent collaborative study by the BIS with Ant Group shows that productivity gains are immediate and largest among less experienced and junior staff (Box E).32

Early studies also suggest positive effects of AI on firm performance. Patenting activity related to AI and the use of AI are associated with faster employment and output growth as well as higher revenue growth relative to comparable firms. Firms that adopt AI also experience higher growth in sales, employment and market valuations, which is primarily driven by increased product innovation. These effects have materialised over a horizon of one to two years. In a global sample, AI patent applications generate a positive effect on the labour productivity of small and medium-sized enterprises, especially in services industries.33

The macroeconomic impact of AI on productivity growth could be sizeable. Beyond directly enhancing productivity growth by raising workers' and firms' efficiency, AI can spur innovation and thereby future productivity growth indirectly. Most innovation is generated in occupations that require high cognitive abilities. Improving the efficiency of cognitive work therefore holds great potential to generate further innovation. The estimates provided by the literature for AI's impact on annual labour productivity growth (ie output per employee) are thus substantive, although their range varies.34 Through faster productivity growth, AI will expand the economy's productive capacity and thus raise aggregate supply.

Higher productivity growth will also affect aggregate demand through changes in firms' investment. While gen AI is a relatively new technology, firms are already investing heavily in the necessary IT infrastructure and integrating AI models into their operations on top of what they already spend on IT in general. In 2023 alone, spending on AI exceeded $150 billion worldwide, and a survey of US companies' technology officers across all sectors suggests almost 50% rank AI as their top budget item over the next years.35

An additional boost to investment could come from improved prediction. AI adoption will lead to more accurate predictions at a lower cost, which reduces uncertainty and enables better decision-making.36 Of course, AI could also introduce new sources of uncertainty that counteract some of its positive impact on firm investment, eg by changing market and price dynamics.

Another substantial part of aggregate demand is household consumption. AI could spur consumption by reducing search frictions and improving matching, making markets more competitive. For example, the use of AI agents could improve consumers' ability to search for products and services they want or need and help firms in advertising and targeting services and products to consumers.37

AI's impact on household consumption will also depend on how it affects labour markets, notably labour demand and wages. The overall impact depends on the relative strength of three forces (Graph 8): by how much AI raises productivity, how many new tasks it creates and how many workers it displaces by making existing tasks obsolete.

If AI is a true general-purpose technology that raises total factor productivity in all industries to a similar extent, the demand for labour is set to increase across the board (Graph 8, blue boxes). Like previous general-purpose technologies, AI could also create altogether new tasks, further increasing the demand for labour and spurring wage growth (green boxes). If so, AI would increase aggregate demand.

However, the effects of AI might differ across tasks and occupations. AI might benefit only some workers, eg those whose tasks require logical reasoning. Think of nurses who, with the assistance of AI, can more accurately interpret x-ray pictures. At the same time, gen AI could make other tasks obsolete, for example summarising documents, processing claims or answering standardised emails, which lend themselves to automation by LLMs. If so, increased AI adoption would lead to displacement of some workers (Graph 8, red boxes). This could lead to declines in employment and lower wage growth, with distributional consequences. Indeed, results from a recent survey of US households by economists in the BIS Monetary and Economic Department in collaboration with the Federal Reserve Bank of New York indicate that men, better-educated individuals or those with higher incomes think that they will benefit more from the use of gen AI than women and those with lower educational attainment or incomes (Graph 7.B).38

These considerations suggest that AI could have implications for economic inequality. Displacement might eliminate jobs faster than the economy can create new ones, potentially exacerbating income inequality. A differential impact of benefits across job categories would strengthen this effect. The "digital divide" could widen, with individuals lacking access to technology or with low digital literacy being further marginalised. The elderly are particularly at risk of exclusion.39

Through the effects on productivity, investment and consumption, the deployment of AI has implications for output and inflation. A BIS study illustrates the key mechanisms at work.40 As the source of a permanent increase in productivity, AI will raise aggregate supply. An increase in consumption and investment raises aggregate demand. Through higher aggregate demand and supply, output increases (Graph 9.A). In the short term, if households and firms fully anticipate that they will be richer in the future, they will increase consumption at the expense of investment, slowing down output growth.

The response of inflation will also depend on households' and businesses' anticipation of future gains from AI. If the average household does not fully anticipate gains, it will increase today's consumption only modestly. AI will act as a disinflationary force in the short run (blue line in Graph 9.B), as the impact on aggregate supply dominates. In contrast, if households anticipate future gains, they will consume more, making AI's initial impact inflationary (red line in Graph 9.B). Since past general-purpose technologies have had an initial disinflationary impact, the former scenario appears more likely. But in either scenario, as economic capacity expands and wages rise, the demand for capital and labour will steadily increase. If these demand effects dominate the initial positive shock to output capacity over time, higher inflation would eventually materialise. How quickly demand forces increase output and prices will depend not only on households' expectations but also on the mismatch in skills required in obsolete and newly created tasks. The greater the skill mismatch (other things being equal), the lower employment growth will be, as it takes displaced workers longer to find new work. It might also be the case that some segments of the population will remain permanently unemployable without retraining. This, in turn, implies lower consumption and aggregate demand, and a longer disinflationary impact of AI.

Another aspect that warrants further investigation is the effect of AI adoption on price formation. Large retail companies that predominantly sell online use AI extensively in their price-setting processes. Algorithmic pricing by these retailers has been shown to increase both the uniformity of prices across locations and the frequency of price changes.41 For example, when gas prices or exchange rates move, these companies quickly adjust the prices in their online stores. As the use of AI becomes more widespread, including among smaller companies, these effects could become stronger. Increased uniformity and flexibility in pricing can mean greater and quicker pass-through of aggregate shocks to local prices, and hence inflation, than in the past. This can ultimately change inflation dynamics. An important aspect to consider is how these effects could differ depending on the degree of competition in the AI model and data market, which could influence the variety of models used.

Finally, the impact of AI on fiscal sustainability remains an open question. All things equal, an AI-induced boost to productivity and growth would lead to a reduced debt burden. However, to the extent that faster growth is associated with higher interest rates, combined with the potential need for fiscal programmes to manage AI-induced labour relocation or sustained spells of higher unemployment rates, the impact of AI on the fiscal outlook might be modest. More generally, the AI growth dividend is unlikely to fully offset the spending needs that may arise from the green transition or population ageing over the next decades.

The use of AI models opens up new opportunities for central banks in pursuit of their policy objectives. A consistent theme running through the chapter has been the availability of data as a critical precondition for successful applications of machine learning and AI. Data governance frameworks will be part and parcel of any successful application of AI. Central banks' policy challenges thus encompass both models and data.

An important trade-off arises between using "off-the-shelf" models versus developing in-house fine-tuned ones. Using external models may be more cost-effective, at least in the short run, and leverages the comparative advantage of private sector companies. Yet reliance on external models comes with reduced transparency and exposes central banks to concerns about dependence on a few external providers. Beyond the general risks that market concentration poses to innovation and economic dynamism, the high concentration of resources could create significant operational risks for central banks, potentially affecting their ability to fulfil their mandates.

Another important aspect relates to central banks' role as users, compilers and disseminators of data. Central banks use data as a crucial ingredient in their decision-making and communication with the public. And they have always been extensive compilers of data, either collecting them on their own or drawing on other official agencies and commercial sources. Finally, central banks are also providers of data, to inform other parts of government as well as the general public. This role helps them fulfil their obligations as key stakeholders in national statistical systems.

The rise of machine learning and AI, together with advances in computing and storage capacity, has cast these aspects in an urgent new light. For one, central banks now need to make sense of and use increasingly large and diverse sets of structured and unstructured data. And these data often reside in the hands of the private sector. While LLMs can help process such data, hallucinations or prompt injection attacks can lead to biased or inaccurate analyses. In addition, commercial data vendors have become increasingly important, and central banks make extensive use of them. But in recent years, the cost of commercial data has increased markedly, and vendors have imposed tighter use conditions.

The decision on whether to use external or internal models and data has far-reaching implications for central banks' investments and human capital. A key challenge is setting up the necessary IT infrastructure, which is greater if central banks pursue the road of developing internal models and collecting or producing their own data. Providing adequate computing power and software, as well as training existing or hiring new staff, involves high up-front costs. The same holds for creating a data lake, ie pooling different curated data sets. Yet a reliable and safe IT infrastructure is a prerequisite not only for big data analyses but also to prevent cyber attacks.

Hiring new or retaining existing staff with the right mix of economic understanding and programming skills can be challenging. As AI applications increase the sophistication of the financial system over time, the premium on having the right mix of skills will only grow. Survey-based evidence suggests this is a top concern for central banks (Graph 10). There is high demand for data scientists and other AI-related roles, but public institutions often cannot match private sector salaries for top AI talent. The need for staff with the right skills also arises from the fact that the use of AI models to aid financial stability monitoring faces limitations, as discussed above. Indeed, AI is not a substitute for human judgment. It requires supervision by experts with a solid understanding of macroeconomic and financial processes.

How can central banks address these challenges and mitigate trade-offs? The answer lies, in large part, in cooperation paired with sound data governance practices.

Collaboration can yield significant benefits and relax constraints on human capital and IT. For one, the pooling of resources and knowledge can lower the burden on individual central banks and ease the resource constraints on collecting, storing and analysing big data, as well as on developing algorithms and training models. For example, central banks could address rising costs of commercial data, especially for smaller institutions, by sharing more granular data themselves or by acquiring data from vendors through joint procurement. Cooperation could also facilitate training staff through workshops in the use of AI or the sharing of experiences in conferences. This would particularly benefit central banks with fewer staff and resources and with limited economies of scale. Cooperation, for example by re-using trained models, could also mitigate the environmental costs associated with training algorithms and storing large amounts of data, which consume enormous amounts of energy.

Central bank collaboration and the sharing of experiences could also help identify areas in which AI adds the most value and how to leverage synergies. Common data standards could facilitate access to publicly available data and facilitate the automated collection of relevant data from various official sources, thereby enhancing the training and performance of machine learning models. Additionally, dedicated repositories could be set up to share the open source code of data tools, either with the broader public or, at least initially, only with other central banks. An example is a platform such as BIS Open Tech, which supports international cooperation and coordination in sharing statistical and financial software. More generally, central banks could consider sharing domain-adapted or fine-tuned models in the central banking community, which could significantly lower the hurdles for adoption.42 Joint work on AI models is possible without sharing data, so they can be applied even where there are concerns about confidentiality.

An example of how collaboration supports data collection and dissemination is the jurisdiction-level statistics on international banking, debt securities and over-the-counter derivatives by the BIS. These data sets have a long history: the international banking statistics started in the 1970s. They are a critical element for monitoring developments and risks in the global financial system. They are compiled from submissions by participating authorities under clear governance rules and using well established statistical processes. At a more granular level, arrangements for the sharing of confidential bank-level data include the quantitative impact study data collected by the Basel Committee on Banking Supervision and the data on large global banks collected by the International Data Hub. Other avenues to explore include sharing synthetic or anonymised data that protect confidential information.

The rising importance of data and emergence of new sources and tools call for sound data governance practices. Central banks must establish robust governance frameworks that include guidelines for selecting, implementing and monitoring both data and algorithms. These frameworks should comprise adequate quality control and cover data management and auditing practices. The importance of metadata, in particular, increases as the range and variety of data expand. Sometimes referred to as "the data about the data", metadata include the definitions, source, frequency, units and other information that define a given data set. Such metadata are crucial when privacy-preserving methods are used to draw lessons from several data sets overseen by different central banks. Machine readability is greatly enhanced when metadata are standardised so that the machines know what they are looking for. For example, the "Findable, Accessible, Interoperable and Reusable" (FAIR) principles provide guidance in organising data and metadata to ease the burden of sharing data and algorithms.43

More generally, metadata frameworks are crucial for a better understanding of the comparability and limits of data series. Central banks can also cooperate in this domain. For example, the Statistical Data and Metadata Exchange (SDMX) standard provides a common language and structure for metadata. Such standards are crucial to foster data-sharing, lower the reporting burden and facilitate interoperability. Similarly, the Generic Statistical Business Process Model lays out business processes for official statistics with a unified framework and consistent terminology. Sound data governance practices would also facilitate the sharing of confidential data.

In sum, there is an urgent need for central banks to collaborate in fostering the development of a community of practice to share knowledge, data, best practices and AI tools. In the light of rapid technological change, the exchange of information on policy issues arising from the role of central banks as data producers, users and disseminators is crucial. Collaboration lowers costs, and such a community would foster the development of common standards. Central banks have a history of successful collaboration to overcome new challenges. The emergence of AI has hastened the need for cooperation in the field of data and data governance.

Graph 1.A: The adoption of ChatGPT is proxied by the ratio of the maximum number of website visits worldwide for the period November 2022–April 2023 and the worldwide population with internet connectivity. For more details on computer see US Census Bureau; for electric power, internet and social media see Comin and Hobijn (2004) and Our World in Data; for smartphones, see Statista.

Graph 1.B: Based on an April 2023 global survey with 1,684 participants.

Graph 1.C: Data for capital invested in AI companies for 2024 are annualised based on data up to mid-May. Data on the percentage of AI job postings for AU, CA, GB, NZ and US are available for the period 2014–23; for AT, BE, CH, DE, ES, FR, IT, NL and SE, data are available for the period 2018–23.

Graph 2.A: Three-month moving averages.

Graphs 2.B and 2.C: Correspondent banks that are active in several corridors are counted several times. Averages across countries in each region. Markers in panel C represent subregions within each region. Grouping of countries by region according to the United Nations Statistics Division; for further details see unstats.un.org/unsd/methodology/m49/.

Graph 3.A: Average scores in answers to the following question: "In the following areas, would you trust artificial intelligence (AI) tools less or more than traditional human-operated services? For each item, please indicate your level of trust on a scale from 1 (much less trust than in a human) to 7 (much more trust)."

Graph 3.B: Average scores and 95% confidence intervals in answers to the following question: "How much do you trust the following entities to safely store your personal data when they use artificial intelligence tools? For each of them, please indicate your level of trust on a scale from 1 (no trust at all in the ability to safely store personal data) to 7 (complete trust)."

Graph 3.C: Average scores (with scores ranging from 1 (lowest) to 7 (highest)) in answers to the following questions: (1) "Do you think that sharing your personal information with artificial intelligence tools will decrease or increase the risk of data breaches (that is, your data becoming publicly available without your consent)?"; (2) "Are you concerned that sharing your personal information with artificial intelligence tools could lead to the abuse of your data for unintended purposes (such as for targeted ads)?"

Graph 6.A: The bars show the share of respondents to the question, "Do you agree that the use of AI can provide more benefits than risks to your organisation?".

Graph 6.B: The bars show the average score that respondents gave to each option when asked to "Rate the level of significance of the following benefits of AI in cyber security"; the score scale of each option is from 1 (lowest) to 5 (highest).

Graph 7.A: The bars correspond to estimates of the increase in productivity of users that rely on generative AI tools relative to a control group that did not.

Acemoglu, D (2024): "The simple macroeconomics of AI", Economic Policy, forthcoming.

Agrawal, A, J Gans and A Goldfarb (2019): "Exploring the impact of artificial intelligence: prediction versus judgment", Information Economics and Policy, vol 47, pp 1–16.

----- (2022): Prediction machines, updated and expanded: the simple economics of artificial intelligence, Harvard Business Review Press, 15 November.

Ahir, H, N Bloom and D Furceri (2022): "The world uncertainty index", NBER Working Papers, no 29763, February.

Aldasoro, I, O Armantier, S Doerr, L Gambacorta and T Oliviero (2024a): "Survey evidence on gen AI and households: job prospects amid trust concerns", BIS Bulletin, no 86, April.

----- (2024b): "The gen AI gender gap", BIS Working Papers, forthcoming.

Aldasoro, I, S Doerr, L Gambacorta, G Gelos and D Rees (2024): "Artificial intelligence, labour markets and inflation", mimeo.

Aldasoro, I, S Doerr, L Gambacorta, S Notra, T Oliviero and D Whyte (2024): "Generative artificial intelligence and cybersecurity in central banking", BIS Papers, no 145, May.

III. Artificial intelligence and the economy: implications for central banks - bis.org