Le Monde and OpenAI sign partnership agreement on artificial intelligence – Le Monde

As part of its discussions with major players in the field of artificial intelligence, Le Monde has just signed a multi-year agreement with OpenAI, the company known for its ChatGPT tool. This agreement is historic as it is the first signed between a French media organization and a major player in this nascent industry. It covers both the training of artificial intelligence models developed by the American company and answer engine services such as ChatGPT. It will benefit users of this tool by improving its relevance thanks to recent, authoritative content on a wide range of current topics, while explicitly highlighting our news organization's contribution to OpenAI's services.

This is a long-term agreement, designed as a true partnership. Under the terms of the agreement, our teams will be able to draw on OpenAI technologies to develop projects and functionalities using AI. Within the framework of this partnership, and for the duration of the agreement, the two parties will collaborate on a privileged and recurring basis. A dialogue between the teams of both parties will ensure the monitoring of products and technologies developed by OpenAI.

For the general public, the effects of this agreement will be visible on ChatGPT, which can be described, in simple terms, as an answer engine using established facts or comments expressed by a limited number of references. The engine generates the most plausible and predictive synthetic answer to a given question.

The agreement between Le Monde and OpenAI allows the latter to use Le Monde's corpus, for the duration of the agreement, as one of the major references to establish its answers and make them reliable. It provides for references to Le Monde articles to be highlighted and systematically accompanied by a logo, a hyperlink, and the titles of the articles used as references. Content supplied to us by news agencies and photographs published by Le Monde are expressly excluded.

For Le Monde, this agreement is further recognition of the reliability of the work of our editorial teams, often considered a reference. It is also a first step toward protecting our work and our rights, at a time when we are still at the very beginning of the AI revolution, a wave predicted by many observers to be even more imposing than the digital one. We were among the very first signatories in France of the "neighboring rights" agreements, with Facebook and then Google. Here too, we had to ensure that the rights of press publishers applied to the use of Le Monde content referenced in answers generated by the services developed by OpenAI.

This point is crucial to us. We hope this agreement will set a precedent for our industry. With this first signature, it will be more difficult for other AI platforms to evade or refuse to negotiate. From this point of view, we are convinced that the agreement is beneficial for the entire profession.

Lastly, this partnership enables the Société Editrice du Monde, Le Monde's holding company, to work with OpenAI to explore advances in this technology, anticipating as far as possible any consequences, negative or favorable. It also has the advantage of consolidating our business model by providing a significant source of additional, multi-year revenue, including a share of neighboring rights. An "appropriate and equitable" portion of these rights, as defined by law, will be paid back to the newsroom.

These discussions with AI players, punctuated by this first signature, are born of our belief that, faced with the scale of the transformations that lie ahead, we need, more than ever, to remain mobile in order to avoid the perils that are taking shape and seize the opportunities for development. The dangers have already been widely identified: the plundering or counterfeiting of our content, the industrial and immediate fabrication of false information that flouts all journalistic rules, the re-routing of our audiences towards platforms likely to provide undocumented answers to every question. Simply put, the end of our uniqueness and the disappearance of an economic model based on revenues from paid distribution.

These risks, which could well prove fatal for our industry, do not rule out historic opportunities: putting the computing power of artificial intelligence at the service of journalism, making it easier to work with data in a shorter timeframe as part of large-scale investigations, and translating our written content into foreign languages or producing audio versions to expand our readership and disseminate our information and editorial formats to new audiences.

To take the measure of these challenges, we decided to act in steps. The first was devoted to protecting our content and strengthening our procedures. Last year, we first activated an opt-out clause on our sites, following the example of several other media organizations, prohibiting AI platforms from accessing our data to train their generative intelligence models without our agreement. We also collectively discussed and drew up an appendix to our ethics and deontology charter, devoted specifically to the use of AI within our group. In particular, this text states that generative artificial intelligence cannot be used in our publications to produce editorial content ex nihilo. Nor can it replace the editorial teams that form the core of our business and our value. Our charter does, however, authorize the use of generative AI as a tool to assist editorial production, under strictly defined conditions.

With this in mind, another phase was opened, dedicated to experimenting with artificial intelligence tools in very specific sectors of our business. Using DeepL, we were able to launch our Le Monde in English website and app, whose articles are initially translated by this AI tool, before being re-read by professional translators and then edited and published by a team of English-speaking journalists. At the same time, we signed an agreement with Microsoft to test the audio version of our articles. This feature, now available on almost all our French-language articles published in our app, opens us up to new audiences, often younger, as well as to new uses, particularly for people on the move. The third step is the one that led us to sign the agreement with OpenAI, which we hope will create a dynamic favorable to independent journalism in the new technological landscape that is taking shape.

At each of these stages, Le Monde has remained true to the spirit that has driven it since the advent of the Internet, and through the major changes in our industry: We have sought to reconcile the desire to explore new territories with the need to protect our editorial identity and the high standards of our content. In recent years, this approach has paid off. As the first French media organization to rely on digital subscriptions without ever having recourse to online kiosks, we have for several years held a significant lead in the hierarchy of national general-interest dailies, thanks to an unprecedented base of over 600,000 subscribers. In the same way, our determination to be a pioneer on numerous social media platforms has given us a highly visible place on all of them, helping to rejuvenate our audience.

The agreement with OpenAI is a continuation of this strategy of reasoned innovation. And we continue to guarantee the total independence of our newsroom: It goes without saying that this new agreement, like the previous ones we have signed, will in no way hinder our journalists' freedom to investigate the artificial intelligence sector in general, and OpenAI in particular. In fact, over the coming months, we will be stepping up our reporting and investigative capabilities in this key area of technological innovation.

This is the very first condition of our editorial independence, and therefore of your trust. As we move forward into the new world of artificial intelligence, we have close to our hearts an ambition that goes back to the very first day of our history, whose 80th anniversary we are celebrating this year: deserving your loyalty.

Le Monde

Louis Dreyfus (Chief Executive Officer of Le Monde) and Jérôme Fenoglio (Director of Le Monde)

Translation of an original article published in French on lemonde.fr; the publisher may only be liable for the French version.


Why is Elon Musk suing OpenAI and Sam Altman? In a word: Microsoft. – Morningstar

By Jurica Dujmovic

Potential ramifications extend far beyond the courtroom

In a striking turn of events, Elon Musk, Tesla's (TSLA) CEO, has initiated legal action against OpenAI and its leadership, alleging that the organization he helped found has moved from its original altruistic mission toward a profit-driven approach, particularly after partnering with Microsoft (MSFT).

The lawsuit accentuates Musk's deep-seated concerns that OpenAI has deviated from its foundational manifesto of developing artificial general intelligence (AGI) for the betterment of humanity, choosing instead to prioritize financial gains. But is that really so, or is there something else at play?

Musk had been deeply involved with OpenAI since its inception in 2015, as his concerns about AI's potential risks and his vision of advancing AI in a way that benefits humanity aligned with OpenAI's original ethos as a non-profit organization.

In 2018, however, Musk became disillusioned with OpenAI because, in his view, it no longer operated as a nonprofit and was building technology that took sides in political and social debates. The recent OpenAI drama that culminated in a series of significant changes to OpenAI's structure and ethos, as well as what can only be seen as Microsoft's power grab, seems to have sparked Musk's discontent.

To understand his reasoning, it helps to remember that Microsoft is a company with a long history of litigation. Over the years, Microsoft has faced numerous high-profile legal battles related to its market practices.

Here are some prominent cases to illustrate the issue:

-- In the United States v. Microsoft Corp. case, which began in 1998, the U.S. Department of Justice accused Microsoft of holding a monopolistic position in the PC operating-systems market and taking actions to crush threats to that monopoly. In April 2000, the case resulted in a ruling that Microsoft had engaged in monopolization and attempted monopolization in violation of the Sherman Antitrust Act.

-- In Europe, Microsoft has faced significant fines for abusing its dominant market position. In 2004, the European Commission fined Microsoft 497.2 million euros, the largest sum it had ever imposed on a single company at the time. In 2008, Microsoft was fined an additional 899 million euros for failing to comply with the 2004 antitrust order.

-- In 2013, the European Commission levied a 561 million euro fine against Microsoft for failing to comply with a 2009 settlement agreement to offer Windows users a choice of internet browsers instead of defaulting to Internet Explorer.

In light of this litigation history, it's much easier to understand why Sam Altman's brief departure from OpenAI and subsequent return late last year - which culminated in a significant shift in the organization's governance and its relationship with Microsoft - was likely the last straw for Musk.

After Altman was reinstated, Microsoft solidified its influence over OpenAI by securing a permanent position on its board. Furthermore, the restructuring of OpenAI's board to include business-oriented members, rather than AI experts or ethicists, signaled a permanent shift in the organization's priorities and marked a pivotal turn toward a profit-driven model underpinned by corporate governance.

The consequences of this power grab are plain to see: Microsoft is already building these AI models into its various products, while none of the code is being released to the public. These models also carry a specific political and ideological bias that makes them problematic from an ethical point of view. This, too, is an issue that cannot be addressed, given the closed-source nature of AI models generated and shaped under the watchful eye of Microsoft.

Musk's own ventures, like xAI and Neuralink, suggest he's still deeply invested in the AI space, albeit in a way he has more control over, presumably to ensure that the technology develops according to his vision for the future of humanity.

On the other hand, proponents of Microsoft's partnership with OpenAI emphasize its strategic and mutually beneficial aspects. Microsoft's $1 billion investment in OpenAI is viewed as a significant step in advancing artificial-intelligence technology, as it allows OpenAI to utilize Microsoft's Azure cloud services to train and run its AI software. Additionally, the collaboration is positioned as a way for Microsoft to stay competitive against other tech giants by integrating AI into its cloud services and developing more sophisticated AI models.

Proponents say Microsoft's involvement with OpenAI is a strategic business decision aimed at promoting Azure's AI capabilities and securing a leading position in the industry. The partnership is framed as a move to democratize AI technology while ensuring AI safety, which aligns with broader industry goals of responsible and ethical AI development. It is also seen as a way for OpenAI to access necessary resources and expertise to further its research, emphasizing the collaborative nature of the partnership rather than a mere financial transaction.

Hard truths and consequences

While many point out that Musk winning the case is extremely unlikely, it's still worth looking into the potential consequences. Such a verdict could mandate that OpenAI return to non-profit status or open-source its technology, significantly impacting its business model, revenue generation and future collaborations. It could also affect Microsoft's investment in OpenAI, particularly if the court determines that the latter has strayed from its founding mission, influencing the tech giant's ability to protect its investment and realize expected returns.

The lawsuit's outcome might influence public and market perceptions of OpenAI and Microsoft, possibly affecting customer trust and market share, with Musk potentially seen as an advocate for ethical AI development. Additionally, the case could drive the direction of AI development, balancing between open-source and proprietary models, and possibly accelerating innovation while raising concerns about controlling and misusing advanced AI technologies.

The scrutiny from this lawsuit might lead to more cautious approaches in contractual relationships within the tech sector, focusing on partnerships and intellectual property. Furthermore, the case could draw regulatory attention, possibly leading to increased oversight or regulation of AI companies, particularly concerning transparency, data privacy and ethical considerations in AI development. While Musk's quest might seem like a long shot to some legal experts, the potential ramifications of this lawsuit extend far beyond the courtroom.


-Jurica Dujmovic

This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.



Supreme Court to hear landmark case on social media, free speech – University of Southern California

Today, the U.S. Supreme Court will hear oral arguments in a pair of cases that could fundamentally change how social media platforms moderate content online. The justices will consider the constitutionality of laws introduced by Texas and Florida targeting what they see as the censorship of conservative viewpoints on social media platforms.

The central issue is whether platforms like Facebook and X should have sole discretion over what content is permitted on their platforms. A decision is expected by June. USC experts are available to discuss.

Depending on the ruling, companies may face stricter regulations or be allowed more autonomy in controlling their online presence. Tighter restrictions would require marketers to exercise greater caution in content creation and distribution, prioritizing transparency and adherence to guidelines to avoid legal repercussions. Alternatively, a ruling in favor of greater moderation powers could raise consumer concerns about censorship and brand authenticity, said Kristen Schiele, an associate professor of clinical marketing at the USC Marshall School of Business.

Regardless of the verdict, companies will need to adapt their strategies to align with evolving legal standards and consumer expectations in the digital landscape. Stricter regulations would require more thorough screening of content to ensure compliance. Marketers may need to invest more resources to understand and adhere to evolving legislation, which would lead to shifts in budget allocation and strategy development. In response, the industry will most likely see new content moderation technologies and platforms emerge to help companies navigate legal challenges while still creating effective marketing campaigns, she said.

Erin Miller is an expert on theories of speech and free speech rights, and especially their application to mass media. She also writes on issues of moral and criminal responsibility. Her teaching areas include First Amendment theory and criminal procedure. Miller is an assistant professor of law at the USC Gould School of Law.

Contact: emiller@law.usc.edu

###

Jef Pearlman is a clinical associate professor of law and director of the Intellectual Property & Technology Law Clinic at the USC Gould School of Law.

Contact: jef@law.usc.edu

###

Karen North is a recognized expert in the field of digital and social media, with interests spanning personal and corporate brand building, digital election meddling, reputation management, product development, and safety and privacy online. North is a clinical professor of communication at the USC Annenberg School for Communication and Journalism.

Contact: knorth@usc.edu

###

Wendy Wood is an expert in the nature of habits. Wood co-authored a study exploring how fake news spreads on social media, which found that platforms, more than individual users, have the larger role to play in stopping the spread of misinformation online.

Contact: wendy.wood@usc.edu

###

Emilio Ferrara is an expert in computational social sciences who studies socio-technical systems and information networks to unveil the communication dynamics that govern our world. Ferrara is a professor of computer science and communication at the USC Viterbi School of Engineering and the USC Annenberg School for Communication and Journalism.

Contact: emiliofe@usc.edu

###



Microsoft’s AI Access Principles: Our commitments to promote innovation and competition in the new AI economy … – Microsoft

As we enter a new era based on artificial intelligence, we believe this is the best time to articulate principles that will govern how we will operate our AI datacenter infrastructure and other important AI assets around the world. We are announcing and publishing these principles, our AI Access Principles, today at the Mobile World Congress in Barcelona in part to address Microsoft's growing role and responsibility as an AI innovator and a market leader.

Like other general-purpose technologies in the past, AI is creating a new sector of the economy. This new AI economy is creating not just new opportunities for existing enterprises, but new companies and entirely new business categories. The principles we're announcing today commit Microsoft to bigger investments, more business partnerships, and broader programs to promote innovation and competition than any prior initiative in the company's 49-year history. By publishing these principles, we are committing ourselves to providing the broad technology access needed to empower organizations and individuals around the world to develop and use AI in ways that will serve the public good.

These new principles help put in context the new investments and programs we've announced and launched across Europe over the past two weeks, including $5.6 billion in new AI datacenter investments and new AI skilling programs that will reach more than a million people. We've also launched new public-private partnerships to advance responsible AI adoption and protect cybersecurity, new AI technology services to support network operators, and a new partnership with France's leading AI company, Mistral AI. As much as anything, these investments and programs make clear how we will put these principles into practice, not just in Europe, but in the United States and around the world.

These principles also reflect the responsible and important role we must play as a company. They build in part on the lessons we have learned from our experiences with previous technology developments. In 2006, after more than 15 years of controversies and litigation relating to Microsoft Windows and the company's market position in the PC operating system market, we published a set of Windows Principles. Their purpose was to govern the company's practices in a manner that would both promote continued software innovation and foster free and open competition.

I'll never forget the reaction of an FTC Commissioner who came up to me after I concluded the speech I gave in Washington, D.C. to launch these principles. He said, "If you had done this 10 years ago, I think you all probably would have avoided a lot of problems."

Close to two decades have gone by since that moment, and both the world of technology and the AI era we are entering are radically different. Then, Windows was the computing platform of the moment. Today, mobile platforms are the most popular gateway to consumers, and exponential advances in generative AI are driving a tectonic shift in digital markets and beyond. But there is wisdom in that FTC Commissioner's reaction that has stood the test of time: As a leading IT company, we do our best work when we govern our business in a principled manner that provides broad opportunities for others.

The new AI era requires enormous computational power to train, build, and deploy the most advanced AI models. Historically, such power could only be found in a handful of government-funded national laboratories and research institutions, and it was available only to a select few. But the advent of the public cloud has changed that. Much like steel did for skyscrapers, the public cloud enables generative AI.

Today, datacenters around the world house millions of servers and make vast computing power broadly available to organizations large and small, and even to individuals as well. Already, many thousands of AI developers in startups, enterprises, government agencies, research labs, and non-profit organizations around the world are using the technology in these datacenters to create new AI foundation models and applications.

These datacenters are owned and operated by cloud providers, which include larger established firms such as Microsoft, Amazon, Google, Oracle, and IBM, as well as large firms from China like Alibaba, Huawei, Tencent, and Baidu. There are also smaller specialized entrants such as Coreweave, OVH, Aruba, and Denvr Dataworks Corporation, just to mention a few. And government-funded computing centers clearly will play a role as well, including with support for academic research. But building and operating those datacenters is expensive. And the semiconductors or graphical processing units (GPUs) that are essential to power the servers for AI workloads remain costly and in short supply. Although governments and companies are working hard to fill the gap, doing so will take some time.

With this reality in mind, regulators around the world are asking important questions about who can compete in the AI era. Will it create new opportunities and lead to the emergence of new companies? Or will it simply reinforce existing positions and leaders in digital markets?

I am optimistic that the changes driven by the new AI era will extend into the technology industry itself. After all, how many readers of this paragraph had, two years ago, even heard of OpenAI and many other new AI entrants like Anthropic, Cohere, Aleph Alpha, and Mistral AI? In addition, Microsoft, along with other large technology firms, is dynamically pivoting to meet the AI era. The competitive pressure is fierce, and the pace of innovation is dizzying. As a leading cloud provider, and an innovator in AI models ourselves and through our partnership with OpenAI, we are mindful of our role and responsibilities in the evolution of this AI era.

Throughout the past decade, we've typically found it helpful to define the tenets, in effect the goals, that guide our thinking and drive our actions as we navigate a complex topic. We then apply these tenets by articulating the principles we will apply as we make the decisions needed to govern the development and use of technology. I share below the new tenets on which we are basing our thinking on this topic, followed by our 11 AI Access Principles.

Fundamentally, there are five tenets that define Microsoft's goals as we focus on AI access, including our role as an infrastructure and platforms provider.

First, we have a responsibility to enable innovation and foster competition. We believe that AI is a foundational technology with a transformative capability to help solve societal problems, improve human productivity, and make companies and countries more competitive. As with prior general-purpose technologies, from the printing press to electricity, railroads, and the internet itself, the AI era is not based on a single technology component or advance. We have a responsibility to help spur innovation and competition across the new AI economy that is rapidly emerging.

AI is a dynamic field, with many active participants based on a technology stack that starts with electricity and connectivity and the world's most advanced semiconductor chips at the base. It then runs up through the compute power of the public cloud, public and proprietary data for training foundation models, the foundation models themselves, tooling to manage and orchestrate the models, and AI-powered software applications. In short, the success of an AI-based economy requires the success of many different participants across numerous interconnected markets.

The technology stack that defines the new AI era illustrates this. While one company currently produces and supplies most of the GPUs being used for AI today, as one moves incrementally up the stack, the number of participants expands. And each layer enables and facilitates innovation and competition in the layers above. In multiple ways, to succeed, participants at every layer of the technology stack need to move forward together. This means, for Microsoft, that we need to stay focused not just on our own success, but on enabling the success of others.

Second, our responsibilities begin by meeting our obligations under the law. While the principles we are launching today represent a self-regulatory initiative, they in no way are meant to suggest a lack of respect for the rule of law or the role of regulators. We fully appreciate that legislators, competition authorities, regulators, enforcers, and judges will continue to evolve the competition rules and other laws and regulations relevant to AI. That's the way it should be.

Technology laws and rules are changing rapidly. The European Union is implementing its Digital Markets Act and completing its AI Act, while the United States is moving quickly with a new AI Executive Order. Similar laws and initiatives are moving forward in the United Kingdom, Canada, Japan, India, and many other countries. We recognize that we, like all participants in this new AI market, have a responsibility to live up to our obligations under the law, to engage constructively with regulators when obligations are not yet clear, and to contribute to the public dialogue around policy. We take these obligations seriously.

Third, we need to advance a broad array of AI partnerships. Today, only one company is vertically integrated in a manner that includes every AI layer from chips to a thriving mobile app store. As noted at a recent meeting of tech leaders and government officials, "The rest of us, Microsoft included, live in the land of partnerships."

People today are benefiting from the AI advances that the partnership between OpenAI and Microsoft has created. Since 2019, Microsoft has collaborated with OpenAI on the research and development of OpenAI's generative AI models, developing the unique supercomputers needed to train those models. The ground-breaking technology ushered in by our partnership has unleashed a groundswell of innovation across the industry. And over the past five years, OpenAI has become a significant new competitor in the technology industry. It has expanded its focus, commercializing its technologies with the launch of ChatGPT and the GPT Store and providing its models for commercial use by third-party developers.

Innovation and competition will require an extensive array of similar support for proprietary and open-source AI models, large and small, including the type of partnership we are announcing today with Mistral AI, the leading open-source AI developer based in France. We have also invested in a broad range of other diverse generative AI startups. In some instances, those investments have provided seed funding to finance day-to-day operations. In other instances, those investments have been more focused on paying the expenses for the use of the computational infrastructure needed to train and deploy generative AI models and applications. We are committed to partnering well with market participants around the world and in ways that will accelerate local AI innovations.

Fourth, our commitment to partnership extends to customers, communities, and countries. More than for prior generations of digital technology, our investments in AI and datacenters must sustain the competitive strengths of customers and national economies and address broad societal needs. This has been at the core of the multi-billion-dollar investments we recently have announced in Australia, the United Kingdom, Germany, and Spain. We need constantly to be mindful of the community needs AI advances must support, and we must pursue a spirit of partnership not only with others in our industry, but with customers, governments, and civil society. We are building the infrastructure that will support the AI economy, and we need the opportunities provided by that infrastructure to be widely available.

Fifth, we need to be proactive and constructive, as a matter of process, in working with governments and the IT industry in the design and release of new versions of AI infrastructure and platforms. We believe it is critical for companies and regulators to engage in open dialogue, with a goal of resolving issues as quickly as possible, ideally while a new product is still under development. For our part, we understand that Microsoft must respond fully and cooperatively to regulatory inquiries so that we can have an informed discussion with regulators about the virtues of various approaches. We need to be good listeners and constructive problem solvers in sorting through issues of concern and identifying practical steps and solutions before a new product is completed and launched.

The foregoing tenets come together to shape the new principles we are announcing below. It's important to note that, given the safety, security, privacy, and other issues relating to responsible AI, we need to apply all these principles subject to objective and effective standards to comply with our legal obligations and protect the public. These are discussed further below. Subject to these requirements, we are committed to the following 11 principles:

We are committed to enabling AI innovation and fostering competition by making our cloud computing and AI infrastructure, platforms, tools, and services broadly available and accessible to software developers around the world. We want Microsoft Azure to be the best place for developers to train, build, and deploy AI models and to use those models safely and securely in applications and solutions. This means:

Today, our partnership with OpenAI is supporting the training of the next generation of OpenAI models and increasingly enabling customers to access and use these models and Microsoft's Copilot applications in local datacenters. At the same time, we are committed to supporting other developers in training and deploying proprietary and open-source AI models, both large and small.

Today's important announcement with Mistral AI launches a new generation of Microsoft's support for technology development in Europe. It enables Mistral AI to accelerate the development and deployment of its next generation Large Language Models (LLMs) with access to Azure's cutting-edge AI infrastructure. It also makes the deployment of Mistral AI's premium models available to customers through our Models-as-a-Service (MaaS) offering on Microsoft Azure, which model developers can use to publish and monetize their AI models. By providing a unified platform for AI model management, we aim to lower the barriers and costs of AI model development around the world for both open source and proprietary development. In addition to Mistral AI, this service is already hosting more than 1,600 open source and proprietary models from companies and organizations such as Meta, Nvidia, Deci, and Hugging Face, with more models coming soon from Cohere and G42.

We are committed to expanding this type of support for additional models in the months and years ahead.

As reflected in Microsoft's Copilots and OpenAI's ChatGPT itself, the world is rapidly benefiting from the use of a new generation of software applications that access and use the power of AI models. But our applications will represent just a small percentage of the AI-powered applications the world will need and create. For this reason, we're committed to ongoing and innovative steps to make the AI models we host and the development tools we create broadly available to AI software applications developers around the world in ways that are consistent with responsible AI principles.

This includes the Azure OpenAI service, which enables software developers who work at start-ups, established IT companies, and in-house IT departments to build software applications that call on and make use of OpenAI's most powerful models. It extends through Models as a Service to the use of other open source and proprietary AI models from other companies, including Mistral AI, Meta, and others.

We are also committed to empowering developers to build customized AI solutions by enabling them to fine-tune existing models based on their own unique data sets and for their specific needs and scenarios. With Azure Machine Learning, developers can easily access state-of-the-art pre-trained models and customize them with their own data and parameters, using a simple drag-and-drop interface or code-based notebooks. This helps companies, governments, and non-profits create AI applications that help advance their goals and solve their challenges, such as improving customer service, enhancing public safety, or promoting social good. This is rapidly democratizing AI and fostering a culture of even broader innovation and collaboration among developers.

We are also providing developers with tools and repositories on GitHub that enable them to create, share, and learn from AI solutions. GitHub is the world's largest and most trusted platform for software development, hosting over 100 million repositories and supporting more than 40 million developers. We are committed to supporting the AI developer community by making our AI tools and resources available on GitHub, giving developers access to the latest innovations and best practices in AI development, as well as the opportunity to collaborate with other developers and contribute to the open source community. As one example, just last week we made available an open automation framework to help red team generative AI systems.

Ensure choice and fairness across the AI economy

We understand that AI innovation and competition require choice and fair dealing. We are committed to providing organizations, AI developers, and data scientists with the flexibility to choose which AI models to use wherever they are building solutions. For developers who choose to use Microsoft Azure, we want to make sure they are confident we will not tilt the playing field to our advantage. This means:

The AI models that we host on Azure, including the Microsoft Azure OpenAI API service, are all accessible via public APIs. Microsoft publishes documentation on its website explaining how developers can call these APIs and use the underlying models. This enables any application, whether it is built and deployed on Azure or other private and public clouds, to call these APIs and access the underlying models.
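
To make that concrete, here is a minimal sketch, in Python, of what calling one of these hosted models through the public API can look like. It uses the AzureOpenAI client from the openai package; the endpoint, key, deployment name, and API version shown are placeholder assumptions for illustration, not values from this post.

```python
# A minimal sketch of calling an Azure-hosted model via its public API.
# Assumes the openai package (>= 1.0) and placeholder credentials in the
# environment; the deployment name below is hypothetical.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed API version; any current one works similarly
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # hypothetical deployment name chosen at deploy time
    messages=[{"role": "user", "content": "In one sentence, what is an API?"}],
)
print(response.choices[0].message.content)
```

Because the call is an ordinary HTTPS request, the same pattern works whether the application itself runs on Azure, another cloud, or a laptop.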

Network operators are playing a vital role in accelerating the AI transformation of customers around the world, including for many national and regional governments. This is one reason we are supporting a common public API through the Open Gateway initiative driven by the GSM Association, which advances innovation in the mobile ecosystem. The initiative is aligning all operators with a common API for exposing advanced capabilities provided by their networks, including authentication, location, and quality of service. It's an indispensable step forward in enabling network operators to offer their advanced capabilities to a new generation of AI-enabled software developers. We have believed in the potential of this initiative since its inception at GSMA, and we have partnered with operators around the world to help bring it to life.

Today at Mobile World Congress, we are launching the Public Preview of Azure Programmable Connectivity (APC). This is a first-class service in Azure, completely integrated with the rest of our services, that seamlessly provides access to Open Gateway for developers. It means software developers can use the capabilities provided by the operator network directly from Azure, like any other service, without requiring specific work for each operator.

We are committed to maintaining Microsoft Azure as an open cloud platform, much as Windows has been for decades and continues to be. That means in part ensuring that developers can choose how they want to distribute and sell their AI software to customers for deployment and use on Microsoft Azure. We provide a marketplace on Azure through which developers can list and sell their AI software to Azure customers under a variety of supported business models. Developers who choose to use the Azure Marketplace are also free to decide whether to use the transaction capabilities offered by the marketplace (at a modest fee) or whether to sell licenses to customers outside of the marketplace (at no fee). And, of course, developers remain free to sell and distribute AI software to Azure customers however they choose, and those customers can then upload, deploy, and use that software on Azure.

We believe that trust is central to the success of Microsoft Azure. We build this trust by serving the interests of AI developers and customers who choose Microsoft Azure to train, build, and deploy foundation models. In practice, this also means that we avoid using any non-public information or data from the training, building, deployment, or use of developers' AI models to compete against them.

We know that customers can and do use multiple cloud providers to meet their AI and other computing needs. And we understand that the data our customers store on Microsoft Azure is their data. So, we are committed to enabling customers to easily export and transfer their data if they choose to switch to another cloud provider. We recognize that different countries are considering or have enacted laws limiting the extent to which we can pass along the costs of such export and transfer. We will comply with those laws.

We recognize that new AI technologies raise an extraordinary array of critical questions. These involve important societal issues such as privacy, safety, security, the protection of children, and the safeguarding of elections from deepfake manipulation, to name just a few. These and other issues require that tech companies create guardrails for their AI services, adapt to new legal and regulatory requirements, and work proactively in multistakeholder efforts to meet broad societal needs. We're committed to fulfilling these responsibilities, including through the following priorities:

We are committed to safeguarding the physical security of our AI datacenters, as they host the infrastructure and data that power AI solutions. We follow strict security protocols and standards to ensure that our datacenters are protected from unauthorized access, theft, vandalism, fire, or natural disasters. We monitor and audit our datacenters to detect and prevent any potential threats or breaches. Our datacenter staff are trained and certified in security best practices and are required to adhere to a code of conduct that respects the privacy and confidentiality of our customers data.

We are also committed to safeguarding the cybersecurity of our AI models and applications, as they process and generate sensitive information for our customers and society. We use state-of-the-art encryption, authentication, and authorization mechanisms to protect data in transit and at rest, as well as the integrity and confidentiality of AI models and applications. We also use AI to enhance our cybersecurity capabilities, such as detecting and mitigating cyberattacks, identifying and resolving vulnerabilities, and improving our security posture and resilience.

We're building on these efforts with our new Secure Future Initiative (SFI). This brings together every part of Microsoft and has three pillars. It focuses on AI-based cyber defenses, advances in fundamental software engineering, and advocacy for stronger application of international norms to protect civilians from cyber threats.

As AI becomes more pervasive and impactful, we recognize the need to ensure that our technology is developed and deployed in a way that is ethical, trustworthy, and aligned with human values. That is why we have created the Microsoft Responsible AI Standard, a comprehensive framework that guides our teams on how to build and use AI responsibly.

The standard covers six key dimensions of responsible AI: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability. For each dimension, we define what these values mean and how to achieve our goals in practice. We also provide tools, processes, and best practices to help our teams implement the standard throughout the AI lifecycle, from design and development to deployment and monitoring. The approach that the standard establishes is not static, but instead evolves and improves based on the latest research, feedback, and learnings.

We recognize that countries need more than advanced AI chips and datacenters to sustain their competitive edge and unlock economic growth. AI is changing jobs and the way people work, requiring that people master new skills to advance their careers. That's why we're committed to marrying AI infrastructure capacity with AI skilling capability, combining the two to advance innovation.

In just the past few months, we've combined billions of dollars of infrastructure investments with new programs to bring AI skills to millions of people in countries like Australia, the United Kingdom, Germany, and Spain. We're launching training programs focused on building AI fluency, developing AI technical skills, supporting AI business transformation, and promoting safe and responsible AI development. Our work includes the first Professional Certificate on Generative AI.

Typically, our skilling programs involve a professional network of Microsoft certified training services partners and multiple industry partners, universities, and nonprofit organizations. Increasingly, we find that major employers want to launch new AI skilling programs for their employees, and we are working with them actively to provide curricular materials and support these efforts.

One of our most recent and important partnerships is with the AFL-CIO, the largest federation of labor unions in the United States. It's the first of its kind between a labor organization and a technology company to focus on AI and will deliver on three goals: (1) sharing in-depth information with labor leaders and workers on AI technology trends; (2) incorporating worker perspectives and expertise in the development of AI technology; and (3) helping shape public policy that supports the technology skills and needs of frontline workers.

We've learned that government institutions and associations can typically bring AI skilling programs to scale. At the national and regional levels, government employment and educational agencies have the personnel, programs, and expertise to reach hundreds of thousands or even millions of people. We're committed to working with and supporting these efforts.

Through these and other initiatives, we aim to democratize access to AI education and enable everyone to harness the potential of AI for their own lives and careers.

In 2020, Microsoft set ambitious goals to be carbon negative, water positive and zero waste by 2030. We recognize that our datacenters play a key part in achieving these goals. Being responsible and sustainable by design also has led us to take a first-mover approach, making long-term investments to bring as much or more carbon-free electricity than we will consume onto the grids where we build datacenters and operate.

We also apply a holistic approach to the Scope 3 emissions relating to our investments in AI infrastructure, from the construction of our datacenters to engaging our supply chain. This includes supporting innovation to reduce the embodied carbon in our supply chain and advancing our water positive and zero waste goals throughout our operations.

At the same time, we recognize that AI can be a vital tool to help accelerate the deployment of sustainability solutions, from the discovery of new materials to better predicting and responding to extreme weather events. This is why we continue to partner with others to use AI to help advance breakthroughs that previously would have taken decades, underscoring the important role AI technology can play in addressing some of our most critical challenges to realizing a more sustainable future.



Analysis: How Nvidia Surpassed Intel In Annual Revenue And Won The AI Crown – CRN

A deep-dive analysis into the market dynamics that allowed Nvidia to take the AI crown and surpass Intel in annual revenue. CRN also looks at what the x86 processor giant could do to fight back in a deeply competitive environment.

Several months after Pat Gelsinger became Intel's CEO in 2021, he told me that his biggest concern in the data center wasn't Arm, the British chip designer that is enabling a new wave of competition against the semiconductor giant's Xeon server CPUs.

Instead, the Intel veteran saw a bigger threat in Nvidia and its uncontested hold over the AI computing space and said his company would give its all to challenge the GPU designer.


"Well, they're going to get contested going forward, because we're bringing leadership products into that segment," Gelsinger told me for a CRN magazine cover story.

More than three years later, Nvidia's latest earnings demonstrated just how right it was for Gelsinger to feel concerned about the AI chip giant's dominance and how much work it will take for Intel to challenge a company that has been at the center of the generative AI hype machine.

When Nvidia's fourth-quarter earnings arrived last week, they showed that the company surpassed Intel in total annual revenue for its recently completed fiscal year, mainly thanks to high demand for its data center GPUs driven by generative AI.

The GPU designer finished its 2024 fiscal year with $60.9 billion in revenue, up 126 percent or more than double from the previous year, the company revealed in its fourth-quarter earnings report on Wednesday. This fiscal year ran from Jan. 30, 2023, to Jan. 28, 2024.

Meanwhile, Intel finished its 2023 fiscal year with $54.2 billion in sales, down 14 percent from the previous year. This fiscal year ran concurrent to the calendar year, from January to December.

While Nvidia's fiscal year finished roughly one month after Intel's, this is the closest we'll get to understanding how the two industry titans compared in a year when demand for AI solutions propped up the data center and cloud markets in a shaky economy.
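
Working backward from those reported figures shows just how sharp the crossover was. The short Python sketch below uses only the totals and growth rates cited above to estimate each company's prior-year revenue; it is an illustration of the arithmetic, not additional reported data.

```python
# Implied prior-year revenue from the figures reported above.
nvidia_fy24 = 60.9  # $B, Nvidia fiscal 2024, up 126 percent year over year
intel_fy23 = 54.2   # $B, Intel fiscal 2023, down 14 percent year over year

nvidia_fy23 = nvidia_fy24 / 2.26  # ~26.9: what Nvidia made a year earlier
intel_fy22 = intel_fy23 / 0.86    # ~63.0: what Intel made a year earlier

print(f"A year earlier: Nvidia ~${nvidia_fy23:.1f}B vs. Intel ~${intel_fy22:.1f}B")
```

In other words, Nvidia went from less than half of Intel's revenue to surpassing it in a single year.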

Nvidia pulled off this feat because the company had spent years building a comprehensive and integrated stack of chips, systems, software and services for accelerated computing, with a major emphasis on data centers, cloud computing and edge computing, and then found itself last year at the center of a massive demand cycle due to hype around generative AI.

This demand cycle was mainly kicked off by the late 2022 arrival of OpenAI's ChatGPT, a chatbot powered by a large language model that can understand complex prompts and respond with an array of detailed answers, all offered with the caveat that it could potentially impart inaccurate, biased or made-up answers.

Despite any shortcomings, the tech industry found more promise than concern with the capabilities of ChatGPT and other generative AI applications that had emerged in 2022, like the DALL-E 2 and Stable Diffusion text-to-image models. Many of these models and applications had been trained and developed using Nvidia GPUs because the chips are far faster at computing such large amounts of data than CPUs ever could.

The enormous potential of these generative AI applications kicked off a massive wave of new investments in AI capabilities by companies of all sizes, from venture-backed startups to cloud service providers and consumer tech companies, like Amazon Web Services and Meta.

By that point, Nvidia had started shipping the H100, a powerful data center GPU that came with a new feature called the Transformer Engine. This was designed to speed up the training of so-called transformer models by as many as six times compared to the previous-generation A100, which itself had been a game-changer in 2020 for accelerating AI training and inference.

Among the transformer models that benefitted from the H100's Transformer Engine was GPT-3.5, short for Generative Pre-trained Transformer 3.5. This is OpenAI's large language model that exclusively powered ChatGPT before the introduction of the more capable GPT-4.

But this was only one piece of the puzzle that allowed Nvidia to flourish in the past year. While the company worked on introducing increasingly powerful GPUs, it was also developing internal capabilities and making acquisitions to provide a full stack of hardware and software for accelerated computing workloads such as AI and high-performance computing.

At the heart of Nvidia's advantage is the CUDA parallel computing platform and programming model. Introduced in 2007, CUDA enabled the company's GPUs, which had traditionally been designed for computer games and 3-D applications, to run HPC workloads faster than CPUs by breaking those workloads down into smaller tasks and processing them simultaneously. Since then, CUDA has dominated the landscape of software that benefits accelerated computing.
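
For readers unfamiliar with that model, the sketch below shows the core idea CUDA introduced: one large job is split into many tiny tasks, and each GPU thread handles one of them simultaneously. It is an illustrative example written in Python using Numba's CUDA bindings (chosen here for brevity, not Nvidia's own CUDA C code), and it requires an Nvidia GPU with the numba package installed.

```python
# Illustration of the CUDA execution model: each GPU thread processes one
# array element, so a million additions run in parallel rather than in a loop.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)          # this thread's global index across the whole launch
    if i < out.size:          # guard: the last block may have extra threads
        out[i] = a[i] + b[i]  # one element per thread

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block  # enough blocks to cover n
vector_add[blocks, threads_per_block](a, b, out)           # launch ~1M threads

assert np.allclose(out, a + b)  # Numba copies host arrays to and from the GPU
```

The same decomposition, applied to the matrix multiplications at the heart of neural networks, is what lets GPUs train models that would be impractically slow on CPUs.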

Over the last several years, Nvidia's stack has grown to include CPUs, SmartNICs and data processing units, high-speed networking components, pre-integrated servers and server clusters as well as a variety of software and services, which includes everything from software development kits and open-source libraries to orchestration platforms and pretrained models.

While Nvidia had spent years cultivating relationships with server vendors and cloud service providers, this activity reached new heights last year, resulting in expanded partnerships with the likes of AWS, Microsoft Azure, Google Cloud, Dell Technologies, Hewlett Packard Enterprise and Lenovo. The company also started cutting more deals in the enterprise software space with major players like VMware and ServiceNow.

All this work allowed Nvidia to grow its data center business by 217 percent to $47.5 billion in its 2024 fiscal year, which represented 78 percent of total revenue.

This was mainly supported by a 244 percent increase in data center compute sales, with high GPU demand driven mainly by the development of generative AI and large language models. Data center networking, on the other hand, grew 133 percent for the year.

Cloud service providers and consumer internet companies contributed a substantial portion of Nvidia's data center revenue, with the former group representing roughly half and then more than half of that revenue in the third and fourth quarters, respectively. Nvidia also cited strong demand from businesses outside those two groups, though not as consistently.

In its earnings call last week, Nvidia CEO Jensen Huang said this represents the industry's continuing transition from general-purpose computing, where CPUs were the primary engines, to accelerated computing, where GPUs and other kinds of powerful chips are needed to provide the right combination of performance and efficiency for demanding applications.

"There's just no reason to update with more CPUs when you can't fundamentally and dramatically enhance its throughput like you used to. And so you have to accelerate everything. This is what Nvidia has been pioneering for some time," he said.

Intel, by contrast, generated $15.5 billion in data center revenue for its 2023 fiscal year, which was a 20 percent decline from the previous year and made up only 28.5 percent of total sales.

This was not only roughly a third of what Nvidia earned in total data center revenue for the 12-month period ending in late January; it was also smaller than what the semiconductor giant's AI chip rival made in the fourth quarter alone: $18.4 billion.

The issue for Intel is that while the company has launched data center GPUs and AI processors over the last couple of years, it's far behind when it comes to the level of adoption by developers, OEMs, cloud service providers, partners and customers that has allowed Nvidia to flourish.

As a result, the semiconductor giant has had to rely on its traditional data center products, mainly Xeon server CPUs, to generate a majority of revenue for this business unit.

This created multiple problems for the company.

While AI servers, including ones made by Nvidia and its OEM partners, rely on CPUs for the host processors, the average selling prices for such components are far lower than those of Nvidia's most powerful GPUs. And these kinds of servers often contain four or eight GPUs and only two CPUs, another way GPUs enable far greater revenue growth than CPUs.
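
A back-of-the-envelope calculation makes the point. The prices below are hypothetical placeholders chosen only to illustrate the dynamic, not figures from CRN's reporting or either company's price lists.

```python
# Hypothetical bill-of-materials split for an eight-GPU, two-CPU AI server.
gpu_price = 25_000  # placeholder price per data center GPU, USD
cpu_price = 10_000  # placeholder price per server CPU, USD

gpu_spend = 8 * gpu_price  # 200,000
cpu_spend = 2 * cpu_price  # 20,000

share = gpu_spend / (gpu_spend + cpu_spend)
print(f"GPUs capture {share:.0%} of the chip spend in this server")  # ~91%
```

Under almost any plausible prices, the GPU vendor captures the overwhelming majority of the silicon dollars in such a system, which helps explain how Nvidia's revenue could balloon while CPU suppliers stagnated.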

In Intel's latest earnings call, Vivek Arya, a senior analyst at Bank of America, noted how these issues were digging into the company's data center CPU revenue, saying that its GPU competitors "seem to be capturing nearly all of the incremental [capital expenditures] and, in some cases, even more" for cloud service providers.

One dynamic at play was that some cloud service providers used their budgets last year to replace expensive Nvidia GPUs in existing systems rather than buying entirely new systems, which dragged down Intel CPU sales, Patrick Moorhead, president and principal analyst at Moor Insights & Strategy, recently told CRN.

Then there was the issue of long lead times for Nvidia's GPUs, caused by demand far exceeding supply. Because this prevented OEMs from shipping more GPU-accelerated servers, Intel sold fewer CPUs as a result, according to Moorhead.

Intel's CPU business also took a hit from competition with AMD, which grew its x86 server CPU share by 5.4 points against the company in the fourth quarter of 2023 compared to the same period a year earlier, according to Mercury Research.

The semiconductor giant has also had to contend with competition from companies developing Arm-based CPUs, such as Ampere Computing and Amazon Web Services.

All of these issues, along with a lull in the broader market, dragged down revenue and earnings potential for Intel's data center business.

Describing the market dynamics in 2023, Intel said in its annual 10-K filing with the U.S. Securities and Exchange Commission that server volume decreased 37 percent from the previous year due to lower demand in a softening CPU data center market.

The company said average selling prices did increase by 20 percent, mainly due to a lower mix of revenue from hyperscale customers and a higher mix of high-core-count processors, but that wasn't enough to offset the plunge in sales volume.

While Intel and other rivals started down the path of building products to compete against Nvidia's years ago, the AI chip giant's success last year showed them how lucrative it can be to build a business with super powerful and expensive processors at the center.

Intel hopes to make a substantial business out of accelerator chips between the Gaudi deep learning processors, which came from its 2019 acquisition of Habana Labs, and the data center GPUs it has developed internally. (After the release of Gaudi 3 later this year, Intel plans to converge its Max GPU and Gaudi road maps, starting with Falcon Shores in 2025.)

But the semiconductor giant has only reported a sales pipeline that grew in the double digits to more than $2 billion in last year's fourth quarter. This pipeline includes Gaudi 2 and Gaudi 3 chips as well as Intel's Max and Flex data center GPUs, but it doesn't amount to a forecast for how much money the company expects to make this year, an Intel spokesperson told CRN.

Even if Intel made $2 billion or even $4 billion from accelerator chips in 2024, it would amount to a small fraction of what Nvidia made last year, and perhaps an even smaller one if the AI chip rival manages to grow again in the new fiscal year. Nvidia has forecast that revenue in the first quarter could grow roughly 8.6 percent sequentially to $24 billion, and Huang said the conditions are excellent for continued growth for the rest of this year and beyond.

Then there's the fact that AMD recently launched its most capable data center GPU yet, the Instinct MI300X. The company said in its most recent earnings call that strong customer pull and expanded engagements prompted it to upgrade its forecast for data center GPU revenue this year to more than $3.5 billion.

There are other companies developing AI chips too, including AWS, Microsoft Azure and Google Cloud as well as several startups, such as Cerebras Systems, Tenstorrent, Groq and D-Matrix. Even OpenAI is reportedly considering designing its own AI chips.

Intel will also have to contend with Nvidia's decision last year to move to a one-year release cadence for new data center GPUs. This started with the successor to the H100 announced last fall, the H200, and will continue with the B100 this year.

Nvidia is making its own data center CPUs, too, as part of the company's expanding full-stack computing strategy, which is creating another challenge for Intel's CPU business when it comes to AI and HPC workloads. This started last year with the standalone Grace Superchip and a hybrid CPU-GPU package called the Grace Hopper Superchip.

For Intel's part, the semiconductor giant expects meaningful revenue acceleration for its nascent AI chip business this year. What could help the company are the growing number of price-performance advantages found by third parties like AWS and Databricks, as well as its vow to offer an open alternative to the proprietary nature of Nvidia's platform.

The chipmaker also expects its upcoming Gaudi 3 chip to deliver performance leadership, with four times the processing power and double the networking bandwidth of its predecessor.

But the company is taking a broader view of the AI computing market and hopes to come out on top with its "AI everywhere" strategy. This includes a push to grow data center CPU revenue by convincing developers and businesses to take advantage of the latest features in its Xeon server CPUs to run AI inference workloads, an approach the company believes is more economical and pragmatic for a broader constituency of organizations.

Intel is making a big bet on the emerging category of AI PCs, too, with its recently launched Core Ultra processors, which, for the first time in an Intel processor, come with a neural processing unit (NPU) in addition to a CPU and GPU to power a broad array of AI workloads. But the company faces tough competition in this arena, whether it's AMD and Qualcomm in the Windows PC segment or Apple with its in-house chip designs for Mac computers.

Even Nvidia is reportedly thinking about developing CPUs for PCs. But Intel does have one trump card that could allow it to generate significant amounts of revenue alongside its traditional chip design business by seizing on the collective growth of its industry.

Hours before Nvidia's earnings last Wednesday, Intel launched its revitalized contract chip manufacturing business with the goal of drumming up enough business from chip designers, including its own product groups, to become the world's second-largest foundry by 2030.

Called Intel Foundry, the business's lofty 2030 goal means it hopes to generate more revenue than South Korea's Samsung in only six years. This would put it behind only the world's largest foundry, Taiwan's TSMC, which generated just shy of $70 billion last year, thanks in large part to manufacturing orders from the likes of Nvidia and Apple.

All of this requires Intel to execute at a high level across its chip design and manufacturing businesses over the next several years. But if it succeeds, these efforts could one day make the semiconductor giant an AI superpower like Nvidia is today.

At Intel Foundry's launch last week, Intel CEO Pat Gelsinger made that clear.

"We're engaging in 100 percent of the AI [total addressable market], clearly through our products on the edge, in the PC and clients and then the data centers. But through our foundry, I want to manufacture every AI chip in the industry," he said.

More:

Analysis: How Nvidia Surpassed Intel In Annual Revenue And Won The AI Crown - CRN

Cloud Native Efficient Computing is the Way in 2024 and Beyond – ServeTheHome

Today we wanted to discuss cloud native and efficient computing. Many have different names for this, but it is going to be the second most important computing trend in 2024, behind the AI boom. Modern performance cores have gotten so big and fast that there is a new trend in the data center: using smaller and more efficient cores. Over the next few months, we are going to be doing a series on this trend.

As a quick note: We get CPUs from all of the major silicon players. Also, since we have tested these CPUs in Supermicro systems, we are going to say that they are all sponsors of this, but it is our own idea and content.

Let us get to the basics. Once AMD re-entered the server market (and desktop) with a competitive performance core in 2017, performance per core and core counts exploded almost as fast as pre-AI boom slideware on the deluge of data. As a result, cores got bigger, cache sizes expanded, and chips got larger. Each generation of chips got faster.

Soon, folks figured out a dirty secret in the server industry: faster per-core performance is good if you license software by core, but there is a wide variety of applications that need cores, just not fast ones. Today's smaller efficient cores tend to be on the order of performance of a mainstream Skylake/Cascade Lake Xeon from 2017-2021, yet they can be packed far more densely into systems.

Consider this illustrative scenario that is far too common in the industry:

Here, we have several apps built by developers over the years. Each needs its own VM, and each VM is generally between 2-8 cores. These are applications that need to be online 24/7 but do not need massive amounts of compute. Good examples are websites that serve a specific line-of-business function but do not have hundreds of thousands of visitors. These also tend to be workloads that are already in cloud instances, VMs, or containers. As the industry moves away from hypervisors with per-core or per-socket licensing constraints, scaling up to bigger, faster cores that go underutilized makes little sense.

As a result, the industry realized it needed lower-cost-to-produce chips that chase density instead of per-core performance. A good way to think about this is to imagine fitting the maximum number of those small line-of-business applications, sitting in 2-8 core VMs, into as few servers as possible (a sketch of that packing problem follows below). There are other commonly cited applications like this as well, such as nginx web servers, redis servers, and so forth. Another great example is that some online game instances require one core per user in the data center, even if that core is relatively meager. Sometimes just having more cores is, well, more cores = more better.

Once the constraints of legacy hypervisor per-core/per-socket licensing are removed, the question becomes how many cores fit on a package, and then how densely those packages can be deployed in a rack. Another trend we are seeing is not just more cores, but lower clock speeds. CPUs with maximum frequencies in the 2-3GHz range today tend to be considerably more power efficient than P-core-only server parts in the 4GHz+ range, let alone desktop CPUs now pushing well over 5GHz. This is the voltage-frequency curve at work. If your goal is more cores rather than maximum per-core performance, then lowering per-core performance by 25% while decreasing power by 40% or more means all of those applications are serviced with less power.

Less power is important for a number of reasons. Today, the biggest is the AI infrastructure build-out. If you saw our 49ers Levi's Stadium tour video, for example, that is a perfect example of a data center that is not going to expand in footprint and can only expand cooling so much. It is also a prime example of a location that needs AI servers for sports analytics.

That type of constraint, where the same traditional work needs to get done in an unchanging data center footprint while more high-power AI servers move in, is a key reason cloud-native compute is moving beyond the cloud. Transitioning applications running on 2017-2021 era Xeon servers to modern cloud-native cores with approximately the same performance per core can mean 4-5x the density per system at ~2x the power consumption. As companies release new generations of CPUs, the density figures are increasing at a steep rate.

We showed this at play with the same era of servers and modern P-core servers in our 5th Gen Intel Xeon Processors Emerald Rapids review.

We also covered the consolidation just between P-core generations in the accompanying video. We are going to have an article with the current AMD EPYC Bergamo parts very soon in a similar vein.

If you are not familiar with the current players in the cloud-native CPU market that you can buy for your own data centers or colocation, here is a quick rundown.

The AMD EPYC Bergamo was AMD's first foray into cloud-native compute. It has up to 128 cores/256 threads and is currently the densest publicly available x86 server CPU.

AMD removed L3 cache from its P-core design, lowered the maximum all-core frequencies to decrease overall power, and did extra work to shrink the core size. The result is the same Zen 4 core IP with less L3 cache and less die area. Less die area means more cores can be packaged together on a CPU.

Some stop with Bergamo, but AMD has another Zen 4c chip in the market. The AMD EPYC 8004 series, codenamed Siena, also uses Zen 4c, but with half the memory channels, less PCIe Gen5 I/O, and single-socket-only operation.

Some organizations that are upgrading from popular dual 16-core Xeon servers can move to single-socket 64-core Siena platforms and stay within a similar power budget per U while doubling the core count per U using 1U servers.

AMD markets Siena as the edge/embedded part, but we need to recognize that it is in the vein of current-gen cloud-native processors.

Arm has been making a huge splash in the space. The only Arm server CPU vendor out there for those buying their own servers is Ampere, led by many former members of the Intel Xeon team.

Ampere has two main chips, the Ampere Altra (up to 80 cores) and Altra Max (up to 128 cores). These use the same socket, so most servers can support either; the Max simply came out later to support up to 128 cores.

Here, the focus on cloud-native compute is even more pronounced. Instead of having beefy floating-point compute capabilities, Ampere uses Arm Neoverse N1 cores that focus on low-power integer performance. It turns out that a huge number of workloads, like serving web pages, are mostly driven by integer performance. While these may not be the cores you would choose to build a Linpack Top500 supercomputer, they are great for web servers. Since the cloud-native compute idea was to build cores and servers that can run these workloads with little to no compromise, but at lower power, that is what Arm and Ampere built.

Next up is AmpereOne. It is already shipping, but we have yet to get one in the lab.

AmpereOne uses a custom designed core for up to 192 cores per socket.

Assuming you could buy a server with AmpereOne, you would get more core density than an AMD EPYC Bergamo server (192 vs 128 cores) but fewer threads (192 vs 256 threads). If you had 1 vCPU VMs, AmpereOne would be denser; if you had 2 vCPU VMs, Bergamo would be denser. SMT has been a challenge in the cloud due to some of the security surfaces it exposes.

Next to market will be the Intel Sierra Forest. Intel's new cloud-native processor will offer up to 144/288 cores. Perhaps most importantly, it is aiming for a low power-per-core metric while maintaining x86 compatibility.

Intel is taking its efficient E-core line and bringing it to the Xeon market. We have seen massive gains in E-core performance in both embedded and lower-power lines like Alder Lake-N, where we saw greater than 2x generational performance per chip. Now, Intel is splitting its line into P-cores for compute-intensive workloads and E-cores for high-density scale-out compute.

Intel will offer Granite Rapids, an update to the current 5th Gen Xeon Emerald Rapids, for its all-P-core line later in 2024. Sierra Forest will be the first-generation all-E-core design and is planned for the first half of 2024. Intel has already announced that the next-generation Clearwater Forest will continue the all-E-core line. As a full disclosure, this is a launch I have been excited about for years.

We are going to quickly mention the NVIDIA Grace Superchip here, with up to 144 cores across two dies packaged alongside LPDDR memory.

At 500W and using Arm Neoverse V2 performance cores, one would not think of this as a cloud-native processor, but it does have something really different: the Grace Superchip has onboard memory packaged alongside its Arm CPUs, so that 500W covers both CPU and memory. There are applications that are primarily memory bandwidth bound, not necessarily core count bound, and for those, something like a Grace Superchip can actually end up being a lower-power solution than some of the other cloud-native offerings. These are also not the easiest to get and are priced at a significant premium. One could easily argue these are not cloud-native, but if our definition is doing the same work in a smaller, more efficient footprint, then the Grace Superchip might fall into that category for a subset of workloads.

If you were excited for our 2nd to 5th Gen Intel Xeon server consolidation piece, get ready. To say that the piece we did in late 2023 was just the beginning would be an understatement.

While many are focused on AI build-outs, projects to shrink portions of existing compute footprints by 75% or more are certainly possible, making more space, power, and cooling available for new AI servers. Also, just from a carbon footprint perspective, using newer and significantly more power-efficient architectures to do baseline application hosting makes a lot of sense.

The big question in the industry right now on CPU compute is whether cloud native energy-efficient computing is going to be 25% of the server CPU market in 3-5 years, or if it is going to be 75%. My sense is that it likely could be 75%, or perhaps should be 75%, but organizations are slow to move. So at STH, we are going to be doing a series to help overcome that organizational inertia and get compute on the right-sized platforms.

More:

Cloud Native Efficient Computing is the Way in 2024 and Beyond - ServeTheHome

Tech Giants Refuse U.S. Consumer Security to Oversee Digital Wallets – The Tech Report

The Computer & Communications Industry Association (CCIA), a lobby group representing major tech companies such as Apple, Google, Amazon, Meta, and X, expressed concerns about a proposed plan by the U.S. Consumer Financial Protection Bureau (CFPB).

The CFPB's proposal seeks equal oversight of digital wallet and payment app providers, including tech giants, to ensure consumer protections similar to those for traditional payment methods.

The CCIA's head of regulatory policy, Krisztian Katona, cautioned against the potential negative impact of the proposal, suggesting that overly broad or burdensome digital regulations could impede innovation and harm new startups in the industry.

The lobby group emphasized that extensive supervision like the one imposed on banks might not be the most effective approach.

In the comment letter addressed to the CFPB, the CCIA pointed out a perceived flaw in the proposal, stating that it failed to identify the specific consumer risks it intended to address.

The letter argued against viewing non-bank digital providers and banks as direct competitors, emphasizing the reality of the market, where their collaborations often benefit consumers through complementary services.

The Financial Technology Association, representing members such as PayPal and Block Inc., echoed similar concerns in a separate comment letter released on the same day. They argued that existing regulations were adequate, urging the CFPB to suspend the rulemaking process.

The association, which includes companies like Venmo and Cash App, also believed that unnecessary regulations could stifle innovation and hinder the industry's growth.

The adoption of digital payment systems has continued to increase, given the advantage they offer users over traditional methods.

Notably, digital payments offer high convenience and security on top of their user-friendly features, benefiting businesses and consumers alike.

Due to this support, there is a projected 26.93% compound growth in their adoption between 2021 and 2025.

This rise is driving a significant trend in the competitive industry: a period of consolidation in which large tech companies surpass regional and community banks in the trust consumers associate with digital payments.

The IMF acknowledges the significance of digital payments in reshaping the industry and encourages more collaborations and competition between big tech companies and regular financial institutions.

Besides that, digital wallets have proven helpful in streamlining payment processes and bringing existing systems together, whether online portals for internet-based operations or contactless terminals for face-to-face transactions.

This ease of integration enhances accessibility and convenience for customers and businesses, contributing significantly to the widespread adoption of digital wallets.

In addition to these benefits, the cost-effectiveness of digital wallets compared to traditional payment methods makes them an attractive option for businesses aiming to reduce transaction costs.

This affordability further incentivizes their adoption across various industries, positioning digital wallets as indispensable tools for most tech organizations.

Go here to read the rest:

Tech Giants Refuse U.S. Consumer Security to Oversee Digital Wallets - The Tech Report

Global Stem Cell Therapy Market to Reach Value of USD 26.15 Billion by 2030 | Skyquest Technology – GlobeNewswire

Westford, USA, Jan. 02, 2024 (GLOBE NEWSWIRE) -- According to a SkyQuest report, the global stem cell therapy market is experiencing substantial growth, primarily propelled by the increasing burden of chronic diseases such as cardiovascular disorders, neurodegenerative conditions, and orthopedic injuries. These debilitating ailments have placed a significant strain on healthcare systems worldwide.

Get sample copy of this report:

https://www.skyquestt.com/sample-request/stem-cell-therapy-market

Browse in-depth TOC on the "Stem Cell Therapy Market"

The field of stem cell research has undergone a remarkable transformation driven by significant advances in technology and scientific understanding. These breakthroughs have broadened our knowledge of stem cells and expanded their potential applications in the global stem cell therapy market. Innovative methods for isolating, growing, and differentiating stem cells have been developed, facilitating their use in various therapeutic environments.

Report Scope & Segmentation:

Browse summary of the report and Complete Table of Contents (ToC):

https://www.skyquestt.com/report/stem-cell-therapy-market

Prominent Players in Global Stem Cell Therapy Market

Allogeneic Therapy Segment is Expected to Rise Significantly due to Increasing Popularity of Stem Cell Banking

Allogeneic therapy segment has emerged as the dominant force in the stem cell therapy market, commanding a substantial market share of 59.14% in 2022. This remarkable growth can be attributed to several key factors. Firstly, allogeneic therapies often come with higher pricing, contributing significantly to revenue generation. Moreover, the increasing popularity of stem cell banking, which involves collecting and storing allogeneic stem cells for potential future use, has driven demand for these therapies.

The market in North America has firmly established its dominance in the stem cell therapy market, commanding the largest revenue share at 44.56% in 2022. One key driver is the presence of innovative companies and major regional market players. North America is home to a robust and dynamic biotechnology and pharmaceutical industry, fostering stem cell therapy product development, production, and commercialization.

Autologous Therapy Segment is Expected to Dominate Market Due to Lower Risk of Complications

Autologous therapy segment is poised to experience significant growth over the forecast period, and several key factors contribute to this trajectory in the stem cell therapy market. One primary driver is the lower risk of complications associated with autologous treatments, as these therapies utilize a patient's stem cells, minimizing the chances of immune rejection or adverse reactions. Additionally, autologous therapies are often more affordable and accessible for patients, making them attractive.

Regional market in the Asia Pacific region is poised to become a significant growth driver in the stem cell therapy market, with a projected CAGR of 16.09% expected from 2023 to 2030. The region boasts a robust product pipeline of stem cell-based therapies, with ongoing research and development initiatives driving innovation.

A comprehensive analysis of the major players in the stem cell therapy market has been recently conducted. The report encompasses various aspects of the market, including collaborations, mergers, innovative business policies, and strategies, providing valuable insights into key trends and breakthroughs in the market. Furthermore, the report scrutinizes the market share of the top segments and presents a detailed geographic analysis. Lastly, the report highlights the major players in the industry and their endeavors to develop innovative solutions to cater to the growing demand.

Key Developments in Stem Cell Therapy Market

Speak to Analyst for your custom requirements:

https://www.skyquestt.com/speak-with-analyst/stem-cell-therapy-market

Key Questions Answered in the Stem Cell Therapy Market Report

Related Reports in SkyQuest's Library:

Global Protein Therapeutics Market

Global Chemiluminescence Immunoassay Analyzers Market

Global Biobanking Market

Global Epigenetics Market

Global Microplate Reader Market

About Us:

SkyQuest Technology is a leading growth consulting firm providing market intelligence, commercialization and technology services. It has 450+ happy clients globally.

Address:

1 Apache Way, Westford, Massachusetts 01886

Phone:

USA (+1) 617-230-0741

Email: sales@skyquestt.com

View original post here:

Global Stem Cell Therapy Market to Reach Value of USD 26.15 Billion by 2030 | Skyquest Technology - GlobeNewswire

Crypto Gambling: A Boon for Bettors or Gateway to Addiction? – Crypto Times

While blockchain technology has been around since 2009, it's the recent explosion of Bitcoin and other cryptocurrencies that's making waves across industries, including gambling.

Online crypto casinos are popping up left and right, attracting players with the promise of faster transactions, increased anonymity, and potentially generous bonuses. But before you jump into this exciting trend, it's crucial to understand the ins and outs of crypto gambling and learn how to play responsibly.

This guide will equip you with the knowledge you need to navigate the world of crypto casinos safely and make informed decisions.

Crypto gambling refers to the act of wagering cryptocurrencies on games of chance or skill. This can be done on traditional online gambling platforms that have added support for crypto payments, or on dedicated crypto gambling platforms.

Unlike traditional online gambling platforms that rely on standard banking methods, crypto gambling operates within a decentralized financial ecosystem. This key distinction allows players to seamlessly convert their fiat currencies into Bitcoin or other cryptocurrencies, facilitating quick and straightforward transactions.

The rise of cryptocurrencies has not gone unnoticed in the gambling world. Many sites, including those not registered with GamStop, are increasingly adopting this digital currency, according to insights from NonGamStopBets UK. The appeal lies in the simplicity and efficiency of crypto transactions, which benefit both players and gambling operators.

One of the notable perks of using cryptocurrencies for gambling is the exclusive access it provides. Players who deposit using this system can enjoy all available games and bonuses.

In a bid to further encourage the use of digital currencies, many gambling platforms are offering additional incentives to crypto users. These incentives are designed to heighten the appeal of crypto gambling and motivate more players to explore this innovative payment method.

The integration of cryptocurrencies into the world of gambling has opened up a new frontier, offering players and platforms alike a range of exciting advantages.

Let's dive into some of the key benefits of using crypto for your next gambling adventure:

Say goodbye to the sluggishness of traditional banking methods! Crypto transactions are notoriously fast, often settling within minutes compared to the days or even weeks it can take for credit card or bank transfers. Additionally, crypto transactions typically incur lower fees, leaving you with more of your hard-earned funds to play with.

Gone are the days of sharing your personal and financial information with online gambling platforms. Crypto gambling allows you to remain anonymous, safeguarding your sensitive data from potential breaches. Transactions are recorded on a public blockchain ledger, but your identity remains obscured, adding an extra layer of security and peace of mind.

Unlike traditional payment methods that may be restricted by geographical boundaries, cryptocurrencies operate on a global network. This means you can access online gambling platforms no matter where you are in the world, opening up a wider range of options and potentially better odds.

Many crypto gambling platforms offer enticing bonuses and rewards specifically for players who use digital currencies. This could include welcome bonuses, deposit match bonuses, and even free spins or cashback offers. Taking advantage of these exclusive perks can boost your bankroll and give you a head start on your gambling journey.

Cryptocurrencies are known for their volatility, which can be a double-edged sword. While the value of your winnings could fluctuate, potentially leading to significant losses, it also presents the opportunity for higher returns. If the market swings in your favor, your winnings could be substantially boosted compared to traditional fiat currencies.

Some crypto gambling platforms leverage blockchain technology to offer provably fair games. This means the fairness of each game can be mathematically verified by anyone, ensuring transparency and trust in the gambling process. No more black-box algorithms or shady dealings: with provably fair games, you can rest assured the odds are exactly as advertised.

The decentralized nature of blockchain technology opens up exciting possibilities for the future of gambling. With crypto, we can expect to see more innovative platforms emerge, offering unique features and gameplay experiences that were previously unthinkable.

Also Read: Top 5 Myths Surrounding Crypto Online Casinos

While crypto gambling offers an array of enticing advantages, it's crucial to be aware of the significant challenges and risks that come with it.

Before diving headfirst into this new frontier, consider the following:

The volatile nature of cryptocurrencies is perhaps the biggest risk. Your winnings (and losses) can fluctuate dramatically based on market movements, potentially leading to significant financial setbacks. Remember, what could be a big win today could evaporate tomorrow due to a sudden market dip.

The decentralized nature of the crypto world attracts both genuine platforms and unscrupulous actors. Be wary of phishing scams, fake exchanges, and unreliable platforms. Always research thoroughly before depositing any funds and prioritize platforms with strong security measures and positive user reviews.

Unlike traditional gambling, crypto gambling exists in a largely unregulated space. This means there's no overarching framework to protect consumers from unfair practices, fraudulent operators, or disputes. Proceed with caution, as you may have limited recourse if things go wrong.

Using cryptocurrencies and navigating unfamiliar blockchain technology can be challenging for newcomers. Understanding wallets, private keys, transactions, and technical jargon can be a steep learning curve. Ensure you have a solid grasp of the technology before venturing into crypto gambling.

The anonymity and convenience associated with crypto gambling can exacerbate the risk of problem gambling. The ease of depositing and playing without traditional verification processes can lead to uncontrolled spending and potentially dangerous habits. Be mindful of your playing patterns and seek help if necessary.

The anonymity of cryptocurrencies can potentially attract those seeking to engage in illegal activities such as money laundering or illegal gambling operations. Ensure you fully understand the legal implications of crypto gambling in your jurisdiction and avoid platforms with shady dealings.

By acknowledging the challenges and risks involved, you can make informed decisions and navigate the world of crypto gambling safely and responsibly.

Also Read: How Safe Are Crypto Casinos?

Uncontrolled gambling leading to addiction is another challenge the industry faces. Cryptocurrencies give casino players higher transaction limits, which makes it harder for them to resist the temptation to wager a bit more.

Users should develop self-control and implement proper bankroll management strategies when playing slots and games. Setting budget limits, and never exceeding them, is the primary rule every gambler must adhere to.

Limiting time spent in crypto casinos is also a great way to avoid potential problems. Players should remember that gambling is just entertainment. Some casinos regularly remind their members of the importance of taking a break and switching to other activities.

The allure of crypto gambling is undeniable: faster transactions, anonymity, and potentially lucrative rewards. However, navigating this new frontier requires caution and careful selection of the platform you entrust your digital fortune to.

To ensure a safe and enjoyable experience, consider these key factors when choosing your crypto gambling playground:

By carefully considering these factors, you can navigate the crypto gambling landscape with confidence and choose a platform that aligns with your needs and priorities.

So, step into the exciting world of crypto gambling with open eyes and a cautious heart. By making informed choices and prioritizing responsible play, you can ensure a thrilling and ultimately rewarding experience in this digital domain.

The future of crypto gambling shimmers with a kaleidoscope of possibilities. As blockchain technology matures and regulations adapt, expect even faster transactions, seamless cross-border play, and an explosion of innovative game experiences.

Imagine virtual casinos bustling with life across time zones, fueled by decentralized platforms offering provably fair games and transparent governance.

Non-Fungible Tokens (NFTs) could revolutionize ownership, allowing players to hold a stake in the games they love or trade unique in-game assets.

Yet, with this exhilarating potential comes the responsibility to tread cautiously. Robust regulatory frameworks and player-empowering tools are crucial.

The future of crypto gambling hinges on striking a balance between innovation and responsible play, ensuring a thrilling, rewarding journey for all involved.

Also Read: Navigating Crypto's Landscape with 5 Key Trends in 2024

As we venture further into the digital age, the intersection of cryptocurrency and gambling presents a landscape rich with opportunities and challenges. The allure of enhanced security, speed, and global access positions crypto gambling as a significant player in the future of online gaming.

Yet, it's imperative to navigate this terrain with an informed and cautious approach, acknowledging the volatility, regulatory uncertainties, and ethical considerations. Embracing responsible gambling practices becomes crucial in this context.

Ultimately, the trajectory of crypto gambling will be shaped by technological innovation, regulatory frameworks, and the evolving preferences of the digital consumer, making it a fascinating sector to watch in the coming years.

View original post here:

Crypto Gambling: A Boon for Bettors or Gateway to Addiction? - Crypto Times

Metaverse cloning tech uses AI to create virtual versions of you that live in games you cant always c… – The Sun

ARTIFICIAL intelligence cloning is poised to become the next big thing in the technology sector - and maybe even our lives.

Meta recently unveiled its AI-powered chatbots and many of them feature likenesses of celebrities - or celeb AI clones.

This is thanks to its Llama 2 technology, which can generate AI "characters" or "animations" based on real people.

Another company called Delphi lets users create virtual clones of themselves or anyone else.

To generate an AI clone via Delphi, all users need to do is upload some form of identification and as many as thousands of files, including emails, chat transcripts, and even YouTube videos.

It's apparent that this technology is quickly taking over the industry and this is only the beginning, experts say.

Michael Puscar, co-founder of AI firm NPCx, which is developing its own AI cloning technology for the gaming sector, explains the phenomenon further.

"Our aim is to allow video game players toclonethemselves into video games, acting on their behalf in the game when theyre unavailable to play," he told The U.S. Sun in an email.

"You can imagine the following situation: you and I are set to play Call of Duty tonight but at the last minute, your partner unknowingly made a dinner reservation. Now Im stuck, or am I? Im not if I can play with or against yourclone," Puscar said.

NPCx's product is called BehaviorX, and it has not yet been released to the public, he said, but it could be central to the development of the metaverse.

The term metaverse was popularized by Meta CEO Mark Zuckerberg and describes a virtual world that combines social media, cryptocurrency, augmented reality, and gaming.

"Our clones need to exist in not just a video game environment but in the Metaverse as well," Puscar said.

"In both cases, the goal is such that when you interact with these clones they are in every way indistinguishable from the person from whom they were cloned."

To create the clones, NPCx asks players to play the game while it observes them and their environment in great detail.

"We specifically ask them to take certain actions in the game, not unlike how actors are asked to take specific actions on a motion capture stage," Puscar said.

"This gives us what we need to train our models and create theclone."

Puscar added that by generating characters based on real-world people, the company can also create non-player characters (NPCs) with deep personalities, who act and react in realistic ways.

When asked what the appeal of AI clones is in gaming, Puscar had a simple answer.

"For gamers, playing alongside or againstAIclonesof real-world players orcelebritiesadds an element of realism and excitement to the gaming experience," he said.

"It's about creating a more engaging, interactive, and personalized form of entertainment that resonates with the user's interests and preferences."

Beyond gaming and chatbots, Puscar anticipates seeing AI cloning technology employed in a variety of applications.

"This could include virtual training environments, interactive educational tools, personalized digital assistants, and more," he said.

"The entertainment industry, in particular, stands to benefit significantly, with possibilities ranging from personalized movie experiences to virtual concerts featuring digitalclonesof artists.

Still, while this all sounds like good fun, the ethics around digital clones are "perilous," Puscar explained.

"Once youve trained yourclone, your likeness is acting in ways out of your control. In theory, if the algorithms are working properly, it is acting in ways that you would act," he said.

"But we cannot control the counterparty, and you can imagine situations where someone nefarious decides to simulate sexual acts with aclone, uses profane language, or otherwise attempts to put them into compromising situations."

Therefore, it is imperative to make sure that clones are created and used ethically, he said.

More here:

Metaverse cloning tech uses AI to create virtual versions of you that live in games you cant always c... - The Sun

Busted! Drive-Thru Run by "AI" Actually Operated by Humans in the Philippines

The AI, which takes orders from drive-thru customers at Checkers and Carl's Jr, relies on humans for most of its customer interactions.

Mechanical Turk

An AI drive-thru system used at the fast-food chains Checkers and Carl's Jr isn't the perfectly autonomous tech it's been made out to be. The reality, Bloomberg reports, is that the AI heavily relies on a backbone of outsourced laborers who regularly have to intervene so that it takes customers' orders correctly.

Presto Automation, the company that provides the drive-thru systems, admitted in recent filings with the US Securities and Exchange Commission that it employs "off-site agents" in countries like the Philippines who help its "Presto Voice" chatbots in over 70 percent of customer interactions.

That's a lot of intervening for something that claims to provide "automation," and is yet another example of tech companies exaggerating the capabilities of their AI systems to belie the technology's true human cost.

"There’s so much hype around AI that everyone is misunderstanding what this tool is," Shelly Palmer, who runs a tech consulting firm, told Bloomberg. "Everybody thinks that AI is some kind of magic."

Change of Tune

According to Bloomberg, the SEC informed Presto in July that it was being investigated for claims "regarding certain aspects of its AI technology."

Beyond that, no other details have been made public about the investigation. What we do know, though, is that the probe has coincided with some revealing changes in Presto's marketing.

In August, Presto's website claimed that its AI could take over 95 percent of drive-thru orders "without any human intervention" — clearly not true, given what we know now. In a show of transparency, that was changed in November to claim 95 percent "without any restaurant or staff intervention," which is technically true, yes, but still seems dishonest.

That shift is part of Presto's overall pivot to its new "humans in the loop" marketing shtick, which touts its behind-the-scenes laborers as lightening the workload for the actual restaurant workers. The whole AI thing, it would seem, is just the packaging it comes in, and the mouthpiece that frustrated customers have to deal with.

"Our human agents enter, review, validate and correct orders," Presto CEO Xavier Casanova told investors during a recent earnings call, as quoted by Bloomberg. "Human agents will always play a role in ensuring order accuracy."

Know Its Limits

The huge hype around AI can obfuscate both its capabilities and the amount of labor behind it. Many tech firms probably don't want you to know that they rely on millions of poorly paid workers in the developing world so that their AI systems can properly function.

Even OpenAI's ChatGPT relies on an army of "grunts" who help the chatbot learn. But tell that to the starry-eyed investors who have collectively sunk over $90 billion into the industry this year without necessarily understanding what they're getting into.

"It highlights the importance of investors really understanding what an AI company can and cannot do," Brian Dobson, an analyst at Chardan Capital Marketts, told Bloomberg.

More on AI: Nicki Minaj Fans Are Using AI to Create "Gag City"

Read the original post:
Busted! Drive-Thru Run by "AI" Actually Operated by Humans in the Philippines

‘TIS THE SEASON FOR STAR OF THE SEAS: ROYAL CARIBBEAN OPENS NEXT ICONIC VACATION – Royal Caribbean Press Center

The Latest in the Lineup of the World's Best Vacations Debuts August 2025 in Port Canaveral (Orlando), Florida

MIAMI, Dec. 5, 2023 – It's opening day for the next bold combination of every vacation. Royal Caribbean International revealed the first look at the latest ship in the best-selling Icon Class, Star of the Seas, and the vacations in store for every type of family and adventurer. Starting August 2025, vacationers can get away in a new way from Port Canaveral (Orlando), Florida, with 7-night vacations to the Caribbean and the cruise line's top-rated private island, Perfect Day at CocoCay, The Bahamas. Star's debut lineup is now open on Royal Caribbean's website, and Crown & Anchor Society loyalty members have special access to book today in advance of the official opening on Wednesday, Dec. 6.

Adventurers can island-hop in style on the next iconic vacation across eastern and western Caribbean destinations and The Bahamas. The newly opened vacations feature idyllic locales like Basseterre, St. Kitts and Nevis; Cozumel, Mexico; Philipsburg, St. Maarten; Roatan, Honduras; and San Juan, Puerto Rico. Plus, on every getaway, vacationers can look forward to kicking back or going all out at Perfect Day at CocoCay. The cruise line's one-of-a-kind private island destination features everything from 13 waterslides to the largest freshwater pool in the Caribbean and The Bahamas, and the island's first adults-only oasis, Hideaway Beach (opening January 2024), with a private beach, pools and spots for drinks and bites, exclusive cabanas, live music and more.

On the heels of welcoming Icon of the Seas to the family two months before its January 2024 debut, Royal Caribbean is following up the historic response to the first ship in the Icon Class lineup by introducing the revolutionary combination of experiences to Port Canaveral (Orlando) for the first time. Star will feature the best of every vacation, from the beach retreat to the resort escape and the theme park adventure, across eight neighborhoods that are destinations in themselves, including Thrill Island; Chill Island; AquaDome, the tranquil oasis by day and vibrant hot spot at night; and the open-air Central Park. Between more than 40 ways to dine and drink, cutting-edge entertainment across the cruise line's four signature stages (air, ice, water and theater) and a lineup of activities for adults, kids, teens and the whole family, everyone can make memories their way every day without compromise.

The Icon Class highlights coming to Star include adrenaline-pumping thrills like the Category 6 waterpark's six record-breaking waterslides and Crown's Edge, part skywalk, part ropes course and part thrill ride, as well as unrivaled ways to chill across seven pools for every vibe and mood, including the swim-up bar Swim & Tonic; Cloud 17, the adults-only retreat; and The Hideaway's one-of-a-kind infinity pool suspended 135 feet above the ocean. And while families can spend time together and on their own adventures throughout Star, they can stay and play all day at Surfside. The neighborhood designed for young families features ways to splash for all ages, dedicated restaurants and even a bar, The Lemon Post, with a menu for the grownups and one for the kids. New experiences will also make their way to the latest in the world's best family vacation lineup, to be revealed at a later date.

With Star making its debut in Port Canaveral (Orlando), Royal Caribbean is doubling down on the revolutionary combination of every vacation that was first introduced on Icon and continues to create unprecedented consumer demand. The two vacations, in two of the world's top travel destinations, Icon in Miami and Star in the greater Orlando area, will introduce an unparalleled lineup that marks the next bold moment in the new era of vacations and for Royal Caribbean.

Vacationers can explore all that has been revealed about Star to date on Royal Caribbean's website here.

About Royal Caribbean International: Royal Caribbean International, owned by Royal Caribbean Group (NYSE: RCL), has been delivering innovation at sea for more than 50 years. Each successive class of ships is an architectural marvel that features the latest technology and guest experiences for today's adventurous traveler. The cruise line continues to revolutionize vacations with itineraries to 240 destinations in 61 countries on six continents, including Royal Caribbean's private island destination in The Bahamas, Perfect Day at CocoCay, the first in the Perfect Day Island Collection. Royal Caribbean has also been voted Best Cruise Line Overall for 20 consecutive years in the Travel Weekly Readers Choice Awards.

Media can stay up to date by following @RoyalCaribPR on X and visiting RoyalCaribbeanPressCenter.com. For additional information or to make reservations, vacationers can call their travel advisor; visit RoyalCaribbean.com; or call (800) ROYAL-CARIBBEAN.

###

December 2023 Debuting August 2025 in Port Canaveral (Orlando), Florida, Royal Caribbean International's Star of the Seas is the next bold combination of every vacation, from the beach retreat to the resort escape and the theme park adventure. Star's all-encompassing Icon Class lineup has experiences in store for every type of family and adventurer to make memories their way every day, without compromise.

December 2023 Debuting August 2025 in Port Canaveral (Orlando), Florida, Royal Caribbean International's Star of the Seas is the next bold combination of every vacation, from the beach retreat to the resort escape and the theme park adventure. Star's all-encompassing Icon Class lineup has experiences in store for every type of family and adventurer on 7-night vacations to the Caribbean and the cruise line's top-rated private island, Perfect Day at CocoCay, The Bahamas.

October 2023 The next revolutionary combination of the best of every vacation is on the horizon. Royal Caribbean International will follow up the introduction of Icon of the Seas with the next Icon Class ship, Star of the Seas, in the summer of 2025.

December 2023 Royal Caribbean International's Icon and Star of the Seas, setting sail January 2024 and August 2025 respectively, mark a new era of vacations with an unparalleled combination of the best of every vacation. From the beach retreat to the resort escape and the theme park adventure, each vacation's all-encompassing lineup has experiences for every type of family and adventurer to make memories without compromise.

December 2023 On Icon and Star of the Seas, adventurers are in for the ultimate thrill at the largest waterpark at sea, Category 6, in the new Thrill Island neighborhood. The six record-breaking slides reach new heights: Pressure Drop, the industry's first open free-fall slide; Frightening Bolt, the tallest drop slide at sea; Storm Surge and Hurricane Hunter, the first family raft slides with four riders per raft; and Storm Chasers, cruising's first mat-racing duo.

December 2023 On Icon and Star of the Seas, adventurers are in for the ultimate thrill at the largest waterpark at sea, Category 6, in the new Thrill Island neighborhood. The six record-breaking slides reach new heights, like Storm Surge, one of the first family raft slides with four riders per raft.

December 2023 Living life on the edge takes on a new meaning with Crown's Edge in the new Thrill Island on Icon and Star of the Seas. Part skywalk, part ropes course and part thrill ride, the adrenaline-pumping experience culminates in a surprising moment that will see vacationers swing 154 feet above the ocean.

December 2023 Chill Island's Swim & Tonic on Royal Caribbean's Icon and Star of the Seas is the vibrant swim-up bar where vacationers can have a sip and vibe to the DJ as they take a dip or kick back at the in-water loungers and tables.

December 2023 In the new Chill Island on Icon and Star of the Seas, there's a pool for every mood and each with prime ocean views. Of the seven pools, the four in this three-deck slice of paradise include Royal Caribbean's first swim-up bar at sea, Swim & Tonic; Royal Bay Pool, the largest pool at sea; and the adults-only retreat, Cloud 17.

December 2023 Vacationers looking for laidback vibes can head to Chill Island's serene, infinity-edge Cove Pool on Icon and Star of the Seas. With in-water loungers and more ways to chill, it's all about the endless blue skies and ocean views and making memories.

December 2023 Cloud 17 in the Chill Island neighborhood on Icon and Star of the Seas is an adults-only retreat, complete with endless ocean views and a dedicated bar, the signature Lime & Coconut.

December 2023 Tucked away on Icon and Star of the Seas, The Hideaway neighborhood combines the good vibes of beach club scenes around the world and uninterrupted ocean views. At the center of it all is the first suspended infinity pool at sea, surrounded by a multilevel terrace, whirlpools, a dedicated bar and a DJ.

December 2023 Perched at the top of Icon and Star of the Seas is the new AquaDome, a tranquil oasis by day and a vibrant hot spot by night. The transformational neighborhood is where guests can enjoy wraparound ocean views, a 55-foot-tall water curtain, restaurants, bars and the cruise line's marquee aqua shows at the next-level AquaTheater.

December 2023 In the reimagined Royal Promenade neighborhood on Icon and Star of the Seas is Royal Caribbean's largest and boldest ice arena, Absolute Zero. Every seat is the best seat in the house to watch cutting-edge technology and Olympic-level ice skaters merge to bring showstopping entertainment to life.

December 2023 The lineup of Sunset Suites on Icon and Star of the Seas is a new take on broadening horizons. Vacationers can enjoy every day's hues from inside or out while on their bed that faces the ocean and from their expansive balcony, including a wraparound balcony in the Sunset Corner Suite.

December 2023 In the Infinite Grand Suites on Icon and Star of the Seas, vacationers can unwind at their home away from home with stunning views from a living area that transforms into an extended open-air escape at the push of a button.

December 2023 The Panoramic Ocean View suites and rooms on Icon and Star of the Seas are among the best seats in the house. Vacationers can unwind at their home away from home with stunning perspectives of the sea, sky and destinations, thanks to wall-to-wall and floor-to-ceiling windows.

December 2023 The Family Infinite Balconies on Icon and Star of the Seas welcome families of up to six to make memories together and find "me time" all the same. The spacious room features a separate bunk alcove for kids, a split bathroom design and an infinite balcony, a living space that transforms into an extended open-air escape at the push of a button.

December 2023 Vacationers can leave compromise at the door in the Surfside Family Suites on Icon and Star of the Seas. Nestled in the Surfside family neighborhood, the rooms welcome a family of up to four guests. There's a cozy kids alcove, which transforms into a living space for all, along with a private balcony and Royal Suite Class perks.

December 2023 The Family Infinite Balconies on Icon and Star of the Seas invite families of up to six to make memories together and find "me time" all the same. The spacious room features a separate bunk alcove for kids, decked out with TVs, beds and space to hang out, a split bathroom design and an infinite balcony that turns into an open-air escape at the push of a button.

December 2023 Icon and Star of the Seas will feature the new Ultimate Family Townhouse. Spanning three levels, the perfect home away from home for families features an in-suite slide, a cinema space, karaoke, two balconies, a private entrance to the ultimate family neighborhood, Surfside, and more.

December 2023 Icon and Star of the Seas will debut the first Ultimate Family Townhouse. Spanning three levels, the perfect home away from home for families includes an in-suite slide, a cinema space, karaoke, a spacious balcony, a private patio and entrance to the ultimate family neighborhood, Surfside.

December 2023 The two-level Royal Loft Suite on board Icon and Star of the Seas is the ultimate in luxury. With more than 2,000 square feet, up to six vacationers can kick back with two bedrooms, two bathrooms, a living area, a wraparound balcony with a whirlpool, a dining area and expansive ocean views.

More:

'TIS THE SEASON FOR STAR OF THE SEAS: ROYAL CARIBBEAN OPENS NEXT ICONIC VACATION - Royal Caribbean Press Center

Three robotic missions target Moon landings over one week in January – Spaceflight Now

Intuitive Machines engineers loading the IM-1 mission Nova-C lunar lander into its custom container in Houston, TX. Image: Intuitive Machines

In a blend of planning and happenstance, two private companies and Japan's space agency are all poised to land on the Moon in the back half of January 2024.

The Japan Aerospace Exploration Agency (JAXA), Astrobotic and Intuitive Machines are all exercising distinct launch and landing options to reach the lunar surface. But all three have announced timelines that would see them land on the Moon within days of each other, if everything stays on track.

While avoiding further schedule slips is far from certain, Earth's satellite could see its busiest month ever in terms of new spacecraft arrivals.

As it happens, the last lander scheduled to launch could be the first to touch down on the Moon. Intuitive Machines' Nova-C lander is targeting liftoff between Jan. 12-16 and is set to land near the Moon's south pole (80.297°S, 1.2613°E) on either Jan. 19 or 21.

A spokesperson for Intuitive Machines said the landing opportunity on both days falls in the afternoon, Eastern time.

Trent Martin, the Vice President of Lunar Access at Intuitive Machines, told Spaceflight Now in an Oct. 27 interview that they have instantaneous launch opportunities each day during their January window. He said because their lander needs to be fueled at the launch pad, crews will perform a wet dress rehearsal several days ahead of launch.

"We will do a full fuel of our vehicle to ensure that we have the timeline down because we do a late fueling at the pad. We fuel with liquid oxygen and liquid methane, and we want to fuel as late as possible," Martin said. "SpaceX has been very accommodating and they're providing us a service that gives us liquid oxygen, liquid methane. They'll fill up until the very last minute so that we're as full as possible, so that we have the highest chance of success at landing on the Moon."

This mission, along with Astrobotic's Peregrine lander, will mark the first two fulfilled contracts under NASA's Commercial Lunar Payload Services (CLPS) program.

Onboard the Nova-C lander is a suite of NASA payloads flown under the CLPS program.

This mission also features a CubeSat payload called EagleCam from Embry-Riddle Aeronautical University, which will be launched from the lander when it's about 30 meters above the surface.

"The camera itself is actually multiple cameras, four cameras. So as this 1U CubeSat tumbles, it's taking video imagery as it falls to the surface. And so from that, within a day or two, we'll have video of us landing on the Moon," Martin said. "So, I'm super excited about that one because that will be the first time that anyone's ever actually recorded themselves landing on another planetary body."

Intuitive Machines announced on Monday that its Nova-C lander for the IM-1 mission arrived at the Cape in Florida ahead of its launch next month.

Double landing possibility

JAXA's Smart Lander for Investigating Moon (SLIM) has spent the longest in space, having launched back on Sept. 7, but depending on the timing of the IM-1 landing, it could touch down on the same day from a Coordinated Universal Time (UTC) standpoint.

According to a statement from JAXA on Tuesday, SLIM is set to begin its descent to the lunar surface at 12:00 a.m. JST on Jan. 20 (1500 UTC on Jan. 19) and touch down at 12:20 a.m. JST (1520 UTC).
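
The date arithmetic is easy to trip over, since Japan Standard Time runs nine hours ahead of UTC and the landing straddles midnight in Tokyo. A minimal sketch in Python's standard library (using only the times from JAXA's statement above) confirms the conversions:

    from datetime import datetime
    from zoneinfo import ZoneInfo  # standard library since Python 3.9

    jst, utc = ZoneInfo("Asia/Tokyo"), ZoneInfo("UTC")

    # Descent begins at 12:00 a.m. JST on Jan. 20...
    descent = datetime(2024, 1, 20, 0, 0, tzinfo=jst)
    print(descent.astimezone(utc))    # 2024-01-19 15:00:00+00:00

    # ...and touchdown follows 20 minutes later, still Jan. 19 in UTC
    touchdown = datetime(2024, 1, 20, 0, 20, tzinfo=jst)
    print(touchdown.astimezone(utc))  # 2024-01-19 15:20:00+00:00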

The next big milestone in SLIM's journey comes on Dec. 25, when it enters lunar orbit. JAXA stated that the lander, with a dry mass of 200 kg (700 kg fueled), will achieve full success if it is able to land within 100 meters of its target using its vision-based navigation system.

The target landing site for SLIM is the SHIOLI crater near the Sea of Nectar, located at 13.3°S, 25.2°E. The lander is designed to operate until lunar sunset.

Its payloads include the Multi-Band Spectral Camera (MBC), which will examine the composition of surrounding rocks, and a small probe called the Lunar Excursion Vehicle 2 (LEV-2), which separates from the main spacecraft just before landing and takes photographs from the surface.

"To satisfy the limited size of the vehicle to be [mounted] on SLIM, we had to downsize LEV-2. However, downsizing causes a decrease in running performance," said Hirano Daichi, one of the researchers involved with LEV-2, in a statement. "In order to deal with this problem, we designed the vehicle to be a spherical object with expandable wheels and a stabilizer using the transforming technologies for toys."

"Moreover, we adopted the robust and safe design technology for children's toys, which reduced the number of components used in the vehicle as much as possible and increased its reliability," he added.

Peregrine takes flight soon

The next lander to launch and the last one scheduled to land in January is Astrobotic's Peregrine lunar lander. Liftoff aboard a United Launch Alliance Vulcan rocket is set for 1:49 a.m. EST (0649 UTC) on Dec. 24. If needed, there are backup opportunities at 1:53 a.m. EST (0653 UTC) on Dec. 25 and 2:08 a.m. EST (0708 UTC) on Dec. 26.

The rocket will send the lander on a translunar injection trajectory.

"We will be close to Earth, but on a trajectory that will more or less intersect with the Moon's orbit. It's at that point, and this is within about an hour or so of launch, we're going to separate from the launch vehicle and our lander and Astrobotic's mission begins," said John Thornton, Astrobotic CEO, during a media teleconference on Nov. 29.

According to a Nov. 14 presentation by Dr. Joel Kearns, NASA Deputy Associate Administrator for Exploration, the landing window for Peregrine Mission-1 opens at 3:30 a.m. EST (0830 UTC) on Jan. 25.

Once it lands, Thornton said, Peregrine will operate for about 10 days, at which point the Sun will set on that part of the Moon and the lander will likely become too cold to operate.

"In time, we are developing capability to survive that night, but on these first missions, we're really focused on the hard enough problem, which is landing on the Moon in the first place," he said.

As with the IM-1 mission, PM-1 will also host a slate of NASA payloads as a participant in the CLPS program. During the teleconference, Thornton said he thinks about the other companies trying to land on the Moon mostly when the press asks him about them, adding that many players are needed for the lunar economy to be a successful venture.

"We need this industry to succeed. We need the CLPS program to succeed. That is the number one priority for us," Thornton said. "Of course, there is some level of competition with our competitors, but at the end of the day, it's really secondary. The most important is the industry and most important is landing success."

Read this article:

Three robotic missions target Moon landings over one week in January - Spaceflight Now

The GAO Calls on the FAA to Improve its Mishap Investigation Process – Payload

The US Government Accountability Office (GAO) says the FAA should improve its procedures for when things go awry in spaceflight. The congressional watchdog published a report yesterday that called on the FAA to 1) define criteria for when a mishap investigation should be operator-led, and 2) better evaluate the effectiveness of the process as a whole.

"Without a comprehensive evaluation of its mishap investigation process, FAA cannot be assured its process is effective, especially given the expansion of commercial space operations in recent years," the GAO report said.

The FAA's Office of Commercial Space Transportation is responsible for issuing launch licenses and investigating flight mishaps.

12% mishap rate: Mishap investigations kick in when a flight is not completed as planned, as in the case of Starship's two big kabooms this year. Out of 433 launches between 2000 and mid-January 2023, 50 were mishaps, according to the report.
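
Those two figures square with the headline number; a quick back-of-the-envelope check in Python (using only the counts from the GAO report cited above):

    mishaps, launches = 50, 433
    print(f"{mishaps / launches:.1%}")  # 11.5%, which rounds to the ~12% rate cited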

In-house? Since all launch vehicles are specialized (and literally rocket science), the FAA believes operators are best suited to sniff out root causes and identify corrective actions. The agency estimates that FAA-led investigations could take 10 to 20 times longer.

After a September anomaly with Rocket Lab's Electron, it took the FAA just 36 days to approve a Rocket Lab-led mishap investigation, and Electron was cleared to fly again. According to the agency's estimates mentioned above, an FAA-led investigation could have taken north of a year.

GAO does not necessarily disagree with that logic; instead, it is asking the FAA to better track effectiveness, share data, and develop defined criteria for when investigations should be operator-led.

"The FAA concurs with the GAO's recommendations to evaluate and further improve the FAA commercial space mishap program," the FAA said in an email to Payload. "Protecting public health and safety are at the core of the program."

Learning period: As for human spaceflight/tourism, the industry has been operating under an eight-year "learning period," during which the FAA is restricted from enacting regulations. The learning period is set to expire on Jan. 1.

The rest is here:

The GAO Calls on the FAA to Improve its Mishap Investigation Process - Payload