GPT-5 might arrive this summer as a materially better update to ChatGPT – Ars Technica

When OpenAI launched its GPT-4 AI model a year ago, it created a wave of immense hype and existential panic from its ability to imitate human communication and composition. Since then, the biggest question in AI has remained the same: When is GPT-5 coming out? During interviews and media appearances around the world, OpenAI CEO Sam Altman frequently gets asked this question, and he usually gives a coy or evasive answer, sometimes coupled with promises of amazing things to come.

According to a new report from Business Insider, OpenAI is expected to release GPT-5, an improved version of the AI language model that powers ChatGPT, sometime in mid-2024, likely during the summer. Two anonymous sources familiar with the company have revealed that some enterprise customers have recently received demos of GPT-5 and related enhancements to ChatGPT.

One CEO who recently saw a version of GPT-5 described it as "really good" and "materially better," with OpenAI demonstrating the new model using use cases and data unique to his company. The CEO also hinted at other unreleased capabilities of the model, such as the ability to launch AI agents being developed by OpenAI to perform tasks automatically.

We asked OpenAI representatives about GPT-5's release date and the Business Insider report. They responded that they had no particular comment, but they included a snippet of a transcript from Altman's recent appearance on the Lex Fridman podcast.

Lex Fridman (01:06:13): So when is GPT-5 coming out again?

Sam Altman (01:06:15): I don't know. That's the honest answer.

Lex Fridman (01:06:18): Oh, that's the honest answer. Blink twice if it's this year.

Sam Altman (01:06:30): We will release an amazing new model this year. I don't know what we'll call it.

Lex Fridman (01:06:36): So that goes to the question of, what's the way we release this thing?

Sam Altman (01:06:41): We'll release in the coming months many different things. I think that'd be very cool. I think before we talk about a GPT-5-like model called that, or not called that, or a little bit worse or a little bit better than what you'd expect from a GPT-5, I think we have a lot of other important things to release first.

In this conversation, Altman seems to imply that the company is prepared to launch a major AI model this year, but whether it will be called "GPT-5" or be considered a major upgrade to GPT-4 Turbo (or perhaps an incremental update like GPT-4.5) is up in the air.

Like its predecessor, GPT-5 (or whatever it will be called) is expected to be a multimodal large language model (LLM) that can accept text or encoded visual input (called a "prompt"). And like GPT-4, GPT-5 will be a next-token prediction model, which means that it will output its best estimate of the most likely next token (a fragment of a word) in a sequence, which allows for tasks such as completing a sentence or writing code. When configured in a specific way, GPT models can power conversational chatbot applications like ChatGPT.
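The next-token prediction loop described above can be sketched in a few lines. The toy bigram table and its probabilities below are invented for illustration; a real LLM replaces the lookup table with a neural network scoring tens of thousands of subword tokens, but the generation loop is conceptually the same.

```python
# Toy illustration of next-token prediction: the "model" assigns a
# probability to each candidate next token, and generation repeatedly
# appends the most likely one (greedy decoding). All probabilities
# here are made up for the example.

TOY_MODEL = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
}

def predict_next(token: str):
    """Return the most likely next token, or None if the model has no guess."""
    candidates = TOY_MODEL.get(token)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

def complete(prompt, max_new_tokens=5):
    """Greedily extend the prompt one token at a time."""
    tokens = list(prompt)
    for _ in range(max_new_tokens):
        nxt = predict_next(tokens[-1])
        if nxt is None:
            break
        tokens.append(nxt)
    return tokens
```

Calling `complete(["the"])` walks the table until no continuation exists, which is exactly the "best estimate of the most likely next token" behavior the article describes, just at miniature scale.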

OpenAI launched GPT-4 in March 2023 as a major upgrade to its predecessor, GPT-3, which emerged in 2020 (with GPT-3.5 arriving in late 2022). Last November, OpenAI released GPT-4 Turbo, which dramatically lowered the inference (running) costs of OpenAI's best AI model but has been plagued with accusations of "laziness," where the model sometimes refuses to answer prompts or complete coding projects as requested. OpenAI has attempted to fix the laziness issue several times.

LLMs like those developed by OpenAI are trained on massive datasets scraped from the Internet and licensed from media companies, enabling them to respond to user prompts in a human-like manner. However, the quality of the information provided by the model can vary depending on the training data used, and also based on the model's tendency to confabulate information. If GPT-5 can improve generalization (its ability to perform novel tasks) while also reducing what are commonly called "hallucinations" in the industry, it will likely represent a notable advancement for the firm.

According to the report, OpenAI is still training GPT-5, and after that is complete, the model will undergo internal safety testing and further "red teaming" to identify and address any issues before its public release. The release date could be delayed depending on the duration of the safety testing process.

Of course, the sources in the report could be mistaken, and GPT-5 could launch later for reasons aside from testing. So, consider this a strong rumor, but this is the first time we've seen a potential release date for GPT-5 from a reputable source. Also, we now know that GPT-5 is reportedly complete enough to undergo testing, which means its major training run is likely complete. Further refinements will likely follow.


NASA Invites Media to Speak with Artemis II Moon Crew, Recovery Team – NASA

Media are invited to speak with the four Artemis II astronauts on Wednesday, Feb. 28, at Naval Base San Diego in California. The crew will fly around the Moon next year as part of NASA's Artemis campaign, becoming the first astronauts to make the journey in more than 50 years.

NASA and the U.S. Department of Defense are conducting training with the crew in the Pacific Ocean to demonstrate the procedures and hardware needed to retrieve NASA astronauts Reid Wiseman, Victor Glover, Christina Koch, and CSA (Canadian Space Agency) astronaut Jeremy Hansen after their approximately 10-day, 685,000-mile journey beyond the lunar far side and back.

The flight is the first crewed mission under NASA's Artemis campaign and will test the agency's Orion spacecraft life support systems needed for future lunar missions.

Attendees will be able to view hardware associated with the training, including a test version of Orion aboard the USS San Diego, and speak with other personnel from the agency and the Defense Department who are responsible for bringing the crew and the capsule to safety after the mission.

Media interested in attending must RSVP by 4 p.m. PST, Monday, Feb. 26, to Naval Base San Diego Public Affairs at nbsd.pao@us.navy.mil or 619-556-7359. The exact time of the planned afternoon Feb. 28 event is subject to the conclusion of testing activities.

Under Artemis, NASA will establish the foundation for long-term scientific exploration at the Moon, land the first woman, first person of color, and its first international partner astronaut on the lunar surface, and prepare for human expeditions to Mars for the benefit of all.

For more about NASA's Artemis II mission, visit:

Artemis II

-end-

Rachel Kraft Headquarters, Washington 202-358-1100 rachel.h.kraft@nasa.gov

Madison Tuttle Kennedy Space Center, Florida 321-298-5868 madison.e.tuttle@nasa.gov

Courtney Beasley Johnson Space Center, Houston 281-483-5111 courtney.m.beasley@nasa.gov


Microsoft’s AI Access Principles: Our commitments to promote innovation and competition in the new AI economy … – Microsoft

As we enter a new era based on artificial intelligence, we believe this is the best time to articulate principles that will govern how we will operate our AI datacenter infrastructure and other important AI assets around the world. We are announcing and publishing these principles, our AI Access Principles, today at the Mobile World Congress in Barcelona, in part to address Microsoft's growing role and responsibility as an AI innovator and a market leader.

Like other general-purpose technologies in the past, AI is creating a new sector of the economy. This new AI economy is creating not just new opportunities for existing enterprises, but new companies and entirely new business categories. The principles we're announcing today commit Microsoft to bigger investments, more business partnerships, and broader programs to promote innovation and competition than any prior initiative in the company's 49-year history. By publishing these principles, we are committing ourselves to providing the broad technology access needed to empower organizations and individuals around the world to develop and use AI in ways that will serve the public good.

These new principles help put in context the new investments and programs we've announced and launched across Europe over the past two weeks, including $5.6 billion in new AI datacenter investments and new AI skilling programs that will reach more than a million people. We've also launched new public-private partnerships to advance responsible AI adoption and protect cybersecurity, new AI technology services to support network operators, and a new partnership with France's leading AI company, Mistral AI. As much as anything, these investments and programs make clear how we will put these principles into practice, not just in Europe, but in the United States and around the world.

These principles also reflect the responsible and important role we must play as a company. They build in part on the lessons we have learned from our experiences with previous technology developments. In 2006, after more than 15 years of controversies and litigation relating to Microsoft Windows and the company's market position in the PC operating system market, we published a set of Windows Principles. Their purpose was to govern the company's practices in a manner that would both promote continued software innovation and foster free and open competition.

I'll never forget the reaction of an FTC Commissioner who came up to me after I concluded the speech I gave in Washington, D.C. to launch these principles. He said, "If you had done this 10 years ago, I think you all probably would have avoided a lot of problems."

Close to two decades have gone by since that moment, and both the world of technology and the AI era we are entering are radically different. Then, Windows was the computing platform of the moment. Today, mobile platforms are the most popular gateway to consumers, and exponential advances in generative AI are driving a tectonic shift in digital markets and beyond. But there is wisdom in that FTC Commissioner's reaction that has stood the test of time: As a leading IT company, we do our best work when we govern our business in a principled manner that provides broad opportunities for others.

The new AI era requires enormous computational power to train, build, and deploy the most advanced AI models. Historically, such power could only be found in a handful of government-funded national laboratories and research institutions, and it was available only to a select few. But the advent of the public cloud has changed that. Much like steel did for skyscrapers, the public cloud enables generative AI.

Today, datacenters around the world house millions of servers and make vast computing power broadly available to organizations large and small, and even to individuals. Already, many thousands of AI developers in startups, enterprises, government agencies, research labs, and non-profit organizations around the world are using the technology in these datacenters to create new AI foundation models and applications.

These datacenters are owned and operated by cloud providers, which include larger established firms such as Microsoft, Amazon, Google, Oracle, and IBM, as well as large firms from China like Alibaba, Huawei, Tencent, and Baidu. There are also smaller specialized entrants such as CoreWeave, OVH, Aruba, and Denvr Dataworks Corporation, just to mention a few. And government-funded computing centers clearly will play a role as well, including with support for academic research. But building and operating those datacenters is expensive. And the semiconductors, or graphics processing units (GPUs), that are essential to power the servers for AI workloads remain costly and in short supply. Although governments and companies are working hard to fill the gap, doing so will take some time.

With this reality in mind, regulators around the world are asking important questions about who can compete in the AI era. Will it create new opportunities and lead to the emergence of new companies? Or will it simply reinforce existing positions and leaders in digital markets?

I am optimistic that the changes driven by the new AI era will extend into the technology industry itself. After all, how many readers of this paragraph had, two years ago, even heard of OpenAI and many other new AI entrants like Anthropic, Cohere, Aleph Alpha, and Mistral AI? In addition, Microsoft, along with other large technology firms, is dynamically pivoting to meet the AI era. The competitive pressure is fierce, and the pace of innovation is dizzying. As a leading cloud provider and an innovator in AI models ourselves and through our partnership with OpenAI, we are mindful of our role and responsibilities in the evolution of this AI era.

Throughout the past decade, we've typically found it helpful to define the tenets (in effect, the goals) that guide our thinking and drive our actions as we navigate a complex topic. We then apply these tenets by articulating the principles we will apply as we make the decisions needed to govern the development and use of technology. I share below the new tenets on which we are basing our thinking on this topic, followed by our 11 AI Access Principles.

Fundamentally, there are five tenets that define Microsoft's goals as we focus on AI access, including our role as an infrastructure and platforms provider.

First, we have a responsibility to enable innovation and foster competition. We believe that AI is a foundational technology with a transformative capability to help solve societal problems, improve human productivity, and make companies and countries more competitive. As with prior general-purpose technologies, from the printing press to electricity, railroads, and the internet itself, the AI era is not based on a single technology component or advance. We have a responsibility to help spur innovation and competition across the new AI economy that is rapidly emerging.

AI is a dynamic field, with many active participants, based on a technology stack that starts with electricity, connectivity, and the world's most advanced semiconductor chips at the base. It then runs up through the compute power of the public cloud, public and proprietary data for training foundation models, the foundation models themselves, tooling to manage and orchestrate the models, and AI-powered software applications. In short, the success of an AI-based economy requires the success of many different participants across numerous interconnected markets.

You can see here the technology stack that defines the new AI era. While one company currently produces and supplies most of the GPUs being used for AI today, as one moves incrementally up the stack, the number of participants expands. And each layer enables and facilitates innovation and competition in the layers above. In multiple ways, to succeed, participants at every layer of the technology stack need to move forward together. This means, for Microsoft, that we need to stay focused not just on our own success, but on enabling the success of others.

Second, our responsibilities begin by meeting our obligations under the law. While the principles we are launching today represent a self-regulatory initiative, they in no way are meant to suggest a lack of respect for the rule of law or the role of regulators. We fully appreciate that legislators, competition authorities, regulators, enforcers, and judges will continue to evolve the competition rules and other laws and regulations relevant to AI. That's the way it should be.

Technology laws and rules are changing rapidly. The European Union is implementing its Digital Markets Act and completing its AI Act, while the United States is moving quickly with a new AI Executive Order. Similar laws and initiatives are moving forward in the United Kingdom, Canada, Japan, India, and many other countries. We recognize that we, like all participants in this new AI market, have a responsibility to live up to our obligations under the law, to engage constructively with regulators when obligations are not yet clear, and to contribute to the public dialogue around policy. We take these obligations seriously.

Third, we need to advance a broad array of AI partnerships. Today, only one company is vertically integrated in a manner that includes every AI layer from chips to a thriving mobile app store. As noted at a recent meeting of tech leaders and government officials, "The rest of us, Microsoft included, live in the land of partnerships."

People today are benefiting from the AI advances that the partnership between OpenAI and Microsoft has created. Since 2019, Microsoft has collaborated with OpenAI on the research and development of OpenAI's generative AI models, developing the unique supercomputers needed to train those models. The ground-breaking technology ushered in by our partnership has unleashed a groundswell of innovation across the industry. And over the past five years, OpenAI has become a significant new competitor in the technology industry. It has expanded its focus, commercializing its technologies with the launch of ChatGPT and the GPT Store and providing its models for commercial use by third-party developers.

Innovation and competition will require an extensive array of similar support for proprietary and open-source AI models, large and small, including the type of partnership we are announcing today with Mistral AI, the leading open-source AI developer based in France. We have also invested in a broad range of other diverse generative AI startups. In some instances, those investments have provided seed funding to finance day-to-day operations. In other instances, those investments have been more focused on paying the expenses for the use of the computational infrastructure needed to train and deploy generative AI models and applications. We are committed to partnering well with market participants around the world and in ways that will accelerate local AI innovations.

Fourth, our commitment to partnership extends to customers, communities, and countries. More than for prior generations of digital technology, our investments in AI and datacenters must sustain the competitive strengths of customers and national economies and address broad societal needs. This has been at the core of the multi-billion-dollar investments we recently have announced in Australia, the United Kingdom, Germany, and Spain. We need constantly to be mindful of the community needs AI advances must support, and we must pursue a spirit of partnership not only with others in our industry, but with customers, governments, and civil society. We are building the infrastructure that will support the AI economy, and we need the opportunities provided by that infrastructure to be widely available.

Fifth, we need to be proactive and constructive, as a matter of process, in working with governments and the IT industry in the design and release of new versions of AI infrastructure and platforms. We believe it is critical for companies and regulators to engage in open dialogue, with a goal of resolving issues as quickly as possible, ideally while a new product is still under development. For our part, we understand that Microsoft must respond fully and cooperatively to regulatory inquiries so that we can have an informed discussion with regulators about the virtues of various approaches. We need to be good listeners and constructive problem solvers in sorting through issues of concern and identifying practical steps and solutions before a new product is completed and launched.

The foregoing tenets come together to shape the new principles we are announcing below. It's important to note that, given the safety, security, privacy, and other issues relating to responsible AI, we need to apply all these principles subject to objective and effective standards to comply with our legal obligations and protect the public. These are discussed further below. Subject to these requirements, we are committed to the following 11 principles:

We are committed to enabling AI innovation and fostering competition by making our cloud computing and AI infrastructure, platforms, tools, and services broadly available and accessible to software developers around the world. We want Microsoft Azure to be the best place for developers to train, build, and deploy AI models and to use those models safely and securely in applications and solutions. This means:

Today, our partnership with OpenAI is supporting the training of the next generation of OpenAI models and increasingly enabling customers to access and use these models and Microsoft's Copilot applications in local datacenters. At the same time, we are committed to supporting other developers as they train and deploy proprietary and open-source AI models, both large and small.

Today's important announcement with Mistral AI launches a new generation of Microsoft's support for technology development in Europe. It enables Mistral AI to accelerate the development and deployment of its next generation Large Language Models (LLMs) with access to Azure's cutting-edge AI infrastructure. It also makes the deployment of Mistral AI's premium models available to customers through our Models-as-a-Service (MaaS) offering on Microsoft Azure, which model developers can use to publish and monetize their AI models. By providing a unified platform for AI model management, we aim to lower the barriers and costs of AI model development around the world for both open source and proprietary development. In addition to Mistral AI, this service is already hosting more than 1,600 open source and proprietary models from companies and organizations such as Meta, Nvidia, Deci, and Hugging Face, with more models coming soon from Cohere and G42.

We are committed to expanding this type of support for additional models in the months and years ahead.

As reflected in Microsoft's Copilots and OpenAI's ChatGPT itself, the world is rapidly benefiting from the use of a new generation of software applications that access and use the power of AI models. But our applications will represent just a small percentage of the AI-powered applications the world will need and create. For this reason, we're committed to ongoing and innovative steps to make the AI models we host and the development tools we create broadly available to AI software applications developers around the world in ways that are consistent with responsible AI principles.

This includes the Azure OpenAI Service, which enables software developers who work at start-ups, established IT companies, and in-house IT departments to build software applications that call on and make use of OpenAI's most powerful models. It extends through Models as a Service to the use of other open source and proprietary AI models from other companies, including Mistral AI, Meta, and others.

We are also committed to empowering developers to build customized AI solutions by enabling them to fine-tune existing models based on their own unique data sets and for their specific needs and scenarios. With Azure Machine Learning, developers can easily access state-of-the-art pre-trained models and customize them with their own data and parameters, using a simple drag-and-drop interface or code-based notebooks. This helps companies, governments, and non-profits create AI applications that help advance their goals and solve their challenges, such as improving customer service, enhancing public safety, or promoting social good. This is rapidly democratizing AI and fostering a culture of even broader innovation and collaboration among developers.

We are also providing developers with tools and repositories on GitHub that enable them to create, share, and learn from AI solutions. GitHub is the world's largest and most trusted platform for software development, hosting over 100 million repositories and supporting more than 40 million developers. We are committed to supporting the AI developer community by making our AI tools and resources available on GitHub, giving developers access to the latest innovations and best practices in AI development, as well as the opportunity to collaborate with other developers and contribute to the open source community. As one example, just last week we made available an open automation framework to help red team generative AI systems.

Ensure choice and fairness across the AI economy

We understand that AI innovation and competition require choice and fair dealing. We are committed to providing organizations, AI developers, and data scientists with the flexibility to choose which AI models to use wherever they are building solutions. For developers who choose to use Microsoft Azure, we want to make sure they are confident we will not tilt the playing field to our advantage. This means:

The AI models that we host on Azure, including the Microsoft Azure OpenAI API service, are all accessible via public APIs. Microsoft publishes documentation on its website explaining how developers can call these APIs and use the underlying models. This enables any application, whether it is built and deployed on Azure or other private and public clouds, to call these APIs and access the underlying models.
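As a sketch of what "accessible via public APIs" means in practice, any HTTP client can address a hosted model endpoint directly. The resource name, deployment name, and API version below are placeholders rather than real values, and the exact request schema should be taken from Microsoft's published documentation; the point is only that the call is an ordinary HTTPS POST that any application, on any cloud, can construct.

```python
# Illustrative sketch: building (but not sending) a chat-completions
# request to an Azure-hosted OpenAI model. "my-resource",
# "my-deployment", the API key, and the api-version string are all
# placeholder values for the example.
import json
from urllib.request import Request

def build_chat_request(resource, deployment, api_key, user_message,
                       api_version="2024-02-01"):
    """Construct an HTTP request object for a hosted chat model."""
    url = (f"https://{resource}.openai.azure.com/openai/deployments/"
           f"{deployment}/chat/completions?api-version={api_version}")
    body = json.dumps({
        "messages": [{"role": "user", "content": user_message}]
    }).encode("utf-8")
    return Request(url, data=body,
                   headers={"Content-Type": "application/json",
                            "api-key": api_key},
                   method="POST")

req = build_chat_request("my-resource", "my-deployment",
                         "KEY-PLACEHOLDER", "Hello!")
```

Because the endpoint is plain HTTPS plus JSON, the same request can be issued from an application deployed on Azure, on another cloud, or on-premises, which is the openness the principle describes.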

Network operators are playing a vital role in accelerating the AI transformation of customers around the world, including for many national and regional governments. This is one reason we are supporting a common public API through the Open Gateway initiative driven by the GSM Association, which advances innovation in the mobile ecosystem. The initiative is aligning all operators with a common API for exposing advanced capabilities provided by their networks, including authentication, location, and quality of service. It's an indispensable step forward in enabling network operators to offer their advanced capabilities to a new generation of AI-enabled software developers. We have believed in the potential of this initiative since its inception at GSMA, and we have partnered with operators around the world to help bring it to life.

Today at Mobile World Congress, we are launching the Public Preview of Azure Programmable Connectivity (APC). This is a first-class service in Azure, completely integrated with the rest of our services, that seamlessly provides access to Open Gateway for developers. It means software developers can use the capabilities provided by the operator network directly from Azure, like any other service, without requiring specific work for each operator.

We are committed to maintaining Microsoft Azure as an open cloud platform, much as Windows has been for decades and continues to be. That means in part ensuring that developers can choose how they want to distribute and sell their AI software to customers for deployment and use on Microsoft Azure. We provide a marketplace on Azure through which developers can list and sell their AI software to Azure customers under a variety of supported business models. Developers who choose to use the Azure Marketplace are also free to decide whether to use the transaction capabilities offered by the marketplace (at a modest fee) or whether to sell licenses to customers outside of the marketplace (at no fee). And, of course, developers remain free to sell and distribute AI software to Azure customers however they choose, and those customers can then upload, deploy, and use that software on Azure.

We believe that trust is central to the success of Microsoft Azure. We build this trust by serving the interests of AI developers and customers who choose Microsoft Azure to train, build, and deploy foundation models. In practice, this also means that we avoid using any non-public information or data from the training, building, deployment, or use of developers' AI models to compete against them.

We know that customers can and do use multiple cloud providers to meet their AI and other computing needs. And we understand that the data our customers store on Microsoft Azure is their data. So, we are committed to enabling customers to easily export and transfer their data if they choose to switch to another cloud provider. We recognize that different countries are considering or have enacted laws limiting the extent to which we can pass along the costs of such export and transfer. We will comply with those laws.

We recognize that new AI technologies raise an extraordinary array of critical questions. These involve important societal issues such as privacy, safety, security, the protection of children, and the safeguarding of elections from deepfake manipulation, to name just a few. These and other issues require that tech companies create guardrails for their AI services, adapt to new legal and regulatory requirements, and work proactively in multistakeholder efforts to meet broad societal needs. We're committed to fulfilling these responsibilities, including through the following priorities:

We are committed to safeguarding the physical security of our AI datacenters, as they host the infrastructure and data that power AI solutions. We follow strict security protocols and standards to ensure that our datacenters are protected from unauthorized access, theft, vandalism, fire, or natural disasters. We monitor and audit our datacenters to detect and prevent any potential threats or breaches. Our datacenter staff are trained and certified in security best practices and are required to adhere to a code of conduct that respects the privacy and confidentiality of our customers data.

We are also committed to safeguarding the cybersecurity of our AI models and applications, as they process and generate sensitive information for our customers and society. We use state-of-the-art encryption, authentication, and authorization mechanisms to protect data in transit and at rest, as well as the integrity and confidentiality of AI models and applications. We also use AI to enhance our cybersecurity capabilities, such as detecting and mitigating cyberattacks, identifying and resolving vulnerabilities, and improving our security posture and resilience.

We're building on these efforts with our new Secure Future Initiative (SFI). This brings together every part of Microsoft and has three pillars. It focuses on AI-based cyber defenses, advances in fundamental software engineering, and advocacy for stronger application of international norms to protect civilians from cyber threats.

As AI becomes more pervasive and impactful, we recognize the need to ensure that our technology is developed and deployed in a way that is ethical, trustworthy, and aligned with human values. That is why we have created the Microsoft Responsible AI Standard, a comprehensive framework that guides our teams on how to build and use AI responsibly.

The standard covers six key dimensions of responsible AI: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability. For each dimension, we define what these values mean and how to achieve our goals in practice. We also provide tools, processes, and best practices to help our teams implement the standard throughout the AI lifecycle, from design and development to deployment and monitoring. The approach that the standard establishes is not static, but instead evolves and improves based on the latest research, feedback, and learnings.

We recognize that countries need more than advanced AI chips and datacenters to sustain their competitive edge and unlock economic growth. AI is changing jobs and the way people work, requiring that people master new skills to advance their careers. That's why we're committed to marrying AI infrastructure capacity with AI skilling capability, combining the two to advance innovation.

In just the past few months, we've combined billions of dollars of infrastructure investments with new programs to bring AI skills to millions of people in countries like Australia, the United Kingdom, Germany, and Spain. We're launching training programs focused on building AI fluency, developing AI technical skills, supporting AI business transformation, and promoting safe and responsible AI development. Our work includes the first Professional Certificate on Generative AI.

Typically, our skilling programs involve a professional network of Microsoft certified training services partners and multiple industry partners, universities, and nonprofit organizations. Increasingly, we find that major employers want to launch new AI skilling programs for their employees, and we are working with them actively to provide curricular materials and support these efforts.

One of our most recent and important partnerships is with the AFL-CIO, the largest federation of labor unions in the United States. It's the first of its kind between a labor organization and a technology company to focus on AI and will deliver on three goals: (1) sharing in-depth information with labor leaders and workers on AI technology trends; (2) incorporating worker perspectives and expertise in the development of AI technology; and (3) helping shape public policy that supports the technology skills and needs of frontline workers.

We've learned that government institutions and associations can typically bring AI skilling programs to scale. At the national and regional levels, government employment and educational agencies have the personnel, programs, and expertise to reach hundreds of thousands or even millions of people. We're committed to working with and supporting these efforts.

Through these and other initiatives, we aim to democratize access to AI education and enable everyone to harness the potential of AI for their own lives and careers.

In 2020, Microsoft set ambitious goals to be carbon negative, water positive, and zero waste by 2030. We recognize that our datacenters play a key part in achieving these goals. Being responsible and sustainable by design has also led us to take a first-mover approach, making long-term investments to bring at least as much carbon-free electricity as we will consume onto the grids where we build and operate datacenters.

We also apply a holistic approach to the Scope 3 emissions relating to our investments in AI infrastructure, from the construction of our datacenters to engaging our supply chain. This includes supporting innovation to reduce the embodied carbon in our supply chain and advancing our water positive and zero waste goals throughout our operations.

At the same time, we recognize that AI can be a vital tool to help accelerate the deployment of sustainability solutions, from the discovery of new materials to better predicting and responding to extreme weather events. This is why we continue to partner with others to use AI to help advance breakthroughs that previously would have taken decades, underscoring the important role AI technology can play in addressing some of the most critical challenges to realizing a more sustainable future.


Read the original post:

Microsoft's AI Access Principles: Our commitments to promote innovation and competition in the new AI economy ... - Microsoft

Get Rich Quick With These 3 Cloud Computing Stocks to Buy Now – InvestorPlace

Cloud computing companies are an essential part of our day-to-day lives: they keep us interconnected and streamline our operations, allowing us to be more efficient and effective. Their technological solutions make many tasks much easier to perform, in areas ranging from finance to human resources.

If you want to take advantage of the boom and the strong demand these companies are seeing, here are three cloud computing stocks to consider adding to your portfolio.


Behind many pharmaceutical and biotech companies stands a major provider of cloud-based software solutions that streamline their operations: Veeva Systems Inc (NYSE:VEEV).

Financially, VEEV is stable and steadily growing. Its revenues are on the rise, and its net income is growing consistently, which is reflected in its market performance.

One quality that distinguishes the company is its capacity for innovation.

For example, its most recent release, the Veeva Compass Suite, is a comprehensive set of tools that gives healthcare companies a deeper understanding of existing patient populations and a picture of healthcare provider behaviors.

It's practically like having a complete and specific picture of the entire healthcare network landscape.

On top of that, the company makes a real impact on patients' lives, as its training solutions are helping many companies modernize their employee qualification processes.


Next on the list of companies involved in the cloud computing sector is Workday Inc (NASDAQ:WDAY), which specializes in providing companies with cloud-based enterprise applications for financial management and human resources.

Its software lets companies streamline the management of their financial operations and human talent.

Part of what makes the company attractive is its financial performance: in its most recent quarter, revenue increased 16.7% year over year to $1.87 billion.

One of its most important metrics, subscription revenue, grew even faster, up 18.1% to approximately $1.69 billion.

Beyond the numbers, Workday is forming important strategic alliances, including a partnership with McLaren Racing to provide the team with innovative solutions.

The partnership demonstrates Workday's versatility: it provides business solutions not only in traditional sectors but also in highly competitive industries.


Closing the list of companies essential to our day-to-day lives is the giant Oracle Corporation (NYSE:ORCL), a technology company recognized worldwide.

Oracle specializes in data management solutions and, of course, cloud computing. One of its main commitments is helping organizations improve their efficiency and optimize their operations through innovative technological solutions.

Financially, the company is in a phase of solid growth, particularly in total revenue and in its cloud division.

A standout is its cloud application suite, which has gained a strong foothold in the healthcare sector.

Major institutions such as Baptist Health Care and the University of Chicago Medicine are adopting Oracle's solutions to improve the employee experience and, of course, patient care.

In addition, Oracle is expanding its global presence with the opening of a new cloud region in Nairobi, Kenya, an expansion that makes clear its commitment to economic and technological development across the African continent.

Oracle Cloud Infrastructure's (OCI) unique architecture positions the company to offer governments and businesses in the region the opportunity to drive innovation and growth.

As of this writing, Gabriel Osorio-Mazzilli did not hold (either directly or indirectly) any positions in the securities mentioned in this article. The opinions expressed in this article are those of the writer, subject to the InvestorPlace.com Publishing Guidelines.

Gabriel Osorio is a former Goldman Sachs and Citigroup employee. He possesses discipline in bottom-up value investing and volatility-based long/short equities trading.

Read more here:

Get Rich Quick With These 3 Cloud Computing Stocks to Buy Now - InvestorPlace

Why This Brain-Hacking Technology Will Turn Us All Into Cyborgs – The Daily Beast

It felt like magic: As I moved my head and eyes across the computer screen, the cursor moved with me. My goal was to click on pictures of targets on the display. Once the cursor reached a target, I would blink, causing it to click on the target, as if it were reading my mind.

Of course, that's essentially what was happening. The headband I was wearing picked up my brain, eye, and facial signals. This data was fed through AI software that translated it into commands for the cursor. This allowed me to control what was on the screen, even though I didn't have a mouse or a trackpad. I didn't need them. My mind was doing all of the work.

"The brain, eye, and face are great generators of electricity," Naeem Kemeilipoor, the founder of brain-computer interface (BCI) startup AAVAA, told The Daily Beast at the 2024 Consumer Electronics Show. "Our sensors pick up the signals, and using AI we can interpret them."

The headband is just one of AAVAA's products that promise to bring non-invasive BCIs to the consumer market. Their other devices include AR glasses, headphones, and earbuds that all essentially accomplish the same function: reading your brain and facial signals to allow you to control your devices.

While BCI technology has largely remained in the research labs of universities and medical institutions, startups like AAVAA are looking for ways to put them in the hands (or, rather, on the heads) of everyday people. These products go beyond what we typically expect of our smart devices, seamlessly integrating our brains with the technology around us. They also offer a lot of hope and promise for people with disabilities or limited mobility, allowing them to interact with and control their computers, smartphones, and even wheelchairs.

However, BCIs also blur the lines between the tech around us and our very minds. Though they can be helpful for people with disabilities, their widespread use and adoption raise questions and concerns about privacy, security, and even a user's very personhood. Allowing a device to read our brain signals throws open the doors to these ethical considerations, so as BCIs steadily become more popular, they could become more dangerous as well.

AAVAA's BCI devices on a table at CES 2024. AAVAA is looking for ways to put them in the hands (or, rather, on the heads) of everyday people.

BCIs loomed large all throughout CES 2024, and for good reason. Beyond being able to control your devices, wearables that could read brain signals also promised to provide greater insights into users' health, wellness, and productivity habits.

There were also a number of devices targeted at improving sleep quality, such as the Frenz Brainband. The headband measures users' brainwaves, heart rate, and breathing (among other metrics) to provide AI-curated sounds and music to help them fall asleep.

"Every day is different and so every day your brain will be different," a Frenz spokesperson told The Daily Beast. "Today, your brain might feel like white noise or nature sounds. Tomorrow, you might want binaural beats. Based on your brain's reactions to your audio content, we know what's best for you."

To produce the noises, the headband used bone conduction, which converts audio data into vibrations on the skull that travel to the inner ear, producing sound. Though it was difficult to hear clearly on the crowded show floor of CES, the headband managed to produce soothing beats as I wore it in a demo.

"When you fall asleep, the audio automatically fades out," the spokesperson said. "The headband keeps tracking all night, and if you wake up, you can press a button on the side to start the sounds to put you back to sleep."

However, not all BCIs are quite as helpful as they might appear. For example, there was the MW75 Neuro, a pair of headphones from Master & Dynamic that purports to read your brain's electroencephalogram (EEG) signals to provide insights on your level of focus. If you become distracted or your focus wanes for whatever reason, it alerts you so you can maintain productivity.

Sure, this might seem helpful if you're a student looking to squeeze in some more quality study time or a writer trying to hit a deadline on a story, but it's also a stark and grim example of late-stage capitalism and a culture obsessed with work and productivity. While this technology is relatively new, it's not difficult to imagine a future where these headphones are more commonplace and, potentially, required by workplaces.

When most people think about BCIs, they typically think of brain-chip startups like Synchron and Neuralink. However, these technologies require users to undergo invasive surgeries in order to implant the technology. Non-invasive BCIs from the likes of AAVAA, on the other hand, require just a headband or headphones.

"That's what makes them so promising," Kemeilipoor explained. No longer would the technology be limited to only those users who really need it, like people with disabilities. Any user can pop on the headband and start scrolling on their computer or turning their lamps and appliances on and off.

The Daily Beast's intrepid reporter Tony Ho Tran wears AAVAA's headband, which promises to bring non-invasive BCIs to the consumer market.

"It's out of the box," he explained. "We've done the training [for the BCI] and now it works. That's the beauty of what we do. It works right out of the box, and it works for everyone."

However, the fact that it can work for everyone is a top concern for ethical experts. Technology like this creates a minefield of potential privacy issues. After all, these companies may potentially have completely unfettered access to data from our literal brains. This is information that can be bought, sold, and used against consumers in an unprecedented way.

One comprehensive review published in 2017 in the journal BMC Medical Ethics pointed out that privacy is a major concern for potential users for this reason. "BCI devices could reveal a variety of information, ranging from truthfulness, to psychological traits and mental states, to attitudes toward other people, creating potential issues such as workplace discrimination based on neural signals," the authors wrote.

To their credit, Kemeilipoor was adamant that AAVAA would not and does not have access to individual brain signal data. But the concerns are still there, especially since there are notable examples of tech companies misusing user data. For example, Facebook has been sued multiple times for millions of dollars for storing users' biometric data without their knowledge or consent. (They're certainly not the only company doing this either.)

These issues aren't going to go away, and they'll be further exacerbated by the fusion of technology and the human brain. This is a phenomenon that also brings up concerns about personhood. At what point, exactly, does the human end and the computer begin once you are able to control devices as an extension of yourself, like your arms or legs?

"The question (is it a tool or is it myself?) takes on an ethical valence when researchers ask whether BCI users will become cyborgs," the authors wrote. They later added that some ethical experts worry that being more robotic makes one less human.

Yet, the benefits are undeniable, especially for those for whom BCIs could give more autonomy and mobility. You're no longer limited by what you can do with your hands. Now, you can control the things around you simply by looking in a certain direction or moving your face in a specific way. It doesn't matter if you're in a wheelchair or completely paralyzed. Your mind is the limit.

"This type of technology is like the internet of humans," Kemeilipoor said. "This is the FitBit of the future. Not only are you able to monitor all your biometrics, it also allows you to control your devices, and it's coming to market very soon."

It's promising. It's scary. And it's also inevitable. The biggest challenge we all must face is ensuring that, as these devices become more popular and we gradually give over our minds and bodies to technology, we don't lose what makes us human in the first place.

Read more:

Why This Brain-Hacking Technology Will Turn Us All Into Cyborgs - The Daily Beast

More Is Not Always Better: How The Las Vegas Swim Club Rebuilt To The National Stage – SwimSwam

This article originally appeared in the 2023 College Preview issue of SwimSwam Magazine.

At the end of 2013, Peter Mavro and Amber Stewart were given the task of resurrecting a swim club on the verge of falling apart. With determination, a clear vision, and the influence of one of swimming's brightest minds in Russell Mark, they were able to make it happen.

When Mavro and Stewart first took over the Las Vegas Swim Club (LVSC) as head and assistant coach, respectively, there were only 25 members and a 50% athlete retention rate. Numbers had been dropping for the club ever since its training facility, the Pavilion Center Pool, shut down in 2010 and the team had to relocate. When the Pavilion reopened in 2012, LVSC had its facility back, but the culture and outlook of the club were still very bleak.

At that time, LVSC's only purpose was to serve as a feeder organization: kids showed up to swim for three months, and then they either quit or moved on to the bigger, more lucrative Sandpipers of Nevada (SAND) club that was just ten minutes down the road. In other words, LVSC's biggest competition was the club that would later go on to produce six different Olympic and World Championship team members in the next decade.

"There was just this constant revolving door of kids coming in and kids coming out," Mavro said. "You can't build a consistent culture in that kind of scenario."

In front of them, Mavro and Stewart faced an organization that was barely holding itself together, and the fact that they were next-door neighbors with the biggest age group talent hotbed in the country only rubbed more salt into the wound. It was very easy for them to raise the white flag of surrender, but instead, they decided from day one that they were committed to reform.

"I remember saying, what is our goal? What are we trying to be?" Mavro said. "From the day that I started working with this team, my mindset was to teach these kids, teach our families what it means to be in a committed environment, what it means to work hard, and not have it be a revolving door of swimmers."

It started from the little things, such as establishing attendance requirements, holding team meetings with parents, getting age groupers to have their cap and goggles on before practices started, never ending practice early unless it was absolutely necessary, and finding alternate pools instead of canceling practice when the Pavilion wasn't available.

Another thing that Mavro and Stewart had to do was put their egos aside: even though Mavro is the head coach, he mainly works with age groupers, while assistant coach Stewart works with the older swimmers in the National Team group. That's a non-traditional arrangement, with most clubs assigning their head coach to their fastest group of athletes.

"I believe in assessing my own strengths. That 10-to-14-year-old level is where my biggest strength is, so why wouldn't I be in that group? I believe Amber's biggest strength is to really inspire kids to do the impossible on a daily basis, so why wouldn't she be in that group?" Mavro said. "It just made so much sense to me, and that's why we're set up the way we're set up, so everybody can focus on their strengths."

Every small step that Mavro and Stewart took helped build LVSC from the bottom up, and in the end, it culminated into a growing culture of commitment and hard work.

Soon enough, the work of Mavro and Stewart began to pay off. In 2018, LVSC qualified swimmers for Sectionals for the first time. In 2019, Jack Gallob became LVSC's first Summer and Winter Juniors qualifier. In 2022, Owen Carlsen committed to Utah as the club's first Power 5 conference commit, and his brother Max is on track to becoming one of the top recruits in the class of 2025. The number of swimmers in LVSC grew to around 200 and held steady, in line with Mavro and Stewart's mission to create a team that is serious about swimming but still has that small, family-based feel.

"That's what separates us," Mavro said. "When you're right next to another gigantic team that's shown a lot of success, you really have to give your families reason to believe that they're getting something special. We want to build an environment where people want to be, a hard-work environment where the expectations are high, but we do not have coaches that yell, make kids feel bad about themselves, any of those things. It's really about inspiring the kids to want to do it for themselves."

"We're not just trying to throw a bunch of kids in the pool and let the best athletes find their way. We are trying to develop every single athlete to the best they can be."

"More is not always better; better is better," Stewart added, making a statement that is frequently repeated throughout the national group that she coaches.

With LVSC and SAND located so close together, they sometimes share a pool and hold practices back-to-back. When Stewart first began coaching LVSC's national group, she noticed that her swimmers acted complacently in front of the SAND swimmers, standing aside and waiting for them to finish warm-down even though it was LVSC's practice time. Over time, though, Stewart decided that the dynamic and mentality of her program needed to change.

"One of the first major things that I did as a coach was [make it clear that] we get in the pool at 4:30, we get in on time," Stewart said. "There was a little bit of friction in the beginning, but [SAND] became very respectful of that and realized, oh, okay, they're serious. They aren't gonna stand around just because we have this extra 300 to do."

Again, it was little things like these that sent the message that LVSC was no longer going to be a pushover, and that it deserved the same respect as any other established club. Even though Stewart and Mavro don't want the entire identity of their club to revolve around being next to the Sandpipers, they acknowledge that getting over the hurdle of being overshadowed by their neighbors is a big part of what makes LVSC the club it is today.

In the early days of Stewart and Mavro's coaching, LVSC had always looked toward SAND, with discussions at board meetings constantly centering on trying to emulate what SAND does. Over time, however, they learned how to co-exist with their neighborhood giant and build their own distinct identity in its presence.

"My mindset was, we're not Sandpipers, we're LVSC. We don't need to do what they do, and frankly, we're not gonna be able to compete with them that way," Mavro said. "If we're trying to build a mini-Sandpipers, why would a swimmer or a family ever stay with us when the Sandpipers are already there?"

Beyond the fact that they are both located in Las Vegas, LVSC and SAND don't actually have much in common. SAND has over 500 swimmers, while LVSC is less than half its size. LVSC has a lower-volume training philosophy than SAND: SAND does three doubles a week, while LVSC doesn't do doubles during the school year because of pool availability issues, which Mavro thinks acclimatizes swimmers to the training-hour limits in the NCAA. Not all swimmers need the same thing, and LVSC offers families in the Las Vegas area an alternative if their swimmers don't fit the Sandpiper lifestyle.

"We are very different programs," Stewart said. "With the approach that we have, which is different from theirs, we have kept swimmers in our program that probably would not have stayed swimming otherwise."

Besides some tension here and there, not much bad blood exists between LVSC and SAND. Mavro is good friends with Sandpiper age group coach Chris Barber; the two of them are open books, talking about practice strategy, training, and season planning whenever they see each other.

At the end of the day, Mavro and Stewart believe that having SAND right next to them ultimately makes LVSC a stronger club, and they are grateful for the challenges that come with it.

"Having the Sandpipers right next to us holds us to an incredibly high standard," Mavro said. "We cannot get away with making lazy choices. As much as it can be frustrating, it is our greatest motivator by far. We're better because we're right next to them.

"The teams [of Las Vegas] have quality staff that are working against each other, but they are also working to build a really fast swimming community."

"Yeah, there's friction and frustration, but at the end of the day, we're all here to support each other and make the world of swimming together," Stewart added.

Less than ten years after their rebuild, LVSC was seeing the kind of national-level success that some much older clubs haven't experienced before. Prior to 2019, the club didn't know what coaching Junior National and DI-caliber swimmers was like; they ran headlong into a lot of firsts and learned by doing.

When Jack Gallob, LVSC's first Winter Juniors qualifier, came to the National Team group for the first time, he was instantaneously moved from the slowest lane to the fastest lane with no in-betweens. It became clear that he was a one-of-a-kind type of swimmer, and shortly thereafter, Stewart began giving him sets that nobody else in the club was capable of doing.

Initially, the transition for Gallob was challenging. In fact, he even complained to Stewart that his situation wasn't fair. But Stewart didn't buy it.

"I told [him], I think what you're trying to do is say that the definition of fair is that everybody gets the same thing," Stewart said. "But if that's the definition of fair that I abide by as a coach, then I'm not doing a good job, because my definition of fair is that everybody gets what they need. And [he needed] something that [was] different from the rest of the athletes in the pool.

"And he remembers that conversation; it was really impactful, and a light bulb switched. I think he realized, oh, okay, I don't wanna get away with less. I wanna get away with what I can do and maximize what I can do."

Three years after swimming at his first Winter Juniors in 2019, Gallob is now set to swim at Indiana University-Purdue University Indianapolis (IUPUI) starting in the fall of 2023. Since 2019, he has taken his 100 back personal best from 50.21 to 49.18 and his 200 back personal best from 1:50.15 to 1:47.56, among drops in other events.

After Gallob, the success train just kept on rolling at LVSC, with Owen Carlsen excelling in distance freestyle and committing to Utah, and Max Carlsen becoming the 8th-fastest 15-year-old of all-time in the 1000 free. Joe Christ came into LVSC with a 2:27 200 free and dropped down to a 1:39 by the time he was a senior and committed to Air Force. At the Carlsbad Sectionals in March 2023, LVSC won first place in the small team division.

Once Gallob reached heights that had never been attained before, it caused a domino effect.

"Seeing [Gallob] do it makes that belief for the next group of kids," Mavro said. "When you see your teammates do these sorts of things, it does help you with that belief, so that when the coach sits down with you and looks at your individual goals, let's say it's making Futures, Amber [can say] well, I think your goals need to be a little higher than that. You've got more in you; you've seen your teammates do it."

Stewart said that she and Mavro discuss goals with all of their swimmers, trying to make them both ambitious and realistic. Once goals are decided upon, they get laminated and put in swimmers' gear bags so the athletes can be reminded of them every day.

Increasing success also means greater chances of a swimmer competing at the highest level in college, which was also a hurdle that Mavro and Stewart had to overcome, as they had never experienced intense college recruiting until recently. However, just like with everything else, they adapted.

Stewart, who swam in college herself at Brigham Young University, used her own NCAA connections to help her swimmers in the recruiting process. Gallob had relatives who swam for Kentucky, and they came over to LVSC to speak about the college experience. Ben Loorz, the head coach of the University of Nevada, Las Vegas, once held a PowerPoint night at the Pavilion. In addition, Stewart herself listened to swimming podcasts and exchanged ideas with other coaches on Facebook to familiarize herself with recruiting.

"It's not my forte by any means, but having relationships and being willing to reach out to coaches when coaches reach out to us and making sure that we're responsive to them is [something that I'm trying to be better at]," Stewart said. "We're kind of learning as we go."

However, arguably the best resource for LVSC has been Russell Mark, who is best known as USA Swimming's former High Performance Manager and who now works for the American Swimming Coaches Association (ASCA). Mavro knew Mark from their time together at the University of Virginia, and the two are close friends. Mark frequently analyzes the strokes and techniques of LVSC swimmers via videos that Mavro sends him, and provides LVSC with connections to the greater swimming community.

For example, LVSC's national group got invited to an ASCA clinic via Mark, where they got to meet names like Ohio State head coach Bill Dorenkott, Virginia head coach Todd DeSorbo, and Mel Marshall, the coach of world record holder Adam Peaty. At that clinic, DeSorbo arranged a time with Mavro and Stewart for them to travel to Virginia and watch one of his team's practices.

"It's random for a small team in Las Vegas to happen to have access to what I would consider the greatest swimming mind in this country," Mavro said. "Without Russell, we would not be where we are. Everything I've done in coaching and developing the kids is based on everything I've learned from him, as far as stroke technique."

In the end, however, everything circles back to the values that Mavro and Stewart wanted to ingrain in LVSC from the very beginning. It's not about the accolades, college commits, or times; it's about developing a family-friendly culture, and about swimmers growing into the best versions of themselves inside and outside the pool.

"I can't say how proud I am of what we've been able to do with our program and what our program's athletes do, because if they don't buy in then I'm out of a job," Mavro said. "You can't have a national group if you don't instill the tools that the kids need to be there in the first place. I want to see the kids succeed, but I want to see the kids fail and learn from it and learn how to take that next step.

"One of the things we hear oftentimes is 'your kids are always so nice and respectful.' And that's always going to mean more to me than 'your kids were so fast.'"

Read more:

More Is Not Always Better: How The Las Vegas Swim Club Rebuilt To The National Stage - SwimSwam