‘Materially better’ GPT-5 could come to ChatGPT as early as this summer – ZDNet

OpenAI has released several iterations of the large language model (LLM) powering ChatGPT, including GPT-4 and GPT-4 Turbo. Now, sources say the highly anticipated GPT-5 could be released as early as mid-year.

According to reports from Business Insider, GPT-5 is expected to be a major leap from GPT-4; early testers have described it as "materially better." The new LLM has reportedly impressed testers and enterprise customers, including CEOs who have seen demos of GPT bots tailored to their companies and powered by GPT-5.

A customer who got a GPT-5 demo from OpenAI told BI that the company hinted at new, yet-to-be-released GPT-5 features, including the ability to interact with other AI programs that OpenAI is developing. These AI programs, which OpenAI calls AI agents, could perform tasks autonomously.

This feature hints at an interconnected ecosystem of AI tools developed by OpenAI, which would allow its different AI systems to collaborate to complete complex tasks or provide more comprehensive services.

The specific launch date for GPT-5 has yet to be released. OpenAI is reportedly training the model and will conduct red-team testing to identify and correct potential issues before its public release.

It's unclear whether GPT-5 will be released exclusively to Plus subscribers, who pay a $20-a-month fee to access GPT-4. GPT-3.5 powers the free tier of ChatGPT, but anyone can access GPT-4 Turbo in Copilot for free by choosing the Creative or Precise conversation styles.

OpenAI has been the target of scrutiny and dissatisfaction from users amid reports of quality degradation with GPT-4, making this a good time to release a newer and smarter model.

Why Online Free Speech Is Now Up to the Supreme Court – Bloomberg

Conspiracy theories, election lies and Covid misinformation before the 2020 US presidential election led social media companies to implement rules policing online speech and suspending some users, including former President Donald Trump. That practice, known as content moderation, will be put to the test after two Republican-led states, Florida and Texas, passed laws in 2021 to stop what they believed were policies censoring conservatives. The fate of those social media laws now rests with the US Supreme Court, which could fundamentally reshape how platforms handle speech online in the run-up to the 2024 election and beyond.

The central issue is whether the laws violate the free speech rights of social media platforms by limiting the companies' editorial control. The laws apply to companies including Meta Platforms Inc.'s Facebook, Alphabet Inc.'s Google, X Corp. (formerly Twitter) and Reddit Inc. The justices will scrutinize provisions of the new laws that require the companies to carry content that violates their internal guidelines and to provide a rationale to users whose posts are taken down.

NASA will retire the ISS soon. Here’s what comes next. – NPR

The International Space Station is pictured from the SpaceX Crew Dragon Endeavour during a fly around of the orbiting lab on Nov. 8, 2021. (NASA)

Since its first modules launched at the end of 1998, the International Space Station has been orbiting 250 miles above Earth. But at the end of 2030, NASA plans to crash the ISS into the ocean after it is replaced with a new space station, a reminder that nothing within Earth's orbit can stay in space forever.

NASA is collaborating on the development of a space station owned, built, and operated by a private company: either Axiom Space, Voyager Space, or Blue Origin. NASA is giving each company hundreds of millions of dollars in funding and sharing its expertise with them.

Eventually, NASA will select one company to officially partner with and have it replace the ISS. The agency says this will help it focus on deep space exploration, which it considers a much more difficult task.

Progress photos showing the Axiom Space station being built. (Enrico Sacchetti/Axiom Space)

But any company that is able to develop its own space station, get approval from the federal government and launch it into space will be able to pursue its own deep space missions, even without NASA's involvement.

Phil McCalister, director of the Commercial Space Division of NASA, told NPR's Morning Edition that NASA does not want to own in perpetuity everything in low-Earth orbit, the region extending up to 1,200 miles above Earth's surface.

"We want to turn those things over to other organizations that could potentially do it more cost-effectively, and then focus our research and activities on deep space exploration," said McCalister.

McCalister says the ISS could stay in space longer, but it's much more cost-effective for NASA to acquire a brand new station with new technology. NASA would then transition to purchasing services from commercial entities as opposed to the government building a next-generation commercial space station.

The ISS was designed in the 1980s, so the technology it was first built with was very different from what is available today.

"I kind of see this as like an automobile. When we bought that automobile in 1999, it was state of the art. And it has been great. And it serves us well and continues to be safe. But it's getting older. It's getting harder to find spare parts. The maintenance for that is becoming a larger issue," McCalister said.

A new, private space station will have a lot of similarities and some differences from the current ISS.

Robyn Gatens, director of the International Space Station, says that despite its age, not all the technology on the ISS is out of date.

"We've been evolving the technology on the International Space Station since it was first built. So some of these technologies will carry over to these private space stations," said Gatens. "We've upgraded the batteries, we've upgraded and added solar arrays that roll out and are flexible, we've been upgrading our life support systems."

The view from NASA spacewalker Thomas Marshburn's camera points downward toward the ISS on December 2, 2021. (Thomas Marshburn/NASA)

Paulo Lozano is the director of the Space Propulsion Laboratory at MIT and an aerospace engineer. He said, "NASA has already changed the solar panels at least once and switched them from these very large arrays that produce relatively little power, to these smaller arrays that produce much more power. All the computer power at the beginning is nothing compared to what can be done today."

Gatens says the structure of the space station, which is the size of a football field, is what can't be upgraded and replaced. And something of that size is costly for NASA to maintain.

"The big structure, even though it's doing very well, has a finite lifetime. It won't last forever. It is affected by the environment that it's in. And every time we dock a vehicle and undock a vehicle, the thermal environment puts stresses and loads on that primary structure that will eventually make it wear out," said Gatens.

Gatens says we can expect a new space station to be designed a little more efficiently and right-sized for the amount of research that NASA and its partners are going to want to do in low-Earth orbit.

NASA astronaut Megan McArthur doing an experiment on the ISS on May 26, 2021. (NASA)

The structure of the station is also extremely important to the people who work there.

The ISS carries scientists who perform research that can only be done in the weak gravity of space, like medical research. In space, cells age more quickly and conditions progress more rapidly, helping researchers understand the progression of diseases like heart disease or cancer on a compressed timeline.

Researchers on the ISS also work to understand what happens to the human body when it's exposed to microgravity. This research is aimed at helping develop ways to counteract the negative effects of being in space and let astronauts stay there longer, something essential to getting a human on Mars.

Gatens says a new space station will have updated research facilities.

"I'm looking forward to seeing very modern laboratory equipment on these space stations. We say the International Space Station has a lot of capability, but it's more like a test kitchen. I'm looking forward to seeing the future commercial space stations take these laboratory capabilities and really develop them into state-of-the-art space laboratories," said Gatens.

Expedition 60 crew members Luca Parmitano, Christina Koch, Andrew Morgan, and Nick Hague in the ISS cupola photographing Hurricane Dorian on August 30, 2019. (NASA)

On top of having modern research facilities, new space stations will likely be designed to provide a cleaner environment for researchers.

"If you see pictures of the station, you'll think 'how can they work there?' It looks cluttered, it looks messy," astronaut Peggy Whitson told NPR. She's spent more time in space than any other woman and is the first woman to command the ISS. Whitson is now director of human spaceflight and an astronaut at Axiom Space, one of the companies funded by NASA to develop a space station.

Whitson said the reason there are cables all over the place is because the structure of the station wasn't designed for some of the systems it has now. She thinks having a method for making a station even more adaptable to new technology will be important in terms of user experience.

Whitson doesn't know what technology will be available five years from now. But she said Axiom Space will want to take advantage of whatever they can get their hands on, ideally without wires everywhere.

Peggy Whitson in the ISS's cupola. (Axiom Space)

"I would like all that cabling and networking to be behind the panels so that it's easier for folks to move around in space," Whitson said. "Having and building in that adaptability is one of the most critical parts, I think, of building a station for low-Earth orbit."

Paulo Lozano says many of the electronic components on the ISS are bulky. But now that electronics are smaller, he expects the interior of future stations to be a bit different.

The current ISS has one small inflatable module. That structure flies up collapsed, then expands as it fills with air once it's attached to the station's primary structure, literally blowing up like a balloon. Gatens says they are looking at making multiple elements of a new space station inflatable.

Whitson told NPR that on the space station Axiom Space is developing, they will have windows in the crew quarters and a huge cupola, what she describes as an astronaut's window to the world. On the ISS, they have a cupola you can pop your head and shoulders into and see 360-degree views of space and look down at the Earth.

On the proposed Axiom space station, Whitson said the cupola is so large that astronauts will be able to float their whole body in there and have it be an experience of basically almost flying in space.

NASA hopes that by handing responsibility for an ISS replacement over to private companies, the agency can develop technology more quickly and focus on its next goal: putting a station beyond low-Earth orbit for the first time. Current proposals include the Lunar Gateway, NASA's planned space station in orbit around the moon.

"What the space stations of today are doing is just paving the way for humans to actually explore deeper into space, which is going to be a significantly harder challenge to accomplish. The space stations of today are essential stepping stones towards that goal," said Lozano.

Gatens says one piece of technology that is being developed at Blue Origin is a big rotating space station that, when finished, would have artificial gravity.

For long trips in space, the lack of gravity is a main issue for the human body, causing bone loss and other health issues. "If you could recreate that in space, that will be very beneficial," Gatens said.

Lozano says that a space station beyond low-Earth orbit would need new technology that is radically different from what's been used on the ISS. And neither NASA nor Lozano thinks it is possible to venture deeper into space, and eventually get a human on Mars, with U.S. government funding alone.

"I don't think we're very far away in terms of technology development. I think we're a little bit far away in terms of investment, because space technology is quite expensive and sometimes a single nation cannot really make it work by itself. So you need international cooperation," Lozano said.

Treye Green edited the digital version of this story.

Microsoft’s AI Access Principles: Our commitments to promote innovation and competition in the new AI economy … – Microsoft

As we enter a new era based on artificial intelligence, we believe this is the best time to articulate principles that will govern how we will operate our AI datacenter infrastructure and other important AI assets around the world. We are announcing and publishing these principles, our AI Access Principles, today at the Mobile World Congress in Barcelona in part to address Microsoft's growing role and responsibility as an AI innovator and a market leader.

Like other general-purpose technologies in the past, AI is creating a new sector of the economy. This new AI economy is creating not just new opportunities for existing enterprises, but new companies and entirely new business categories. The principles we're announcing today commit Microsoft to bigger investments, more business partnerships, and broader programs to promote innovation and competition than any prior initiative in the company's 49-year history. By publishing these principles, we are committing ourselves to providing the broad technology access needed to empower organizations and individuals around the world to develop and use AI in ways that will serve the public good.

These new principles help put in context the new investments and programs we've announced and launched across Europe over the past two weeks, including $5.6 billion in new AI datacenter investments and new AI skilling programs that will reach more than a million people. We've also launched new public-private partnerships to advance responsible AI adoption and protect cybersecurity, new AI technology services to support network operators, and a new partnership with France's leading AI company, Mistral AI. As much as anything, these investments and programs make clear how we will put these principles into practice, not just in Europe, but in the United States and around the world.

These principles also reflect the responsible and important role we must play as a company. They build in part on the lessons we have learned from our experiences with previous technology developments. In 2006, after more than 15 years of controversies and litigation relating to Microsoft Windows and the company's market position in the PC operating system market, we published a set of Windows Principles. Their purpose was to govern the company's practices in a manner that would both promote continued software innovation and foster free and open competition.

I'll never forget the reaction of an FTC Commissioner who came up to me after I concluded the speech I gave in Washington, D.C. to launch these principles. He said, "If you had done this 10 years ago, I think you all probably would have avoided a lot of problems."

Close to two decades have gone by since that moment, and both the world of technology and the AI era we are entering are radically different. Then, Windows was the computing platform of the moment. Today, mobile platforms are the most popular gateway to consumers, and exponential advances in generative AI are driving a tectonic shift in digital markets and beyond. But there is wisdom in that FTC Commissioner's reaction that has stood the test of time: As a leading IT company, we do our best work when we govern our business in a principled manner that provides broad opportunities for others.

The new AI era requires enormous computational power to train, build, and deploy the most advanced AI models. Historically, such power could only be found in a handful of government-funded national laboratories and research institutions, and it was available only to a select few. But the advent of the public cloud has changed that. Much like steel did for skyscrapers, the public cloud enables generative AI.

Today, datacenters around the world house millions of servers and make vast computing power broadly available to organizations large and small and even to individuals as well. Already, many thousands of AI developers in startups, enterprises, government agencies, research labs, and non-profit organizations around the world are using the technology in these datacenters to create new AI foundation models and applications.

These datacenters are owned and operated by cloud providers, which include larger established firms such as Microsoft, Amazon, Google, Oracle, and IBM, as well as large firms from China like Alibaba, Huawei, Tencent, and Baidu. There are also smaller specialized entrants such as Coreweave, OVH, Aruba, and Denvr Dataworks Corporation, just to mention a few. And government-funded computing centers clearly will play a role as well, including with support for academic research. But building and operating those datacenters is expensive. And the semiconductors or graphical processing units (GPUs) that are essential to power the servers for AI workloads remain costly and in short supply. Although governments and companies are working hard to fill the gap, doing so will take some time.

With this reality in mind, regulators around the world are asking important questions about who can compete in the AI era. Will it create new opportunities and lead to the emergence of new companies? Or will it simply reinforce existing positions and leaders in digital markets?

I am optimistic that the changes driven by the new AI era will extend into the technology industry itself. After all, how many readers of this paragraph had, two years ago, even heard of OpenAI and many other new AI entrants like Anthropic, Cohere, Aleph Alpha, and Mistral AI? In addition, Microsoft, along with other large technology firms, is dynamically pivoting to meet the AI era. The competitive pressure is fierce, and the pace of innovation is dizzying. As a leading cloud provider and an innovator in AI models ourselves and through our partnership with OpenAI, we are mindful of our role and responsibilities in the evolution of this AI era.

Throughout the past decade, we've typically found it helpful to define the tenets, in effect the goals, that guide our thinking and drive our actions as we navigate a complex topic. We then apply these tenets by articulating the principles we will apply as we make the decisions needed to govern the development and use of technology. I share below the new tenets on which we are basing our thinking on this topic, followed by our 11 AI Access Principles.

Fundamentally, there are five tenets that define Microsoft's goals as we focus on AI access, including our role as an infrastructure and platforms provider.

First, we have a responsibility to enable innovation and foster competition. We believe that AI is a foundational technology with a transformative capability to help solve societal problems, improve human productivity, and make companies and countries more competitive. As with prior general-purpose technologies, from the printing press to electricity, railroads, and the internet itself, the AI era is not based on a single technology component or advance. We have a responsibility to help spur innovation and competition across the new AI economy that is rapidly emerging.

AI is a dynamic field, with many active participants, based on a technology stack that starts with electricity, connectivity, and the world's most advanced semiconductor chips at the base. It then runs up through the compute power of the public cloud, public and proprietary data for training foundation models, the foundation models themselves, tooling to manage and orchestrate the models, and AI-powered software applications. In short, the success of an AI-based economy requires the success of many different participants across numerous interconnected markets.

This technology stack defines the new AI era. While one company currently produces and supplies most of the GPUs being used for AI today, as one moves incrementally up the stack, the number of participants expands. And each layer enables and facilitates innovation and competition in the layers above. In multiple ways, to succeed, participants at every layer of the technology stack need to move forward together. This means, for Microsoft, that we need to stay focused not just on our own success, but on enabling the success of others.

Second, our responsibilities begin by meeting our obligations under the law. While the principles we are launching today represent a self-regulatory initiative, they in no way are meant to suggest a lack of respect for the rule of law or the role of regulators. We fully appreciate that legislators, competition authorities, regulators, enforcers, and judges will continue to evolve the competition rules and other laws and regulations relevant to AI. That's the way it should be.

Technology laws and rules are changing rapidly. The European Union is implementing its Digital Markets Act and completing its AI Act, while the United States is moving quickly with a new AI Executive Order. Similar laws and initiatives are moving forward in the United Kingdom, Canada, Japan, India, and many other countries. We recognize that we, like all participants in this new AI market, have a responsibility to live up to our obligations under the law, to engage constructively with regulators when obligations are not yet clear, and to contribute to the public dialogue around policy. We take these obligations seriously.

Third, we need to advance a broad array of AI partnerships. Today, only one company is vertically integrated in a manner that includes every AI layer from chips to a thriving mobile app store. As noted at a recent meeting of tech leaders and government officials, "The rest of us, Microsoft included, live in the land of partnerships."

People today are benefiting from the AI advances that the partnership between OpenAI and Microsoft has created. Since 2019, Microsoft has collaborated with OpenAI on the research and development of OpenAI's generative AI models, developing the unique supercomputers needed to train those models. The ground-breaking technology ushered in by our partnership has unleashed a groundswell of innovation across the industry. And over the past five years, OpenAI has become a significant new competitor in the technology industry. It has expanded its focus, commercializing its technologies with the launch of ChatGPT and the GPT Store and providing its models for commercial use by third-party developers.

Innovation and competition will require an extensive array of similar support for proprietary and open-source AI models, large and small, including the type of partnership we are announcing today with Mistral AI, the leading open-source AI developer based in France. We have also invested in a broad range of other diverse generative AI startups. In some instances, those investments have provided seed funding to finance day-to-day operations. In other instances, those investments have been more focused on paying the expenses for the use of the computational infrastructure needed to train and deploy generative AI models and applications. We are committed to partnering well with market participants around the world and in ways that will accelerate local AI innovations.

Fourth, our commitment to partnership extends to customers, communities, and countries. More than for prior generations of digital technology, our investments in AI and datacenters must sustain the competitive strengths of customers and national economies and address broad societal needs. This has been at the core of the multi-billion-dollar investments we recently have announced in Australia, the United Kingdom, Germany, and Spain. We need constantly to be mindful of the community needs AI advances must support, and we must pursue a spirit of partnership not only with others in our industry, but with customers, governments, and civil society. We are building the infrastructure that will support the AI economy, and we need the opportunities provided by that infrastructure to be widely available.

Fifth, we need to be proactive and constructive, as a matter of process, in working with governments and the IT industry in the design and release of new versions of AI infrastructure and platforms. We believe it is critical for companies and regulators to engage in open dialogue, with a goal of resolving issues as quickly as possible, ideally while a new product is still under development. For our part, we understand that Microsoft must respond fully and cooperatively to regulatory inquiries so that we can have an informed discussion with regulators about the virtues of various approaches. We need to be good listeners and constructive problem solvers in sorting through issues of concern and identifying practical steps and solutions before a new product is completed and launched.

The foregoing tenets come together to shape the new principles we are announcing below. It's important to note that, given the safety, security, privacy, and other issues relating to responsible AI, we need to apply all these principles subject to objective and effective standards to comply with our legal obligations and protect the public. These are discussed further below. Subject to these requirements, we are committed to the following 11 principles:

We are committed to enabling AI innovation and fostering competition by making our cloud computing and AI infrastructure, platforms, tools, and services broadly available and accessible to software developers around the world. We want Microsoft Azure to be the best place for developers to train, build, and deploy AI models and to use those models safely and securely in applications and solutions. This means:

Today, our partnership with OpenAI is supporting the training of the next generation of OpenAI models and increasingly enabling customers to access and use these models and Microsoft's Copilot applications in local datacenters. At the same time, we are committed to supporting other developers training and deploying proprietary and open-source AI models, both large and small.

Today's important announcement with Mistral AI launches a new generation of Microsoft's support for technology development in Europe. It enables Mistral AI to accelerate the development and deployment of its next-generation large language models (LLMs) with access to Azure's cutting-edge AI infrastructure. It also makes the deployment of Mistral AI's premium models available to customers through our Models-as-a-Service (MaaS) offering on Microsoft Azure, which model developers can use to publish and monetize their AI models. By providing a unified platform for AI model management, we aim to lower the barriers and costs of AI model development around the world for both open source and proprietary development. In addition to Mistral AI, this service is already hosting more than 1,600 open source and proprietary models from companies and organizations such as Meta, Nvidia, Deci, and Hugging Face, with more models coming soon from Cohere and G42.
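
The post doesn't show what calling one of these serverless MaaS deployments looks like, but a minimal Python sketch might resemble the following. The endpoint URL pattern, route, and environment variable names are assumptions for illustration; the real values come from the specific model deployment in the Azure portal.

```python
import os
import requests

# Hypothetical endpoint and key; the actual values come from the model's
# deployment page in the Azure portal (names here are placeholders).
ENDPOINT = os.environ["MAAS_ENDPOINT"]  # e.g. https://<deployment>.<region>.inference.ai.azure.com
API_KEY = os.environ["MAAS_API_KEY"]

# Many serverless model endpoints expose an OpenAI-style chat completions route.
response = requests.post(
    f"{ENDPOINT}/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "messages": [{"role": "user", "content": "Summarize the AI Access Principles."}],
        "max_tokens": 256,
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```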

We are committed to expanding this type of support for additional models in the months and years ahead.

As reflected in Microsoft's Copilots and OpenAI's ChatGPT itself, the world is rapidly benefiting from the use of a new generation of software applications that access and use the power of AI models. But our applications will represent just a small percentage of the AI-powered applications the world will need and create. For this reason, we're committed to ongoing and innovative steps to make the AI models we host and the development tools we create broadly available to AI software application developers around the world in ways that are consistent with responsible AI principles.

This includes the Azure OpenAI service, which enables software developers who work at start-ups, established IT companies, and in-house IT departments to build software applications that call on and make use of OpenAI's most powerful models. It extends through Models as a Service to the use of other open source and proprietary AI models from other companies, including Mistral AI, Meta, and others.

We are also committed to empowering developers to build customized AI solutions by enabling them to fine-tune existing models based on their own unique data sets and for their specific needs and scenarios. With Azure Machine Learning, developers can easily access state-of-the-art pre-trained models and customize them with their own data and parameters, using a simple drag-and-drop interface or code-based notebooks. This helps companies, governments, and non-profits create AI applications that help advance their goals and solve their challenges, such as improving customer service, enhancing public safety, or promoting social good. This is rapidly democratizing AI and fostering a culture of even broader innovation and collaboration among developers.
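
The code-based side of this workflow, taking a pre-trained model and fine-tuning it on your own data, follows a standard pattern. The sketch below uses the Hugging Face transformers library as a generic stand-in rather than Azure Machine Learning's own SDK; the model name and dataset are placeholders.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Placeholder model and dataset; in Azure ML you would pick these from the
# model catalog and your registered data assets instead.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

dataset = load_dataset("imdb")  # stands in for "your own unique data set"

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized["train"].shuffle(seed=42).select(range(2000)),
)
trainer.train()  # fine-tunes the pre-trained weights on the custom data
```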

We are also providing developers with tools and repositories on GitHub that enable them to create, share, and learn from AI solutions. GitHub is the world's largest and most trusted platform for software development, hosting over 100 million repositories and supporting more than 40 million developers. We are committed to supporting the AI developer community by making our AI tools and resources available on GitHub, giving developers access to the latest innovations and best practices in AI development, as well as the opportunity to collaborate with other developers and contribute to the open source community. As one example, just last week we made available an open automation framework to help red team generative AI systems.

Ensure choice and fairness across the AI economy

We understand that AI innovation and competition require choice and fair dealing. We are committed to providing organizations, AI developers, and data scientists with the flexibility to choose which AI models to use wherever they are building solutions. For developers who choose to use Microsoft Azure, we want to make sure they are confident we will not tilt the playing field to our advantage. This means:

The AI models that we host on Azure, including the Microsoft Azure OpenAI API service, are all accessible via public APIs. Microsoft publishes documentation on its website explaining how developers can call these APIs and use the underlying models. This enables any application, whether it is built and deployed on Azure or other private and public clouds, to call these APIs and access the underlying models.
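
As a rough illustration of what calling these public APIs looks like from any application, inside or outside Azure, here is a sketch using the openai Python package's Azure client. The endpoint, key, API version, and deployment name are placeholders tied to a specific Azure OpenAI resource.

```python
import os
from openai import AzureOpenAI  # pip install openai>=1.0

# Resource endpoint, key, and deployment name are placeholders; they come
# from your own Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

completion = client.chat.completions.create(
    model="my-gpt-4-deployment",  # the *deployment* name, not the model family
    messages=[{"role": "user", "content": "Hello from outside Azure!"}],
)
print(completion.choices[0].message.content)
```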

Network operators are playing a vital role in accelerating the AI transformation of customers around the world, including for many national and regional governments. This is one reason we are supporting a common public API through the Open Gateway initiative driven by the GSM Association, which advances innovation in the mobile ecosystem. The initiative is aligning all operators with a common API for exposing advanced capabilities provided by their networks, including authentication, location, and quality of service. It's an indispensable step forward in enabling network operators to offer their advanced capabilities to a new generation of AI-enabled software developers. We have believed in the potential of this initiative since its inception at GSMA, and we have partnered with operators around the world to help bring it to life.

Today at Mobile World Congress, we are launching the Public Preview of Azure Programmable Connectivity (APC). This is a first-class service in Azure, completely integrated with the rest of our services, that seamlessly provides access to Open Gateway for developers. It means software developers can use the capabilities provided by the operator network directly from Azure, like any other service, without requiring specific work for each operator.

We are committed to maintaining Microsoft Azure as an open cloud platform, much as Windows has been for decades and continues to be. That means in part ensuring that developers can choose how they want to distribute and sell their AI software to customers for deployment and use on Microsoft Azure. We provide a marketplace on Azure through which developers can list and sell their AI software to Azure customers under a variety of supported business models. Developers who choose to use the Azure Marketplace are also free to decide whether to use the transaction capabilities offered by the marketplace (at a modest fee) or whether to sell licenses to customers outside of the marketplace (at no fee). And, of course, developers remain free to sell and distribute AI software to Azure customers however they choose, and those customers can then upload, deploy, and use that software on Azure.

We believe that trust is central to the success of Microsoft Azure. We build this trust by serving the interests of AI developers and customers who choose Microsoft Azure to train, build, and deploy foundation models. In practice, this also means that we avoid using any non-public information or data from the training, building, deployment, or use of developers' AI models to compete against them.

We know that customers can and do use multiple cloud providers to meet their AI and other computing needs. And we understand that the data our customers store on Microsoft Azure is their data. So, we are committed to enabling customers to easily export and transfer their data if they choose to switch to another cloud provider. We recognize that different countries are considering or have enacted laws limiting the extent to which we can pass along the costs of such export and transfer. We will comply with those laws.

We recognize that new AI technologies raise an extraordinary array of critical questions. These involve important societal issues such as privacy, safety, security, the protection of children, and the safeguarding of elections from deepfake manipulation, to name just a few. These and other issues require that tech companies create guardrails for their AI services, adapt to new legal and regulatory requirements, and work proactively in multistakeholder efforts to meet broad societal needs. We're committed to fulfilling these responsibilities, including through the following priorities:

We are committed to safeguarding the physical security of our AI datacenters, as they host the infrastructure and data that power AI solutions. We follow strict security protocols and standards to ensure that our datacenters are protected from unauthorized access, theft, vandalism, fire, or natural disasters. We monitor and audit our datacenters to detect and prevent any potential threats or breaches. Our datacenter staff are trained and certified in security best practices and are required to adhere to a code of conduct that respects the privacy and confidentiality of our customers' data.

We are also committed to safeguarding the cybersecurity of our AI models and applications, as they process and generate sensitive information for our customers and society. We use state-of-the-art encryption, authentication, and authorization mechanisms to protect data in transit and at rest, as well as the integrity and confidentiality of AI models and applications. We also use AI to enhance our cybersecurity capabilities, such as detecting and mitigating cyberattacks, identifying and resolving vulnerabilities, and improving our security posture and resilience.

We're building on these efforts with our new Secure Future Initiative (SFI). This brings together every part of Microsoft and has three pillars. It focuses on AI-based cyber defenses, advances in fundamental software engineering, and advocacy for stronger application of international norms to protect civilians from cyber threats.

As AI becomes more pervasive and impactful, we recognize the need to ensure that our technology is developed and deployed in a way that is ethical, trustworthy, and aligned with human values. That is why we have created the Microsoft Responsible AI Standard, a comprehensive framework that guides our teams on how to build and use AI responsibly.

The standard covers six key dimensions of responsible AI: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability. For each dimension, we define what these values mean and how to achieve our goals in practice. We also provide tools, processes, and best practices to help our teams implement the standard throughout the AI lifecycle, from design and development to deployment and monitoring. The approach that the standard establishes is not static, but instead evolves and improves based on the latest research, feedback, and learnings.

We recognize that countries need more than advanced AI chips and datacenters to sustain their competitive edge and unlock economic growth. AI is changing jobs and the way people work, requiring that people master new skills to advance their careers. That's why we're committed to marrying AI infrastructure capacity with AI skilling capability, combining the two to advance innovation.

In just the past few months, we've combined billions of dollars of infrastructure investments with new programs to bring AI skills to millions of people in countries like Australia, the United Kingdom, Germany, and Spain. We're launching training programs focused on building AI fluency, developing AI technical skills, supporting AI business transformation, and promoting safe and responsible AI development. Our work includes the first Professional Certificate on Generative AI.

Typically, our skilling programs involve a professional network of Microsoft certified training services partners and multiple industry partners, universities, and nonprofit organizations. Increasingly, we find that major employers want to launch new AI skilling programs for their employees, and we are working with them actively to provide curricular materials and support these efforts.

One of our most recent and important partnerships is with the AFL-CIO, the largest federation of labor unions in the United States. It's the first of its kind between a labor organization and a technology company to focus on AI and will deliver on three goals: (1) sharing in-depth information with labor leaders and workers on AI technology trends; (2) incorporating worker perspectives and expertise in the development of AI technology; and (3) helping shape public policy that supports the technology skills and needs of frontline workers.

We've learned that government institutions and associations can typically bring AI skilling programs to scale. At the national and regional levels, government employment and educational agencies have the personnel, programs, and expertise to reach hundreds of thousands or even millions of people. We're committed to working with and supporting these efforts.

Through these and other initiatives, we aim to democratize access to AI education and enable everyone to harness the potential of AI for their own lives and careers.

In 2020, Microsoft set ambitious goals to be carbon negative, water positive and zero waste by 2030. We recognize that our datacenters play a key part in achieving these goals. Being responsible and sustainable by design also has led us to take a first-mover approach, making long-term investments to bring as much or more carbon-free electricity than we will consume onto the grids where we build datacenters and operate.

We also apply a holistic approach to the Scope 3 emissions relating to our investments in AI infrastructure, from the construction of our datacenters to engaging our supply chain. This includes supporting innovation to reduce the embodied carbon in our supply chain and advancing our water positive and zero waste goals throughout our operations.

At the same time, we recognize that AI can be a vital tool to help accelerate the deployment of sustainability solutions, from the discovery of new materials to better predicting and responding to extreme weather events. This is why we continue to partner with others to use AI to help advance breakthroughs that previously would have taken decades, underscoring the important role AI technology can play in addressing some of our most critical challenges to realizing a more sustainable future.

Highmark Teams With Google on AI-Powered Health Partnership – PYMNTS.com

Highmark Health is working with Epic and Google Cloud to support payer-provider coordination.

Epic's Payer Platform improves collaboration between health insurers and health providers, the companies said in a Monday (Feb. 26) news release. Now, by connecting to Google Cloud, the insights shared with payers and providers can be used to inform consumers of the next best actions in their care journeys.

The Epic platform allows for better payer-provider collaboration by driving automation, faster decision-making and better care while lowering burdens and fragmentation, according to the release.

Google Cloud's data analytics technologies, meanwhile, can help facilitate insights shared with provider partner organizations using Epic, Highmark health plan staff, and Highmark members through other integrated digital channels like the My Highmark member portal.

"Highmark Health's use of Google Cloud will enable the organization to create an intelligence system equipped with AI to deliver valuable analytics and insights to healthcare workers, patients and members," said Amy Waldron, director of healthcare and life sciences strategy and solutions at Google Cloud. "Highmark Health's investment in cloud technology is delivering real-time value and simplifying communications; it's redefining the provider and consumer experience."

As PYMNTS wrote late last year, the intersection of AI and healthcare was one of 2023's more exciting developments, with generative AI finding its way into areas ranging from medical imaging and pathology to electronic health record data entry.

PYMNTS Intelligence found that the generative AI healthcare market is expected to reach $22 billion by 2032, providing several possibilities for improved patient care, diagnosis accuracy and treatment outcomes.

Many of the latest AI innovations, including those aimed at helping doctors pull insights from healthcare data and allowing users to find accurate clinical information more efficiently, are designed to help put "clinician pajama time," the time spent on paperwork after shifts are ostensibly over, to rest.

"These problems typically cost providers significant amounts of time and resources, and a variety of point-solutions were brought to market this year to address them," PYMNTS wrote in December.

Analysis: How Nvidia Surpassed Intel In Annual Revenue And Won The AI Crown – CRN

A deep-dive analysis into the market dynamics that allowed Nvidia to take the AI crown and surpass Intel in annual revenue. CRN also looks at what the x86 processor giant could do to fight back in a deeply competitive environment.

Several months after Pat Gelsinger became Intel's CEO in 2021, he told me that his biggest concern in the data center wasn't Arm, the British chip designer that is enabling a new wave of competition against the semiconductor giant's Xeon server CPUs.

Instead, the Intel veteran saw a bigger threat in Nvidia and its uncontested hold over the AI computing space and said his company would give its all to challenge the GPU designer.

"Well, they're going to get contested going forward, because we're bringing leadership products into that segment," Gelsinger told me for a CRN magazine cover story.

More than three years later, Nvidia's latest earnings demonstrated just how right Gelsinger was to be concerned about the AI chip giant's dominance, and how much work it will take for Intel to challenge a company that has been at the center of the generative AI hype machine.

When Nvidia's fourth-quarter earnings arrived last week, they showed that the company had surpassed Intel in total annual revenue for its recently completed fiscal year, mainly thanks to high demand for its data center GPUs driven by generative AI.

The GPU designer finished its 2024 fiscal year with $60.9 billion in revenue, up 126 percent from the previous year, more than double that year's total, the company revealed in its fourth-quarter earnings report on Wednesday. This fiscal year ran from Jan. 30, 2023, to Jan. 28, 2024.

Meanwhile, Intel finished its 2023 fiscal year with $54.2 billion in sales, down 14 percent from the previous year. This fiscal year ran concurrent to the calendar year, from January to December.

While Nvidia's fiscal year finished roughly one month after Intel's, this is the closest we'll get to understanding how the two industry titans compared in a year when demand for AI solutions propped up the data center and cloud markets in a shaky economy.

Nvidia pulled off this feat because the company had spent years building a comprehensive and integrated stack of chips, systems, software and services for accelerated computing, with a major emphasis on data centers, cloud computing and edge computing, and then found itself last year at the center of a massive demand cycle due to hype around generative AI.

This demand cycle was mainly kicked off by the late 2022 arrival of OpenAI's ChatGPT, a chatbot powered by a large language model that can understand complex prompts and respond with an array of detailed answers, all offered with the caveat that it could potentially impart inaccurate, biased or made-up answers.

Despite any shortcomings, the tech industry found more promise than concern with the capabilities of ChatGPT and other generative AI applications that had emerged in 2022, like the DALL-E 2 and Stable Diffusion text-to-image models. Many of these models and applications had been trained and developed using Nvidia GPUs because the chips are far faster at computing such large amounts of data than CPUs ever could.
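
The speed gap the article refers to is easy to demonstrate. A rough sketch, assuming a machine with PyTorch and an NVIDIA GPU, times the same large matrix multiplication on each processor:

```python
import time
import torch

# Rough illustration of why model training gravitated to GPUs: the same
# large matrix multiplication, timed on CPU and (if available) on GPU.
n = 4096
a = torch.randn(n, n)
b = torch.randn(n, n)

start = time.perf_counter()
torch.matmul(a, b)
print(f"CPU: {time.perf_counter() - start:.3f} s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()          # make sure the copies have finished
    start = time.perf_counter()
    torch.matmul(a_gpu, b_gpu)
    torch.cuda.synchronize()          # wait for the kernel to complete
    print(f"GPU: {time.perf_counter() - start:.3f} s")
```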

The enormous potential of these generative AI applications kicked off a massive wave of new investments in AI capabilities by companies of all sizes, from venture-backed startups to cloud service providers and consumer tech companies, like Amazon Web Services and Meta.

By that point, Nvidia had started shipping the H100, a powerful data center GPU that came with a new feature called the Transformer Engine. This was designed to speed up the training of so-called transformer models by as many as six times compared to the previous-generation A100, which itself had been a game-changer in 2020 for accelerating AI training and inference.
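
NVIDIA exposes the Transformer Engine to developers through its open-source transformer_engine library, which runs selected layers in FP8 math on supported hardware. A minimal sketch, assuming an H100-class GPU and arbitrary layer sizes, might look like this:

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Arbitrary layer sizes; FP8 execution requires Hopper-class hardware
# such as the H100 discussed above.
layer = te.Linear(4096, 4096, bias=True).cuda()
x = torch.randn(16, 4096, device="cuda")

fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)          # the matmul runs in FP8 where the engine deems it safe

y.sum().backward()        # gradients flow as in ordinary PyTorch
```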

Among the transformer models that benefitted from the H100's Transformer Engine was GPT-3.5, short for Generative Pre-trained Transformer 3.5. This is OpenAI's large language model that exclusively powered ChatGPT before the introduction of the more capable GPT-4.

But this was only one piece of the puzzle that allowed Nvidia to flourish in the past year. While the company worked on introducing increasingly powerful GPUs, it was also developing internal capabilities and making acquisitions to provide a full stack of hardware and software for accelerated computing workloads such as AI and high-performance computing.

At the heart of Nvidia's advantage is the CUDA parallel computing platform and programming model. Introduced in 2007, CUDA enabled the company's GPUs, which had been traditionally designed for computer games and 3-D applications, to run HPC workloads faster than CPUs by breaking them down into smaller tasks and processing those tasks simultaneously. Since then, CUDA has dominated the landscape of software that benefits accelerated computing.
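
The core CUDA idea, splitting a job into many small tasks that run simultaneously, can be sketched from Python through Numba's CUDA bindings (a simplified stand-in for native CUDA C++):

```python
import numpy as np
from numba import cuda

@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)              # this thread's global index
    if i < out.size:              # guard against out-of-range threads
        out[i] = x[i] + y[i]

n = 1_000_000
x = np.ones(n, dtype=np.float32)
y = 2 * np.ones(n, dtype=np.float32)
out = np.empty_like(x)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
add_kernel[blocks, threads_per_block](x, y, out)  # thousands of blocks run concurrently

assert np.allclose(out, 3.0)
```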

Over the last several years, Nvidia's stack has grown to include CPUs, SmartNICs and data processing units, high-speed networking components, pre-integrated servers and server clusters, as well as a variety of software and services, which includes everything from software development kits and open-source libraries to orchestration platforms and pretrained models.

While Nvidia had spent years cultivating relationships with server vendors and cloud service providers, this activity reached new heights last year, resulting in expanded partnerships with the likes of AWS, Microsoft Azure, Google Cloud, Dell Technologies, Hewlett Packard Enterprise and Lenovo. The company also started cutting more deals in the enterprise software space with major players like VMware and ServiceNow.

All this work allowed Nvidia to grow its data center business by 217 percent to $47.5 billion in its 2024 fiscal year, which represented 78 percent of total revenue.

This was mainly supported by a 244 percent increase in data center compute sales, with high GPU demand driven mainly by the development of generative AI and large language models. Data center networking, on the other hand, grew 133 percent for the year.

Cloud service providers and consumer internet companies contributed a substantial portion of Nvidia's data center revenue, with cloud service providers representing roughly half of it in the third quarter and more than half in the fourth. Nvidia also cited strong demand from businesses outside those two groups, though not as consistently.

In its earnings call last week, Nvidia CEO Jensen Huang said this represents the industry's continuing transition from general-purpose computing, where CPUs were the primary engines, to accelerated computing, where GPUs and other kinds of powerful chips are needed to provide the right combination of performance and efficiency for demanding applications.

"There's just no reason to update with more CPUs when you can't fundamentally and dramatically enhance its throughput like you used to. And so you have to accelerate everything. This is what Nvidia has been pioneering for some time," he said.

Intel, by contrast, generated $15.5 billion in data center revenue for its 2023 fiscal year, which was a 20 percent decline from the previous year and made up only 28.5 percent of total sales.

This was not only three times smaller than what Nvidia earned in total data center revenue in the 12-month period ending in late January; it was also smaller than what the semiconductor giant's AI chip rival made in the fourth quarter alone: $18.4 billion.

The issue for Intel is that while the company has launched data center GPUs and AI processors over the last couple of years, it's far behind when it comes to the level of adoption by developers, OEMs, cloud service providers, partners and customers that has allowed Nvidia to flourish.

As a result, the semiconductor giant has had to rely on its traditional data center products, mainly Xeon server CPUs, to generate a majority of revenue for this business unit.

This created multiple problems for the company.

While AI servers, including ones made by Nvidia and its OEM partners, rely on CPUs for the host processors, the average selling prices for such components are far lower than those of Nvidia's most powerful GPUs. And these kinds of servers often contain four or eight GPUs and only two CPUs, another way GPUs enable far greater revenue growth than CPUs.
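
A back-of-the-envelope calculation shows why that eight-to-two ratio matters so much; the prices below are hypothetical placeholders, not reported figures:

```python
# Back-of-the-envelope illustration of the revenue split described above.
# All prices are hypothetical placeholders, not reported figures.
GPU_PRICE = 30_000   # assumed price of one data center GPU, USD
CPU_PRICE = 10_000   # assumed price of one server CPU, USD

gpus_per_server = 8
cpus_per_server = 2

gpu_revenue = gpus_per_server * GPU_PRICE   # $240,000 to the GPU vendor
cpu_revenue = cpus_per_server * CPU_PRICE   # $20,000 to the CPU vendor

print(f"GPU share of silicon spend: {gpu_revenue / (gpu_revenue + cpu_revenue):.0%}")
# -> roughly 92% of the chip budget in this hypothetical AI server
```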

In Intel's latest earnings call, Vivek Arya, a senior analyst at Bank of America, noted how these issues were digging into the company's data center CPU revenue, saying that its GPU competitors "seem to be capturing nearly all of the incremental [capital expenditures] and, in some cases, even more for cloud service providers."

One dynamic at play was that some cloud service providers used their budgets last year to put expensive Nvidia GPUs into existing systems rather than buying entirely new systems, which dragged down Intel CPU sales, Patrick Moorhead, president and principal analyst at Moor Insights & Strategy, recently told CRN.

Then there was the issue of long lead times for Nvidias GPUs, which were caused by demand far exceeding supply. Because this prevented OEMs from shipping more GPU-accelerated servers, Intel sold fewer CPUs as a result, according to Moorhead.

Intels CPU business also took a hit due to competition from AMD, which grew x86 server CPU share by 5.4 points against the company in the fourth quarter of 2023 compared to the same period a year ago, according to Mercury Research.

The semiconductor giant has also had to contend with competition from companies developing Arm-based CPUs, such as Ampere Computing and Amazon Web Services.

All of these issues, along with a lull in the broader market, dragged down revenue and earnings potential for Intel's data center business.

Describing the market dynamics in 2023, Intel said in its annual 10-K filing with the U.S. Securities and Exchange Commission that server volume decreased 37 percent from the previous year due to lower demand in a softening CPU data center market.

The company said average selling prices did increase by 20 percent, mainly due to a lower mix of revenue from hyperscale customers and a higher mix of high core count processors, but that wasn't enough to offset the plummet in sales volume.

While Intel and other rivals started down the path of building products to compete against Nvidia's years ago, the AI chip giant's success last year showed them how lucrative it can be to build a business with super powerful and expensive processors at the center.

Intel hopes to make a substantial business out of accelerator chips between the Gaudi deep learning processors, which came from its 2019 acquisition of Habana Labs, and the data center GPUs it has developed internally. (After the release of Gaudi 3 later this year, Intel plans to converge its Max GPU and Gaudi road maps, starting with Falcon Shores in 2025.)

But the semiconductor giant has only reported a sales pipeline that grew in the double digits to more than $2 billion in last year's fourth quarter. This pipeline includes Gaudi 2 and Gaudi 3 chips as well as Intel's Max and Flex data center GPUs, but it doesn't amount to a forecast for how much money the company expects to make this year, an Intel spokesperson told CRN.

Even if Intel made $2 billion or even $4 billion from accelerator chips in 2024, it would amount to a small fraction of what Nvidia made last year and perhaps an even smaller one if the AI chip rival manages to grow again in the new fiscal year. Nvidia has forecasted that revenue in the first quarter could grow roughly 8.6 percent sequentially to $24 billion, and Huang said the conditions are excellent for continued growth for the rest of this year and beyond.
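
The guidance arithmetic checks out against Nvidia's reported fourth quarter, as a quick sanity check shows:

```python
# Nvidia guided to roughly 8.6% sequential growth, reaching ~$24B in Q1.
q1_guidance = 24.0                  # billions USD
sequential_growth = 0.086
q4_base = q1_guidance / (1 + sequential_growth)
print(f"Implied Q4 revenue base: ${q4_base:.1f}B")  # ~$22.1B, Nvidia's reported Q4 revenue
```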

Then there's the fact that AMD recently launched its most capable data center GPU yet, the Instinct MI300X. The company said in its most recent earnings call that strong customer pull and expanded engagements prompted it to upgrade its forecast for data center GPU revenue this year to more than $3.5 billion.

There are other companies developing AI chips too, including AWS, Microsoft Azure and Google Cloud as well as several startups, such as Cerebras Systems, Tenstorrent, Groq and D-Matrix. Even OpenAI is reportedly considering designing its own AI chips.

Intel will also have to contend with Nvidia's decision last year to move to a one-year release cadence for new data center GPUs. This started with the H200, the successor to the H100 announced last fall, and will continue with the B100 this year.

Nvidia is making its own data center CPUs, too, as part of the company's expanding full-stack computing strategy, which is creating another challenge for Intel's CPU business when it comes to AI and HPC workloads. This started last year with the standalone Grace Superchip and a hybrid CPU-GPU package called the Grace Hopper Superchip.

For Intel's part, the semiconductor giant expects meaningful revenue acceleration for its nascent AI chip business this year. What could help the company is the growing number of price-performance advantages found by third parties like AWS and Databricks as well as its vow to offer an open alternative to the proprietary nature of Nvidia's platform.

The chipmaker also expects its upcoming Gaudi 3 chip to deliver performance leadership with four times the processing power and double the networking bandwidth over its predecessor.

But the company is taking a broader view of the AI computing market and hopes to come out on top with its "AI everywhere" strategy. This includes a push to grow data center CPU revenue by convincing developers and businesses to take advantage of the latest features in its Xeon server CPUs to run AI inference workloads, an approach the company believes is more economical and pragmatic for a broader constituency of organizations.

Intel is making a big bet on the emerging category of AI PCs, too, with its recently launched Core Ultra processors, which, for the first time in an Intel processor, come with a neural processing unit (NPU) in addition to a CPU and GPU to power a broad array of AI workloads. But the company faces tough competition in this arena, whether it's AMD and Qualcomm in the Windows PC segment or Apple with its in-house chip designs for Mac computers.

Even Nvidia is reportedly thinking about developing CPUs for PCs. But Intel does have one trump card that could allow it to generate significant amounts of revenue alongside its traditional chip design business by seizing on the collective growth of its industry.

Hours before Nvidia's earnings last Wednesday, Intel launched its revitalized contract chip manufacturing business with the goal of drumming up enough business from chip designers, including its own product groups, to become the world's second-largest foundry by 2030.

The business, called Intel Foundry, has a lofty 2030 goal: generating more revenue than South Korea's Samsung in only six years. This would put it behind only the world's largest foundry, Taiwan's TSMC, which generated just shy of $70 billion last year thanks in large part to major manufacturing orders from the likes of Apple and Nvidia.

All of this relies on Intel executing at a high level across its chip design and manufacturing businesses over the next several years. But if it succeeds, these efforts could one day make the semiconductor giant an AI superpower like Nvidia is today.

At Intel Foundry's launch last week, Gelsinger made that clear.

"We're engaging in 100 percent of the AI [total addressable market], clearly through our products on the edge, in the PC and clients and then the data centers. But through our foundry, I want to manufacture every AI chip in the industry," he said.

More:

Analysis: How Nvidia Surpassed Intel In Annual Revenue And Won The AI Crown - CRN

This astronaut took 5 spacewalks. Now, he’s helping make spacesuits for future ISS crews (exclusive) – Space.com

The next generation of spacesuits for astronauts just went parabolic.

Collins Aerospace tested its new spacesuit design, built for International Space Station spacewalks, on a parabolic flight that simulated microgravity conditions. The goal was to fulfill requirements for a NASA contract aimed at replacing the long-standing extravehicular mobility units (EMUs) now used on the orbiting complex.

Following the news release on Feb. 1, Collins' chief test astronaut John "Danny" Olivas, a retired NASA astronaut, spoke with Space.com about the company's plans for the floating suit. He also discussed exciting possibilities for moon exploration. Read on to learn more about how Olivas is using his past spacewalking experience to pave the way for future spacewalkers.

Collins received a 2022 task order from NASA to develop a next-generation EMU to be lighter and more flexible than current spacesuits. These suits are also under consideration to become moonwalking outfits for the agency's Artemis program; the design team received a separate task order in July 2023 to modify the floating-style spacesuits for surface excursions.

Related: Watch next-generation lightweight spacesuit tested on Zero-G flight (photos, video)

Space.com: What sorts of experiences were you able to port from your time at NASA to Collins, to help with the development?

Danny Olivas: I've been an engineer for over 35 years. I've always been fascinated by space. It is very much like coming home and being part of an engineering family where we toil away to produce things that are safe, efficient and effective for our clients.

The intent is basically to "right design" this suit. It should be a suit that is intuitive to the astronauts. So I feel like what I'm bringing to the table is essentially helping the engineers understand what is important, where do things need to be placed, what are the things that you need to be considering. For example, in December of last year, we completed an exercise called the "concept of operations." That essentially is evaluating the suit in an environment like you're integrated onto a spacewalk and then coming back from doing a spacewalk.

I was able to bring to the table: when we do our prep and post, here's what we do. Here's what we did on orbit. Here's how we worked through this particular issue. Through that exercise, it provided feedback directly to the engineers on how to move forward. It's not a one-and-done thing. It's a collaboration: we've gone and taken a look at that, and we can do this or we can't do that.

Related: Shuttle astronaut Danny Olivas talks diversity on Earth (and space) in 'Virtual Astronaut' webcast

I feel like I'm bringing everything I can to this. This is likely going to be my last job, and I'm going to be on the field. I care about the astronauts: we're building the spacesuits for the people who got me five spacewalks, and did so in a safe manner. I owe it to them to give back to the engineering community: everything I can to help our team be successful and provide the safest, most efficient and most effective spacesuit for the next generation of explorers. That's the very least I owe for being given the opportunity.

Space.com: Can you step us through the development?

Olivas: Collins, with our partners ILC Dover Astrospace and Oceaneering, uses heritage or legacy from the original Hamilton Standard suit technology, which is something that's been ingrained in the company DNA since the Apollo missions. The A7L spacesuit was the first one that was formed, and that lineage runs all the way through the current EMU. It makes perfect sense that we are looking at extending it to the next-generation spacesuit for the International Space Station.

The intent is for NASA delivery and, at that point, we'll have a new suit on the space station that will not only be for the space station, but also will be applicable for other commercial destinations after ISS. That includes lunar landings as well; as you're familiar with, Axiom Space won the contract for the lunar suit and they're destined for their launch on (first moon landing) Artemis 3. We wish them the very best of luck. But we're also making a suit that's compatible with lunar applications. We look to be a continued competitor in the lunar space as well, because that is the future of exploration.

Space.com: What happened during the parabolic campaign?

Olivas: This campaign actually began over a year ago, when it was first decided that we would conduct a portion of the crew capability assessment in a microgravity environment. There's no 1 G equivalent that would give you confidence that the things that you would be doing could be applicable in microgravity. We looked at some of the more challenging things, such as airlock egress and ingress. Collins has built a mockup that was to scale.

Getting this new suit across the hatch was vital to demonstrate that you have the ability to do so, and that the geometry of the suit would in fact actually go through there. So that was a big risk, especially if you consider that you only have a parabola to be able to demonstrate that. Sometimes getting in and out of the airlock can take upwards of a couple of minutes, but you don't get that liberty if you're doing a zero-G flight.

Related: I flew weightlessly on a parabolic flight to see incredible student science soar

The answer to that is practice, practice, practice, practice, practice, practice. We were literally, on a weekly basis, writing the choreography of what we would do on each and every parabola. Every team member was there. We knew where we were going to be positioned. The whole idea was that you want to be out of the way when it's time to go to the task, when there's limited time to be able to do that. And it worked flawlessly.

I learned some things. Trying to stand on your feet on a footplate makes it a bit challenging, so for me, it was trying to learn how to operate in this (I would call it a bronco, if you will). Certainly there were oscillations. But we were still able to demonstrate that you could get inside a portable foot restraint within 20 seconds.

Space.com: Can you give a comparison about what it's like to be working in the current EMU compared with what Collins is going to be able to offer?

Olivas: From the outside, probably not a lot. You're going to see two arms, two legs, a helmet and a layer of white. The secret sauce is below that layer of white. There's no technology that's carried over from the EMU, but what has been carried over is all the lessons learned against this concept in doing this from day one. We bring all that experience and heritage with the suit to the development designers.

Now let's talk about the difference between the EMU and the next-generation suit. It is like night and day. I'm talking strictly right now about the PGS, the pressure garment system, the mobility aspect of it, the things that would lock you up in the suit on orbit. By the way, lockup issues, especially with shoulder joints, are part of the reason why we had an injury rate.

As we think of accessibility for the lunar application, we have intentionally gotten rid of a component called the waist bearing assembly, which gave the ability to essentially pivot around the waist. In exchange we have introduced hip joints, joints which work in unison to allow for walking. This gives us a lot more flexibility in the lower extremities. I think the increased range of motion and increased maneuverability are probably the biggest attributes that I've seen.

Space.com: Anything else you'd like to add?

Olivas: I would say, help me carry forward the message about what the suit is. As much as this machine is there to keep the human being alive in space, like a solo spacecraft, it's the contributions that make it right. It's all those engineers who go through their entire career in kind of an anonymous way, and you never really know what they do. But it just happens because of a human being behind it. That's the team I'm part of today, and I want to make sure that that becomes clear.

This interview was edited and condensed. This article was amended at 2:15 p.m. EST Feb. 14 to add information about other companies involved with the Collins spacesuit and to address a typo.

Go here to see the original:

This astronaut took 5 spacewalks. Now, he's helping make spacesuits for future ISS crews (exclusive) - Space.com

Why Casey Left Substack, Elon Musk and Drugs, and an A.I. Antibiotic Discovery – The New York Times


Casey is taking his newsletter Platformer off Substack, as criticism over the company's handling of pro-Nazi content grows. Then, The Wall Street Journal spoke with witnesses who said that Elon Musk had used LSD, cocaine, ecstasy and psychedelic mushrooms, worrying some directors and board members of his companies. And finally, how researchers found a new class of antibiotics with the help of an artificial intelligence algorithm used to win the board game Go.

Today's guests:

Kirsten Grind, enterprise reporter for The Wall Street Journal

Felix Wong, postdoctoral fellow at M.I.T. and co-founder of Integrated Biosciences


Hard Fork is hosted by Kevin Roose and Casey Newton and produced by Davis Land and Rachel Cohn. The show is edited by Jen Poyant. Engineering by Alyssa Moxley and original music by Dan Powell, Marion Lozano, Diane Wong and Pat McCusker. Fact-checking by Mary Mathis.

Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti and Jeffrey Miranda.

Read the rest here:

Why Casey Left Substack, Elon Musk and Drugs, and an A.I. Antibiotic Discovery - The New York Times