SpaceX launches U.S. military weather monitoring satellite – SpaceNews

COLORADO SPRINGS – A SpaceX Falcon 9 rocket on April 11 launched a U.S. Space Force weather monitoring satellite. The vehicle lifted off from Vandenberg Space Force Base, California, at 7:25 a.m. Pacific.

The USSF-62 mission flew to orbit the U.S. military's first Weather System Follow-on Microwave (WSF-M) satellite.

Made by Ball Aerospace, a company recently acquired by BAE Systems, WSF-M has a microwave imager instrument to collect weather data, including the measurement of ocean surface wind speed and direction, ice thickness, snow depth, soil moisture and local space weather.

The spacecraft will operate in a low polar orbit. The Space Force has ordered a second WSF-M satellite, projected to be delivered by 2028. These satellites are part of a broader effort to modernize the military's space-based environmental monitoring assets.

Data used for military planning

The data gathered by WSF-M will be provided to meteorologists in support of the generation of a wide variety of weather products necessary to conduct mission planning and operations globally every day, the U.S. Space Force said.

Just under eight minutes after liftoff and payload separation, the Falcon 9's first stage flew back to Earth and landed at Vandenberg's Landing Zone 4.

USSF-62 is the 37th launch performed by SpaceX so far in 2024 and its second national security space launch mission of the year. In February SpaceX launched the USSF-124 mission from Cape Canaveral Space Force Station, Florida, deploying six U.S. missile defense satellites for the Space Development Agency and the Missile Defense Agency.

Sandra Erwin writes about military space programs, policy, technology and the industry that supports this sector. She has covered the military, the Pentagon, Congress and the defense industry for nearly two decades as editor of NDIA's National Defense.


ChatGPT Use Linked to Memory Loss, Procrastination in Students – Futurism

You won't always have an AI chatbot in your pocket... right?

Brain Drain

New research has found a worrying link between reliance on ChatGPT and memory loss and tanking grades in students, in an early but fascinating exploration of the swift impact that large language models have had on education.

As detailed in a new study published in the International Journal of Educational Technology in Higher Education, the researchers surveyed hundreds of university students ranging from undergrads to doctoral candidates over two phases, using self-reported evaluations. They were spurred on by witnessing more and more of their own students turn to ChatGPT.

"My interest in this topic stemmed from the growing prevalence of generative artificial intelligence in academia and its potential impact on students," study co-author Muhammad Abhas at the National University of Computer and Emerging Sciences in Pakistan told PsyPost. "For the last year, I observed an increasing, uncritical, reliance on generative AI tools among my students for various assignments and projects I assigned."

In the first phase, the researchers collected responses from 165 students who used an eight-item scale to report their degree of ChatGPT reliance. The items ranged from "I use ChatGPT for my course assignments" to "ChatGPT is part of my campus life."

To validate those results, they also conducted a more rigorous "time-lagged" second phase, in which they expanded their scope to nearly 500 students, who were surveyed three times at one- to two-week intervals.

Perhaps unsurprisingly, the researchers found that students under a heavy academic workload and "time pressure" were much more likely to use ChatGPT. They observed that those who relied on ChatGPT reported more procrastination, more memory loss, and a drop in GPA. And the reason why is quite simple: the chatbot, however good or bad its responses are, is making schoolwork too easy.

"Since ChatGPT can quickly respond to any questions asked by a user," the researchers wrote in the study, "students who excessively use ChatGPT may reduce their cognitive efforts to complete their academic tasks, resulting in poor memory."

There were a few curveballs, however.

"Contrary to expectations, students who were more sensitive to rewards were less likely to use generative AI," Abbas told PsyPost, suggesting that those seeking good grades avoided using the chatbot out of fear of getting caught.

It's possible that the relationship between ChatGPT usage and its negative effects is bidirectional, notes PsyPost. A student may turn to the chatbot because they already have bad grades, and not the other way around. It's also worth considering that the data was self-reported, which comes with its own biases.

That's not to exonerate AI, though. Based on these findings, we should be wary about ChatGPT's role in education.

"The average person should recognize the dark side of excessive generative AI usage," Abbas told Psypost. "While these tools offer convenience, they can also lead to negative consequences such as procrastination, memory loss, and compromised academic performance."

More on AI: Google's AI Search Caught Pushing Users to Download Malware

Read the original:

ChatGPT Use Linked to Memory Loss, Procrastination in Students - Futurism

Saving hours of work with AI: How ChatGPT became my virtual assistant for a data project – ZDNet


There's certainly been a lot of golly-wow, gee-whiz press about generative artificial intelligence (AI) over the past year or so. I'm certainly guilty of producing some of it myself. But tools like ChatGPT are also just that: tools. They can be used to help out with projects just like other productivity software.

Today, I'll walk you through a quick project where ChatGPT saved me a few hours of grunt work. While you're unlikely to need to do the same project, I'll share my thinking for the prompts, which may inspire you to use ChatGPT as a workhorse tool for some of your projects.

Also: 4 generative AI tools your enterprise can leverage to boost productivity

This is just the sort of project I would have assigned to a human assistant, back when I had human assistants. I'm telling you this fact because I structured the assignments for ChatGPT similarly to how I would have for someone working for me, back when I was sitting in a cubicle as a managerial cog of a giant corporation.

In a month or so, I'll post what I like to call a "stunt article." Stunt articles are projects I come up with that are fun and that I know readers will be interested in. The article I'm working on is a rundown of how much computer gear I can buy from Temu for under $100 total. I came in at $99.77.

Putting this article together involved looking on the Temu site for items to spotlight. For example, I found an iPad keyboard and mouse that cost about $6.

Also: Is Temu legit? What to know before you place an order

To stay under my $100 budget, I wanted to add all the Temu links to a spreadsheet, find each price, and then move things around until I got the exact total budget I wanted to spend.

The challenge was converting the Temu links into something useful. That's where ChatGPT came in.

The first thing I did was gather all my links. For each product, I copied the link from Temu and pasted it into a Notion page. When pasting a URL, Notion gives you the option to create bookmark blocks that not only contain links but also contain, crucially, product names. Here's a snapshot of that page:

As you can see, I've started selecting the blocks. Once you select all the blocks, you can copy them. I just pasted the entire set into a text editor, which looked like this:

The page looks ugly, but the result is useful.

Let's take a look at one of the data blocks. I switched my editor out of dark mode so it's easier for you to see the data elements in the block:

There are three key elements. The gold text shows the name of the product, surrounded by square brackets. The green text is the base URL of the product, surrounded by parentheses. There's a question mark that separates the main page URL from all the random tracking data passed to the Temu page. I just wanted the main URL. The purple sections highlight the delimiters -- this is the data we're going to feed into ChatGPT.

I first fed ChatGPT this prompt:

Accept the following data and await further instructions.

Then I copied all the information from the text editor and pasted it into ChatGPT. At this point, ChatGPT knew to wait for more details.

The next step is where the meat of the project took place. I wanted ChatGPT to pull out the titles and the links, and leave the rest behind. Here's that prompt:

The data above consists of a series of blocks of data. At the beginning of each block is a section within [] brackets. For each block, designate this as TITLE.

Following the [] brackets is an open paren (followed by a web URL). For each block, extract that URL, but dispose of everything following the question mark, and also dispose of the question mark. Most URLs will then end in .html. We will designate this as URL.

For each block, display the TITLE followed by a carriage return, followed by the URL, followed by two newlines.

This process accomplished two things. It allowed me to name the data, so I could refer to it later. The process also allowed me to test whether ChatGPT understood the assignment.

Also: How to use ChatGPT

ChatGPT did the assignment correctly but stopped about two-thirds through when its buffer ran out. I told the bot to continue and got the rest of the data.

Doing this process by hand would have involved lots of annoying cutting and pasting. ChatGPT did the work in less than a minute.
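For readers who would rather script this extraction step instead of (or alongside) delegating it to ChatGPT, here is a minimal Python sketch under the assumptions described above: each pasted block looks like [Product title](https://...main-url...?tracking-data), and the input file name is hypothetical.

import re

# Hypothetical input file: the Notion blocks pasted into a text editor,
# each block shaped like [Product title](https://...?tracking-parameters)
with open("temu_links.txt", encoding="utf-8") as f:
    raw = f.read()

# Capture the bracketed title and the URL up to (but not including) the first
# question mark, discarding the tracking data that follows it
pattern = re.compile(r"\[([^\]]+)\]\((https?://[^?\s\)]+)")

for title, url in pattern.findall(raw):
    print(title.strip())   # TITLE
    print(url)             # URL, ending in .html for most products
    print()                # blank line between blocks, as in the ChatGPT prompt

Either route produces the same TITLE and URL pairs; the point of the chatbot approach is simply that it handles the grunt work without any code at all.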

For my project, Temu's titles are just too much. Instead of:

10 Inch LCD Writing Tablet, Electronis Memo With Leather Protective Case, Electronic Drawing Board For Digital Handwriting Pad Doodle Board, Gifts For

I wanted something more like:

LCD writing tablet with case

I gave this assignment to ChatGPT as well. I reminded the tool that it had previously parsed and identified the data. I find that reminding ChatGPT about a previous step helps it more reliably incorporate that step into subsequent steps. Then I told it to give me titles. Here's that prompt:

You just created a list with TITLE and URL. Do you remember? For the above items, please summarize the TITLE items in 4-6 words each. Only capitalize proper words and the first word. Give it back to me in a bullet list.

I got back a list like this, but for all 26 items:

My goal was to copy and paste this list of clickable links into Excel so I could use column math to play around with the items I planned to order, adding and removing items until I got to my $100 budget. I wanted the names clickable in the spreadsheet because it would be much easier to manage and jump back and forth between Temu and my project spreadsheet.

So, my final ChatGPT task was to turn the list above into a set of clickable links. Again, I started by reminding the tool of the work it had completed. Then I told it to create a list with links:

Do you see the bulleted list you just created? That is a list of summarized titles.

Okay, make the same list again, but turn each summarized title into a live web link with its corresponding URL.

And that was that. I got all the links I needed and ChatGPT did all the grunt work. I pasted the results into my spreadsheet, chose the products, and placed the order.
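If you wanted to skip that last hand-off to the chatbot, the same clickable-link trick can be done with Excel's built-in HYPERLINK function. The sketch below is illustrative; the two (summary, URL) pairs are made-up placeholders rather than actual Temu products.

# Hypothetical (summary, URL) pairs carried over from the earlier extraction step
items = [
    ("LCD writing tablet with case", "https://www.temu.com/example-product-1.html"),
    ("iPad keyboard and mouse combo", "https://www.temu.com/example-product-2.html"),
]

# Excel's HYPERLINK(link_location, friendly_name) renders as a clickable cell,
# so each printed line can be pasted straight into a spreadsheet column
for summary, url in items:
    safe_summary = summary.replace('"', '""')  # escape quotes for the formula
    print(f'=HYPERLINK("{url}", "{safe_summary}")')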

Also: 6 ways ChatGPT can make your everyday life easier

This is the final spreadsheet. There were more products when I started the process, but I added and removed them from the REMAINING column until I got the budget I was aiming for:

This was a project I could have done myself. But it would have required a ton of cutting and pasting, and a reasonable amount of extra thought to summarize all the product titles. It would have taken me two or three hours of grunt work and probably added to my wrist pain.

But by thinking this work through as an assignment that could be delegated, the entire ChatGPT experience took me less than 10 minutes. It probably took me less time to use ChatGPT to do all that grunt work and write this article than it would have taken me to do all that cutting, pasting, and summarizing.

Also: Thanks to my 5 favorite AI tools, I'm working smarter now

This sort of project isn't fancy and it isn't sexy. But it saved me a few hours of work I would have found tedious and unpleasant. Next time you have a data-parsing project, consider using ChatGPT.

Oh, and stay tuned. As soon as Temu sends me their haul, I'll post the detailed article about how much tech gear you can get for under $100. It'll be fun. See you there.

You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter, and follow me on Twitter/X at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.


There Might Be No ChatGPT-like Apple Chatbot in iOS 18 – The Mac Observer

The recent months in the tech scene have been all about artificial intelligence and its impact, but one company that has been late to the party is Apple. Apple first hinted at in-house AI development during a recent earnings call, which followed earlier reports of the company reaching out to major publishers to use their data to train its AI dataset, and of it canceling the Apple Car project and shifting that team to AI. However, according to Bloomberg's Mark Gurman, Apple might not debut a ChatGPT-like chatbot at all. Instead, the company is exploring potential partnerships with established tech giants such as China's Baidu, OpenAI, and Google.

That said, Apple might instead focus on licensing already-established chatbots like Google's Gemini (formerly Bard) or OpenAI's ChatGPT. The company might delay all plans to release an Apple chatbot, internally dubbed Ajax GPT.

Nevertheless, Mark Gurman believes AI will remain in the show's spotlight at the upcoming Worldwide Developers Conference (WWDC), slated for June 10-14, 2024, where we expect to see iOS 18, iPadOS 18, watchOS 11, tvOS 18, macOS 15, and visionOS 2. Although he doesn't delve into specifics, he mentions the company's plans to unveil new AI features, which could serve as the backbone of iOS 18. This suggests that even if Apple doesn't intend to bring a native AI chatbot to its devices, we might see a popular chatbot pre-installed on the phones or supported natively by the device. For reference, London-based consumer tech firm Nothing recently partnered with the Perplexity AI search engine to power up its latest release, Phone (2a), and Apple might have similar plans, but with generative AI giants.

CEO Tim Cook recently told investors that the company will disclose its AI plans to the public later this year. Despite Apple's overall reticence on the topic, Cook has been notably vocal about the potential of AI, particularly generative AI.

More importantly, according to previous reports, he has indicated that generative AI will improve Siri's ability to respond to more complex queries and enable the Messages app to complete sentences automatically. Furthermore, other Apple apps such as Apple Music, Shortcuts, Pages, Numbers, and Keynote are expected to integrate generative AI functionality.



Microsoft’s AI Access Principles: Our commitments to promote innovation and competition in the new AI economy … – Microsoft

As we enter a new era based on artificial intelligence, we believe this is the best time to articulate principles that will govern how we will operate our AI datacenter infrastructure and other important AI assets around the world. We are announcing and publishing these principles, our AI Access Principles, today at the Mobile World Congress in Barcelona, in part to address Microsoft's growing role and responsibility as an AI innovator and a market leader.

Like other general-purpose technologies in the past, AI is creating a new sector of the economy. This new AI economy is creating not just new opportunities for existing enterprises, but new companies and entirely new business categories. The principles we're announcing today commit Microsoft to bigger investments, more business partnerships, and broader programs to promote innovation and competition than any prior initiative in the company's 49-year history. By publishing these principles, we are committing ourselves to providing the broad technology access needed to empower organizations and individuals around the world to develop and use AI in ways that will serve the public good.

These new principles help put in context the new investments and programs we've announced and launched across Europe over the past two weeks, including $5.6 billion in new AI datacenter investments and new AI skilling programs that will reach more than a million people. We've also launched new public-private partnerships to advance responsible AI adoption and protect cybersecurity, new AI technology services to support network operators, and a new partnership with France's leading AI company, Mistral AI. As much as anything, these investments and programs make clear how we will put these principles into practice, not just in Europe, but in the United States and around the world.

These principles also reflect the responsible and important role we must play as a company. They build in part on the lessons we have learned from our experiences with previous technology developments. In 2006, after more than 15 years of controversies and litigation relating to Microsoft Windows and the company's market position in the PC operating system market, we published a set of Windows Principles. Their purpose was to govern the company's practices in a manner that would both promote continued software innovation and foster free and open competition.

I'll never forget the reaction of an FTC Commissioner who came up to me after I concluded the speech I gave in Washington, D.C. to launch these principles. He said, "If you had done this 10 years ago, I think you all probably would have avoided a lot of problems."

Close to two decades have gone by since that moment, and both the world of technology and the AI era we are entering are radically different. Then, Windows was the computing platform of the moment. Today, mobile platforms are the most popular gateway to consumers, and exponential advances in generative AI are driving a tectonic shift in digital markets and beyond. But there is wisdom in that FTC Commissioner's reaction that has stood the test of time: As a leading IT company, we do our best work when we govern our business in a principled manner that provides broad opportunities for others.

The new AI era requires enormous computational power to train, build, and deploy the most advanced AI models. Historically, such power could only be found in a handful of government-funded national laboratories and research institutions, and it was available only to a select few. But the advent of the public cloud has changed that. Much like steel did for skyscrapers, the public cloud enables generative AI.

Today, datacenters around the world house millions of servers and make vast computing power broadly available to organizations large and small and even to individuals as well. Already, many thousands of AI developers in startups, enterprises, government agencies, research labs, and non-profit organizations around the world are using the technology in these datacenters to create new AI foundation models and applications.

These datacenters are owned and operated by cloud providers, which include larger established firms such as Microsoft, Amazon, Google, Oracle, and IBM, as well as large firms from China like Alibaba, Huawei, Tencent, and Baidu. There are also smaller specialized entrants such as Coreweave, OVH, Aruba, and Denvr Dataworks Corporation, just to mention a few. And government-funded computing centers clearly will play a role as well, including with support for academic research. But building and operating those datacenters is expensive. And the semiconductors or graphical processing units (GPUs) that are essential to power the servers for AI workloads remain costly and in short supply. Although governments and companies are working hard to fill the gap, doing so will take some time.

With this reality in mind, regulators around the world are asking important questions about who can compete in the AI era. Will it create new opportunities and lead to the emergence of new companies? Or will it simply reinforce existing positions and leaders in digital markets?

I am optimistic that the changes driven by the new AI era will extend into the technology industry itself. After all, how many readers of this paragraph had, two years ago, even heard of OpenAI and many other new AI entrants like Anthropic, Cohere, Aleph Alpha, and Mistral AI? In addition, Microsoft, along with other large technology firms, is dynamically pivoting to meet the AI era. The competitive pressure is fierce, and the pace of innovation is dizzying. As a leading cloud provider and an innovator in AI models ourselves and through our partnership with OpenAI, we are mindful of our role and responsibilities in the evolution of this AI era.

Throughout the past decade, we've typically found it helpful to define the tenets (in effect, the goals) that guide our thinking and drive our actions as we navigate a complex topic. We then apply these tenets by articulating the principles we will apply as we make the decisions needed to govern the development and use of technology. I share below the new tenets on which we are basing our thinking on this topic, followed by our 11 AI Access Principles.

Fundamentally, there are five tenets that define Microsoft's goals as we focus on AI access, including our role as an infrastructure and platforms provider.

First, we have a responsibility to enable innovation and foster competition. We believe that AI is a foundational technology with a transformative capability to help solve societal problems, improve human productivity, and make companies and countries more competitive. As with prior general-purpose technologies, from the printing press to electricity, railroads, and the internet itself, the AI era is not based on a single technology component or advance. We have a responsibility to help spur innovation and competition across the new AI economy that is rapidly emerging.

AI is a dynamic field, with many active participants based on a technology stack that starts with electricity and connectivity and the world's most advanced semiconductor chips at the base. It then runs up through the compute power of the public cloud, public and proprietary data for training foundation models, the foundation models themselves, tooling to manage and orchestrate the models, and AI-powered software applications. In short, the success of an AI-based economy requires the success of many different participants across numerous interconnected markets.

You can see here the technology stack that defines the new AI era. While one company currently produces and supplies most of the GPUs being used for AI today, as one moves incrementally up the stack, the number of participants expands. And each layer enables and facilitates innovation and competition in the layers above. In multiple ways, to succeed, participants at every layer of the technology stack need to move forward together. This means, for Microsoft, that we need to stay focused not just on our own success, but on enabling the success of others.

Second, our responsibilities begin by meeting our obligations under the law. While the principles we are launching today represent a self-regulatory initiative, they in no way are meant to suggest a lack of respect for the rule of law or the role of regulators. We fully appreciate that legislators, competition authorities, regulators, enforcers, and judges will continue to evolve the competition rules and other laws and regulations relevant to AI. That's the way it should be.

Technology laws and rules are changing rapidly. The European Union is implementing its Digital Markets Act and completing its AI Act, while the United States is moving quickly with a new AI Executive Order. Similar laws and initiatives are moving forward in the United Kingdom, Canada, Japan, India, and many other countries. We recognize that we, like all participants in this new AI market, have a responsibility to live up to our obligations under the law, to engage constructively with regulators when obligations are not yet clear, and to contribute to the public dialogue around policy. We take these obligations seriously.

Third, we need to advance a broad array of AI partnerships. Today, only one company is vertically integrated in a manner that includes every AI layer from chips to a thriving mobile app store. As noted at a recent meeting of tech leaders and government officials, "The rest of us, Microsoft included, live in the land of partnerships."

People today are benefiting from the AI advances that the partnership between OpenAI and Microsoft has created. Since 2019, Microsoft has collaborated with OpenAI on the research and development of OpenAI's generative AI models, developing the unique supercomputers needed to train those models. The ground-breaking technology ushered in by our partnership has unleashed a groundswell of innovation across the industry. And over the past five years, OpenAI has become a significant new competitor in the technology industry. It has expanded its focus, commercializing its technologies with the launch of ChatGPT and the GPT Store and providing its models for commercial use by third-party developers.

Innovation and competition will require an extensive array of similar support for proprietary and open-source AI models, large and small, including the type of partnership we are announcing today with Mistral AI, the leading open-source AI developer based in France. We have also invested in a broad range of other diverse generative AI startups. In some instances, those investments have provided seed funding to finance day-to-day operations. In other instances, those investments have been more focused on paying the expenses for the use of the computational infrastructure needed to train and deploy generative AI models and applications. We are committed to partnering well with market participants around the world and in ways that will accelerate local AI innovations.

Fourth, our commitment to partnership extends to customers, communities, and countries. More than for prior generations of digital technology, our investments in AI and datacenters must sustain the competitive strengths of customers and national economies and address broad societal needs. This has been at the core of the multi-billion-dollar investments we recently have announced in Australia, the United Kingdom, Germany, and Spain. We need constantly to be mindful of the community needs AI advances must support, and we must pursue a spirit of partnership not only with others in our industry, but with customers, governments, and civil society. We are building the infrastructure that will support the AI economy, and we need the opportunities provided by that infrastructure to be widely available.

Fifth, we need to be proactive and constructive, as a matter of process, in working with governments and the IT industry in the design and release of new versions of AI infrastructure and platforms. We believe it is critical for companies and regulators to engage in open dialogue, with a goal of resolving issues as quickly as possible, ideally while a new product is still under development. For our part, we understand that Microsoft must respond fully and cooperatively to regulatory inquiries so that we can have an informed discussion with regulators about the virtues of various approaches. We need to be good listeners and constructive problem solvers in sorting through issues of concern and identifying practical steps and solutions before a new product is completed and launched.

The foregoing tenets come together to shape the new principles we are announcing below. It's important to note that, given the safety, security, privacy, and other issues relating to responsible AI, we need to apply all these principles subject to objective and effective standards to comply with our legal obligations and protect the public. These are discussed further below. Subject to these requirements, we are committed to the following 11 principles:

We are committed to enabling AI innovation and fostering competition by making our cloud computing and AI infrastructure, platforms, tools, and services broadly available and accessible to software developers around the world. We want Microsoft Azure to be the best place for developers to train, build, and deploy AI models and to use those models safely and securely in applications and solutions. This means:

Today, our partnership with OpenAI is supporting the training of the next generation of OpenAI models and increasingly enabling customers to access and use these models and Microsoft's Copilot applications in local datacenters. At the same time, we are committed to supporting other developers in training and deploying proprietary and open-source AI models, both large and small.

Today's important announcement with Mistral AI launches a new generation of Microsoft's support for technology development in Europe. It enables Mistral AI to accelerate the development and deployment of its next generation Large Language Models (LLMs) with access to Azure's cutting-edge AI infrastructure. It also makes the deployment of Mistral AI's premium models available to customers through our Models-as-a-Service (MaaS) offering on Microsoft Azure, which model developers can use to publish and monetize their AI models. By providing a unified platform for AI model management, we aim to lower the barriers and costs of AI model development around the world for both open source and proprietary development. In addition to Mistral AI, this service is already hosting more than 1,600 open source and proprietary models from companies and organizations such as Meta, Nvidia, Deci, and Hugging Face, with more models coming soon from Cohere and G42.

We are committed to expanding this type of support for additional models in the months and years ahead.

As reflected in Microsoft's Copilots and OpenAI's ChatGPT itself, the world is rapidly benefiting from the use of a new generation of software applications that access and use the power of AI models. But our applications will represent just a small percentage of the AI-powered applications the world will need and create. For this reason, we're committed to ongoing and innovative steps to make the AI models we host and the development tools we create broadly available to AI software applications developers around the world in ways that are consistent with responsible AI principles.

This includes the Azure OpenAI service, which enables software developers who work at start-ups, established IT companies, and in-house IT departments to build software applications that call on and make use of OpenAI's most powerful models. It extends through Models as a Service to the use of other open source and proprietary AI models from other companies, including Mistral AI, Meta, and others.

We are also committed to empowering developers to build customized AI solutions by enabling them to fine-tune existing models based on their own unique data sets and for their specific needs and scenarios. With Azure Machine Learning, developers can easily access state-of-the-art pre-trained models and customize them with their own data and parameters, using a simple drag-and-drop interface or code-based notebooks. This helps companies, governments, and non-profits create AI applications that help advance their goals and solve their challenges, such as improving customer service, enhancing public safety, or promoting social good. This is rapidly democratizing AI and fostering a culture of even broader innovation and collaboration among developers.

We are also providing developers with tools and repositories on GitHub that enable them to create, share, and learn from AI solutions. GitHub is the world's largest and most trusted platform for software development, hosting over 100 million repositories and supporting more than 40 million developers. We are committed to supporting the AI developer community by making our AI tools and resources available on GitHub, giving developers access to the latest innovations and best practices in AI development, as well as the opportunity to collaborate with other developers and contribute to the open source community. As one example, just last week we made available an open automation framework to help red team generative AI systems.

Ensure choice and fairness across the AI economy

We understand that AI innovation and competition require choice and fair dealing. We are committed to providing organizations, AI developers, and data scientists with the flexibility to choose which AI models to use wherever they are building solutions. For developers who choose to use Microsoft Azure, we want to make sure they are confident we will not tilt the playing field to our advantage. This means:

The AI models that we host on Azure, including the Microsoft Azure OpenAI API service, are all accessible via public APIs. Microsoft publishes documentation on its website explaining how developers can call these APIs and use the underlying models. This enables any application, whether it is built and deployed on Azure or other private and public clouds, to call these APIs and access the underlying models.
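To illustrate what that public-API access can look like in practice, here is a minimal sketch using the openai Python package's Azure client; the endpoint, key, deployment name, and API version shown are placeholders, and the exact parameters should be checked against Microsoft's published Azure OpenAI documentation.

import os
from openai import AzureOpenAI  # the openai package's client for Azure-hosted deployments

# Placeholder configuration; real values come from your own Azure resource
client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # example version string; check the current docs
)

# "model" names the deployment you created for a hosted model
response = client.chat.completions.create(
    model="my-gpt-deployment",
    messages=[{"role": "user", "content": "Summarize what Models-as-a-Service means."}],
)

print(response.choices[0].message.content)

Because this is an ordinary HTTPS API behind the scenes, the same call can be made from an application running on Azure, on another cloud, or on-premises.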

Network operators are playing a vital role in accelerating the AI transformation of customers around the world, including for many national and regional governments. This is one reason we are supporting a common public API through the Open Gateway initiative driven by the GSM Association, which advances innovation in the mobile ecosystem. The initiative is aligning all operators with a common API for exposing advanced capabilities provided by their networks, including authentication, location, and quality of service. It's an indispensable step forward in enabling network operators to offer their advanced capabilities to a new generation of AI-enabled software developers. We have believed in the potential of this initiative since its inception at GSMA, and we have partnered with operators around the world to help bring it to life.

Today at Mobile World Congress, we are launching the Public Preview of Azure Programmable Connectivity (APC). This is a first-class service in Azure, completely integrated with the rest of our services, that seamlessly provides access to Open Gateway for developers. It means software developers can use the capabilities provided by the operator network directly from Azure, like any other service, without requiring specific work for each operator.

We are committed to maintaining Microsoft Azure as an open cloud platform, much as Windows has been for decades and continues to be. That means in part ensuring that developers can choose how they want to distribute and sell their AI software to customers for deployment and use on Microsoft Azure. We provide a marketplace on Azure through which developers can list and sell their AI software to Azure customers under a variety of supported business models. Developers who choose to use the Azure Marketplace are also free to decide whether to use the transaction capabilities offered by the marketplace (at a modest fee) or whether to sell licenses to customers outside of the marketplace (at no fee). And, of course, developers remain free to sell and distribute AI software to Azure customers however they choose, and those customers can then upload, deploy, and use that software on Azure.

We believe that trust is central to the success of Microsoft Azure. We build this trust by serving the interests of AI developers and customers who choose Microsoft Azure to train, build, and deploy foundation models. In practice, this also means that we avoid using any non-public information or data from the training, building, deployment, or use of developers' AI models to compete against them.

We know that customers can and do use multiple cloud providers to meet their AI and other computing needs. And we understand that the data our customers store on Microsoft Azure is their data. So, we are committed to enabling customers to easily export and transfer their data if they choose to switch to another cloud provider. We recognize that different countries are considering or have enacted laws limiting the extent to which we can pass along the costs of such export and transfer. We will comply with those laws.

We recognize that new AI technologies raise an extraordinary array of critical questions. These involve important societal issues such as privacy, safety, security, the protection of children, and the safeguarding of elections from deepfake manipulation, to name just a few. These and other issues require that tech companies create guardrails for their AI services, adapt to new legal and regulatory requirements, and work proactively in multistakeholder efforts to meet broad societal needs. We're committed to fulfilling these responsibilities, including through the following priorities:

We are committed to safeguarding the physical security of our AI datacenters, as they host the infrastructure and data that power AI solutions. We follow strict security protocols and standards to ensure that our datacenters are protected from unauthorized access, theft, vandalism, fire, or natural disasters. We monitor and audit our datacenters to detect and prevent any potential threats or breaches. Our datacenter staff are trained and certified in security best practices and are required to adhere to a code of conduct that respects the privacy and confidentiality of our customers' data.

We are also committed to safeguarding the cybersecurity of our AI models and applications, as they process and generate sensitive information for our customers and society. We use state-of-the-art encryption, authentication, and authorization mechanisms to protect data in transit and at rest, as well as the integrity and confidentiality of AI models and applications. We also use AI to enhance our cybersecurity capabilities, such as detecting and mitigating cyberattacks, identifying and resolving vulnerabilities, and improving our security posture and resilience.

We're building on these efforts with our new Secure Future Initiative (SFI). This brings together every part of Microsoft and has three pillars. It focuses on AI-based cyber defenses, advances in fundamental software engineering, and advocacy for stronger application of international norms to protect civilians from cyber threats.

As AI becomes more pervasive and impactful, we recognize the need to ensure that our technology is developed and deployed in a way that is ethical, trustworthy, and aligned with human values. That is why we have created the Microsoft Responsible AI Standard, a comprehensive framework that guides our teams on how to build and use AI responsibly.

The standard covers six key dimensions of responsible AI: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability. For each dimension, we define what these values mean and how to achieve our goals in practice. We also provide tools, processes, and best practices to help our teams implement the standard throughout the AI lifecycle, from design and development to deployment and monitoring. The approach that the standard establishes is not static, but instead evolves and improves based on the latest research, feedback, and learnings.

We recognize that countries need more than advanced AI chips and datacenters to sustain their competitive edge and unlock economic growth. AI is changing jobs and the way people work, requiring that people master new skills to advance their careers. That's why we're committed to marrying AI infrastructure capacity with AI skilling capability, combining the two to advance innovation.

In just the past few months, we've combined billions of dollars of infrastructure investments with new programs to bring AI skills to millions of people in countries like Australia, the United Kingdom, Germany, and Spain. We're launching training programs focused on building AI fluency, developing AI technical skills, supporting AI business transformation, and promoting safe and responsible AI development. Our work includes the first Professional Certificate on Generative AI.

Typically, our skilling programs involve a professional network of Microsoft certified training services partners and multiple industry partners, universities, and nonprofit organizations. Increasingly, we find that major employers want to launch new AI skilling programs for their employees, and we are working with them actively to provide curricular materials and support these efforts.

One of our most recent and important partnerships is with the AFL-CIO, the largest federation of labor unions in the United States. It's the first of its kind between a labor organization and a technology company to focus on AI and will deliver on three goals: (1) sharing in-depth information with labor leaders and workers on AI technology trends; (2) incorporating worker perspectives and expertise in the development of AI technology; and (3) helping shape public policy that supports the technology skills and needs of frontline workers.

We've learned that government institutions and associations can typically bring AI skilling programs to scale. At the national and regional levels, government employment and educational agencies have the personnel, programs, and expertise to reach hundreds of thousands or even millions of people. We're committed to working with and supporting these efforts.

Through these and other initiatives, we aim to democratize access to AI education and enable everyone to harness the potential of AI for their own lives and careers.

In 2020, Microsoft set ambitious goals to be carbon negative, water positive and zero waste by 2030. We recognize that our datacenters play a key part in achieving these goals. Being responsible and sustainable by design also has led us to take a first-mover approach, making long-term investments to bring as much or more carbon-free electricity than we will consume onto the grids where we build datacenters and operate.

We also apply a holistic approach to the Scope 3 emissions relating to our investments in AI infrastructure, from the construction of our datacenters to engaging our supply chain. This includes supporting innovation to reduce the embodied carbon in our supply chain and advancing our water positive and zero waste goals throughout our operations.

At the same time, we recognize that AI can be a vital tool to help accelerate the deployment of sustainability solutions from the discovery of new materials to better predicting and responding to extreme weather events. This is why we continue to partner with others to use AI to help advance breakthroughs that previously would have taken decades, underscoring the important role AI technology can play in addressing some of our most critical challenges to realizing a more sustainable future.



Cloud Native Efficient Computing is the Way in 2024 and Beyond – ServeTheHome

Today we wanted to discuss cloud native and efficient computing. Many have different names for this, but it is going to be the second most important computing trend in 2024, behind the AI boom. Modern performance cores have gotten so big and fast that there is a new trend in the data center: using smaller and more efficient cores. Over the next few months, we are going to be doing a series on this trend.

As a quick note: We get CPUs from all of the major silicon players. Also, since we have tested these CPUs in Supermicro systems, we are going to say that they are all sponsors of this, but it is our own idea and content.

Let us get to the basics. Once AMD re-entered the server market (and desktop) with a competitive performance core in 2017, performance per core and core counts exploded almost as fast as pre-AI boom slideware on the deluge of data. As a result, cores got bigger, cache sizes expanded, and chips got larger. Each generation of chips got faster.

Soon, folks figured out a dirty secret in the server industry: faster per core performance is good if you license software by core, but there are a wide variety of applications that need cores, but not fast ones. Today's smaller efficient cores tend to be on the order of performance of a mainstream Skylake/ Cascade Lake Xeon from 2017-2021, yet they can be packed more densely into systems.

Consider this illustrative scenario that is far too common in the industry:

Here, we have several apps built by developers over the years. Each needs its own VM, and each VM is generally between 2-8 cores. These are applications that need to be online 24/7 but are not ones that need massive amounts of compute. Good examples are websites that serve a specific line of business function but do not have hundreds of thousands of visitors. Also, these tend to be workloads that are already in cloud instances, VMs, or containers. As the industry has started to move away from hypervisors with per-core licensing or per-socket license constraints, scaling up to bigger, faster cores that are going underutilized makes little sense.

As a result, the industry realized it needed lower cost to produce chips that are chasing density instead of per-core performance. An awesome way to think about this is to think about trying to fit the maximum number of instances for those small line-of-business applications developed over the years that are sitting in 2-8 core VMs into as few servers as possible. There are other applications like this as well that are commonly shown such as nginx web servers, redis servers, and so forth. Another great example is that some online game instances require one core per user in the data center, even if that core is relatively meager. Sometimes just having more cores is, well, more cores = more better.

Once the constraints of legacy hypervisor per core/ per socket licensing are removed, then the question becomes how to fit as many cores on a package, and then how dense those packages can be deployed in a rack. One other trend we are seeing is not just more cores, but also lower clock speed cores. CPUs that have a maximum frequency in the 2-3GHz range today tend to be considerably more power efficient than those with frequencies of P-core only servers in the 4GHz+ range and desktop CPUs now pushing well over 5GHz. This is the voltage frequency curve at work. If your goal is to have more cores, but do not need maximum per-core performance, then lowering the performance per core by 25% but decreasing the power by 40% or more, means that all of those applications are being serviced with less power.

Less power is important for a number of reasons. Today, the biggest reason is the AI infrastructure build-out. If you, for example, saw our 49ers Levi's Stadium tour video, that is a perfect example of a data center that is not going to expand in footprint and can only expand cooling so much. It also is a prime example of a location that needs AI servers for sports analytics.

That type of constraint where the same traditional work needs to get done, in a data center footprint that is not changing, while adding more high-power AI servers is a key reason cloud-native compute is moving beyond the cloud. Transitioning applications running on 2017-2021 era Xeon servers to modern cloud-native cores with approximately the same performance per core can mean 4-5x the density per system at ~2x the power consumption. As companies release new generations of CPUs, the density figures are increasing at a steep rate.
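To put rough numbers on that consolidation claim, here is a small back-of-the-envelope sketch; the core counts, VM sizes, and power figures below are illustrative assumptions, not measured results.

# Illustrative assumptions, not measured figures
old_cores_per_server = 2 * 16   # dual 16-core Xeon server, 2017-2021 era
new_cores_per_server = 128      # modern cloud-native part, e.g. a 128-core single socket
vm_size_cores = 4               # typical small line-of-business VM

old_vms_per_server = old_cores_per_server // vm_size_cores   # 8 VMs
new_vms_per_server = new_cores_per_server // vm_size_cores   # 32 VMs
consolidation = new_vms_per_server / old_vms_per_server       # 4x density per system

old_server_watts, new_server_watts = 300, 600   # rough whole-server power draw
watts_per_vm_old = old_server_watts / old_vms_per_server
watts_per_vm_new = new_server_watts / new_vms_per_server

print(f"{consolidation:.0f}x the VMs per server")
print(f"{watts_per_vm_old:.0f} W per VM before vs {watts_per_vm_new:.0f} W per VM after")

Under those assumptions you get roughly 4x the VM density at about twice the per-server power, which works out to about half the power per VM, and that is the heart of the cloud-native consolidation argument.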

We showed this at play with the same era of servers and modern P-core servers in our 5th Gen Intel Xeon Processors Emerald Rapids review.

We also covered the consolidation just between P-core generations in the accompanying video. We are going to have an article with the current AMD EPYC Bergamo parts very soon in a similar vein.

If you are not familiar with the current players in the cloud-native CPU market that you can buy for your data centers/ colocation, here is a quick run-down.

The AMD EPYC Bergamo was AMD's first foray into cloud-native compute. Onboard, it has up to 128 cores/ 256 threads and is the densest publicly available x86 server CPU currently available.

AMD removed L3 cache from its P-core design, lowered the maximum all core frequencies to decrease the overall power, and did extra work to decrease the core size. The result is the same Zen 4 core IP, with less L3 cache and less die area. Less die area means more can be packaged together onto a CPU.

Some stop with Bergamo, but AMD has another Zen 4c chip in the market. The AMD EPYC 8004 series, codenamed Siena, also uses Zen 4c but with half the memory channels, less PCIe Gen5 I/O, and single-socket only operation.

Some organizations that are upgrading from popular dual 16 core Xeon servers can move to single socket 64-core Siena platforms and stay within a similar power budget per U while doubling the core count per U using 1U servers.

AMD markets Siena as the edge/ embedded part, but we need to recognize this is in the vein of current gen cloud native processors.

Arm has been making a huge splash in the space. The only Arm server CPU vendor out there for those buying their own servers is Ampere, led by many of the former Intel Xeon team.

Ampere has two main chips, the Ampere Altra (up to 80 cores) and Altra Max (up to 128 cores). These use the same socket, and so most servers can support either. The Max just came out later to support up to 128 cores.

Here, the focus on cloud-native compute is even more pronounced. Instead of having beefy floating point compute capabilities, Ampere is using Arm Neoverse N1 cores that focus on low power integer performance. It turns out, a huge number of workloads like serving web pages are mostly integer performance driven. While these may not be the cores if you wanted to build a Linpack Top500 supercomputer, they are great for web servers. Since the cloud-native compute idea was to build cores and servers that can run workloads with little to no compromise, but at lower power, that is what Arm and Ampere built.

Next up will be the AmpereOne. This is already shipping, but we have yet to get one in the lab.

AmpereOne uses a custom designed core for up to 192 cores per socket.

Assuming you could buy a server with AmpereOne, you would get more core density than an AMD EPYC Bergamo server (192 vs 128 cores) but you would get fewer threads (192 vs 256 threads). If you had 1 vCPU VMs, AmpereOne would be denser. If you had 2 vCPU VMs, Bergamo would be denser. SMT has been a challenge in the cloud due to some of the security surfaces it exposes.

Next in the market will be the Intel Sierra Forest. Intel's new cloud-native processor will offer up to 144/288 cores. Perhaps most importantly, it is aiming for a low power per core metric while also maintaining x86 compatibility.

Intel is taking its efficient E-core line and bringing it to the Xeon market. We have seen massive gains in E-core performance in both embedded as well as lower-power lines like the Alder Lake-N where we saw greater than 2x generational performance per chip. Now, Intel is splitting its line into P-cores for compute intensive workloads and E-cores for high-density scale-out compute.

Intel will offer Granite Rapids as an update to the current 5th Gen Xeon Emerald Rapids for all P-core designs later in 2024. Sierra Forest will be the first generation all E-core design and is planned for the first half of 2024. Intel already has announced the next generation Clearwater Forest will continue the all E-core line. As a full disclosure, this is a launch I have been excited about for years.

We are going to quickly mention the NVIDIA Grace Superchip here, with up to 144 cores across two dies packaged along with LPDDR memory.

While at 500W and using Arm Neoverse V2 performance cores one would not think of this as a cloud native processor, it does have something really different. The Grace Superchip has onboard memory packaged alongside its Arm CPUs. As a result, that 500W is actually for CPU and memory. There are applications that are primarily memory bandwidth bound, not necessarily core count bound. For those applications, something like a Grace Superchip can actually end up being a lower-power solution than some of the other cloud-native offerings. These are also not the easiest to get, and are priced at a significant premium. One could easily argue these are not cloud-native, but if our definition is doing the same work in a smaller more efficient footprint, then the Grace Superchip might actually fall into that category for a subset of workloads.

If you were excited for our 2nd to 5th Gen Intel Xeon server consolidation piece, get ready. To say that the piece we did in late 2023 was just the beginning would be an understatement.

While many are focused on AI build-outs, projects to shrink portions of existing compute footprints by 75% or more are certainly possible, making more space, power, and cooling available for new AI servers. Also, just from a carbon footprint perspective, using newer and significantly more power-efficient architectures to do baseline application hosting makes a lot of sense.

The big question in the industry right now on CPU compute is whether cloud native energy-efficient computing is going to be 25% of the server CPU market in 3-5 years, or if it is going to be 75%. My sense is that it likely could be 75%, or perhaps should be 75%, but organizations are slow to move. So at STH, we are going to be doing a series to help overcome that organizational inertia and get compute on the right-sized platforms.


US moon lander launched half century after last Apollo lunar mission – The Jerusalem Post

A moon lander built by Houston-based aerospace company Intuitive Machines was launched from Florida early on Thursday on a mission to conduct the first US lunar touchdown in more than a half century and the first by a privately owned spacecraft.

The company's Nova-C lander, dubbed Odysseus, lifted off shortly after 1 a.m. EST (0600 GMT) atop a Falcon 9 rocket flown by Elon Musk's SpaceX from NASA's Kennedy Space Center in Cape Canaveral.

A live NASA-SpaceX online video feed showed the two-stage, 25-story rocket roaring off the launch pad and streaking into the dark sky over Florida's Atlantic coast, trailed by a fiery yellowish plume of exhaust.

The launch, previously set for Wednesday morning, was postponed for 24 hours because of irregular temperatures detected in liquid methane used in the lander's propulsion system. SpaceX said the issue was later resolved.

Although considered an Intuitive Machines mission, the IM-1 flight is carrying six NASA payloads of instruments designed to gather data about the lunar environment ahead of NASA's planned return of astronauts to the moon later this decade.

Thursday's launch came a month after the lunar lander of another private firm, Astrobotic Technology, suffered a propulsion system leak on its way to the moon shortly after being placed in orbit on Jan. 8 by a United Launch Alliance (ULA) Vulcan rocket making its debut flight.

The failure of Astrobotic's Peregrine lander, which was also flying NASA payloads to the moon, marked the third time a private company had been unable to achieve a "soft landing" on the lunar surface, following ill-fated efforts by companies from Israel and Japan.

Those mishaps illustrated the risks NASA faces in leaning more heavily on the commercial sector than it had in the past to realize its spaceflight goals.

Plans call for Intuitive Machines' Nova-C vehicle, a hexagonal cylinder with four legs, to reach its destination on Feb. 22, after about a weeklong flight, for a landing at crater Malapert A near the moon's south pole.

If successful, the flight would represent the first controlled descent to the lunar surface by a US spacecraft since the final Apollo crewed moon mission in 1972, and the first by a private company.

The feat also would mark the first journey to the lunar surface under NASA's Artemis moon program, as the US races to return astronauts to Earth's natural satellite before China lands its own crewed spacecraft there.

IM-1 is the latest test of NASA's strategy of paying for the use of spacecraft built and owned by private companies to slash the cost of the Artemis missions, envisioned as precursors to human exploration of Mars.

By contrast, during the Apollo era, NASA bought rockets and other technology from the private sector, but owned and operated them itself.

NASA announced last month that it was delaying its target date for a first crewed Artemis moon landing from 2025 to late 2026, while China has said it was aiming for 2030.

Small landers such as Nova-C are expected to get there first, carrying instruments to closely survey the lunar landscape, its resources and potential hazards. Odysseus will focus on space weather interactions with the moon's surface, radio astronomy, precision landing technologies and navigation.

Intuitive Machines' IM-2 mission is scheduled to land at the lunar south pole in 2024, followed by an IM-3 mission later in the year with several small rovers.

Last month, Japan became the fifth country to place a lander on the moon, with its space agency JAXA achieving an unusually precise "pinpoint" touchdown of its SLIM probe. Last year, India became the fourth nation to land on the moon, after Russia failed in an attempt the same month.

The United States, the former Soviet Union and China are the only other countries that have carried out successful soft lunar touchdowns. China scored a world first in 2019 by achieving the first landing on the far side of the moon.
