Microsoft's AI Access Principles: Our commitments to promote innovation and competition in the new AI economy – Microsoft

As we enter a new era based on artificial intelligence, we believe this is the best time to articulate principles that will govern how we will operate our AI datacenter infrastructure and other important AI assets around the world. We are announcing and publishing these principles, our AI Access Principles, today at the Mobile World Congress in Barcelona, in part to address Microsoft's growing role and responsibility as an AI innovator and a market leader.

Like other general-purpose technologies in the past, AI is creating a new sector of the economy. This new AI economy is creating not just new opportunities for existing enterprises, but new companies and entirely new business categories. The principles we're announcing today commit Microsoft to bigger investments, more business partnerships, and broader programs to promote innovation and competition than any prior initiative in the company's 49-year history. By publishing these principles, we are committing ourselves to providing the broad technology access needed to empower organizations and individuals around the world to develop and use AI in ways that will serve the public good.

These new principles help put in context the new investments and programs we've announced and launched across Europe over the past two weeks, including $5.6 billion in new AI datacenter investments and new AI skilling programs that will reach more than a million people. We've also launched new public-private partnerships to advance responsible AI adoption and protect cybersecurity, new AI technology services to support network operators, and a new partnership with France's leading AI company, Mistral AI. As much as anything, these investments and programs make clear how we will put these principles into practice, not just in Europe, but in the United States and around the world.

These principles also reflect the responsible and important role we must play as a company. They build in part on the lessons we have learned from our experiences with previous technology developments. In 2006, after more than 15 years of controversies and litigation relating to Microsoft Windows and the company's market position in the PC operating system market, we published a set of Windows Principles. Their purpose was to govern the company's practices in a manner that would both promote continued software innovation and foster free and open competition.

I'll never forget the reaction of an FTC Commissioner who came up to me after I concluded the speech I gave in Washington, D.C. to launch these principles. He said, "If you had done this 10 years ago, I think you all probably would have avoided a lot of problems."

Close to two decades have gone by since that moment, and both the world of technology and the AI era we are entering are radically different. Then, Windows was the computing platform of the moment. Today, mobile platforms are the most popular gateway to consumers, and exponential advances in generative AI are driving a tectonic shift in digital markets and beyond. But there is wisdom in that FTC Commissioner's reaction that has stood the test of time: As a leading IT company, we do our best work when we govern our business in a principled manner that provides broad opportunities for others.

The new AI era requires enormous computational power to train, build, and deploy the most advanced AI models. Historically, such power could only be found in a handful of government-funded national laboratories and research institutions, and it was available only to a select few. But the advent of the public cloud has changed that. Much like steel did for skyscrapers, the public cloud enables generative AI.

Today, datacenters around the world house millions of servers and make vast computing power broadly available to organizations large and small and even to individuals as well. Already, many thousands of AI developers in startups, enterprises, government agencies, research labs, and non-profit organizations around the world are using the technology in these datacenters to create new AI foundation models and applications.

These datacenters are owned and operated by cloud providers, which include larger established firms such as Microsoft, Amazon, Google, Oracle, and IBM, as well as large firms from China like Alibaba, Huawei, Tencent, and Baidu. There are also smaller specialized entrants such as Coreweave, OVH, Aruba, and Denvr Dataworks Corporation, just to mention a few. And government-funded computing centers clearly will play a role as well, including with support for academic research. But building and operating those datacenters is expensive. And the semiconductors, or graphics processing units (GPUs), that are essential to power the servers for AI workloads remain costly and in short supply. Although governments and companies are working hard to fill the gap, doing so will take some time.

With this reality in mind, regulators around the world are asking important questions about who can compete in the AI era. Will it create new opportunities and lead to the emergence of new companies? Or will it simply reinforce existing positions and leaders in digital markets?

I am optimistic that the changes driven by the new AI era will extend into the technology industry itself. After all, how many readers of this paragraph had, two years ago, even heard of OpenAI and many other new AI entrants like Anthropic, Cohere, Aleph Alpha, and Mistral AI? In addition, Microsoft, along with other large technology firms, is dynamically pivoting to meet the AI era. The competitive pressure is fierce, and the pace of innovation is dizzying. As a leading cloud provider and an innovator in AI models ourselves and through our partnership with OpenAI, we are mindful of our role and responsibilities in the evolution of this AI era.

Throughout the past decade, we've typically found it helpful to define the tenets, in effect the goals, that guide our thinking and drive our actions as we navigate a complex topic. We then apply these tenets by articulating the principles we will apply as we make the decisions needed to govern the development and use of technology. I share below the new tenets on which we are basing our thinking on this topic, followed by our 11 AI Access Principles.

Fundamentally, there are five tenets that define Microsoft's goals as we focus on AI access, including our role as an infrastructure and platforms provider.

First, we have a responsibility to enable innovation and foster competition. We believe that AI is a foundational technology with a transformative capability to help solve societal problems, improve human productivity, and make companies and countries more competitive. As with prior general-purpose technologies, from the printing press to electricity, railroads, and the internet itself, the AI era is not based on a single technology component or advance. We have a responsibility to help spur innovation and competition across the new AI economy that is rapidly emerging.

AI is a dynamic field, with many active participants based on a technology stack that starts with electricity and connectivity and the world's most advanced semiconductor chips at the base. It then runs up through the compute power of the public cloud, public and proprietary data for training foundation models, the foundation models themselves, tooling to manage and orchestrate the models, and AI-powered software applications. In short, the success of an AI-based economy requires the success of many different participants across numerous interconnected markets.

Consider the technology stack that defines the new AI era. While one company currently produces and supplies most of the GPUs being used for AI today, as one moves incrementally up the stack, the number of participants expands. And each layer enables and facilitates innovation and competition in the layers above. In multiple ways, to succeed, participants at every layer of the technology stack need to move forward together. This means, for Microsoft, that we need to stay focused not just on our own success, but on enabling the success of others.

Second, our responsibilities begin by meeting our obligations under the law. While the principles we are launching today represent a self-regulatory initiative, they in no way are meant to suggest a lack of respect for the rule of law or the role of regulators. We fully appreciate that legislators, competition authorities, regulators, enforcers, and judges will continue to evolve the competition rules and other laws and regulations relevant to AI. That's the way it should be.

Technology laws and rules are changing rapidly. The European Union is implementing its Digital Markets Act and completing its AI Act, while the United States is moving quickly with a new AI Executive Order. Similar laws and initiatives are moving forward in the United Kingdom, Canada, Japan, India, and many other countries. We recognize that we, like all participants in this new AI market, have a responsibility to live up to our obligations under the law, to engage constructively with regulators when obligations are not yet clear, and to contribute to the public dialogue around policy. We take these obligations seriously.

Third, we need to advance a broad array of AI partnerships. Today, only one company is vertically integrated in a manner that includes every AI layer from chips to a thriving mobile app store. As noted at a recent meeting of tech leaders and government officials, "The rest of us, Microsoft included, live in the land of partnerships."

People today are benefiting from the AI advances that the partnership between OpenAI and Microsoft has created. Since 2019, Microsoft has collaborated with OpenAI on the research and development of OpenAI's generative AI models, developing the unique supercomputers needed to train those models. The ground-breaking technology ushered in by our partnership has unleashed a groundswell of innovation across the industry. And over the past five years, OpenAI has become a significant new competitor in the technology industry. It has expanded its focus, commercializing its technologies with the launch of ChatGPT and the GPT Store and providing its models for commercial use by third-party developers.

Innovation and competition will require an extensive array of similar support for proprietary and open-source AI models, large and small, including the type of partnership we are announcing today with Mistral AI, the leading open-source AI developer based in France. We have also invested in a broad range of other diverse generative AI startups. In some instances, those investments have provided seed funding to finance day-to-day operations. In other instances, those investments have been more focused on paying the expenses for the use of the computational infrastructure needed to train and deploy generative AI models and applications. We are committed to partnering well with market participants around the world and in ways that will accelerate local AI innovations.

Fourth, our commitment to partnership extends to customers, communities, and countries. More than for prior generations of digital technology, our investments in AI and datacenters must sustain the competitive strengths of customers and national economies and address broad societal needs. This has been at the core of the multi-billion-dollar investments we recently have announced in Australia, the United Kingdom, Germany, and Spain. We need constantly to be mindful of the community needs AI advances must support, and we must pursue a spirit of partnership not only with others in our industry, but with customers, governments, and civil society. We are building the infrastructure that will support the AI economy, and we need the opportunities provided by that infrastructure to be widely available.

Fifth, we need to be proactive and constructive, as a matter of process, in working with governments and the IT industry in the design and release of new versions of AI infrastructure and platforms. We believe it is critical for companies and regulators to engage in open dialogue, with a goal of resolving issues as quickly as possible, ideally while a new product is still under development. For our part, we understand that Microsoft must respond fully and cooperatively to regulatory inquiries so that we can have an informed discussion with regulators about the virtues of various approaches. We need to be good listeners and constructive problem solvers in sorting through issues of concern and identifying practical steps and solutions before a new product is completed and launched.

The foregoing tenets come together to shape the new principles we are announcing below. It's important to note that, given the safety, security, privacy, and other issues relating to responsible AI, we need to apply all these principles subject to objective and effective standards to comply with our legal obligations and protect the public. These are discussed further below. Subject to these requirements, we are committed to the following 11 principles:

We are committed to enabling AI innovation and fostering competition by making our cloud computing and AI infrastructure, platforms, tools, and services broadly available and accessible to software developers around the world. We want Microsoft Azure to be the best place for developers to train, build, and deploy AI models and to use those models safely and securely in applications and solutions. This means:

Today, our partnership with OpenAI is supporting the training of the next generation of OpenAI models and increasingly enabling customers to access and use these models and Microsoft's Copilot applications in local datacenters. At the same time, we are committed to supporting other developers in training and deploying proprietary and open-source AI models, both large and small.

Today's important announcement with Mistral AI launches a new generation of Microsoft's support for technology development in Europe. It enables Mistral AI to accelerate the development and deployment of its next generation Large Language Models (LLMs) with access to Azure's cutting-edge AI infrastructure. It also makes the deployment of Mistral AI's premium models available to customers through our Models-as-a-Service (MaaS) offering on Microsoft Azure, which model developers can use to publish and monetize their AI models. By providing a unified platform for AI model management, we aim to lower the barriers and costs of AI model development around the world for both open source and proprietary development. In addition to Mistral AI, this service is already hosting more than 1,600 open source and proprietary models from companies and organizations such as Meta, Nvidia, Deci, and Hugging Face, with more models coming soon from Cohere and G42.

We are committed to expanding this type of support for additional models in the months and years ahead.

As reflected in Microsoft's Copilots and OpenAI's ChatGPT itself, the world is rapidly benefiting from the use of a new generation of software applications that access and use the power of AI models. But our applications will represent just a small percentage of the AI-powered applications the world will need and create. For this reason, we're committed to ongoing and innovative steps to make the AI models we host and the development tools we create broadly available to AI software application developers around the world in ways that are consistent with responsible AI principles.

This includes the Azure OpenAI service, which enables software developers who work at start-ups, established IT companies, and in-house IT departments to build software applications that call on and make use of OpenAI's most powerful models. It extends through Models as a Service to the use of other open source and proprietary AI models from other companies, including Mistral AI, Meta, and others.
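
To make this concrete, here is a minimal sketch of what calling a model hosted on the Azure OpenAI service looks like from Python, using the openai SDK's Azure client. The resource endpoint, deployment name, and API version below are placeholders, not values from this announcement.

```python
# A minimal sketch of calling a model hosted on Azure OpenAI from Python
# (pip install openai). The endpoint, deployment name, and API version are
# placeholders; consult current Azure OpenAI documentation for real values.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # hypothetical resource
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # version string may differ over time
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # the name you gave your model deployment
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what Models-as-a-Service means."},
    ],
)
print(response.choices[0].message.content)
```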

We are also committed to empowering developers to build customized AI solutions by enabling them to fine-tune existing models based on their own unique data sets and for their specific needs and scenarios. With Azure Machine Learning, developers can easily access state-of-the-art pre-trained models and customize them with their own data and parameters, using a simple drag-and-drop interface or code-based notebooks. This helps companies, governments, and non-profits create AI applications that help advance their goals and solve their challenges, such as improving customer service, enhancing public safety, or promoting social good. This is rapidly democratizing AI and fostering a culture of even broader innovation and collaboration among developers.
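
As a rough illustration of the fine-tuning workflow described above, here is a minimal sketch using the open-source Hugging Face transformers library. It shows the general technique of customizing a pre-trained model with your own data; it is not Azure Machine Learning's specific interface, and the model and dataset names are arbitrary examples.

```python
# A generic fine-tuning sketch using Hugging Face transformers
# (pip install transformers datasets). It illustrates the technique of
# customizing a pre-trained model with your own data; it is not Azure
# Machine Learning's specific API.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Replace this public dataset with your own labeled examples.
dataset = load_dataset("imdb", split="train[:1000]")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, padding="max_length"),
    batched=True,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./finetuned", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=dataset,
)
trainer.train()  # the customized model can then be deployed behind an endpoint
```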

We are also providing developers with tools and repositories on GitHub that enable them to create, share, and learn from AI solutions. GitHub is the world's largest and most trusted platform for software development, hosting over 100 million repositories and supporting more than 40 million developers. We are committed to supporting the AI developer community by making our AI tools and resources available on GitHub, giving developers access to the latest innovations and best practices in AI development, as well as the opportunity to collaborate with other developers and contribute to the open source community. As one example, just last week we made available an open automation framework to help red team generative AI systems.

Ensure choice and fairness across the AI economy

We understand that AI innovation and competition require choice and fair dealing. We are committed to providing organizations, AI developers, and data scientists with the flexibility to choose which AI models to use wherever they are building solutions. For developers who choose to use Microsoft Azure, we want to make sure they are confident we will not tilt the playing field to our advantage. This means:

The AI models that we host on Azure, including the Microsoft Azure OpenAI API service, are all accessible via public APIs. Microsoft publishes documentation on its website explaining how developers can call these APIs and use the underlying models. This enables any application, whether it is built and deployed on Azure or other private and public clouds, to call these APIs and access the underlying models.
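
Because the models are fronted by public APIs, an application running on any cloud can reach them over plain HTTPS, with no Azure SDK involved. The sketch below shows the idea with Python's requests library; the URL pattern and api-version value follow the publicly documented convention at the time of writing and should be checked against current documentation.

```python
# A sketch of calling a hosted model over plain HTTPS from any cloud or
# on-premises environment. The URL shape and api-version follow the publicly
# documented pattern at the time of writing; verify against current docs.
import os

import requests

endpoint = "https://YOUR-RESOURCE.openai.azure.com"  # hypothetical resource
deployment = "my-gpt-deployment"                     # hypothetical deployment name
url = f"{endpoint}/openai/deployments/{deployment}/chat/completions"

resp = requests.post(
    url,
    params={"api-version": "2024-02-01"},
    headers={"api-key": os.environ["AZURE_OPENAI_API_KEY"]},
    json={"messages": [{"role": "user", "content": "Hello from another cloud."}]},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```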

Network operators are playing a vital role in accelerating the AI transformation of customers around the world, including for many national and regional governments. This is one reason we are supporting a common public API through the Open Gateway initiative driven by the GSM Association, which advances innovation in the mobile ecosystem. The initiative is aligning all operators with a common API for exposing advanced capabilities provided by their networks, including authentication, location, and quality of service. It's an indispensable step forward in enabling network operators to offer their advanced capabilities to a new generation of AI-enabled software developers. We have believed in the potential of this initiative since its inception at GSMA, and we have partnered with operators around the world to help bring it to life.

Today at Mobile World Congress, we are launching the Public Preview of Azure Programmable Connectivity (APC). This is a first-class service in Azure, completely integrated with the rest of our services, that seamlessly provides access to Open Gateway for developers. It means software developers can use the capabilities provided by the operator network directly from Azure, like any other service, without requiring specific work for each operator.
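
To suggest what this might look like for a developer, here is a deliberately hypothetical sketch of requesting a network quality-of-service session through a common gateway API. Every endpoint, field, and value in it is invented for illustration and does not reflect the actual APC or Open Gateway interfaces.

```python
# A deliberately hypothetical sketch of requesting a network quality-of-service
# session through a common gateway API. Every URL, field, and parameter here is
# invented for illustration; consult the Azure Programmable Connectivity and
# GSMA Open Gateway documentation for the real interfaces.
import requests

APC_BASE = "https://example-apc-gateway.azure.example/v1"  # hypothetical endpoint

session_request = {
    "device": {"phoneNumber": "+15555550100"},  # hypothetical subscriber
    "qosProfile": "low-latency",                # hypothetical profile name
    "durationSeconds": 600,
}

resp = requests.post(f"{APC_BASE}/quality-of-service/sessions",
                     json=session_request, timeout=30)
print(resp.status_code, resp.json())
```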

We are committed to maintaining Microsoft Azure as an open cloud platform, much as Windows has been for decades and continues to be. That means in part ensuring that developers can choose how they want to distribute and sell their AI software to customers for deployment and use on Microsoft Azure. We provide a marketplace on Azure through which developers can list and sell their AI software to Azure customers under a variety of supported business models. Developers who choose to use the Azure Marketplace are also free to decide whether to use the transaction capabilities offered by the marketplace (at a modest fee) or whether to sell licenses to customers outside of the marketplace (at no fee). And, of course, developers remain free to sell and distribute AI software to Azure customers however they choose, and those customers can then upload, deploy, and use that software on Azure.

We believe that trust is central to the success of Microsoft Azure. We build this trust by serving the interests of AI developers and customers who choose Microsoft Azure to train, build, and deploy foundation models. In practice, this also means that we avoid using any non-public information or data from the training, building, deployment, or use of developers' AI models to compete against them.

We know that customers can and do use multiple cloud providers to meet their AI and other computing needs. And we understand that the data our customers store on Microsoft Azure is their data. So, we are committed to enabling customers to easily export and transfer their data if they choose to switch to another cloud provider. We recognize that different countries are considering or have enacted laws limiting the extent to which we can pass along the costs of such export and transfer. We will comply with those laws.

We recognize that new AI technologies raise an extraordinary array of critical questions. These involve important societal issues such as privacy, safety, security, the protection of children, and the safeguarding of elections from deepfake manipulation, to name just a few. These and other issues require that tech companies create guardrails for their AI services, adapt to new legal and regulatory requirements, and work proactively in multistakeholder efforts to meet broad societal needs. We're committed to fulfilling these responsibilities, including through the following priorities:

We are committed to safeguarding the physical security of our AI datacenters, as they host the infrastructure and data that power AI solutions. We follow strict security protocols and standards to ensure that our datacenters are protected from unauthorized access, theft, vandalism, fire, or natural disasters. We monitor and audit our datacenters to detect and prevent any potential threats or breaches. Our datacenter staff are trained and certified in security best practices and are required to adhere to a code of conduct that respects the privacy and confidentiality of our customers' data.

We are also committed to safeguarding the cybersecurity of our AI models and applications, as they process and generate sensitive information for our customers and society. We use state-of-the-art encryption, authentication, and authorization mechanisms to protect data in transit and at rest, as well as the integrity and confidentiality of AI models and applications. We also use AI to enhance our cybersecurity capabilities, such as detecting and mitigating cyberattacks, identifying and resolving vulnerabilities, and improving our security posture and resilience.
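
As a toy illustration of the encryption-at-rest idea mentioned above, the sketch below uses the open-source Python cryptography package. Real cloud platforms rely on managed key services and envelope encryption rather than a key held in a variable.

```python
# A toy illustration of symmetric encryption at rest using the Python
# "cryptography" package (pip install cryptography). Production systems use
# managed key services and envelope encryption, not a key held in a variable.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: fetched from a key management service
cipher = Fernet(key)

plaintext = b"customer record: account=42, balance=100.00"
stored_blob = cipher.encrypt(plaintext)  # what actually lands on disk

# Later, an authorized reader with access to the key can recover the data.
assert cipher.decrypt(stored_blob) == plaintext
```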

We're building on these efforts with our new Secure Future Initiative (SFI). This brings together every part of Microsoft and has three pillars. It focuses on AI-based cyber defenses, advances in fundamental software engineering, and advocacy for stronger application of international norms to protect civilians from cyber threats.

As AI becomes more pervasive and impactful, we recognize the need to ensure that our technology is developed and deployed in a way that is ethical, trustworthy, and aligned with human values. That is why we have created the Microsoft Responsible AI Standard, a comprehensive framework that guides our teams on how to build and use AI responsibly.

The standard covers six key dimensions of responsible AI: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability. For each dimension, we define what these values mean and how to achieve our goals in practice. We also provide tools, processes, and best practices to help our teams implement the standard throughout the AI lifecycle, from design and development to deployment and monitoring. The approach that the standard establishes is not static, but instead evolves and improves based on the latest research, feedback, and learnings.

We recognize that countries need more than advanced AI chips and datacenters to sustain their competitive edge and unlock economic growth. AI is changing jobs and the way people work, requiring that people master new skills to advance their careers. That's why we're committed to marrying AI infrastructure capacity with AI skilling capability, combining the two to advance innovation.

In just the past few months, we've combined billions of dollars of infrastructure investments with new programs to bring AI skills to millions of people in countries like Australia, the United Kingdom, Germany, and Spain. We're launching training programs focused on building AI fluency, developing AI technical skills, supporting AI business transformation, and promoting safe and responsible AI development. Our work includes the first Professional Certificate on Generative AI.

Typically, our skilling programs involve a professional network of Microsoft certified training services partners and multiple industry partners, universities, and nonprofit organizations. Increasingly, we find that major employers want to launch new AI skilling programs for their employees, and we are working with them actively to provide curricular materials and support these efforts.

One of our most recent and important partnerships is with the AFL-CIO, the largest federation of labor unions in the United States. It's the first of its kind between a labor organization and a technology company to focus on AI and will deliver on three goals: (1) sharing in-depth information with labor leaders and workers on AI technology trends; (2) incorporating worker perspectives and expertise in the development of AI technology; and (3) helping shape public policy that supports the technology skills and needs of frontline workers.

We've learned that government institutions and associations can typically bring AI skilling programs to scale. At the national and regional levels, government employment and educational agencies have the personnel, programs, and expertise to reach hundreds of thousands or even millions of people. We're committed to working with and supporting these efforts.

Through these and other initiatives, we aim to democratize access to AI education and enable everyone to harness the potential of AI for their own lives and careers.

In 2020, Microsoft set ambitious goals to be carbon negative, water positive, and zero waste by 2030. We recognize that our datacenters play a key part in achieving these goals. Being responsible and sustainable by design has also led us to take a first-mover approach, making long-term investments to bring as much or more carbon-free electricity than we will consume onto the grids where we build and operate datacenters.

We also apply a holistic approach to the Scope 3 emissions relating to our investments in AI infrastructure, from the construction of our datacenters to engaging our supply chain. This includes supporting innovation to reduce the embodied carbon in our supply chain and advancing our water positive and zero waste goals throughout our operations.

At the same time, we recognize that AI can be a vital tool to help accelerate the deployment of sustainability solutions, from the discovery of new materials to better predicting and responding to extreme weather events. This is why we continue to partner with others to use AI to help advance breakthroughs that previously would have taken decades, underscoring the important role AI technology can play in addressing some of our most critical challenges to realizing a more sustainable future.

Leveraging Cloud Computing and Data Analytics for Businesses – Analytics Insight

In today's dynamic business landscape, organizations are constantly seeking innovative ways to drive efficiency, agility, and value. Among the transformative technologies reshaping business operations, cloud computing and data analytics stand out as powerful tools that, when leveraged effectively, can yield significant business value. By integrating these technologies strategically, businesses can unlock new opportunities for growth, streamline operations, and gain a competitive edge in the market.

Cloud computing offers organizations the flexibility to access computing resources on-demand, without the need for substantial investments in hardware and software infrastructure. This agility enables businesses to scale their operations rapidly in response to changing market demands, without the constraints of traditional IT environments. By migrating workloads to the cloud, organizations can streamline their operations, reduce downtime, and optimize resource utilization, leading to improved efficiency across the board.

In today's data-driven world, businesses are sitting on a goldmine of valuable information. Data analytics empowers organizations to extract actionable insights from vast volumes of data, enabling informed decision-making and driving business value. By leveraging advanced analytics techniques, such as machine learning and predictive modeling, businesses can identify trends, anticipate customer needs, and optimize processes for maximum efficiency. Furthermore, effective data governance and quality assurance practices ensure that insights derived from data analytics are accurate, reliable, and actionable.

Cloud FinOps, a practice focused on optimizing cloud spending and maximizing business value, plays a crucial role in ensuring that cloud investments deliver tangible returns. By tracking key performance indicators (KPIs) and measuring the business impact of cloud transformations, organizations can quantify the value derived from their cloud investments. Cloud FinOps goes beyond cost savings to encompass broader metrics such as improved resiliency, innovation, and operational efficiency, providing a comprehensive view of the business value generated by cloud initiatives.

Cloud computing infrastructure provides organizations with the foundation they need to harness the power of data analytics at scale. By leveraging cloud-based platforms for big data processing and analytics, organizations can access virtually unlimited computing resources, enabling them to analyze large datasets quickly and efficiently. Additionally, cloud infrastructure offers built-in features for data protection, disaster recovery, and security, ensuring that sensitive information remains safe and secure at all times. Furthermore, the pay-as-you-go pricing model of cloud services allows organizations to optimize costs and maximize ROI on their infrastructure investments.

Cloud computing accelerates the pace of software development by providing developers with access to scalable resources and flexible development environments. By leveraging cloud-based tools and platforms, organizations can streamline the software development lifecycle, reduce time-to-market, and improve collaboration among development teams. Furthermore, cloud-based development environments enable developers to experiment with new ideas and technologies without the constraints of traditional IT infrastructure, fostering innovation and driving business growth.

In conclusion, cloud computing and data analytics represent powerful tools for driving business value in today's digital economy. By embracing these technologies and implementing sound strategies for their deployment, organizations can unlock new opportunities for growth, enhance operational efficiency, and gain a competitive edge in the market. With the right approach, cloud computing and data analytics can serve as catalysts for innovation and transformation, enabling businesses to thrive in an increasingly data-driven world.

Cloud-Computing in the Post-Serverless Era: Current Trends and Beyond – InfoQ.com

[Note: The opinions and predictions in this article are those of the author and not of InfoQ.]

As AWS Lambda approaches its 10th anniversary this year, serverless computing expands beyond just Function as a Service (FaaS). Today, serverless describes cloud services that require no manual provisioning, offer on-demand auto-scaling, and use consumption-based pricing. This shift is part of a broader evolution in cloud computing, with serverless technology continuously transforming. This article focuses on the future beyond serverless, exploring how the cloud landscape will evolve beyond current hyperscaler models and its impact on developers and operations teams. I will examine the top three trends shaping this evolution.

In software development, a "module" or "component" typically refers to a self-contained unit of software that performs a cohesive set of actions. This concept corresponds elegantly to the microservice architecture that typically runs on long-running compute services such as Virtual Machines (VMs) or a container service. AWS EC2, one of the first widely accessible cloud computing services, offered scalable VMs. Introducing such scalable, accessible cloud resources provided the infrastructure necessary for microservices architecture to become practical and widespread. This shift led to decomposing monolithic applications into independently deployable microservice units.

Let's continue with this analogy of software units. A function is a block of code that encapsulates a sequence of statements performing a single task with defined input and output. This unit of code nicely corresponds to the FaaS execution model. The concept of FaaS, executing code in response to events without the need to manage infrastructure, existed before AWS Lambda but lacked broad implementation and recognition.

The concept of FaaS, which involves executing code in response to events without the need for managing infrastructure, was already suggested by services like Google App Engine, Azure WebJobs, IronWorker, and AWS Elastic Beanstalk before AWS Lambda brought it into the mainstream. Lambda, emerging as the first major commercial implementation of FaaS, acted as a catalyst for its popularity by easing the deployment process for developers. This advancement led to the transformation of microservices into smaller, individually scalable, event-driven operations.
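
For readers who have not written one, a FaaS unit really is as small as the analogy suggests. The sketch below is a minimal, illustrative AWS Lambda-style handler in Python; the event shape is an assumption for the example.

```python
# A minimal sketch of the FaaS execution model: a function invoked once per
# event, with the platform handling provisioning and scaling. The event shape
# here is illustrative.
import json


def handler(event, context):
    # 'event' carries the triggering payload (an HTTP request, a queue
    # message, a storage notification, ...); 'context' carries runtime metadata.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```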

In the evolution toward smaller software units offered as a service, one might wonder if we will see basic programming elements like expressions or statements as a service (such as int x = a + b;). The progression, however, steers away from this path. Instead, we are witnessing the minimization and eventual replacement of functions by configurable cloud constructs. Constructs in software development, encompassing elements like conditionals (if-else, switch statements), loops (for, while), exception handling (try-catch-finally), or user-defined data structures, are instrumental in controlling program flow or managing complex data types. In cloud services, constructs align with capabilities that enable the composition of distributed applications, interlinking software modules such as microservices and functions, and managing data flow between them.

Cloud construct replacing functions, replacing microservices, replacing monolithic applications

While you might have previously used a function to filter, route, batch, split events, or call another cloud service or function, now these operations and more can be done with less code in your functions, or in many cases with no function code at all. They can be replaced by configurable cloud constructs that are part of the cloud services. Let's look at a few concrete examples from AWS that demonstrate this transition from Lambda function code to cloud constructs.

These are just a few examples of application code constructs becoming serverless cloud constructs. Rather than validating input values in a function with if-else logic, you can validate the inputs through configuration. Rather than routing events with a case or switch statement to invoke other code from within a function, you can define routing logic declaratively outside the function. Events can be triggered from data sources on data change, batched, or split without a repetition construct, such as a for or while loop.

Events can be validated, transformed, batched, routed, filtered, and enriched without a function. Failures can be handled and directed to DLQs and back without try-catch code, and successful completions can be directed on to other functions and service endpoints. Moving these constructs from application code into construct configuration reduces the application code size or removes it entirely, eliminating the need for security patching and other maintenance.
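
A brief sketch makes the shift tangible. The Python snippet below uses boto3 to declare an EventBridge rule whose event pattern does the filtering and routing that would otherwise live in if/else statements inside a function; the rule name, event source, and target ARN are placeholders.

```python
# A sketch of moving routing and filtering out of function code and into
# configuration: an EventBridge rule declares the filter, and matching events
# are delivered to the target with no if/else or switch statements in between.
# Rule name, event source, and target ARN are placeholders.
import json

import boto3

events = boto3.client("events")

# Declarative filter: only 'order' events whose status is 'failed' match.
events.put_rule(
    Name="route-failed-orders",
    EventPattern=json.dumps({
        "source": ["acme.orders"],
        "detail": {"status": ["failed"]},
    }),
)

# Matching events go straight to the target; retries and dead-lettering are
# likewise configured on the target rather than hand-coded.
events.put_targets(
    Rule="route-failed-orders",
    Targets=[{
        "Id": "failed-order-handler",
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:handle-failed-order",
    }],
)
```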

A primitive and a construct in programming have distinct meanings and roles. A primitive is a basic data type inherently part of a programming language. It embodies a basic value, such as an integer, float, boolean, or character, and does not comprise other types. Mirroring this concept, the cloud, much like a giant programming runtime, is evolving from infrastructure primitives like network load balancers, virtual machines, file storage, and databases to more refined and configurable cloud constructs.

Like programming constructs, these cloud constructs orchestrate distributed application interactions and manage complex data flows. However, these constructs are not isolated cloud services; there isn't a standalone "filtering as a service" or "event emitter as a service." There is no "Constructs as a Service," but they are increasingly essential features of core cloud primitives such as gateways, data stores, message brokers, and function runtimes.

This evolution reduces application code complexity and, in many cases, eliminates the need for custom functions. This shift from FaaS to NoFaaS (no fuss, implying simplicity) is just beginning, with insightful talks and code examples on GitHub. Next, I will explore the emergence of construct-rich cloud services within vertical multi-cloud services.

In the post-serverless cloud era, it's no longer enough to offer highly scalable cloud primitives like compute for containers and functions, or storage services such as key/value stores, event stores, relational databases, or networking primitives like load balancers. Post-serverless cloud services must be rich in developer constructs and offload much of the application plumbing. This goes beyond hyperscaling a generic cloud service for a broad user base; it involves deep specialization and exposing advanced constructs to more demanding users.

Hyperscalers like AWS, Azure, GCP, and others, with their vast range of services and extensive user bases, are well-positioned to identify new user needs and constructs. However, providing these more granular developer constructs results in increased complexity. Each new construct in every service requires a deep learning curve with its specifics for effective utilization. Thus, in the post-serverless era, we will observe the rise of vertical multi-cloud services that excel in one area. This shift represents a move toward hyperspecialization of cloud services.

Consider Confluent Cloud as an example. While all major hyperscalers (AWS, Azure, GCP, etc.) offer Kafka services, none match the developer experience and constructs provided by Confluent Cloud. With its Kafka brokers, numerous Kafka connectors, integrated schema registry, Flink processing, data governance, tracing, and message browser, Confluent Cloud delivers the most construct-rich and specialized Kafka service, surpassing what hyperscalers offer.

This trend is not isolated; numerous examples include MongoDB Atlas versus DocumentDB, GitLab versus CodeCommit, DataBricks versus EMR, RedisLabs versus ElasticCache, etc. Beyond established cloud companies, a new wave of startups is emerging, focusing on a single multi-cloud primitive (like specialized compute, storage, networking, build-pipeline, monitoring, etc.) and enriching it with developer constructs to offer a unique value proposition. Here are some cloud services hyperspecializing in a single open-source technology, aiming to provide a construct-rich experience and attract users away from hyperscalers:

These services represent a fraction of a growing ecosystem of hyperspecialized vertical multi-cloud services built atop core cloud primitives offered by hyperscalers. They compete by providing a comprehensive set of programmable constructs and an enhanced developer experience.

Serverless cloud services hyperspecializing in one thing with rich developer constructs

Once this transition is completed, bare-bones cloud services without rich constructs, even serverless ones, will seem like outdated on-premise software. A storage service must stream changes like DynamoDB; a message broker should include EventBridge-like constructs for event-driven routing, filtering, and endpoint invocation with retries and DLQs; a pub/sub system should offer message batching, splitting, filtering, transforming, and enriching.

Ultimately, while hyperscalers expand horizontally with an increasing array of services, hyperspecializers grow vertically, offering a single, best-in-class service enriched with constructs, forming an ecosystem of vertical multi-cloud services. The future of cloud service competition will pivot from infrastructure primitives to a duo of core cloud primitives and developer-centric constructs.

Cloud constructs increasingly blur the boundaries between application and infrastructure responsibilities. The next evolution is the "shift left" of cloud automation, integrating application and automation code in terms of tools and responsibilities. Let's examine how this transition is unfolding.

The first generation of cloud infrastructure management was defined by Infrastructure as Code (IaC), a pattern that emerged to simplify the provisioning and management of infrastructure. This approach is built on the trends set by the commoditization of virtualization in cloud computing.

The initial IaC tools introduced new domain-specific languages (DSL) dedicated to creating, configuring, and managing cloud resources in a repeatable manner. Tools like Chef, Ansible, Puppet, and Terraform led this phase. These tools, leveraging declarative languages, allowed operations teams to define the infrastructure's desired state in code, abstracting underlying complexities.

However, as the cloud landscape transitions from low-level coarse-grained infrastructure to more developer-centric programmable finer-grained constructs, a trend toward using existing general-purpose programming languages for defining these constructs is emerging. New entrants like Pulumi and the AWS Cloud Development Kit (CDK) are at the forefront of this wave, supporting languages such as TypeScript, Python, C#, Go, and Java.

The shift to general-purpose languages is driven by the need to overcome the limitations of declarative languages, which lack expressiveness and flexibility for programmatically defining cloud constructs, and by the shift-left of configuring cloud constructs responsibilities from operations to developers. Unlike the static nature of declarative languages suited for low-level static infrastructure, general-purpose languages enable developers to define dynamic, logic-driven cloud constructs, achieving a closer alignment with application code.

Shifting-left of application composition from infrastructure to developer teams

Post-serverless cloud developers need to implement business logic by creating functions and microservices, but also to compose them together using programmable cloud constructs. This shapes a broader set of developer responsibilities to develop and compose cloud applications. For example, code with business logic in a Lambda function would also need routing, filtering, and request transformation configurations in API Gateway.

Another Lambda function may need a DynamoDB streaming configuration to stream specific data changes, plus EventBridge routing, filtering, and enrichment configurations.

A third application may have most of its orchestration logic expressed as a Step Function, where the Lambda code is only a small task. A developer, not a platform engineer or Ops member, can compose these units of code together. Tools such as Pulumi, the AWS CDK, and others that let a developer use the language of their choice to implement a function, and use the same language to compose its interaction with the cloud environment, are best suited for this era.
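
Here is a minimal sketch of that composition-as-code style using Pulumi's Python SDK, where the same general-purpose language defines a function and the role it runs under; resource names and the code path are placeholders.

```python
# A sketch of "composition as code" with Pulumi's Python SDK (pip install
# pulumi pulumi-aws): the same general-purpose language defines both the
# function and the resources it is wired to. Names and paths are placeholders.
import json

import pulumi
import pulumi_aws as aws

role = aws.iam.Role(
    "fn-role",
    assume_role_policy=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }),
)

fn = aws.lambda_.Function(
    "order-handler",
    role=role.arn,
    runtime="python3.12",
    handler="index.handler",
    code=pulumi.FileArchive("./app"),  # directory containing index.py
)

pulumi.export("function_name", fn.name)
```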

Platform teams still can use declarative languages, such as Terraform, to govern, secure, monitor, and enable teams in the cloud environments, but developer-focused constructs, combined with developer-focused cloud automation languages, will shift left the cloud constructs and make developer self-service in the cloud a reality.

The transition from DSL to general-purpose languages marks a significant milestone in the evolution of IaC. It acknowledges the transition of application code into cloud constructs, which often require a deeper developer control of the resources for application needs. This shift represents a maturation of IaC tools, which now need to cater to a broader spectrum of infrastructure orchestration needs, paving the way for more sophisticated, higher-level abstractions and tools.

The journey of infrastructure management will see a shift from static configurations to a more dynamic, code-driven approach. This evolution hasn't stopped at Infrastructure as Code; it is transcending into a more nuanced realm known as Composition as Code. This paradigm further blurs the lines between application code and infrastructure, leading to more streamlined, efficient, and developer-friendly practices.

In summarizing the trends and their reinforcing effects, we're observing an increasing integration of programming constructs into cloud services. Every compute service will integrate CI/CD pipelines; databases will provide HTTP access from the edge and emit change events; message brokers will enhance capabilities with filtering, routing, idempotency, transformations, DLQs, etc.

Infrastructure services are evolving into serverless APIs, infrastructure inferred from code (IfC), framework-defined infrastructure, or explicitly composed by developers (CaC). This evolution leads to smaller functions and sometimes to NoFaaS pattern, paving the way for hyperspecialized, developer-first vertical multi-cloud services. These services will offer infrastructure as programmable APIs, enabling developers to seamlessly merge their applications using their preferred programming language.

The shift-left of application composition using cloud services will increasingly blend with application programming, transforming microservices from an architectural style to an organizational one. A microservice will no longer be just a single deployment unit or process boundary but a composition of functions, containers, and cloud constructs, all implemented and glued together in a single language chosen by the developer. The future is shaping to be hyperspecialized and focused on the developer-first cloud.

Cloud Computing Security: Start with a 'North Star' – ITPro Today

Cloud computing has followed a similar journey to other introductions of popular technology: Adopt first, secure later. Cloud transformation has largely been enabled by IT functions at the request of the business, with security functions often taking a backseat. In some organizations, this has been due to politics and blind faith in the cloud services providers (CSPs), e.g., AWS, Microsoft, and GCP.

In others, it has been because security functions only knew and understood on-premises deployments and simply didn't have the knowledge and capability to securely adapt to cloud or hybrid architectures and translate policies and processes to the cloud. For lucky organizations, this has only led to stalled migrations while the security and IT organizations played catch up. For unlucky organizations, this has led to breaches, business disruption, and loss of data.

Cloud security can be complex. However, more often than not, it is ridiculously simple: the misconfigured S3 bucket is a prime example. It reached a point where malefactors could simply look for misconfigured S3 buckets to steal data; no need to launch an actual attack.
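
For a sense of how simple the check can be, the sketch below uses boto3 to flag buckets with no public access block configured. It inspects a single control and is in no way a substitute for a posture-management program.

```python
# A small sketch of auditing one common S3 misconfiguration: buckets without a
# public access block. This checks a single control and is not a substitute
# for a full posture-management tool.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
        print(f"{name}: public access block {'ON' if fully_blocked else 'PARTIAL'}")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            print(f"{name}: NO public access block configured -- review this bucket")
        else:
            raise
```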

It's time for organizations to take a step back and improve cloud security, and the best way to do this is to put security at the core of cloud transformations, rather than adopting the technology first and asking security questions later. Here are four steps to course correct and implement a security-centric cloud strategy.

For multi-cloud users, there is one other aspect of cloud security to consider. Most CSPs are separate businesses, and their services don't work with other CSPs. So, rather than functioning like internet service providers (ISPs), where one provider lets you access the entire internet, not just the sites that the ISP owns, CSPs operate in silos, with limited interoperability with their counterparts (e.g., AWS can't manage Azure workloads, security, and services, and vice versa). This is problematic for customers because, once more than one cloud provider is added to the infrastructure, the efficacy of managing cloud operations and cloud security starts to diminish rapidly. Each time another CSP is added to an organization's environment, its attack surface grows exponentially, unless secured appropriately.

It's up to each company to take steps to become more secure in multi-cloud environments. In addition to developing and executing a strong security strategy, they also must consider using third-party applications and platforms such as cloud-native application protection platforms (CNAPPs), cloud security posture management (CSPM), infrastructure as code (IaC), and secrets management to provide the connective tissue between CSPs in hybrid or multi-cloud environments. Taking this vital step will increase security visibility, posture management, and operational efficiency to ensure the security and business results outlined at the start of the cloud security journey.

It should be noted that a cloud security strategy, like any other form of security, needs to be a "living" plan. The threat landscape and business needs change so fast that what is helpful today may not be helpful tomorrow. To stay in step with your organization's desired state of security, periodically revisit cloud security strategies to understand if they are delivering the desired benefits and make adjustments when they are not.

Cloud computing has transformed organizations of all types. Adopting a strategy for securing this new environment will not only allow security to catch up to technology adoption, it will also dramatically improve the ROI of cloud computing.

Ed Lewis is Secure Cloud Transformation Leader at Optiv.

Which Nazi Ideas am I Supposed to Debate for Your Profit? – Daily Kos

I am behind on this, of course, but the leaders of Substack have responded to the letter voicing concerns about the monetization of Nazi newsletters on Substack. I signed the letter, and Substack's leadership was quite clear that they intended to go on making money from people who wish to kill and oppress their fellow humans. I am not surprised; the VC class as a whole seems very alt-right/Nazi curious. I haven't decided what to do with this newsletter. Moving it would require money, something this newsletter definitely does not make. But I am coming back to this because one aspect of the response stuck out to me:

I just want to make it clear that we don't like Nazis either; we wish no one held those views. But some people do hold those and other extreme views. Given that, we don't think that censorship (including through demonetizing publications) makes the problem go away; in fact, it makes it worse.

We believe that supporting individual rights and civil liberties while subjecting ideas to open discourse is the best way to strip bad ideas of their power.

Emphasis mine.

This comment leaves me with a question: Which Nazi ideas, specifically, does the Substack leadership think have power? I mean, we have seen Nazism in its full glory: it led to a massive world war, oppression of anyone the Nazis did not like, and perhaps the world's first industrialized genocide. Which of those ideas am I supposed to debate? Which violent eliminationism is worthy of further refutation? Is it the genocide? The demand for others' land for themselves? The idea that one race is inherently superior to others and thus can oppress and murder the others at will?

See, my mother's family is Polish. By which I mean they all immigrated from Poland. Some of my uncles were old enough to have lived through the Second World War. I don't have all of the details ("Gee, Uncle Frank, what did you do in the war?" is not a question a child asks of the obviously very angry, very damaged man who survived), but I do know that the Nazis debated with my family members and their countrymen with a bullet to their heads. Explain to me, again, why that idea is worthy of monetization? How, precisely, is a parlor debate about whether my relatives, and anyone who doesn't fit their notion of a true human, deserve to live in any way going to refute the idea more effectively than the outcome of WWII?

Because Nazis don't respect democracy. They don't debate in good faith, and they aren't interested in the give and take of a pluralistic society. They demand power, and they seek to attain it through violence. The idea that you can talk them down from that position is insane. All allowing them on your platform does is give them the infrastructure necessary to spread their hate.

De-platforming works. Not all speech is deserving of support. You cannot shout fire in a crowded theater, to use the cliche, and companies make decisions all the time about what does and does not constitute acceptable speech on their platforms. Substack leadership knows both of these positions are true: they ban porn and sex workers from their platform. No, pretending that you can reason with people who wish to destroy democracy is nothing more than a disingenuous attempt to profit from pro-genocide and other anti-democratic positions while providing a fig-leaf to keep others from abandoning their platform.

There are no Nazi ideas with power. History has thoroughly refuted them to anyone who wishes to see. You are not required to give platforms to people who wish to destroy your tolerant society. But the Substack leadership obviously cares more about the money the Nazis bring them than about preserving democracy or a tolerant, pluralistic society.

As I said, I am not sure what I am going to do with this newsletter. It is a hobby. I have roughly 120 subscribers, and don't even have a paid option. Even if I did, it is unlikely I could bring in enough to pay for other services. But I do know that I am not going to play the Substack leaders' game and pretend that I must take seriously the disproven ideas of the people who wish to destroy my family, my friends, and my society. No amount of money should be worth that.
