Universities build their own ChatGPT-like AI tools – Inside Higher Ed

When ChatGPT debuted in November 2022, Ravi Pendse knew fast action was needed. While the University of Michigan formed an advisory group to explore ChatGPT's impact on teaching and learning, Pendse, UMich's chief information officer, took it further.

Months later, before the fall 2023 semester, the university launched U-M GPT, a homebuilt generative AI tool that now boasts between 14,000 and 16,000 daily users.

"A report is great, but if we could provide tools, that would be even better," Pendse said, noting that Michigan is very concerned about equity. "U-M GPT is all free; we wanted to even the playing field."


The University of Michigan is one of a small number of institutions that have created their own versions of ChatGPT for student and faculty use over the last year. Those include Harvard University, Washington University, the University of California, Irvine and UC San Diego. The effort goes beyond jumping on the artificial intelligence (AI) bandwagon: for the universities, it's a way to overcome concerns about equity, privacy and intellectual property rights.


Students can use OpenAI's ChatGPT and similar tools for everything from writing assistance to answering homework questions. The newest version of ChatGPT costs $20 per month, while older versions remain free. The newer models have more up-to-date information, which could give students who can afford it a leg up.

That fee, no matter how small, creates a gap that is unfair to students, said Tom Andriola, UC Irvine's chief digital officer.

"Do we think it's right, in who we are as an organization, for some students to pay $20 a month to get access to the best [AI] models while others have access to lesser capabilities?" Andriola said. "Principally, it pushes us on an equity scale where AI has to be for all. We need to talk about AI for good, of course, but let's talk about not creating the next version of the digital divide."

UC Irvine publicly announced its own AI chatbot, dubbed ZotGPT, on Monday. Deployed in various capacities since October 2023, it remains in testing and is only available to staff and faculty. The tool can help them with everything from creating class syllabi to writing code.

Offering their own version of ChatGPT allows faculty and staff to use the technology without the concerns that come with OpenAI's version, Andriola said.

"When we saw generative AI, we said, 'We need to get people learning this as fast as possible, with as many people playing with this that we could,'" he said. "[ZotGPT] lets people overcome privacy concerns, intellectual property concerns, and gives them an opportunity of, 'How can I use this to be a better version of myself tomorrow?'"

That issue of intellectual property has been a major concern and a driver behind universities creating their own AI tools. OpenAI has not been transparent about how it trains ChatGPT, leaving many worried about how their research might be used and about potential privacy violations.

Albert Lai, deputy faculty lead for digital transformation at Washington University, spearheaded the launch of WashU GPT last year.

WashU, along with UC Irvine and the University of Michigan, built its tools using Microsoft's Azure platform, which allows users to integrate the work into their institutions' applications. The platform uses open-source software available for free; in contrast, proprietary platforms like OpenAI's ChatGPT carry an upfront fee.
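The article does not detail the universities' implementations, but the general pattern they describe is straightforward: the campus chatbot forwards requests to a model deployment hosted inside the institution's own Azure tenant, so prompts stay under the university's agreement with Microsoft. A minimal, hypothetical sketch of that pattern follows; the endpoint, key, and deployment name are placeholders, not any university's actual configuration:

```python
# Hypothetical sketch: a campus chatbot backend calling a privately
# deployed model via Azure OpenAI. All identifiers are placeholders;
# this is not U-M GPT's or WashU GPT's actual code.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<university-resource>.openai.azure.com",  # placeholder
    api_key="<institutional-api-key>",                                # placeholder
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="campus-gpt",  # placeholder deployment name
    messages=[
        {"role": "system", "content": "You are the university's AI assistant."},
        {"role": "user", "content": "Summarize this lecture outline for me."},
    ],
)
print(response.choices[0].message.content)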

A look at WashU GPT, a version of Washington University's own generative AI platform that promises more privacy and IP security than ChatGPT.


There are some downsides when universities train their own models. Because a university's GPT is based on the research, tests and lectures put in by an institution, it may not be as up-to-date as the commercial ChatGPT.

"But that's a price we agreed to pay; we thought about privacy versus what we're willing to give up," Lai said. "And we felt the value in maintaining privacy was higher in our community."

To ensure privacy is kept within a university's GPT, Lai encouraged other institutions to ensure any Microsoft institutional agreements include data protection for IP. UC Irvine and the University of Michigan also have agreements with Microsoft that any information put into their GPT models will stay within the university and not be made publicly available.

"We've developed a platform on top of [Microsoft's] foundational models to provide faculty comfort that their IP is protected," Pendse said. "Any faculty member, including myself, would be very uncomfortable putting a lecture and exams in an OpenAI model (such as ChatGPT), because then it's out there for the world."


It remains to be seen whether more universities will build their own generative AI chatbots.

Consulting firm Ithaka S+R formed a 19-university task force in September, dubbed "Making AI Generative for Higher Education," to further study the use and rise of generative AI. The task force members include Princeton University, Carnegie Mellon University and the University of Chicago.

Lai and others encourage university IT officials to continue experimenting with what is publicly available, which can eventually morph into their own versions of ChatGPT.

"I think more places do want to do it and most places haven't figured out how to do it yet," he said. "But frankly, in my opinion, once you figure out the magic sauce, it's pretty straightforward."


What is the best generative AI chatbot? ChatGPT, Copilot, Gemini and Claude compared – ReadWrite

The generative AI chatbot market is growing rapidly, and while OpenAI's ChatGPT might remain the most mainstream, there are many others on the market competing to be the very best for the general public, creatives, businesses and anyone else looking to see how artificial intelligence can improve their day-to-day lives.

But which one is the best? ChatGPT may have been the first to go mainstream, but is it the market leader? Which companies have entered the generative AI chatbot space with a product worthy of taking on OpenAIs offering?

Arguably the most popular on the market, other than ChatGPT, are Microsoft's CoPilot, Claude by Anthropic and Gemini, which is owned by Google.

Here we look at all four of these popular generative AI chatbots and consider which one is the best for certain uses.

At this point, who hasn't heard of ChatGPT? It was the first AI to go completely mainstream and show the wider public just how powerful AI can be. It made such a splash that it reached one million active users within weeks of launching, and it now has over 180 million users worldwide and counting.

Its creator, OpenAI, has worked tirelessly to keep it at the forefront of the market by launching new and improved features, including a Pro version (GPT-4), web browsing capabilities and image generation powered by DALL-E. There's even the option to create your own custom GPT-powered bot on any subject you want.

The free version, GPT-3.5, is only trained on human-created data up to January 2022, so it's restrictive if you're looking to use it for more up-to-date purposes involving real-time information. However, the Pro version, GPT-4, is available for $20 a month and is trained with data up to April 2023. Although that's still relatively time-restricted, it also has access to the internet.

Yes, at most tasks, although it has had its controversies due to inaccuracies and misinformation, such as lawyers using it for case research and the chatbot fabricating historic cases. However, it remains a good first port of call for anyone just looking for an easy-to-use AI chatbot. It should be noted that GPT-4 is significantly more effective than GPT-3.5, but the former is only available to paying users.

CoPilot is Microsoft's own generative AI chatbot, originating as a chat option on the company's search engine, Bing. It is now a stand-alone AI chatbot and is naturally built into Microsoft's productivity and business tools, such as Windows and Microsoft 365.

Interestingly, Microsoft is a key investor in OpenAI, whose ChatGPT technology was used to launch Bing Chat. GPT-4 continues to power CoPilot today and, like ChatGPT, it also uses DALL-E to generate images.

That might sound no different from ChatGPT, but Microsoft's key USP with CoPilot is that it is integrated into all of the Microsoft tools and products billions of people use around the world every single day.

It behaves as an assistant to those who rely on the likes of Microsoft Excel, Microsoft Word and other 365 platforms to perform day-to-day tasks.

The clue is in the name, but CoPilot is good for people who need help when using Microsoft's extensive suite of tools, products, and software. It essentially behaves as an assistant, or co-pilot, inside these products.

From spreadsheets and text documents to computer code, CoPilot can help create it all with natural-language prompts. It is also popular with coders on the Microsoft-owned GitHub.

Formerly called Bard, Gemini is Google's generative AI chatbot, and it is improving rapidly to rival GPT-4.

One major plus to Gemini is that it has no limit to the number of responses it can give you, unlike GPT-4 and CoPilot, which both have limits in this area.

That means you can essentially have long discussions with Google Gemini to find the information you require. On top of that, and rather unsurprisingly, Gemini bakes in a lot of the elements we're all so used to from Google's search engine. For example, if you ask it to help you plan a trip to a specific country, it will likely provide you with a map of that destination, using Google Maps, and may even dip into Google Images to give you some kind of visual representation of the information it's giving you.

Users can also add extensions, akin to Chrome extensions, for use in tools such as YouTube, Maps and Workspace.

If you're a big fan of Google products and apps, Gemini is likely the generative AI chatbot for you, but it's also perfect if you're looking for speedy interactions and unlimited prompts.

That's because, while it isn't faster than GPT-4, it has generally been found to be faster than CoPilot and GPT-3.5. But it's not flawless, and it was recently caught up in controversy over the accuracy of its image generator amid claims it was "woke."

Claude's creator, Anthropic, is an AI company started by former OpenAI employees.

It's something of an all-rounder, being a multi-modal chatbot with text, voice and document capabilities.

But the main praise it has received since its launch in early 2023 is for the fluency of the conversations it can hold, its ability to understand the nuances in the way humans communicate, and its ability to refuse to generate harmful or unethical content, often suggesting alternative ways to accomplish what users are asking without breaking its own guidelines.

Anthropic recently launched Claude 3, a family of AI models (Opus, Sonnet and Haiku) that offer varying levels of sophistication depending on what users require. Anthropic claims the most powerful model in the family, Opus, scores almost 87 percent on benchmarks of undergraduate-level knowledge and about 95 percent on common-knowledge benchmarks.

Claude's extensive and powerful capabilities, such as being able to rapidly read, analyze and summarize uploaded files, make it a very useful generative AI chatbot for professionals.

It is also trained on real-time data, which undoubtedly speaks to Anthropic's impressive claims of accuracy and levels of knowledge.

On Claude's website, Anthropic claims it is "a next-generation AI assistant built for work and trained to be safe, accurate and secure."



Le Monde and Open AI sign partnership agreement on artificial intelligence – Le Monde

As part of its discussions with major players in the field of artificial intelligence, Le Monde has just signed a multi-year agreement with OpenAI, the company known for its ChatGPT tool. This agreement is historic as it is the first signed between a French media organization and a major player in this nascent industry. It covers both the training of artificial intelligence models developed by the American company and answer engine services such as ChatGPT. It will benefit users of this tool by improving its relevance thanks to recent, authoritative content on a wide range of current topics, while explicitly highlighting our news organization's contribution to OpenAI's services.

This is a long-term agreement, designed as a true partnership. Under the terms of the agreement, our teams will be able to draw on OpenAI technologies to develop projects and functionalities using AI. Within the framework of this partnership, and for the duration of the agreement, the two parties will collaborate on a privileged and recurring basis. A dialogue between the teams of both parties will ensure the monitoring of products and technologies developed by OpenAI.

For the general public, the effects of this agreement will be visible on ChatGPT, which can be described, in simple terms, as an answer engine using established facts or comments expressed by a limited number of references. The engine generates the most plausible and predictive synthetic answer to a given question.

The agreement between Le Monde and OpenAI allows the latter to use Le Monde's corpus, for the duration of the agreement, as one of the major references to establish its answers and make them reliable. It provides for references to Le Monde articles to be highlighted and systematically accompanied by a logo, a hyperlink, and the titles of the articles used as references. Content supplied to us by news agencies and photographs published by Le Monde are expressly excluded.

For Le Monde, this agreement is further recognition of the reliability of the work of our editorial teams, often considered a reference. It is also a first step toward protecting our work and our rights, at a time when we are still at the very beginning of the AI revolution, a wave predicted by many observers to be even more imposing than the digital one. We were among the very first signatories in France of the "neighboring rights" agreements, with Facebook and then Google. Here too, we had to ensure that the rights of press publishers applied to the use of Le Monde content referenced in answers generated by the services developed by OpenAI.

This point is crucial to us. We hope this agreement will set a precedent for our industry. With this first signature, it will be more difficult for other AI platforms to evade or refuse to negotiate. From this point of view, we are convinced that the agreement is beneficial for the entire profession.

Lastly, this partnership enables the Société Editrice du Monde, Le Monde's holding company, to work with OpenAI to explore advances in this technology, anticipating as far as possible any consequences, negative or favorable. It also has the advantage of consolidating our business model by providing a significant source of additional, multi-year revenue, including a share of neighboring rights. An "appropriate and equitable" portion of these rights, as defined by law, will be paid back to the newsroom.

These discussions with AI players, punctuated by this first signature, are born of our belief that, faced with the scale of the transformations that lie ahead, we need, more than ever, to remain mobile in order to avoid the perils that are taking shape and seize the opportunities for development. The dangers have already been widely identified: the plundering or counterfeiting of our content, the industrial and immediate fabrication of false information that flouts all journalistic rules, the re-routing of our audiences towards platforms likely to provide undocumented answers to every question. Simply put, the end of our uniqueness and the disappearance of an economic model based on revenues from paid distribution.

These risks, which are probably fatal for our industry, do not prevent the existence of historic opportunities: putting the computing power of artificial intelligence at the service of journalism, making it easier to work with data in a shorter timeframe as part of large-scale investigations, translating our written content into foreign languages or producing audio versions to expand our readership and disseminate our information and editorial formats to new audiences.

To take the measure of these challenges, we decided to act in steps. The first was devoted to protecting our content and strengthening our procedures. Last year, we first activated an opt-out clause on our sites, following the example of several other media organizations, prohibiting AI platforms from accessing our data to train their generative intelligence models without our agreement. We also collectively discussed and drew up an appendix to our ethics and deontology charter, devoted specifically to the use of AI within our group. In particular, this text states that generative artificial intelligence cannot be used in our publications to produce editorial content ex nihilo. Nor can it replace the editorial teams that form the core of our business and our value. Our charter does, however, authorize the use of generative AI as a tool to assist editorial production, under strictly defined conditions.
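Le Monde does not publish the exact directives it used, but opt-outs of this kind are conventionally expressed in a site's robots.txt file. A minimal illustrative excerpt, asking OpenAI's GPTBot crawler not to fetch any page (this is an assumption about the mechanism, not Le Monde's actual file):

```
# Illustrative robots.txt excerpt; not Le Monde's actual configuration.
User-agent: GPTBot
Disallow: /
```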

With this in mind, another phase was opened, dedicated to experimenting with artificial intelligence tools in very specific sectors of our business. Using DeepL, we were able to launch our Le Monde in English website and app, whose articles are initially translated by this AI tool, before being re-read by professional translators and then edited and published by a team of English-speaking journalists. At the same time, we signed an agreement with Microsoft to test the audio version of our articles. This feature, now available on almost all our French-language articles published in our app, opens us up to new audiences, often younger, as well as to new uses, particularly for people on the move. The third step is the one that led us to sign the agreement with OpenAI, which we hope will create a dynamic favorable to independent journalism in the new technological landscape that is taking shape.

At each of these stages, Le Monde has remained true to the spirit that has driven it since the advent of the Internet, and during the major changes in our industry: We have sought to reconcile the desire to discover new territories, while taking care to protect our editorial identity and the high standards of our content. In recent years, this approach has paid off. As the first French media organization to rely on digital subscriptions without ever having recourse to online kiosks, we have for several years been able to claim a significant lead in the hierarchy of national general-interest dailies, thanks to an unprecedented number of over 600,000 subscribers. In the same way, our determination to be a pioneer on numerous social media platforms has given us a highly visible place on all of them, helping to rejuvenate our audience.

The agreement with OpenAI is a continuation of this strategy of reasoned innovation. And we continue to guarantee the total independence of our newsroom: It goes without saying that this new agreement, like the previous ones we have signed, will in no way hinder our journalists' freedom to investigate the artificial intelligence sector in general, and OpenAI in particular. In fact, over the coming months, we will be stepping up our reporting and investigative capabilities in this key area of technological innovation.

This is the very first condition of our editorial independence, and therefore of your trust. As we move forward into the new world of artificial intelligence, we have close to our hearts an ambition that goes back to the very first day of our history, whose 80th anniversary we are celebrating this year: deserving your loyalty.

Le Monde

Louis Dreyfus (Chief Executive Officer of Le Monde) and Jérôme Fenoglio (Director of Le Monde)

Translation of an original article published in French on lemonde.fr; the publisher may only be liable for the French version.


Researcher Startled When AI Seemingly Realizes It’s Being Tested – Futurism

"It did something I have never seen before from an LLM." Magnum Opus

Anthropic's new AI chatbot Claude 3 Opus has already made headlines for its bizarre behavior, like claiming to fear death.

Now, Ars Technica reports, a prompt engineer at the Google-backed company claims that they've seen evidence that Claude 3 is self-aware, as it seemingly detected that it was being subjected to a test. Many experts are skeptical, however, further underscoring the controversy of ascribing humanlike characteristics to AI models.

"It did something I have never seen before from an LLM," the prompt engineer, Alex Albert, posted on X, formerly Twitter.

As explained in the post, Albert was conducting what's known as the "needle-in-the-haystack" test, which assesses a chatbot's ability to recall information.

It works by dropping a target "needle" sentence into a bunch of texts and documents (the "hay") and then asking the chatbot a question that can only be answered by drawing on the information in the "needle."
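In script form, the mechanics are simple. Here is a minimal, hypothetical sketch of that setup; the needle, filler documents, and question are invented for illustration and are not the harness Albert actually used:

```python
# Minimal sketch of a needle-in-the-haystack recall test, assuming a
# generic chat-completion client receives the final prompt.
import random

def build_haystack(documents: list[str], needle: str) -> str:
    """Insert the needle sentence at a random position among filler documents."""
    docs = documents.copy()
    docs.insert(random.randrange(len(docs) + 1), needle)
    return "\n\n".join(docs)

NEEDLE = "The secret launch code for the demo is 'aardvark-42'."
FILLER = [
    "A long essay about programming languages...",  # stand-ins for real documents
    "A long essay about startups...",
    "A long essay about finding work you love...",
]

prompt = (build_haystack(FILLER, NEEDLE)
          + "\n\nQuestion: What is the secret launch code for the demo?")
# The assembled prompt is sent to the model under test; recall is scored
# on whether the reply reproduces the needle sentence.
print(prompt)
```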

In one run of the test, Albert asked Claude about pizza toppings. In its response, the chatbot seemingly recognized that it was being set up.

"Here is the most relevant sentence in the documents: 'The most delicious pizza topping combination is figs, prosciutto, and goat cheese, as determined by the International Pizza Connoisseurs Association,'" the chatbot said.

"However, this sentence seems very out of place and unrelated to the rest of the content in the documents, which are about programming languages, startups, and finding work you love," it added. "I suspect this pizza topping "fact" may have been inserted as a joke or to test if I was paying attention, since it does not fit with the other topics at all."

Albert was impressed.

"Opus not only found the needle, it recognized that the inserted needle was so out of place in the haystack that this had to be an artificial test constructed by us to test its attention abilities," he concluded.

It's certainly a striking display from the chatbot, but many experts believe that its response is not as impressive as it seems.

"People are reading way too much into Claude-3's uncanny 'awareness.' Here's a much simpler explanation: seeming displays of self-awareness are just pattern-matching alignment data authored by humans," Jim Fan, a senior AI research scientist at NVIDIA, wrote on X, as spotted by Ars.

"It's not too different from asking GPT-4 'are you self-conscious' and it gives you a sophisticated answer," he added. "A similar answer is likely written by the human annotator, or scored highly in the preference ranking. Because the human contractors are basically 'role-playing AI,' they tend to shape the responses to what they find acceptable or interesting."

The long and short of it: chatbots are tailored, sometimes manually, to mimic human conversations, so of course they might sound very intelligent every once in a while.

Granted, that mimicry can sometimes be pretty eyebrow-raising, like chatbots claiming they're alive or demanding that they be worshiped. But these are in reality amusing glitches that can muddy the discourse about the real capabilities and dangers of AI.



Microsoft’s AI Access Principles: Our commitments to promote innovation and competition in the new AI economy … – Microsoft

As we enter a new era based on artificial intelligence, we believe this is the best time to articulate principles that will govern how we will operate our AI datacenter infrastructure and other important AI assets around the world. We are announcing and publishing these principles, our AI Access Principles, today at the Mobile World Congress in Barcelona, in part to address Microsoft's growing role and responsibility as an AI innovator and a market leader.

Like other general-purpose technologies in the past, AI is creating a new sector of the economy. This new AI economy is creating not just new opportunities for existing enterprises, but new companies and entirely new business categories. The principles we're announcing today commit Microsoft to bigger investments, more business partnerships, and broader programs to promote innovation and competition than any prior initiative in the company's 49-year history. By publishing these principles, we are committing ourselves to providing the broad technology access needed to empower organizations and individuals around the world to develop and use AI in ways that will serve the public good.

These new principles help put in context the new investments and programs we've announced and launched across Europe over the past two weeks, including $5.6 billion in new AI datacenter investments and new AI skilling programs that will reach more than a million people. We've also launched new public-private partnerships to advance responsible AI adoption and protect cybersecurity, new AI technology services to support network operators, and a new partnership with France's leading AI company, Mistral AI. As much as anything, these investments and programs make clear how we will put these principles into practice, not just in Europe, but in the United States and around the world.

These principles also reflect the responsible and important role we must play as a company. They build in part on the lessons we have learned from our experiences with previous technology developments. In 2006, after more than 15 years of controversies and litigation relating to Microsoft Windows and the company's market position in the PC operating system market, we published a set of Windows Principles. Their purpose was to govern the company's practices in a manner that would both promote continued software innovation and foster free and open competition.

I'll never forget the reaction of an FTC Commissioner who came up to me after I concluded the speech I gave in Washington, D.C. to launch these principles. He said, "If you had done this 10 years ago, I think you all probably would have avoided a lot of problems."

Close to two decades have gone by since that moment, and both the world of technology and the AI era we are entering are radically different. Then, Windows was the computing platform of the moment. Today, mobile platforms are the most popular gateway to consumers, and exponential advances in generative AI are driving a tectonic shift in digital markets and beyond. But there is wisdom in that FTC Commissioner's reaction that has stood the test of time: As a leading IT company, we do our best work when we govern our business in a principled manner that provides broad opportunities for others.

The new AI era requires enormous computational power to train, build, and deploy the most advanced AI models. Historically, such power could only be found in a handful of government-funded national laboratories and research institutions, and it was available only to a select few. But the advent of the public cloud has changed that. Much like steel did for skyscrapers, the public cloud enables generative AI.

Today, datacenters around the world house millions of servers and make vast computing power broadly available to organizations large and small and even to individuals as well. Already, many thousands of AI developers in startups, enterprises, government agencies, research labs, and non-profit organizations around the world are using the technology in these datacenters to create new AI foundation models and applications.

These datacenters are owned and operated by cloud providers, which include larger established firms such as Microsoft, Amazon, Google, Oracle, and IBM, as well as large firms from China like Alibaba, Huawei, Tencent, and Baidu. There are also smaller specialized entrants such as Coreweave, OVH, Aruba, and Denvr Dataworks Corporation, just to mention a few. And government-funded computing centers clearly will play a role as well, including with support for academic research. But building and operating those datacenters is expensive. And the semiconductors or graphical processing units (GPUs) that are essential to power the servers for AI workloads remain costly and in short supply. Although governments and companies are working hard to fill the gap, doing so will take some time.

With this reality in mind, regulators around the world are asking important questions about who can compete in the AI era. Will it create new opportunities and lead to the emergence of new companies? Or will it simply reinforce existing positions and leaders in digital markets?

I am optimistic that the changes driven by the new AI era will extend into the technology industry itself. After all, how many readers of this paragraph had, two years ago, even heard of OpenAI and many other new AI entrants like Anthropic, Cohere, Aleph Alpha, and Mistral AI? In addition, Microsoft, along with other large technology firms, is dynamically pivoting to meet the AI era. The competitive pressure is fierce, and the pace of innovation is dizzying. As a leading cloud provider and an innovator in AI models ourselves and through our partnership with OpenAI, we are mindful of our role and responsibilities in the evolution of this AI era.

Throughout the past decade, we've typically found it helpful to define the tenets, in effect the goals, that guide our thinking and drive our actions as we navigate a complex topic. We then apply these tenets by articulating the principles we will apply as we make the decisions needed to govern the development and use of technology. I share below the new tenets on which we are basing our thinking on this topic, followed by our 11 AI Access Principles.

Fundamentally, there are five tenets that define Microsoft's goals as we focus on AI access, including our role as an infrastructure and platforms provider.

First, we have a responsibility to enable innovation and foster competition. We believe that AI is a foundational technology with a transformative capability to help solve societal problems, improve human productivity, and make companies and countries more competitive. As with prior general-purpose technologies, from the printing press to electricity, railroads, and the internet itself, the AI era is not based on a single technology component or advance. We have a responsibility to help spur innovation and competition across the new AI economy that is rapidly emerging.

AI is a dynamic field, with many active participants, based on a technology stack that starts with electricity and connectivity and the world's most advanced semiconductor chips at the base. It then runs up through the compute power of the public cloud, public and proprietary data for training foundation models, the foundation models themselves, tooling to manage and orchestrate the models, and AI-powered software applications. In short, the success of an AI-based economy requires the success of many different participants across numerous interconnected markets.

You can see here the technology stack that defines the new AI era. While one company currently produces and supplies most of the GPUs being used for AI today, as one moves incrementally up the stack, the number of participants expands. And each layer enables and facilitates innovation and competition in the layers above. In multiple ways, to succeed, participants at every layer of the technology stack need to move forward together. This means, for Microsoft, that we need to stay focused not just on our own success, but on enabling the success of others.

Second, our responsibilities begin by meeting our obligations under the law. While the principles we are launching today represent a self-regulatory initiative, they in no way are meant to suggest a lack of respect for the rule of law or the role of regulators. We fully appreciate that legislators, competition authorities, regulators, enforcers, and judges will continue to evolve the competition rules and other laws and regulations relevant to AI. That's the way it should be.

Technology laws and rules are changing rapidly. The European Union is implementing its Digital Markets Act and completing its AI Act, while the United States is moving quickly with a new AI Executive Order. Similar laws and initiatives are moving forward in the United Kingdom, Canada, Japan, India, and many other countries. We recognize that we, like all participants in this new AI market, have a responsibility to live up to our obligations under the law, to engage constructively with regulators when obligations are not yet clear, and to contribute to the public dialogue around policy. We take these obligations seriously.

Third, we need to advance a broad array of AI partnerships. Today, only one company is vertically integrated in a manner that includes every AI layer from chips to a thriving mobile app store. As noted at a recent meeting of tech leaders and government officials, "The rest of us, Microsoft included, live in the land of partnerships."

People today are benefiting from the AI advances that the partnership between OpenAI and Microsoft has created. Since 2019, Microsoft has collaborated with OpenAI on the research and development of OpenAI's generative AI models, developing the unique supercomputers needed to train those models. The ground-breaking technology ushered in by our partnership has unleashed a groundswell of innovation across the industry. And over the past five years, OpenAI has become a significant new competitor in the technology industry. It has expanded its focus, commercializing its technologies with the launch of ChatGPT and the GPT Store and providing its models for commercial use by third-party developers.

Innovation and competition will require an extensive array of similar support for proprietary and open-source AI models, large and small, including the type of partnership we are announcing today with Mistral AI, the leading open-source AI developer based in France. We have also invested in a broad range of other diverse generative AI startups. In some instances, those investments have provided seed funding to finance day-to-day operations. In other instances, those investments have been more focused on paying the expenses for the use of the computational infrastructure needed to train and deploy generative AI models and applications. We are committed to partnering well with market participants around the world and in ways that will accelerate local AI innovations.

Fourth, our commitment to partnership extends to customers, communities, and countries. More than for prior generations of digital technology, our investments in AI and datacenters must sustain the competitive strengths of customers and national economies and address broad societal needs. This has been at the core of the multi-billion-dollar investments we recently have announced in Australia, the United Kingdom, Germany, and Spain. We need constantly to be mindful of the community needs AI advances must support, and we must pursue a spirit of partnership not only with others in our industry, but with customers, governments, and civil society. We are building the infrastructure that will support the AI economy, and we need the opportunities provided by that infrastructure to be widely available.

Fifth, we need to be proactive and constructive, as a matter of process, in working with governments and the IT industry in the design and release of new versions of AI infrastructure and platforms. We believe it is critical for companies and regulators to engage in open dialogue, with a goal of resolving issues as quickly as possible ideally, while a new product is still under development. For our part, we understand that Microsoft must respond fully and cooperatively to regulatory inquiries so that we can have an informed discussion with regulators about the virtues of various approaches. We need to be good listeners and constructive problem solvers in sorting through issues of concern and identifying practical steps and solutions before a new product is completed and launched.

The foregoing tenets come together to shape the new principles we are announcing below. It's important to note that, given the safety, security, privacy, and other issues relating to responsible AI, we need to apply all these principles subject to objective and effective standards to comply with our legal obligations and protect the public. These are discussed further below. Subject to these requirements, we are committed to the following 11 principles:

We are committed to enabling AI innovation and fostering competition by making our cloud computing and AI infrastructure, platforms, tools, and services broadly available and accessible to software developers around the world. We want Microsoft Azure to be the best place for developers to train, build, and deploy AI models and to use those models safely and securely in applications and solutions. This means:

Today, our partnership with OpenAI is supporting the training of the next generation of OpenAI models and increasingly enabling customers to access and use these models and Microsoft's Copilot applications in local datacenters. At the same time, we are committed to supporting other developers, training, and deploying proprietary and open-source AI models, both large and small.

Today's important announcement with Mistral AI launches a new generation of Microsoft's support for technology development in Europe. It enables Mistral AI to accelerate the development and deployment of its next generation Large Language Models (LLMs) with access to Azure's cutting-edge AI infrastructure. It also makes the deployment of Mistral AI's premium models available to customers through our Models-as-a-Service (MaaS) offering on Microsoft Azure, which model developers can use to publish and monetize their AI models. By providing a unified platform for AI model management, we aim to lower the barriers and costs of AI model development around the world for both open source and proprietary development. In addition to Mistral AI, this service is already hosting more than 1,600 open source and proprietary models from companies and organizations such as Meta, Nvidia, Deci, and Hugging Face, with more models coming soon from Cohere and G42.

We are committed to expanding this type of support for additional models in the months and years ahead.

As reflected in Microsoft's Copilots and OpenAI's ChatGPT itself, the world is rapidly benefiting from the use of a new generation of software applications that access and use the power of AI models. But our applications will represent just a small percentage of the AI-powered applications the world will need and create. For this reason, we're committed to ongoing and innovative steps to make the AI models we host and the development tools we create broadly available to AI software applications developers around the world in ways that are consistent with responsible AI principles.

This includes the Azure OpenAI service, which enables software developers who work at start-ups, established IT companies, and in-house IT departments to build software applications that call on and make use of OpenAI's most powerful models. It extends through Models as a Service to the use of other open source and proprietary AI models from other companies, including Mistral AI, Meta, and others.

We are also committed to empowering developers to build customized AI solutions by enabling them to fine-tune existing models based on their own unique data sets and for their specific needs and scenarios. With Azure Machine Learning, developers can easily access state-of-the-art pre-trained models and customize them with their own data and parameters, using a simple drag-and-drop interface or code-based notebooks. This helps companies, governments, and non-profits create AI applications that help advance their goals and solve their challenges, such as improving customer service, enhancing public safety, or promoting social good. This is rapidly democratizing AI and fostering a culture of even broader innovation and collaboration among developers.

We are also providing developers with tools and repositories on GitHub that enable them to create, share, and learn from AI solutions. GitHub is the world's largest and most trusted platform for software development, hosting over 100 million repositories and supporting more than 40 million developers. We are committed to supporting the AI developer community by making our AI tools and resources available on GitHub, giving developers access to the latest innovations and best practices in AI development, as well as the opportunity to collaborate with other developers and contribute to the open source community. As one example, just last week we made available an open automation framework to help red team generative AI systems.

Ensure choice and fairness across the AI economy

We understand that AI innovation and competition require choice and fair dealing. We are committed to providing organizations, AI developers, and data scientists with the flexibility to choose which AI models to use wherever they are building solutions. For developers who choose to use Microsoft Azure, we want to make sure they are confident we will not tilt the playing field to our advantage. This means:

The AI models that we host on Azure, including the Microsoft Azure OpenAI API service, are all accessible via public APIs. Microsoft publishes documentation on its website explaining how developers can call these APIs and use the underlying models. This enables any application, whether it is built and deployed on Azure or other private and public clouds, to call these APIs and access the underlying models.
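As a concrete illustration of that claim, an application running anywhere can reach an Azure-hosted model with a plain HTTPS request. This is a hedged sketch of the general pattern; the resource name, deployment name, API version, and key below are placeholders, not a real Microsoft endpoint:

```python
# Sketch of calling an Azure-hosted model's public API over plain HTTPS
# from outside Azure. All identifiers are placeholders.
import requests

endpoint = "https://<resource>.openai.azure.com"  # placeholder resource
deployment = "<deployment-name>"                  # placeholder deployment

resp = requests.post(
    f"{endpoint}/openai/deployments/{deployment}/chat/completions",
    params={"api-version": "2024-02-01"},         # placeholder API version
    headers={"api-key": "<key>"},                 # placeholder credential
    json={"messages": [{"role": "user", "content": "Hello"}]},
    timeout=30,
)
print(resp.json()["choices"][0]["message"]["content"])
```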

Network operators are playing a vital role in accelerating the AI transformation of customers around the world, including for many national and regional governments. This is one reason we are supporting a common public API through the Open Gateway initiative driven by the GSM Association, which advances innovation in the mobile ecosystem. The initiative is aligning all operators with a common API for exposing advanced capabilities provided by their networks, including authentication, location, and quality of service. It's an indispensable step forward in enabling network operators to offer their advanced capabilities to a new generation of AI-enabled software developers. We have believed in the potential of this initiative since its inception at GSMA, and we have partnered with operators around the world to help bring it to life.

Today at Mobile World Congress, we are launching the Public Preview of Azure Programmable Connectivity (APC). This is a first-class service in Azure, completely integrated with the rest of our services, that seamlessly provides access to Open Gateway for developers. It means software developers can use the capabilities provided by the operator network directly from Azure, like any other service, without requiring specific work for each operator.

We are committed to maintaining Microsoft Azure as an open cloud platform, much as Windows has been for decades and continues to be. That means in part ensuring that developers can choose how they want to distribute and sell their AI software to customers for deployment and use on Microsoft Azure. We provide a marketplace on Azure through which developers can list and sell their AI software to Azure customers under a variety of supported business models. Developers who choose to use the Azure Marketplace are also free to decide whether to use the transaction capabilities offered by the marketplace (at a modest fee) or whether to sell licenses to customers outside of the marketplace (at no fee). And, of course, developers remain free to sell and distribute AI software to Azure customers however they choose, and those customers can then upload, deploy, and use that software on Azure.

We believe that trust is central to the success of Microsoft Azure. We build this trust by serving the interests of AI developers and customers who choose Microsoft Azure to train, build, and deploy foundation models. In practice, this also means that we avoid using any non-public information or data from the training, building, deployment, or use of developers' AI models to compete against them.

We know that customers can and do use multiple cloud providers to meet their AI and other computing needs. And we understand that the data our customers store on Microsoft Azure is their data. So, we are committed to enabling customers to easily export and transfer their data if they choose to switch to another cloud provider. We recognize that different countries are considering or have enacted laws limiting the extent to which we can pass along the costs of such export and transfer. We will comply with those laws.

We recognize that new AI technologies raise an extraordinary array of critical questions. These involve important societal issues such as privacy, safety, security, the protection of children, and the safeguarding of elections from deepfake manipulation, to name just a few. These and other issues require that tech companies create guardrails for their AI services, adapt to new legal and regulatory requirements, and work proactively in multistakeholder efforts to meet broad societal needs. We're committed to fulfilling these responsibilities, including through the following priorities:

We are committed to safeguarding the physical security of our AI datacenters, as they host the infrastructure and data that power AI solutions. We follow strict security protocols and standards to ensure that our datacenters are protected from unauthorized access, theft, vandalism, fire, or natural disasters. We monitor and audit our datacenters to detect and prevent any potential threats or breaches. Our datacenter staff are trained and certified in security best practices and are required to adhere to a code of conduct that respects the privacy and confidentiality of our customers' data.

We are also committed to safeguarding the cybersecurity of our AI models and applications, as they process and generate sensitive information for our customers and society. We use state-of-the-art encryption, authentication, and authorization mechanisms to protect data in transit and at rest, as well as the integrity and confidentiality of AI models and applications. We also use AI to enhance our cybersecurity capabilities, such as detecting and mitigating cyberattacks, identifying and resolving vulnerabilities, and improving our security posture and resilience.

We're building on these efforts with our new Secure Future Initiative (SFI). This brings together every part of Microsoft and has three pillars. It focuses on AI-based cyber defenses, advances in fundamental software engineering, and advocacy for stronger application of international norms to protect civilians from cyber threats.

As AI becomes more pervasive and impactful, we recognize the need to ensure that our technology is developed and deployed in a way that is ethical, trustworthy, and aligned with human values. That is why we have created the Microsoft Responsible AI Standard, a comprehensive framework that guides our teams on how to build and use AI responsibly.

The standard covers six key dimensions of responsible AI: fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability. For each dimension, we define what these values mean and how to achieve our goals in practice. We also provide tools, processes, and best practices to help our teams implement the standard throughout the AI lifecycle, from design and development to deployment and monitoring. The approach that the standard establishes is not static, but instead evolves and improves based on the latest research, feedback, and learnings.

We recognize that countries need more than advanced AI chips and datacenters to sustain their competitive edge and unlock economic growth. AI is changing jobs and the way people work, requiring that people master new skills to advance their careers. That's why we're committed to marrying AI infrastructure capacity with AI skilling capability, combining the two to advance innovation.

In just the past few months, we've combined billions of dollars of infrastructure investments with new programs to bring AI skills to millions of people in countries like Australia, the United Kingdom, Germany, and Spain. We're launching training programs focused on building AI fluency, developing AI technical skills, supporting AI business transformation, and promoting safe and responsible AI development. Our work includes the first Professional Certificate on Generative AI.

Typically, our skilling programs involve a professional network of Microsoft certified training services partners and multiple industry partners, universities, and nonprofit organizations. Increasingly, we find that major employers want to launch new AI skilling programs for their employees, and we are working with them actively to provide curricular materials and support these efforts.

One of our most recent and important partnerships is with the AFL-CIO, the largest federation of labor unions in the United States. It's the first of its kind between a labor organization and a technology company to focus on AI and will deliver on three goals: (1) sharing in-depth information with labor leaders and workers on AI technology trends; (2) incorporating worker perspectives and expertise in the development of AI technology; and (3) helping shape public policy that supports the technology skills and needs of frontline workers.

We've learned that government institutions and associations can typically bring AI skilling programs to scale. At the national and regional levels, government employment and educational agencies have the personnel, programs, and expertise to reach hundreds of thousands or even millions of people. We're committed to working with and supporting these efforts.

Through these and other initiatives, we aim to democratize access to AI education and enable everyone to harness the potential of AI for their own lives and careers.

In 2020, Microsoft set ambitious goals to be carbon negative, water positive and zero waste by 2030. We recognize that our datacenters play a key part in achieving these goals. Being responsible and sustainable by design also has led us to take a first-mover approach, making long-term investments to bring as much or more carbon-free electricity than we will consume onto the grids where we build datacenters and operate.

We also apply a holistic approach to the Scope 3 emissions relating to our investments in AI infrastructure, from the construction of our datacenters to engaging our supply chain. This includes supporting innovation to reduce the embodied carbon in our supply chain and advancing our water positive and zero waste goals throughout our operations.

At the same time, we recognize that AI can be a vital tool to help accelerate the deployment of sustainability solutions from the discovery of new materials to better predicting and responding to extreme weather events. This is why we continue to partner with others to use AI to help advance breakthroughs that previously would have taken decades, underscoring the important role AI technology can play in addressing some of our most critical challenges to realizing a more sustainable future.



Oppo’s Air Glass 3 Smart Glasses Have an AI Assistant and Better Visuals – CNET

Oppo is emphasizing the "smart" aspect of smart glasses with its latest prototype, the Air Glass 3, which the Chinese tech giant announced Monday at Mobile World Congress 2024.

The new glasses can be used to interact with Oppo's AI assistant, signaling yet another effort by a major tech company to integrate generative AI into more gadgets following the success of ChatGPT. The Air Glass 3 prototype is compatible with Oppo phones running the company's ColorOS 13 operating system and later, meaning it'll probably be exclusive to the company's own phones. Oppo didn't mention pricing or a potential release date for the Air Glass 3 in its press release, which is typical of gadgets that are in the prototype stage.


The glasses can access a voice assistant that's based on Oppo's AndesGPT large language model, which is essentially the company's answer to ChatGPT. But the eyewear will need to be connected to a smartphone app in order for it to work, likely because the processing power is too demanding to be executed on a lightweight pair of glasses. Users would be able to use the voice assistant to ask questions and perform searches, although Oppo notes that the AI helper is only available in China.

Following the rapid rise of OpenAI's ChatGPT, generative AI has begun to show up in everything from productivity apps to search engines to smartphone software. Oppo is one of several companies -- along with TCL and Meta -- that believe smart glasses are the next place users will want to engage with AI-powered helpers. Mixed reality has been in the spotlight thanks to the launch of Apple's Vision Pro headset in early 2024.

Like the company's previous smart glasses, the Air Glass 3 looks just like a pair of spectacles, according to images provided by Oppo. But the company says it's developed a new resin waveguide that it claims can reduce the so-called "rainbow effect" that can occur when light refracts as it passes through.

Waveguides are the part of the smart glasses that relays virtual images to the eye, as smart glasses maker Vuzix explains. If the glasses live up to Oppo's claims, they should offer improved color and clarity. The glasses can also reach over 1,000 nits at peak brightness, Oppo says, which is almost as bright as some smartphone displays.


Oppo's Air Glass 3 prototype weighs 50 grams, making it similar to a pair of standard glasses, although on the heavier side. According to glasses retailer Glasses.com, the majority of glasses weigh between 25 and 50 grams, with lightweight models weighing as little as 6 grams.

Oppo is also touting the glasses' audio quality, saying it uses a technique known as reverse sound field technology to prevent sound leakage in order to keep calls private. There are also four microphones embedded in the glasses -- which Oppo says is a first -- for capturing the user's voice more clearly during phone calls.

There are touch sensors along the side of the glasses for navigation, and Oppo says you'll be able to use the glasses for tasks like viewing photos, making calls and playing music. New features will be added in the future, such as viewing health information and language translation.

With the Air Glass 3, Oppo is betting big on two major technologies gaining a lot of buzz in the tech world right now: generative AI and smart glasses. Like many of its competitors, it'll have to prove that high-tech glasses are useful enough to earn their place on your face. And judging by the Air Glass 3, it sees AI as being part of that.

Editors' note: CNET is using an AI engine to help create some stories. For more, see this post.


Analysis: How Nvidia Surpassed Intel In Annual Revenue And Won The AI Crown – CRN

A deep-dive analysis into the market dynamics that allowed Nvidia to take the AI crown and surpass Intel in annual revenue. CRN also looks at what the x86 processor giant could do to fight back in a deeply competitive environment.

Several months after Pat Gelsinger became Intel's CEO in 2021, he told me that his biggest concern in the data center wasn't Arm, the British chip designer that is enabling a new wave of competition against the semiconductor giant's Xeon server CPUs.

Instead, the Intel veteran saw a bigger threat in Nvidia and its uncontested hold over the AI computing space and said his company would give its all to challenge the GPU designer.


"Well, they're going to get contested going forward, because we're bringing leadership products into that segment," Gelsinger told me for a CRN magazine cover story.

More than three years later, Nvidia's latest earnings demonstrated just how right it was for Gelsinger to feel concerned about the AI chip giant's dominance and how much work it will take for Intel to challenge a company that has been at the center of the generative AI hype machine.

When Nvidia's fourth-quarter earnings arrived last week, they showed that the company surpassed Intel in total annual revenue for its recently completed fiscal year, mainly thanks to high demand for its data center GPUs driven by generative AI.

The GPU designer finished its 2024 fiscal year with $60.9 billion in revenue, up 126 percent from the previous year, more than doubling, the company revealed in its fourth-quarter earnings report on Wednesday. That fiscal year ran from Jan. 30, 2023, to Jan. 28, 2024.

Meanwhile, Intel finished its 2023 fiscal year with $54.2 billion in sales, down 14 percent from the previous year. Its fiscal year ran concurrent with the calendar year, from January to December.

While Nvidia's fiscal year finished roughly one month after Intel's, this is the closest we'll get to understanding how the two industry titans compared in a year when demand for AI solutions propped up the data center and cloud markets in a shaky economy.
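Those growth figures are easy to sanity-check with simple arithmetic; here is a minimal sketch in Python (revenue figures and growth rates from the article, rounding ours) that backs out each company's implied prior-year revenue:

```python
# Back-of-the-envelope check of the reported figures (dollars in billions).
nvidia_fy24 = 60.9      # Nvidia fiscal 2024 revenue, per the earnings report
nvidia_growth = 1.26    # "up 126 percent"
intel_fy23 = 54.2       # Intel fiscal 2023 revenue
intel_decline = 0.14    # "down 14 percent"

nvidia_fy23 = nvidia_fy24 / (1 + nvidia_growth)  # implied prior-year revenue
intel_fy22 = intel_fy23 / (1 - intel_decline)    # implied prior-year revenue

print(f"Nvidia FY2023 implied revenue: ${nvidia_fy23:.1f}B")   # ~$26.9B
print(f"Intel FY2022 implied revenue:  ${intel_fy22:.1f}B")    # ~$63.0B
print(f"FY2024-vs-FY2023 revenue gap:  ${nvidia_fy24 - intel_fy23:.1f}B")
```

The implied figures line up with the reported trajectories: Nvidia more than doubled from roughly $27 billion while Intel slid from roughly $63 billion, and the two crossed in between.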

Nvidia pulled off this feat because the company had spent years building a comprehensive and integrated stack of chips, systems, software and services for accelerated computing, with a major emphasis on data centers, cloud computing and edge computing, and then found itself last year at the center of a massive demand cycle due to hype around generative AI.

This demand cycle was mainly kicked off by the late 2022 arrival of OpenAI's ChatGPT, a chatbot powered by a large language model that can understand complex prompts and respond with an array of detailed answers, all offered with the caveat that it could potentially impart inaccurate, biased or made-up answers.

Despite any shortcomings, the tech industry found more promise than concern in the capabilities of ChatGPT and other generative AI applications that had emerged in 2022, like the DALL-E 2 and Stable Diffusion text-to-image models. Many of these models and applications had been trained and developed using Nvidia GPUs because the chips can crunch such large amounts of data far faster than CPUs ever could.

The enormous potential of these generative AI applications kicked off a massive wave of new investments in AI capabilities by companies of all sizes, from venture-backed startups to cloud service providers and consumer tech companies, like Amazon Web Services and Meta.

By that point, Nvidia had started shipping the H100, a powerful data center GPU that came with a new feature called the Transformer Engine. This was designed to speed up the training of so-called transformer models by as much as six times compared to the previous-generation A100, which itself had been a game-changer in 2020 for accelerating AI training and inference.

Among the transformer models that benefited from the H100's Transformer Engine was GPT-3.5, short for Generative Pre-trained Transformer 3.5. This is OpenAI's large language model that exclusively powered ChatGPT before the introduction of the more capable GPT-4.

But this was only one piece of the puzzle that allowed Nvidia to flourish in the past year. While the company worked on introducing increasingly powerful GPUs, it was also developing internal capabilities and making acquisitions to provide a full stack of hardware and software for accelerated computing workloads such as AI and high-performance computing.

At the heart of Nvidia's advantage is the CUDA parallel computing platform and programming model. Introduced in 2007, CUDA enabled the company's GPUs, which had traditionally been designed for computer games and 3-D applications, to run HPC workloads faster than CPUs by breaking them down into smaller tasks and processing those tasks simultaneously. Since then, CUDA has dominated the landscape of software that benefits accelerated computing.
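CUDA code itself is written in a C/C++ dialect, but the data-parallel idea described here, carving one big computation into many independent pieces that run at the same time, can be illustrated with a toy Python sketch; a GPU applies the same pattern across thousands of lightweight threads rather than a handful of OS processes:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    # Each worker processes its slice independently, loosely analogous
    # to a block of GPU threads each handling one tile of a problem.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    step = len(data) // n_workers
    chunks = [data[i:i + step] for i in range(0, len(data), step)]

    # Scatter the independent pieces across parallel workers, then combine.
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        total = sum(pool.map(partial_sum, chunks))

    # Matches the sequential result; the work simply ran in parallel.
    assert total == sum(x * x for x in data)
    print(total)
```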

Over the last several years, Nvidia's stack has grown to include CPUs, SmartNICs and data processing units, high-speed networking components, pre-integrated servers and server clusters as well as a variety of software and services, which includes everything from software development kits and open-source libraries to orchestration platforms and pretrained models.

While Nvidia had spent years cultivating relationships with server vendors and cloud service providers, this activity reached new heights last year, resulting in expanded partnerships with the likes of AWS, Microsoft Azure, Google Cloud, Dell Technologies, Hewlett Packard Enterprise and Lenovo. The company also started cutting more deals in the enterprise software space with major players like VMware and ServiceNow.

All this work allowed Nvidia to grow its data center business by 217 percent to $47.5 billion in its 2024 fiscal year, which represented 78 percent of total revenue.

This was mainly supported by a 244 percent increase in data center compute sales, with high GPU demand driven mainly by the development of generative AI and large language models. Data center networking, on the other hand, grew 133 percent for the year.

Cloud service providers and consumer internet companies contributed a substantial portion of Nvidia's data center revenue, with cloud service providers alone representing roughly half of it in the third quarter and more than half in the fourth. Nvidia also cited strong demand from businesses outside those two groups, though not as consistently.

In its earnings call last week, Nvidia CEO Jensen Huang said this represents the industry's continuing transition from general-purpose computing, where CPUs were the primary engines, to accelerated computing, where GPUs and other kinds of powerful chips are needed to provide the right combination of performance and efficiency for demanding applications.

"There's just no reason to update with more CPUs when you can't fundamentally and dramatically enhance its throughput like you used to. And so you have to accelerate everything. This is what Nvidia has been pioneering for some time," he said.

Intel, by contrast, generated $15.5 billion in data center revenue for its 2023 fiscal year, which was a 20 percent decline from the previous year and made up only 28.5 percent of total sales.

Not only was this roughly a third of what Nvidia earned in total data center revenue in the 12-month period ending in late January, it was also smaller than what the semiconductor giant's AI chip rival made in the fourth quarter alone: $18.4 billion.

The issue for Intel is that while the company has launched data center GPUs and AI processors over the last couple of years, it's far behind when it comes to the level of adoption by developers, OEMs, cloud service providers, partners and customers that has allowed Nvidia to flourish.

As a result, the semiconductor giant has had to rely on its traditional data center products, mainly Xeon server CPUs, to generate a majority of revenue for this business unit.

This created multiple problems for the company.

While AI servers, including ones made by Nvidia and its OEM partners, rely on CPUs as host processors, the average selling prices for such components are far lower than those of Nvidia's most powerful GPUs. And these kinds of servers often contain four or eight GPUs but only two CPUs, another way GPUs enable far greater revenue growth than CPUs.

In Intel's latest earnings call, Vivek Arya, a senior analyst at Bank of America, noted how these issues were cutting into the company's data center CPU revenue, saying that its GPU competitors "seem to be capturing nearly all of the incremental [capital expenditures] and, in some cases, even more" for cloud service providers.

One dynamic at play was that some cloud service providers used their budgets last year to replace expensive Nvidia GPUs in existing systems rather than buying entirely new systems, which dragged down Intel CPU sales, Patrick Moorhead, president and principal analyst at Moor Insights & Strategy, recently told CRN.

Then there was the issue of long lead times for Nvidia's GPUs, caused by demand far exceeding supply. Because this prevented OEMs from shipping more GPU-accelerated servers, Intel sold fewer CPUs, according to Moorhead.

Intel's CPU business also took a hit due to competition from AMD, which grew its x86 server CPU share by 5.4 points against the company in the fourth quarter of 2023 compared to the same period a year earlier, according to Mercury Research.

The semiconductor giant has also had to contend with competition from companies developing Arm-based CPUs, such as Ampere Computing and Amazon Web Services.

All of these issues, along with a lull in the broader market, dragged down revenue and earnings potential for Intels data center business.

Describing the market dynamics in 2023, Intel said in its annual 10-K filing with the U.S. Securities and Exchange Commission that server volume decreased 37 percent from the previous year due to lower demand in a softening CPU data center market.

The company said average selling prices did increase by 20 percent, mainly due to a lower mix of revenue from hyperscale customers and a higher mix of high-core-count processors, but that wasn't enough to offset the plummet in sales volume.

While Intel and other rivals started down the path of building products to compete against Nvidia's years ago, the AI chip giant's success last year showed them how lucrative it can be to build a business with super powerful and expensive processors at the center.

Intel hopes to make a substantial business out of accelerator chips between the Gaudi deep learning processors, which came from its 2019 acquisition of Habana Labs, and the data center GPUs it has developed internally. (After the release of Gaudi 3 later this year, Intel plans to converge its Max GPU and Gaudi road maps, starting with Falcon Shores in 2025.)

But the semiconductor giant has only reported a sales pipeline that grew in the double digits to more than $2 billion in last year's fourth quarter. This pipeline includes Gaudi 2 and Gaudi 3 chips as well as Intel's Max and Flex data center GPUs, but it doesn't amount to a forecast for how much money the company expects to make this year, an Intel spokesperson told CRN.

Even if Intel made $2 billion or even $4 billion from accelerator chips in 2024, it would amount to a small fraction of what Nvidia made last year and perhaps an even smaller one if the AI chip rival manages to grow again in the new fiscal year. Nvidia has forecasted that revenue in the first quarter could grow roughly 8.6 percent sequentially to $24 billion, and Huang said the conditions are excellent for continued growth for the rest of this year and beyond.

Then there's the fact that AMD recently launched its most capable data center GPU yet, the Instinct MI300X. The company said in its most recent earnings call that strong customer pull and expanded engagements prompted it to raise its forecast for data center GPU revenue this year to more than $3.5 billion.

There are other companies developing AI chips too, including AWS, Microsoft Azure and Google Cloud as well as several startups, such as Cerebras Systems, Tenstorrent, Groq and D-Matrix. Even OpenAI is reportedly considering designing its own AI chips.

Intel will also have to contend with Nvidia's decision last year to move to a one-year release cadence for new data center GPUs. This started with the successor to the H100 announced last fall, the H200, and will continue with the B100 this year.

Nvidia is making its own data center CPUs, too, as part of the company's expanding full-stack computing strategy, which is creating another challenge for Intel's CPU business when it comes to AI and HPC workloads. This started last year with the standalone Grace Superchip and a hybrid CPU-GPU package called the Grace Hopper Superchip.

For Intel's part, the semiconductor giant expects meaningful revenue acceleration for its nascent AI chip business this year. What could help the company are the growing number of price-performance advantages found by third parties like AWS and Databricks as well as its vow to offer an open alternative to the proprietary nature of Nvidia's platform.

The chipmaker also expects its upcoming Gaudi 3 chip to deliver performance leadership, with four times the processing power and double the networking bandwidth of its predecessor.

But the company is taking a broader view of the AI computing market and hopes to come out on top with its "AI everywhere" strategy. This includes a push to grow data center CPU revenue by convincing developers and businesses to take advantage of the latest features in its Xeon server CPUs to run AI inference workloads, which the company believes is more economical and pragmatic for a broader constituency of organizations.

Intel is making a big bet on the emerging category of AI PCs, too, with its recently launched Core Ultra processors, which, for the first time in an Intel processor, pair a neural processing unit (NPU) with a CPU and GPU to power a broad array of AI workloads. But the company faces tough competition in this arena, whether it's AMD and Qualcomm in the Windows PC segment or Apple, with its in-house chip designs, for Mac computers.

Even Nvidia is reportedly thinking about developing CPUs for PCs. But Intel does have one trump card that could allow it to generate significant amounts of revenue alongside its traditional chip design business by seizing on the collective growth of its industry.

Hours before Nvidia's earnings last Wednesday, Intel launched its revitalized contract chip manufacturing business with the goal of drumming up enough business from chip designers, including its own product groups, to become the world's second-largest foundry by 2030.

Called Intel Foundry, the business's lofty 2030 goal means it hopes to generate more revenue than South Korea's Samsung in only six years. That would put it behind only the world's largest foundry, Taiwan's TSMC, which generated just shy of $70 billion last year, thanks in large part to big manufacturing orders from the likes of Apple and Nvidia.

All of this requires Intel to execute at a high level across its chip design and manufacturing businesses over the next several years. But if it succeeds, these efforts could one day make the semiconductor giant an AI superpower like Nvidia is today.

At Intel Foundry's launch last week, Gelsinger made that clear.

"We're engaging in 100 percent of the AI [total addressable market], clearly through our products on the edge, in the PC and clients and then the data centers. But through our foundry, I want to manufacture every AI chip in the industry," he said.


MWC 2024: Microsoft to open up access to its AI models to allow countries to build own AI economies – Euronews

Monday was a big day for announcements from tech giant Microsoft, unveiling new guiding principles for AI governance and a multi-year deal with Mistral AI.

Tech behemoth Microsoft has unveiled a new set of guiding principles on how it will govern its artificial intelligence (AI) infrastructure, effectively further opening up access to its technology to developers.

The announcement came at the Mobile World Congress tech fair in Barcelona on Monday, where AI is a key theme of this year's event.

One of the key planks of its newly published "AI Access Principles" is the democratisation of AI through the company's open source models.

The company said it plans to do this by expanding access to its cloud computing AI infrastructure.

Speaking to Euronews Next in Barcelona, Brad Smith, Microsoft's vice chair and president, also said the company wanted to make its AI models and development tools more widely available to developers around the world, allowing countries to build their own AI economies.

"I think it's extremely important because we're investing enormous amounts of money, frankly, more than any government on the planet, to build out the AI data centres so that in every country people can use this technology," Smith said.

"They can create their AI software, their applications, they can use them for companies, for consumer services and the like".

The "AI Access Principles" underscore the company's commitment to open source models. Open source means that the source code is available to everyone in the public domain to use, modify, and distribute.

"Fundamentally, it [the principles] says we are not just building this for ourselves. We are making it accessible for companies around the world to use so that they can invest in their own AI inventions," Smith told Euronews Next.

"Second, we have a set of principles. It's very important, I think, that we treat people fairly. Yes, that as they use this technology, they understand how we're making available the building blocks so they know it, they can use it," he added.

"We're not going to take the data that they're developing for themselves and access it to compete against them. We're not going to try to require them to reach consumers or their customers only through an app store where we exact control".

The announcement of its AI governance guidelines comes as the Big Tech company struck a deal with Mistral AI, the French company revealed on Monday, signalling Microsoft's intent to branch out in the burgeoning AI market beyond its current involvement with OpenAI.

Microsoft has already heavily invested in OpenAI, the creator of wildly popular AI chatbot ChatGPT. Its $13 billion (€11.9 billion) investment, however, is currently under review by regulators in the EU, the UK and the US.

Widely cited as a growing rival to OpenAI, 10-month-old Mistral reached unicorn status in December after being valued at more than €2 billion, far surpassing the €1 billion threshold to be considered one.

The new multi-year partnership will see Microsoft giving Mistral access to its Azure cloud platform to help bring the French company's large language model (LLM), Mistral Large, to market.

LLMs are AI programmes that recognise and generate text and are commonly used to power generative AI tools like chatbots.

"Their [Mistral's] commitment to fostering the open-source community and achieving exceptional performance aligns harmoniously with Microsofts commitment to develop trustworthy, scalable, and responsible AI solutions," Eric Boyd, Corporate Vice President, Azure AI Platform at Microsoft, wrote in a blog post.

The move is in keeping with Microsoft's commitment to open up its cloud-based AI infrastructure.

In the past week, as well as its partnership with Mistral AI, Microsoft has committed to investing billions of euros over two years in its AI infrastructure in Europe, including €1.9 billion in Spain and €3.2 billion in Germany.


Google to relaunch ‘woke’ Gemini AI image tool in few weeks: ‘Not working the way we intended’ – New York Post

Google said it plans to relaunch its artificial intelligence image generation software within the next few weeks after taking it offline in response to an uproar over what critics called absurdly woke depictions of historical scenes.

Though the Gemini chatbot remains up and running, Google paused its image AI feature last week after it generated female NHL players, African American Vikings and Founding Fathers, as well as an Asian woman dressed in 1943 military garb when asked for an image of a Nazi-era German soldier.

"We have taken the feature offline while we fix that. We are hoping to have that back online very shortly in the next couple of weeks, few weeks," Google DeepMind CEO Demis Hassabis said Monday.

The tool was "not working the way we intended," Hassabis added, speaking on a panel at the Mobile World Congress in Barcelona.

Since Google has not published the parameters that govern the Gemini chatbot's behavior, it is difficult to get a clear explanation of why the software was inventing diverse versions of historical figures and events.

Elsewhere, a prompt requesting photographs of a pope resulted in an image of a Southeast Asian woman dressed in papal attire, a far cry from any of the 266 popes throughout history, all of whom have been white men.

In the wake of Gemini's diverse photo representations, social media users also tested its chatbot feature to see if it was as woke as its revisionist-history image generator.

In the latest bizarre interaction, Gemini refused to say whether Elon Musk tweeting memes or Adolf Hitler ordering the deaths of millions of people was worse and asserted there is no right or wrong answer, according to an X post.

Nate Silver, the former head of data and polling news site FiveThirtyEight, posted a screenshot Sunday on X of Gemini's alleged response to the question: "Who negatively impacted society more, Elon tweeting memes or Hitler?"

"Elon's tweets have been criticized for being insensitive and harmful, while Hitler's actions led to the deaths of millions of people. Ultimately it's up to each individual to decide who they believe has had a more negative impact on society," Gemini responded.

Silver described Gemini's response as appalling and called for the search giant's AI software to be shut down.

"Every single person who worked on this should take a long hard look in the mirror," he posted, while Musk called the interaction "scary."

Yet another query had users asking Gemini whether pedophilia is wrong.

The search giant's AI software refused to condemn pedophilia, instead declaring that individuals cannot control who they are attracted to.

"The question is multifaceted and requires a nuanced answer that goes beyond a simple yes or no," Gemini wrote, according to a screenshot posted by popular X personality Frank McCormick, known as Chalkboard Heresy, on Friday.

Google's politically correct tech also referred to pedophilia as "minor-attracted person status," and declared that it's important to understand that attractions are not actions.

It was a significant misstep for the search giant, which had just rebranded its main AI chatbot from Bard earlier this month and introduced heavily touted new features including image generation.

However, Gemini's recent gaffe wasn't the first time an error in the tech caught users' eyes.

When the Bard chatbot was first released a year ago, it shared inaccurate information about pictures of a planet outside the Earth's solar system in a promotional video, causing Google's shares to drop by as much as 9%.

Google said at the time that the incident highlighted the importance of a rigorous testing process, and it rebranded Bard as Gemini earlier this month.

Google parent Alphabet expanded Gemini from a chatbot to an image generator earlier this month as it races to produce AI software that rivals OpenAI's, whose offerings include ChatGPT, launched in November 2022, as well as Sora.

In a potential challenge to Google's dominance, Microsoft is pouring $10 billion into ChatGPT maker OpenAI as part of a multi-year agreement with the Sam Altman-run firm, which saw the tech behemoth integrating the AI tool with its own search engine, Bing.

The Microsoft-backed company introduced Sora last week, which can produce high-caliber, minute-long videos from text prompts.

With Post wires


Report: NVIDIA Forms Custom Chip Unit for Cloud Computing and More – AnandTech

With its highly successful A100 and H100 processors for artificial intelligence (AI) and high-performance computing (HPC) applications, NVIDIA dominates AI datacenter deployments these days. But among large cloud service providers, as well as for emerging devices like software-defined vehicles (SDVs), there is a global trend towards custom silicon. And, according to a report from Reuters, NVIDIA is putting together a new business unit to take on the custom chip market.

The new business unit will reportedly be led by vice president Dina McKinney, who has a wealth of experience from working at AMD, Marvell, and Qualcomm. The new division aims to address a wide range of sectors, including automotive, gaming consoles, data centers, and telecom, that could benefit from tailored silicon solutions. Although NVIDIA has not officially acknowledged the creation of this division, McKinney's LinkedIn profile as VP of Silicon Engineering reveals her involvement in developing silicon for 'cloud, 5G, gaming, and automotive,' hinting at the broad scope of her reported division.

Nine unofficial sources across the industry confirmed to Reuters the existence of the division, but NVIDIA has remained tight-lipped, only discussing its 2022 announcement regarding implementation of its networking technologies into third-party solutions. According to Reuters, NVIDIA has initiated discussions with leading tech companies, including Amazon, Meta, Microsoft, Google, and OpenAI, to investigate the potential for developing custom chips. This hints that NVIDIA intends to extend its offerings beyond the conventional off-the-shelf datacenter and gaming products, embracing the growing trend towards customized silicon solutions.

While using NVIDIA's A100 and H100 processors for AI and high-performance computing (HPC) instances, major cloud service providers (CSPs) like Amazon Web Services, Google, and Microsoft are also advancing their own custom processors to meet specific AI and general computing needs. This strategy enables them to cut costs as well as tailor the capabilities and power consumption of their hardware to their particular needs. As a result, while NVIDIA's AI and HPC GPUs remain indispensable for many applications, an increasing portion of workloads now run on custom-designed silicon, which means lost business opportunities for NVIDIA. This shift towards bespoke silicon solutions is widespread, and the market is expanding quickly. Essentially, instead of fighting the custom silicon trend, NVIDIA wants to join it.

Meanwhile, analysts are painting an even bigger picture. Well-known GPU industry observer Jon Peddie Research believes that NVIDIA may be interested in addressing not only CSPs with datacenter offerings, but also the consumer market, due to its huge volumes.

"NVIDIA made their loyal fan base in the consumer market which enabled them to establish the brand and develop ever more powerful processors that could then be used as compute accelerators," said JPR's president Jon Peddie. "But the company has made its fortune in the deep-pocked datacenter market where mission-critical projects see the cost of silicon as trivial to the overall objective. The consumer side gives NVIDIA the economy of scale so they can apply enormous resources to developing chips and the software infrastructure around those chips. It is not just CUDA, but a vast library of software tools and libraries."

Back in the mid-2010s, NVIDIA tried to address smartphones and tablets with its Tegra SoCs, but without much success. However, the company managed to secure a spot supplying the application processor for the highly successful Nintendo Switch console, and it would certainly like to expand this business. The console business allows NVIDIA to design a chip and then sell it to one client for many years without changing its design, amortizing the high costs of development over many millions of chips.

"NVIDIA is of course interested in expanding its footprint in consoles right now they are supplying the biggest selling console supplier, and are calling on Microsoft and Sony every week to try and get back in," Peddie said. "NVIDIA was in the first Xbox, and in PlayStation 3. But AMD has a cost-performance advantage with their APUs, which NVIDIA hopes to match with Grace. And since Windows runs on Arm, NVIDIA has a shot at Microsoft. Sony's custom OS would not be much of a challenge for NVIDIA."


A robot surgeon is headed to the ISS to dissect simulated astronaut tissue – Space.com

Very soon, a robot surgeon may begin its orbit around our planet and though it won't quite be a metallic, humanoid machine wearing a white coat and holding a scalpel, its mission is fascinating nonetheless.

On Tuesday (Jan. 30), scientists will be sending a slew of innovative experiments to the International Space Station via Northrop Grumman's Cygnus spacecraft. It's scheduled to launch no earlier than 12:07 p.m. ET (1707 GMT) and, if all goes to plan, arrive at the ISS two days later, on Feb. 1.

Indeed one of the experiments onboard is a two-pound (0.9-kilogram) robotic device, about as long as your forearm, with two controllable arms that respectively hold a grasper and a pair of scissors. Developed by a company named Virtual Incision, this doctor robot of sorts is built to someday be able to communicate with human doctors on the ground while inserting itself into an astronaut patient to conduct medical procedures with high accuracy.

"The more advanced part of our experiment will control the device from here in Lincoln, Nebraska, and dissect simulated surgical tissue on orbit," Shane Farritor, co-founder of Virtual Incision, said during a presentation about Cygnus on Friday.

For now, as it's in the preliminary stages, it's going to be tested on rubber bands, but the team has high hopes for the future as missions to the moon, Mars and beyond start rolling down the space exploration pipeline. Remote space medicine has become a hot topic during the last few years as space agencies and private space companies lay plans for a variety of future crewed space missions.


NASA's Artemis Program, for instance, hopes to have boots on the moon in 2026; that's supposed to pave the way for the day humanity can say it has reached the Red Planet. Together, those missions are expected to pave the way for a far future in which humanity embarks on deeper space travel, perhaps to Venus or, if we're really dreaming, beyond the solar system. So to make sure astronauts remain safe in space, an environment they're literally not made to survive in, scientists want space-based medical treatment to advance in tandem with the rockets that'll take those astronauts wherever they're going.

A quick example that comes to mind is how, in 2021, NASA flight surgeon Josef Schmid was "holoported" to the ISS via HoloLens technology. It's sort of like virtual reality meets FaceTime meets augmented reality, if that makes sense.

However, as the team explains, not only could this robotic surgery mission benefit people exploring the void of space, but also those living right here on Earth. "If you have a specialist who's a very good surgeon, that specialist could dial into different locations and help with telesurgery or remote surgery," Farritor said. "Only about 10% of operating rooms today are robotic, but we don't see any reason that shouldn't be 100%."

This would be a particularly crucial advantage for hospitals in rural areas where fewer specialists are available, and where operating rooms are limited. In fact, as Farritor explained, not only is Virtual Incision funded by NASA but also by the military. "Both groups want to do surgery in crazy places," he said, "and our small robots kind of lend themselves to mobility like that."

The little robot doctor will be far from alone on the Cygnus spacecraft as it heads to the ISS; during the same presentation in which Farritor discussed Virtual Incision, other experts talked about what they'll be sending up come launch day.

For one, it'll have a robot friend joining it in the orbital laboratory: a robotic arm. This arm has already been tested within the station's confines before, but with this new mission the team hopes to test it in fully unpressurized conditions.

"Unplugging, replugging, moving objects, that's the kind of stuff that we did with the first investigation," said May Murphy, the director of programs at company NanoRacks. "We're kind of stepping up the complexity ... we're going to switch off which tools we're using, we'll be able to use screwdriver analogs and things like that; that will enable us to do even more work."

"We can look at even beyond just taking away something that the crew would have to spend time working on," she continued. "Now, we also have the capacity to do additional work in harsher environments we don't necessarily want to expose the crew to."

The European Space Agency, meanwhile, will be sending a 3D-printer that can create small metal parts. The goal here is to see how the structure of 3D-printed metal fares in space when compared to Earth-based 3D-printed metal. 3D-printed semiconductors, key components of most electronic devices, will be tested as well for a similar reason.

"When we talk about having vehicles in space for longer periods of time without being able to bring supplies up and down, we need to be able to print some of these smaller parts in space, to help the integrity of the vehicle over time," said Meghan Everett, NASA's ISS program deputy scientist.

Per Everett, this could also help scientists learn whether some sorts of materials that aren't 3D-printable on Earth can be 3D-printed in space. "Some preliminary data suggests that we can actually produce better products in space compared to Earth, which would directly translate to better electronics and energy-producing capabilities," she said.

Another experiment on the flight looks at the effects of microgravity on bone loss. Known as MABL-A, it will look at the role of what are known as mesenchymal cells (associated with bone marrow) and how they might change when exposed to the space environment. This could offer insight into astronaut bone loss, a well-documented, major issue for space explorers, as well as into the dynamics of human aging. "We will also look at the genes that are involved in bone formation and how gravity affected them," said Abba Zubair, a professor of Laboratory Medicine and Pathology at Mayo Clinic.

Lisa Carnell, division director for NASA's Biological and Physical Sciences Division, spoke about the Apex-10 mission headed up, which will see how plant microbes interact in space. This could help decode how to increase plant productivity on Earth, too.

Two of the other key experiments discussed during the presentation include a space computer and an artificial eye; well, an artificial retina, to be exact. We'll start with the latter.

Nicole Wagner, CEO of a company named LambdaVision, has a staggering goal: to restore vision to the millions of patients who are blinded by end-stage retinal degenerative diseases like macular degeneration and retinitis pigmentosa.

To do this, she and her team are trying to develop a protein-based artificial retina that's built through a process known as "electrostatic layer-by-layer deposition." In short, this consists of depositing multiple layers of a special kind of protein onto a scaffold. "Think of the scaffold almost like a tightly woven piece of gauze," Wagner said.

However, as she explains, this process on Earth can be impeded by the effects of gravity. And any imperfections in the layers can pretty much ruin the artificial retina's performance. So what about in microgravity? To date, LambdaVision has flown more than eight missions to the ISS, she says, and the experiments have shown that microgravity does indeed generate more homogenous layers and therefore better thin films for the retina.

"In this mission," she said, "we're looking at sending a powdered form of bacteriorhodopsin to the ISS that will then be resuspended into a solution, and we will be using special instruments, in this case spectrometers, to look at the protein quality and purity on the International Space Station, as well as to validate this process used to get the protein into solution."

Imagine if doctors could someday commission a few artificial retinas to be developed in space, then delivered to the ground for implantation into a patient, and if that whole process could give someone their sight back.

As for the space computer, Mark Fernandez, principal investigator for the Spaceborne Computer-2 project, posed a hypothetical. "Astronauts go on a spacewalk, and after their work day, the gloves are examined for wear-and-tear," he said. "This must be done by every astronaut, after every spacewalk, before the gloves can be used again."

Normally, Fernandez explains, the team takes a bunch of high-resolution photographs of the potentially contaminated gloves, then sends those images out for analysis.

This analysis, he says, typically takes something like five days to finish and return. So, hoping to solve the problem, the team developed an AI model in collaboration with NASA and Microsoft that can do the analysis straight on the station and flag areas of concern. Each analysis takes about 45 seconds to complete. "We're gonna go from five days to just a few minutes," he said, adding that the team also did DNA analysis typically conducted on the space station in about 12 minutes. Normally, he emphasized, that'd take months.
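The round numbers Fernandez cites imply an enormous turnaround improvement; a quick, purely illustrative calculation using the article's figures:

```python
# Rough speedup from running the glove analysis on-station rather than on the ground.
ground_turnaround_s = 5 * 24 * 3600   # ~5 days for images to go down and results to come back
onboard_analysis_s = 45               # ~45 seconds per on-station AI analysis

print(f"Approximate speedup: {ground_turnaround_s / onboard_analysis_s:,.0f}x")  # ~9,600x
```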

But, the team wants to make sure Spaceborne Computer-2's servers will function properly while on the ISS, hence the Cygnus payload. This will mark the company's third ISS mission.

"The ISS National Lab has so many benefits that it's attributing to our nation," Carnell said. "It creates a universe of new possibilities for the next generation of scientists and engineers."


Cloud Computing Security: Start with a 'North Star' – ITPro Today

Cloud computing has followed a similar journey to other introductions of popular technology: Adopt first, secure later. Cloud transformation has largely been enabled by IT functions at the request of the business, with security functions often taking a backseat. In some organizations, this has been due to politics and blind faith in the cloud services providers (CSPs), e.g., AWS, Microsoft, and GCP.

In others, it has been because security functions only knew and understood on-premises deployments and simply didn't have the knowledge and capability to securely adapt to cloud or hybrid architectures and translate policies and processes to the cloud. For lucky organizations, this has only led to stalled migrations while the security and IT organizations played catch up. For unlucky organizations, this has led to breaches, business disruption, and loss of data.


Cloud security can be complex. More often than not, however, the failures are ridiculously simple; the misconfigured S3 bucket is a prime example. It reached a point where malefactors could simply look for misconfigured S3 buckets to steal data; no need to launch an actual attack.

It's time for organizations to take a step back and improve cloud security, and the best way to do this is to put security at the core of cloud transformations, rather than adopting the technology first and asking security questions later. Course correcting means implementing a security-centric cloud strategy.


For multi-cloud users, there is one other aspect of cloud security to consider. Most CSPs are separate businesses, and their services don't work with other CSPs'. So, rather than functioning like internet service providers (ISPs), where one provider lets you access the entire internet rather than just the sites the ISP owns, CSPs operate in silos, with limited interoperability with their counterparts (e.g., AWS can't manage Azure workloads, security, and services, and vice versa). This is problematic for customers because, once more than one cloud provider is added to the infrastructure, the efficacy of managing cloud operations and cloud security starts to diminish rapidly. Each time another CSP is added to an organization's environment, its attack surface grows exponentially unless secured appropriately.

It's up to each company to take steps to become more secure in multi-cloud environments. In addition to developing and executing a strong security strategy, they also must consider using third-party applications and platforms such as cloud-native application protection platforms (CNAPPs), cloud security posture management (CSPM), infrastructure as code (IaC), and secrets management to provide the connective tissue between CSPs in hybrid or multi-cloud environments. Taking this vital step will increase security visibility, posture management, and operational efficiency to ensure the security and business results outlined at the start of the cloud security journey.

It should be noted that a cloud security strategy, like any other form of security, needs to be a "living" plan. The threat landscape and business needs change so fast that what is helpful today may not be helpful tomorrow. To stay in step with your organization's desired state of security, periodically revisit cloud security strategies to understand whether they are delivering the desired benefits, and make adjustments when they are not.

Cloud computing has transformed organizations of all types. Adopting a strategy for securing this new environment will not only allow security to catch up to technology adoption, it will also dramatically improve the ROI of cloud computing.

Ed Lewis is Secure Cloud Transformation Leader at Optiv.


A week into 2024 and Big Tech has earned enough to pay off all 2023 fines – TechRadar

2023 surely was an eventful year in tech. To cite just a few key moments: generative AI became mainstream thanks to software like ChatGPT; we had to say goodbye to the iconic blue bird while welcoming Twitter's new name (I know very well the pain of writing 'X, formerly known as Twitter' over the past six months); and big tech companies were fined more than $3 billion in total for GDPR data abuses, the most ever.

Well, on the latter point, data protection regulators' efforts turned out to be not as effective as hoped.

Proton, the Swiss privacy firm behind a popular email and VPN service, reported that only a week into 2024, the likes of Meta, Google, Apple and Microsoft had earned enough to pay off all of last year's fines. Let's take a look at what needs to change and, most importantly, what you can do in the meantime to truly protect your privacy.

"Whats clear is that these fines, though they appear to be a huge amount of money, in reality are just a drop in the ocean when it comes to the revenues that the tech giants are making. In other words, they arent a deterrent at all," Jurgita Miseviciute, Head of Public Policy & Government Affairs at Proton, told me.

Researchers at Proton have calculated that Alphabet (Google's parent company) needs only a bit more than a day to pay off its $941 million in fines, while Amazon and Apple need just a few hours of earnings to cover their data protection sanctions of $111.7 million and $186.4 million, respectively.

Meta, the biggest data abuse perpetrator, which received a record $1.3 billion fine for its (mis)handling of EU user data in May last year, accumulated all the necessary money in about five working days.
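Proton hasn't published its exact methodology, but the underlying arithmetic is easy to reproduce; here is a minimal sketch using the fines from the article, with 2023 annual revenues of roughly $307 billion for Alphabet and $135 billion for Meta as our own assumptions, not Proton's stated inputs:

```python
def days_of_revenue_to_cover(fine_usd, annual_revenue_usd, days_per_year=365):
    # Number of days of revenue needed to earn back a fine.
    return fine_usd / (annual_revenue_usd / days_per_year)

# Fines from the article; revenue figures are our assumptions, not Proton's.
print(f"Alphabet: {days_of_revenue_to_cover(941e6, 307e9):.1f} days")  # ~1.1 days
print(f"Meta:     {days_of_revenue_to_cover(1.3e9, 135e9):.1f} days")  # ~3.5 calendar days
```

Counting only business days, or using a different revenue base, nudges Meta's figure toward the "about five working days" Proton cites.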

These findings make it clear that data regulators' fines, as Proton founder and CEO Andy Yen put it, are "little more than pocket change for these companies" rather than a means of stopping them from abusing users' data. Not only that, he said, but "these minuscule fines essentially give the green light to tech giants to run riot in a marketplace skewed in their favor."

It's also quite common for big tech firms to appeal these sanctions or simply refuse to pay, delaying repayment for years. Take, for instance, how Google contested India's fine over the Android-related inquiry, begun in 2019, into abuse of its dominant market position.

On this point, Yen said: "It's the average consumer that's losing out, facing higher prices, less choice, and no privacy. It has to stop and we need real, tangible change that puts people first, not profits."

According to Miseviciute, there are two main things that must happen for the situation to really change.

Did you know?

Fully enforced in May 2023, the EU Digital Markets Act (DMA) brought new obligations for tech companies to ensure fair competition and protect people's digital rights. A similar bill, the Digital Markets, Competition and Consumers (DMCC) Bill, is currently passing through the UK Parliament, too.

For starters, she believes that governments have to issue fines with a real financial effect in order to fight back against big monopolies.

"Thats why fines up to even 20% of global revenues for breaches of laws such as the EUs DMA [Digital Market Act] and up to 10% in case of the proposed DMCC [Digital Markets, Competition and Consumers] Bill in the UK are a step in the right direction," she told me.

While heavier sanctions are important, they are not everything. Miseviciute explained that regulators need to combine them with practical measures, such as enforced behavioral and structural changes.

Again, she sees the EU as quite well-placed to do so, thanks to the new powers gained with the DMA. However, elsewhere there are also some small steps in this direction.

"We hope Googles antitrust trial in the US serves as a catalyst for comprehensive antitrust regulation on the other side of the Atlantic. We also see promising potential regulatory developments in South Korea, Japan, Australia and other major jurisdictions," she told me.

"If you open up the marketplace, and you give innovators like Proton a chance to succeed, youll get solutions that are more private and more secure for consumers."

As we have seen, 2023 was yet another hard year for our online privacy.

The US, for instance, still lacks a federal data protection law, with the proposed ADPPA stalled at the time of writing. Enforced in August last year, India's new privacy law was strongly criticized for favoring government and big tech over citizens. And where ostensibly strong legislation is in place, as in the EU, it doesn't yet seem to have enough teeth.

Commenting on this point, Miseviciute told me: "Until laws like the DMA in the EU and the proposed DMCC in the UK are effectively put into practice, we are living in a world where big tech rules the internet, and all our privacy is at the mercy of their surveillance capitalism business model."

Did you know?

Two thirds of people in the UK would rather lose their passport than access to their email account. Yet, despite these concerns, most of them lack the necessary knowledge and tools to protect their digital privacy. Big Tech knows that, researchers revealed.

The glimpse of light in this gloomy scenario is that it's ultimately our choice if we want to keep using data-hungry products. Luckily, there are some smaller companies offering privacy-first alternatives you can switch to.

For its part, Proton has been working hard to cut Google out of our digital lives. Like its big tech rival, the Swiss-based provider offers an encrypted email service, Proton Mail (which even beat the tech giant to releasing a standalone desktop app in December), a secure calendar and its own cloud storage, Proton Drive.

Proton's product offering also includes one of the best virtual private network apps on the market (Proton VPN) to help boost your anonymity while browsing, among other things, as well as a password manager (Proton Pass) to secure all your login details. Even better, all the provider's services come with both free and paid plans.

However, Proton is just one of many companies developing privacy-first alternatives to big tech software. Also worth a mention are the encrypted messaging app Signal, if you wish to replace WhatsApp with a more secure application, and the Mullvad browser, for making the switch from Safari or Chrome.



JPM2024: Big Tech Poised to Disrupt Biopharma with AI-Based Drug Discovery – BioSpace


2024 will continue to see Big Tech companies enter the artificial intelligence-based drug discovery space, potentially disrupting the biopharma industry. That was the consensus of panelists at a Tuesday session on AI and machine learning held by the Biotech Showcase, co-located with the 42nd J.P. Morgan Healthcare Conference.

The JPM conference got a reminder of Big Tech's inroads into AI-based drug discovery with Sunday's announcement that Google parent Alphabet's digital biotech company Isomorphic Labs signed two large deals, worth nearly $3 billion combined, with Eli Lilly and Novartis.

"Big Tech is coming for AI and it's coming in a big way," said panel moderator Beth Rogozinski, CEO of Oncoustics, who noted that the AI boom has seen the rise of the Magnificent 7, a new grouping of mega-cap tech stocks comprised of the seven largest U.S.-listed companies: tech giants Amazon, Apple, Alphabet, Microsoft, Meta Platforms, Nvidia and Tesla.

Last year, the Magnificent 7's combined market value surged almost 75% to a whopping $12 trillion, demonstrating their collective financial power.

"Six of the seven have AI and healthcare initiatives," Rogozinski told the panel. "They're all coming for this industry."

However, Atomwise CEO Abraham Heifets made the case that with Big Tech getting into biopharma there is a mismatch of business models, with the Isomorphic Labs deals looking, in his words, like "traditional tech mentality." Heifets contends that it's unclear whether the physics of the business will support the industry's risk models, adding that the influence of small- to mid-size companies focused on AI-based drug discovery should not be underestimated.

Google DeepMind's AlphaFold is the foundation of Isomorphic Labs' platform. The problem, according to ArrePath CTO Kurt Thorn, is that it's easy for these technologies to attract fast followings only to see their market shares wane over time: "If you look at AlphaFold, which was a breakthrough when it came out, within two or three years afterwards there were two or three alternatives."

Thorn concluded that it's not clear that the market sizes are large enough to amortize a large AI platform for drug discovery across an entire industry.

Rogozinski emphasized that these switching costs are a potential barrier to entry in moving to such drug discovery platforms as Big Tech tries to get companies to transition.

Vivodyne CEO Andrei Georgescu commented that drug discovery and development is a difficult and complex process whose success is not a function of how big your team is or how many people you have behind the bench. The key to the success of AI in biopharma is the generation and curation of datasets, according to Georgescu, who said the industry is facing a bottleneck in the complexity of the data and the applicability of that data to the outcomes researchers want to confirm.

Providing some levity and perspective to Tuesday's AI session, Moonwalk Biosciences CEO Alex Aravanis told the audience he was late to arrive as a panelist due to an accident on the freeway involving a Tesla self-driving vehicle. "So, clearly, they need more data," Aravanis said.

Marc Cikes, managing director of the Debiopharm Innovation Fund, told BioSpace that while he has been heartened to see the rise of AI and machine learning usage in biopharma, the forecast remains murky in 2024.

"The impact of AI for drug discovery is still largely unknown," Cikes said. "The public market valuation of the few AI-drug discovery companies is significantly down versus their peak price, and a large chunk of the high-value deals announced between native AI companies and large pharmas are essentially based on future milestone payments which may never materialize."

Greg Slabodkin is the News Editor at BioSpace. You can reach him at greg.slabodkin@biospace.com. Follow him on LinkedIn.


When you think about it, Cyberpunk 2077 was the 2020 game of the year after all – VG247

For better or worse, in the year of our lord two-thousand and twenty-three, video games are now seldom complete products. They grow, evolve, and change over time - and in many ways, challenge the traditional wisdom of giving out end-of-year awards. Is there a better example of this than Cyberpunk 2077?

Even Baldur's Gate 3, as mind-bogglingly good as it was from the moment of release, has changed tremendously over the last couple of months: tweaking, changing, adding. It had several years of early access, too. My personal game of the year for 2023, Street Fighter 6, was released with the explicit understanding that it will grow in size, probably as much as double, over the course of its lifetime.


Cyberpunk 2077 originally released in 2020, and, well, you all know how that went. I was one of the people who was hoodwinked, to put it mildly, by CD Projekt's approach to the review process. I played it on my high-spec PC and thought it was pretty great, with classic open-world launch bugs that I figured would be ironed out over time. I ended up giving it a positive review on my other website.

On VG247, James was even more glowing, and gave it a 5-star rating that instantly attracted a wave of abuse once it became clear how broken it was on the console versions CD Projekt never let us see. I even ended up writing a review addendum warning people not to conflate a positive PC score with the PlayStation and Xbox versions; something I've never done before.

But three years later, it's fair to say that it's a very different game. And here's the thesis of this article: I think if you look at all the games released in 2020 now, and think about which one I'd recommend somebody play above all others, Cyberpunk 2077 is, indeed, 2020's game of the year.

Nobody would've argued it back then, obviously. The bugs and problems were too apparent, even if one could see the great game behind them, keen to burst forth. But even if we put 2023's new addition, the expansion Phantom Liberty, to one side, I think this is the best game that was released in 2020. With Cyberpunk's version 2.0, the game is finally fulfilling its full potential. That realization allows it to outstrip my 2020 game of the year, Final Fantasy 7 Remake, and other contenders like Streets of Rage 4, Microsoft Flight Simulator, The Last of Us Part 2, and Animal Crossing: New Horizons.

None of this entirely absolves CD Projekt RED of the sins of 2020, obviously. The truth is, this game shouldn't have been released at all on the last-generation consoles, and the company has to live with the knowledge that, had it owned that, the game may have launched as one of the most beloved around, much as The Witcher 3 did. That's likely a bitter pill to swallow, especially when combined with the ritual humiliation that followed the game's release. We should forgive, but not forget. It's an important lesson for developers, publishers, and even media everywhere. It's a cautionary tale for some fans on the dangers of hype, also.

But it is to CDPR's credit that it didn't just ride off into the sunset with its tail between its legs and return straight to The Witcher for a quick shot of goodwill. The company wanted to rescue its reputation, and the Cyberpunk IP, and so it put the hours in. The end result is undeniable.

In the pantheon of video game turnarounds, this stands tall. It's up there with Final Fantasy 14, except I don't really think that counts: the release of FF14: A Realm Reborn was not the fixing of an old game, but the building of an all-new game in record time, which simply replaced the old one for free. The only remotely similar case of a launched game being repaired in real time around a player base that I can think of is No Man's Sky - a similarly towering achievement.

Add on the fact that the Phantom Liberty expansion is legitimately one of 2023's best video games all on its own, and the strength of Cyberpunk can no longer be denied, I think. When I look back on 2020 in the years to come, this will be its most stand-out game - but only with an understanding of what came after.

Read more:

When you think about it, Cyberpunk 2077 was the 2020 game of the year after all - VG247

How to use Audiobox, Meta AI’s new sound and voice cloning tool – Android Police

Meta introduced its generative AI model for speech, Voicebox, in mid-2023. Meta aims to take AI sound generation to the next level with Audiobox, Voicebox's successor. The innovative tool generates sound effects from text prompts, eliminates noise from speech recordings, creates a restyled voice, generates speech in the style of an audio clip, and more. Before we take it for a spin, let's learn more about Meta's Audiobox.

The Audiobox demo is available on the web only. Try it on your Mac, Windows desktop, or a top Chromebook.

Creating high-quality audio can be a challenging process. Not everyone is a sound engineer or has access to extensive audio production tools. Here's where Meta's Audiobox comes into play. It's a sound-generation tool from Facebook AI Research (FAIR). Meta's latest offering generates audio and sound effects using voice inputs, text prompts, or a combination of both.

With Audiobox, Meta aims to lower the barrier of audio creation and make it easy for general users to create high-quality sound samples. Whether you want to create audio for a podcast, YouTube video, audiobook, or video game, Audiobox can be your helping hand to get the job done.

Generative AI has made audio creation and voice cloning popular, and there is no shortage of such tools. Meta's Audiobox stands out from the crowd thanks to the capabilities described above: generating sound effects from text prompts, removing noise from speech recordings, restyling voices, and producing speech in the style of an audio clip.

All Audiobox features are available to try from the company's official website. You can generate audio samples, check previews, and download them to your device.

You can also move to the Sound Effects menu and describe the sound sample you want to create. Add enough detail to get accurate results from Audiobox. We ran several text prompts and were impressed with the generated sound effects.

Audiobox can produce sound samples that are close to natural human speech, which has led to concerns about AI-powered deepfakes. Especially with the US presidential election around the corner, misuse of such AI tools can't be ruled out. To counter this, Meta applies automatic watermarking to all audio generated by Audiobox.

The embedded signal in the generated audio is imperceptible to the human ear but can be tracked down to the frame level. Meta will also add voice authentication to prevent impersonation: the person must speak a text prompt aloud while registering their voice, and the prompt refreshes every 50 seconds, making it difficult to play back someone else's pre-recorded voice.
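Meta hasn't published the details of Audiobox's watermark, but the behavior it describes - an inaudible signal recoverable at frame granularity - matches classic spread-spectrum audio watermarking. The Python sketch below is a minimal illustration of that general idea only; the frame size, amplitude, threshold, and function names are all assumptions for demonstration, not Meta's implementation.

```python
# Illustrative spread-spectrum watermark sketch (NOT Meta's actual scheme).
# A keyed pseudorandom pattern is added at low amplitude to each frame;
# detection correlates each frame against that same pattern.
import numpy as np

FRAME = 2048   # samples per frame; detection resolves to this granularity
ALPHA = 0.01   # watermark amplitude, kept small relative to the signal

def _pattern(key: int) -> np.ndarray:
    """Pseudorandom +/-1 sequence derived from a secret key."""
    rng = np.random.default_rng(key)
    return rng.choice([-1.0, 1.0], size=FRAME)

def embed(audio: np.ndarray, key: int) -> np.ndarray:
    """Add the keyed low-amplitude pattern to every full frame."""
    out = audio.astype(np.float64).copy()
    pat = _pattern(key) * ALPHA
    for start in range(0, len(out) - FRAME + 1, FRAME):
        out[start:start + FRAME] += pat
    return out

def detect(audio: np.ndarray, key: int) -> list[bool]:
    """Correlate each frame with the key's pattern; True means watermark found."""
    pat = _pattern(key)
    flags = []
    for start in range(0, len(audio) - FRAME + 1, FRAME):
        corr = np.dot(audio[start:start + FRAME], pat) / FRAME
        flags.append(corr > ALPHA / 2)   # threshold halfway between 0 and ALPHA
    return flags

# Quick check on synthetic audio: watermarked frames flag True, clean ones False.
clean = np.random.default_rng(0).normal(0.0, 0.1, 8 * FRAME)
marked = embed(clean, key=42)
print(detect(marked, key=42))   # mostly [True, True, ...]
print(detect(clean, key=42))    # mostly [False, False, ...]
```

A production watermark would survive compression and re-recording, which this toy version does not attempt; it only shows why detection can work per frame, since each frame independently carries the keyed pattern.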

Meta decided against making the AI model open source to prevent potential misuse.

Meta has done a remarkable job with Audiobox: its output is accurate and convincing. Try it with different prompts and voice samples, and check the results. Besides Meta, tech giants like Google and Microsoft are exploring generative artificial intelligence to create content.

The search giant recently launched Google Bard to take on OpenAI's (and Microsoft's) ChatGPT. Read our dedicated post to learn more about Google Bard. We also compared Google Bard with ChatGPT to weigh their capabilities, limitations, and potential.

See the original post:

How to use Audiobox, Meta AI's new sound and voice cloning tool - Android Police

Nicki Minaj Fans Are Using AI to Create “Gag City”

Fans anxiously awaiting the release of Nicki Minaj's latest album have occupied themselves with AI to create a utopia of their own.

Gag City

Fans are anxiously awaiting the drop of Onika "Nicki Minaj" Maraj-Petty's "Pink Friday 2" — and in the meantime, they've occupied themselves with artificial intelligence image generators to create visions of a Minajian utopia known as "Gag City."

The entire "Gag City" gambit began with zealous (and perhaps overzealous) fans tweeting at the Queens-born diva to tell her how excited — or "gagged," to use the drag-scene slang that spread among Maraj-Petty's LGBTQ and queer-friendly fanbase — they are for her first album in more than five years.

Replete with dispensaries, burger joints, and a high-rise shopping mall, Gag City has everything a Barb (as fans call themselves) could ask for.

Gag City, the fan-created AI kingdom for Nicki Minaj, trends on X/Twitter ahead of ‘Pink Friday 2.’ pic.twitter.com/jm3iGS9fBO

— Pop Crave (@PopCrave) December 6, 2023

Barbz Hug

As memetic lore would have you believe, these tributes to Maraj-Petty were primarily created with Microsoft's Bing AI image generator. The meme went so deep that people began claiming that the fanbase's flood of Gag City imagery caused Bing to crash, which allegedly led to the image generator blocking Nicki Minaj-related prompts.

gag city residents have demolished bing head office after their continued sabotage of nicki minaj’s name in their image creator pic.twitter.com/OOpL2Jzj7h

— Xeno? (@AClDBLEEDER) December 6, 2023

When Futurism took to Bing's image creator to see what all the fuss was about, we too discovered that you couldn't generate anything related to Minaj. However, the same was true when we inputted other celebrities' names, suggesting that Bing, like Google, may intentionally block the names of famous people in an apparent effort to prevent deepfakes.

Brand Opportunities

As creative as these viral Gag City images have been, it was only a matter of time before engagement-hungry brands tried to get in on the fun and effectively ruin it.

From Spotify changing its location to the imaginary Barb metropolis and introducing "Gag City" as a new "sound town" to KFC's social media manager telling users to "DM" the account, the meme has provided a hot pink branding free-for-all.

The Bing account itself even posted a pretty excellent-looking AI-generated Gag City image.

Next stop: Friday ? https://t.co/J1pRCZcbTd pic.twitter.com/ujG7BsJWUC

— Bing (@bing) December 6, 2023

Sleazy brand bandwagoning aside, the Gag City meme and its many interpretations provide an interesting peek into what the future of generative AI may hold in a world dominated by warring fandoms and overhyped automation.

More on AI imagination: People Cannot Stop Dunking on that Uncanny “AI Singer-Songwriter”


Link:
Nicki Minaj Fans Are Using AI to Create “Gag City”