Intel CEO Pat Gelsinger, OpenAI's Anna Makanju, Senators Mark Warner and Todd Young, and other power players … – The Washington Post

Washington Post Live, The Post's award-winning live journalism platform, is unveiling its speakers for The Futurist Summit: The New Age of Tech, its second major convening focused on technology in recent months.

An all-star lineup of Post journalists will moderate interviews with influential business leaders and policymakers about the promise and risks posed by emerging technologies. The event will be held on Thursday, March 21, at The Post Live Center in The Washington Post's D.C. headquarters.

Notable interviews include:

The summit will also feature an interactive session about the challenge of deepfakes led by Post technology columnist Geoffrey Fowler and a drone demonstration during an interview with Skydio CEO Adam Bry by associate editor Jonathan Capehart.

"Today's ever-changing technology presents unlimited opportunities to better co-pilot our lives," said Vineet Khosla, Chief Technology Officer for The Washington Post. "The conversations at The Post's tech summit will highlight some of these solutions and explore the pressing questions facing the world." Khosla, a founding engineer of Siri, joined The Post in 2023 from Uber, where he was responsible for its routing engine. He will kick off the summit with opening remarks.

View the program agenda and the full list of speakers here.

The Washington Post's Futurist Summit is hosted with presenting sponsor Mozilla.

Le Monde and OpenAI sign partnership agreement on artificial intelligence – Le Monde

As part of its discussions with major players in the field of artificial intelligence, Le Monde has just signed a multi-year agreement with OpenAI, the company known for its ChatGPT tool. This agreement is historic as it is the first signed between a French media organization and a major player in this nascent industry. It covers both the training of artificial intelligence models developed by the American company and answer engine services such as ChatGPT. It will benefit users of this tool by improving its relevance thanks to recent, authoritative content on a wide range of current topics, while explicitly highlighting our news organization's contribution to OpenAI's services.

This is a long-term agreement, designed as a true partnership. Under the terms of the agreement, our teams will be able to draw on OpenAI technologies to develop projects and functionalities using AI. Within the framework of this partnership, and for the duration of the agreement, the two parties will collaborate on a privileged and recurring basis. A dialogue between the teams of both parties will ensure the monitoring of products and technologies developed by OpenAI.

For the general public, the effects of this agreement will be visible on ChatGPT, which can be described, in simple terms, as an answer engine that draws on established facts or comments expressed by a limited set of reference sources. The engine generates the most plausible, predictive synthetic answer to a given question.

The agreement between Le Monde and OpenAI allows the latter to use Le Monde's corpus, for the duration of the agreement, as one of the major references to establish its answers and make them reliable. It provides for references to Le Monde articles to be highlighted and systematically accompanied by a logo, a hyperlink, and the titles of the articles used as references. Content supplied to us by news agencies and photographs published by Le Monde are expressly excluded.

For Le Monde, this agreement is further recognition of the reliability of the work of our editorial teams, often considered a reference. It is also a first step toward protecting our work and our rights, at a time when we are still at the very beginning of the AI revolution, a wave predicted by many observers to be even more imposing than the digital one. We were among the very first signatories in France of the "neighboring rights" agreements, with Facebook and then Google. Here too, we had to ensure that the rights of press publishers applied to the use of Le Monde content referenced in answers generated by the services developed by OpenAI.

This point is crucial to us. We hope this agreement will set a precedent for our industry. With this first signature, it will be more difficult for other AI platforms to evade or refuse to negotiate. From this point of view, we are convinced that the agreement is beneficial for the entire profession.

Lastly, this partnership enables the Société Éditrice du Monde, Le Monde's holding company, to work with OpenAI to explore advances in this technology, anticipating as far as possible any consequences, negative or favorable. It also has the advantage of consolidating our business model by providing a significant source of additional, multi-year revenue, including a share of neighboring rights. An "appropriate and equitable" portion of these rights, as defined by law, will be paid back to the newsroom.

These discussions with AI players, punctuated by this first signature, are born of our belief that, faced with the scale of the transformations that lie ahead, we need, more than ever, to remain mobile in order to avoid the perils that are taking shape and seize the opportunities for development. The dangers have already been widely identified: the plundering or counterfeiting of our content, the industrial and immediate fabrication of false information that flouts all journalistic rules, the re-routing of our audiences towards platforms likely to provide undocumented answers to every question. Simply put, the end of our uniqueness and the disappearance of an economic model based on revenues from paid distribution.

These risks, which are probably fatal for our industry, do not prevent the existence of historic opportunities: putting the computing power of artificial intelligence at the service of journalism, making it easier to work with data in a shorter timeframe as part of large-scale investigations, translating our written content into foreign languages or producing audio versions to expand our readership and disseminate our information and editorial formats to new audiences.

To take the measure of these challenges, we decided to proceed in stages. The first was devoted to protecting our content and strengthening our procedures. Last year, we first activated an opt-out clause on our sites, following the example of several other media organizations, prohibiting AI platforms from accessing our data to train their generative intelligence models without our agreement. We also collectively discussed and drew up an appendix to our ethics and deontology charter, devoted specifically to the use of AI within our group. In particular, this text states that generative artificial intelligence cannot be used in our publications to produce editorial content ex nihilo. Nor can it replace the editorial teams that form the core of our business and our value. Our charter does, however, authorize the use of generative AI as a tool to assist editorial production, under strictly defined conditions.
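
The article does not specify how that opt-out was implemented. For illustration only, publishers commonly signal such a restriction through robots.txt directives aimed at known AI training crawlers; the sketch below assumes that mechanism and uses OpenAI's documented GPTBot and Common Crawl's CCBot user agents, and is not Le Monde's actual configuration:

# Illustrative robots.txt excerpt (assumed mechanism, not Le Monde's real file)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /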

With this in mind, another phase was opened, dedicated to experimenting with artificial intelligence tools in very specific sectors of our business. Using DeepL, we were able to launch our Le Monde in English website and app, whose articles are initially translated by this AI tool, before being re-read by professional translators and then edited and published by a team of English-speaking journalists. At the same time, we signed an agreement with Microsoft to test the audio version of our articles. This feature, now available on almost all our French-language articles published in our app, opens us up to new audiences, often younger, as well as to new uses, particularly for people on the move. The third step is the one that led us to sign the agreement with OpenAI, which we hope will create a dynamic favorable to independent journalism in the new technological landscape that is taking shape.

At each of these stages, Le Monde has remained true to the spirit that has driven it since the advent of the Internet, and during the major changes in our industry: We have sought to reconcile the desire to discover new territories, while taking care to protect our editorial identity and the high standards of our content. In recent years, this approach has paid off. As the first French media organization to rely on digital subscriptions without ever having recourse to online kiosks, we have for several years been able to claim a significant lead in the hierarchy of national general-interest dailies, thanks to an unprecedented number of over 600,000 subscribers. In the same way, our determination to be a pioneer on numerous social media platforms has given us a highly visible place on all of them, helping to rejuvenate our audience.

The agreement with OpenAI is a continuation of this strategy of reasoned innovation. And we continue to guarantee the total independence of our newsroom: It goes without saying that this new agreement, like the previous ones we have signed, will in no way hinder our journalists' freedom to investigate the artificial intelligence sector in general, and OpenAI in particular. In fact, over the coming months, we will be stepping up our reporting and investigative capabilities in this key area of technological innovation.

This is the very first condition of our editorial independence, and therefore of your trust. As we move forward into the new world of artificial intelligence, we have close to our hearts an ambition that goes back to the very first day of our history, whose 80th anniversary we are celebrating this year: deserving your loyalty.

Le Monde

Louis Dreyfus (Chief Executive Officer of Le Monde) and Jérôme Fenoglio (Director of Le Monde)

Translation of an original article published in French on lemonde.fr; the publisher may only be liable for the French version.

Why is Elon Musk suing OpenAI and Sam Altman? In a word: Microsoft. – Morningstar

By Jurica Dujmovic

Potential ramifications extend far beyond the courtroom

In a striking turn of events, Elon Musk, Tesla's (TSLA) CEO, has initiated legal action against OpenAI and its leadership, alleging that the organization he helped found has moved from its original altruistic mission toward a profit-driven approach, particularly after partnering with Microsoft (MSFT).

The lawsuit accentuates Musk's deep-seated concerns that OpenAI has deviated from its foundational manifesto of developing artificial general intelligence (AGI) for the betterment of humanity, choosing instead to prioritize financial gains. But is that really so, or is there something else at play?

Musk had been deeply involved with OpenAI since its inception in 2015, as his concerns about AI's potential risks and his vision to advance AI in a way that benefits humanity aligned with OpenAI's original ethos as a non-profit organization.

In 2018, however, Musk became disillusioned with OpenAI because, in his view, it no longer operated as a nonprofit and was building technology that took sides in political and social debates. The recent OpenAI drama that culminated in a series of significant changes to OpenAI's structure and ethos, as well as what can only be seen as Microsoft's power grab, seems to have sparked Musk's discontent.

To understand his reasoning, it helps to remember that Microsoft is a company with a long history of litigation. Over the years, Microsoft has faced numerous high-profile legal battles related to its market practices.

Here are some prominent cases to illustrate the issue:

-- In the United States v. Microsoft Corp. case, which began in 1998, the U.S. Department of Justice accused Microsoft of holding a monopolistic position in the PC operating-systems market and taking actions to crush threats to that monopoly. In April 2000, the case resulted in a verdict that Microsoft had engaged in monopolization and attempted monopolization in violation of the Sherman Antitrust Act.

-- In Europe, Microsoft has faced significant fines for abusing its dominant market position. In 2004, the European Commission fined Microsoft 497.2 million euros, the largest sum it had ever imposed on a single company at the time. In 2008, Microsoft was fined an additional 899 million euros for failing to comply with the 2004 antitrust order.

-- In 2013, the European Commission levied a 561 million euro fine against Microsoft for failing to comply with a 2009 settlement agreement to offer Windows users a choice of internet browsers instead of defaulting to Internet Explorer.

In light of these past litigations, it's much easier to understand why OpenAI CEO Sam Altman's brief departure from the company and subsequent return late last year - which culminated in a significant shift in the organization's governance and its relationship with Microsoft - was likely the straw that broke the camel's back for Musk.

After Altman was reinstated, Microsoft solidified its influence over OpenAI by securing a permanent position on its board. Furthermore, the restructuring of OpenAI's board to include business-oriented members, rather than AI experts or ethicists, signaled a permanent shift in the organization's priorities and marked a pivotal turn toward a profit-driven model underpinned by corporate governance.

The consequences of this power grab are plain to see: Microsoft is already implementing AI models designed by the company across its products, while none of the code is being released to the public. These models also carry a specific political and ideological bias that makes them problematic from an ethical point of view. This, too, is an issue that cannot be addressed, due to the closed-source nature of AI models generated and shaped under the watchful eye of Microsoft.

Musk's own ventures, like xAI and Neuralink, suggest he's still deeply invested in the AI space, albeit in a way he has more control over, presumably to ensure that the technology develops according to his vision for the future of humanity.

On the other hand, proponents of Microsoft's partnership with OpenAI emphasize its strategic and mutually beneficial aspects. Microsoft's $1 billion investment in OpenAI is viewed as a significant step in advancing artificial-intelligence technology, as it allows OpenAI to use Microsoft's Azure cloud services to train and run its AI software. Additionally, the collaboration is positioned as a way for Microsoft to stay competitive against other tech giants by integrating AI into its cloud services and developing more sophisticated AI models.

Proponents say Microsoft's involvement with OpenAI is a strategic business decision aimed at promoting Azure's AI capabilities and securing a leading position in the industry. The partnership is framed as a move to democratize AI technology while ensuring AI safety, which aligns with broader industry goals of responsible and ethical AI development. It is also seen as a way for OpenAI to access the resources and expertise necessary to further its research, emphasizing the collaborative nature of the partnership rather than a mere financial transaction.

Hard truths and consequences

While many point out that a Musk victory in the case is extremely unlikely, it's still worth looking into the potential consequences. Such a verdict could mandate that OpenAI return to non-profit status or open-source its technology, significantly impacting its business model, revenue generation and future collaborations. It could also affect Microsoft's investment in OpenAI, particularly if the court determines that the latter has strayed from its founding mission, influencing the tech giant's ability to protect its investment and realize expected returns.

The lawsuit's outcome might influence public and market perceptions of OpenAI and Microsoft, possibly affecting customer trust and market share, with Musk potentially seen as an advocate for ethical AI development. Additionally, the case could drive the direction of AI development, balancing between open-source and proprietary models, and possibly accelerating innovation while raising concerns about controlling and misusing advanced AI technologies.

The scrutiny from this lawsuit might lead to more cautious approaches in contractual relationships within the tech sector, focusing on partnerships and intellectual property. Furthermore, the case could draw regulatory attention, possibly leading to increased oversight or regulation of AI companies, particularly concerning transparency, data privacy and ethical considerations in AI development. While Musk's quest might seem like a longshot to some legal experts, the potential ramifications of this lawsuit extend far beyond the courtroom.

More: Here's what an AI chatbot thinks of Elon Musk's lawsuit against OpenAI and Sam Altman

Also read: Microsoft hasn't been worth this much more than Apple since 2003

-Jurica Dujmovic

This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.

Of open AI, panic and storytelling – Nation

In 1897, the great Polish-British novelist Joseph Conrad wrote a letter to his friend Robert Cunninghame Graham about his fear that humans would one day invent a machine that would go rogue, one that couldn't be switched off, setting off a clash with humans unprecedented in its scale and devastation.

Today many people join Conrad in conjuring up scary images of robots swooping down on a town with murderous intent. Conrad wrote: "There is, let us say, a machine. It evolved itself (I am severely scientific) out of a chaos of scraps of iron and behold! It knits."

"I am horrified at the horrible work and stand appalled. I feel it ought to embroider but it goes on knitting. The infamous thing has made itself: made itself without thought, without conscience, without foresight, without eyes, without heart. It knits us in and it knits us out. It has knitted time, space, pain, death, corruption, despair and all the illusions and nothing matters."

Conrad's fear is now real, especially at the rate with which technology is driving the world. On November 17, OpenAI, the company behind the revolutionary ChatGPT, fired its CEO, Sam Altman. Less than five days later, he was reinstated as CEO. It's alleged that Mr Altman may have disagreed with the board on the direction artificial intelligence (AI) is taking.

The news of Mr Altman's firing had sent shockwaves throughout the AI world, raising trust concerns around the growing technology. Was the board or Mr Altman right? Are we on the verge of making technologies that could destroy humanity, or at least people's livelihoods?

Indeed, many questions about AI have been raised since ChatGPT made its flashy debut last year, with people warning that chatbots and the new language models could be used for good as well as for harm or other devious purposes. Already, chatbots write school essays and poems, help students cheat in exams, and do everything in between.

For fiction writers, the increasing sophistication of AI presents endless possibilities for storytelling. This genre is science fiction (sometimes shortened to SF or sci-fi), which has been defined as a genre of speculative fiction, which typically deals with imaginative and futuristic concepts such as advanced science and technology, space exploration, time travel, parallel universes, and extraterrestrial life. Science fiction can trace its roots to ancient mythology. It is related to fantasy, horror, and superhero fiction and contains many subgenres.

Sci-fi writers have conjured up the psychic tenor of ambient doom occasioned by robots that unleash terror, marauding like charging bulls: a confrontation, a pile of bleeding limbs, some rolling around on the floor; a robot beating people up and even killing them in the streets, totally out of control. This is the AI apocalypse. It would remain only in the fertile imagination of science fiction writers, except that modern developments in AI make it seem a more possible reality with each passing day.

One of the most terrifying short stories on the AI apocalypse is The Last Human by Eric Steven Johnson. The story follows the life of the last human survivor of the second robot apocalypse. Jay has been wandering aimlessly for ages and passes the time by reflecting on the better days which, sadly, were his days spent in servitude to the robot overlords. This is the story of Jay's struggle to survive with only himself to rely on. Jay must have walked through an uninhabitable moonscape, neighbourhoods blasted, scorched and erased.

This idea of a hostile takeover by AI has been a Hollywood staple for a long time. The famous movie The Terminator is a science fiction action film featuring Arnold Schwarzenegger as the Terminator, a cybernetic assassin (cyborg) sent back in time by Skynet, a hostile artificial intelligence bent on exterminating mankind.

Whether one day robots will roam the streets and turn against us or not, writers have fodder for their works. Technology has its beauty even if it is sometimes offered in a context of danger. Writers can come up with narratives on how technology is helping humans or swing to the dark side and give us tales of robots rounding up people and slapping them in the streets.

For readers, science fiction can stimulate imagination, creativity, and problem-solving. Elon Musk, the world's richest man, was famously inspired by science fiction, which has reportedly shaped his companies. He even names some of his products after ones found in the science fiction books he has read. Science fiction can also encourage curiosity and interest in the world around us.

This is very important, especially for children, so they can be curious about the world and explore it for discovery. It's encouraging that writers are churning out more books on AI in the era of ChatGPT. That's the way to go.

What's new in AI this week? Amazon releases new chatbot, OpenAI … – Android Authority

Welcome to the third edition of What's New in AI, our weekly update where we bring you all the latest AI news, tools, and tips to help you excel in this new AI-driven future.

The biggest news of the week revolved around Amazon and its announcement of new AI tools for AWS, including a new AI chatbot. We're also starting to see OpenAI return mostly to normal, it seems. Let's jump in and look at the biggest headlines from last week:

While we try to focus this segment on apps that are widely available, that's not always the case. Sometimes this segment will instead focus on cool new tools that simply have a lot of future potential, even if they are quite niche. This week leans heavily toward the latter, as several of this week's spotlights are niche projects that aren't easily available just yet.

Solve Intelligence is a tool specifically crafted to assist attorneys in drafting patents, simplifying and streamlining the often labor-intensive process. While the tool is not openly accessible to everyone, interested parties can request a demo of the technology for their business if they find it beneficial.

The Gen-2 Motion Brush is a recent addition to Runway's Gen-2 suite. This tool enables users to create brief videos from a single image, including images generated by other AI tools. While there is a free trial available for experimenting with the tool and the entire suite, a subscription plan is required for full access.

While we'll count this as a single entry, it's worth noting that Amazon's re:Invent showcase unveiled multiple new AWS serverless tools in its latest previews. Notable mentions include the Amazon Aurora Limitless Database, Amazon ElastiCache, and Amazon Redshift. These tools serve various functions, such as predicting workloads and optimizing resources.

The latest AI suite from GE HealthCare is designed to make a radiologist's job easier, processing huge amounts of data to detect breast cancer and other issues sooner. MyBreastAI incorporates three AI applications to enhance efficiency: ProFound AI for DBT, 3D Mammography, and PowerLook Density. While this toolset may not be directly applicable to mainstream AI users, it represents a significant breakthrough and underscores the innovative tools and use cases that AI is advancing.

Looking to learn more about AI, how to make better use of AI tools, or how to protect your privacy from AI? Each week we share a different how-to guide or tip we feel is worth sharing.

It's hard to believe that ChatGPT is now a year old. In that time it has made more than $30 million in revenue and racked up more than 110 million mobile installs. If you subscribe to this newsletter you are very likely already rocking the app, but I know plenty of people who have avoided it, myself included.

I had tried the ChatGPT app early on and found it was easier to just use the web portal and pin a Chrome web app for it to my Android home screen. With the recent update that adds voice support for free users, the official app has finally become a must-have. Although it can't do everything Google Assistant can, I've found that its responses and voice sound so much more natural that I have fallen in love.

Don't already have it? You can grab the official ChatGPT app from either Google Play or the Apple App Store, depending on your phone's platform.
