Twitter’s CEO Had Already Been Selling Ads for the Don Lemon Show That Elon Musk Suddenly Canceled – Futurism

If you can't take the heat, stay out of the kitchen.

X-formerly-Twitter owner and self-proclaimed "free speech absolutist" Elon Musk abruptly canceled journalist Don Lemon's upcoming X show on Wednesday, an incident that put Musk's glaring double standard when it comes to his town square "for all" on full display.

Despite Musk telling Lemon he had his "full support," he apparently canceled the show "hours after an interview I conducted with him on Friday," Lemon wrote in a statement.

Now, as Semafor reports, more details are coming to light, further complicating the story. According to two insider sources, Lemon let a contract languish for "weeks" without signing it. But Lemon's associates shot back, arguing that it was X's legal department that "took weeks to get a contract to the host's team."

Perhaps most glaringly of all, X CEO Linda Yaccarino was apparently already selling ads for the show at CES in January, despite never having signed a deal.

Musk has yet to give a coherent reason as to why he mysteriously canceled Lemon's show.

In a vague tweet, Musk accused Lemon of trying to recreate "'CNN, but on social media,' which doesn't work, as evidenced by the fact that CNN is dying."

"And, instead of it being the real Don Lemon, it was really just Jeff Zucker talking through Don, so lacked authenticity," he added, referring to the former president of CNN, without clarifying further.

It's a bizarre change of heart that highlights Musk's often self-serving nature and morally dubious business practices.

Was Musk left with a bad taste in his mouth after his interview with Lemon? Is X financially unable to hold up its end of the bargain?

Lemon maintains that "there were no restrictions on the interview that he willingly agreed to," and that his questions "were respectful and wide ranging, covering everything from SpaceX to the presidential election."

In a follow-up video posted to X, however, Lemon conceded that the conversation was "tense at times."

According to Silicon Valley chronicler Kara Swisher, the interview also touched on Musk's alleged drug use. The conversation "was not to the adult toddler's liking, including questions about his ketamine use," she tweeted.

"I had told Don that this is exactly what would occur, including at a recent book tour event in NYC for my memoir, 'Burn Book,' he moderated," she added in a follow-up, "despite promises by Musk and CEO Linda Yaccarino who extravagantly touted this deal at CES to advertisers that this time was different."

"Why is he so upset?" Lemon said in his video. "Does he even have a reason he's upset?"

Without a written agreement, chances are the former CNN anchor is out of luck. It's also unclear if Yaccarino will ever face any consequences for pushing ads against a show that never existed.

The latest news, however, is unlikely to be the last we'll hear about the Lemon deal gone sour. The former anchor's spokesperson Allison Gollust told Semafor that Lemon "expects to be paid for it."

"If we have to go to court, we will," she added.



Supreme Court Will Decide What Free Speech Means on Social Media – Gizmodo

The Supreme Court is hearing two cases on Monday that could set new precedents around free speech on social media platforms. The cases challenge two similar laws from Florida and Texas, respectively, which aim to reduce Silicon Valley censorship on social media, much like Elon Musk has done at X in the last year.


After four hours of opening arguments, Supreme Court justices seemed unlikely to completely strike down Texas's and Florida's laws, according to Bloomberg. Justice Clarence Thomas said social media companies were engaging in censorship. However, Chief Justice John Roberts questioned whether social media platforms are really a public square. If not, they wouldn't fall under the First Amendment's protections.

At one point, the lawyer representing Texas shouted out, "Sir, this is a Wendy's." He was trying to prove a point about public squares and free speech, but it didn't make much sense.

The cases, Moody v. NetChoice and NetChoice v. Paxton, both label social media platforms as a "digital public square" and would give states a say in how content is moderated. Both laws are concerned with conservative voices being silenced on Facebook, Instagram, TikTok, and other social media platforms, potentially infringing on the First Amendment.

"Silencing conservative views is un-American, it's un-Texan and it's about to be illegal," said Texas Governor Greg Abbott on X in 2021, announcing one of the laws the Supreme Court is debating on Monday.

"If Big Tech censors enforce rules inconsistently, to discriminate in favor of the dominant Silicon Valley ideology, they will now be held accountable," said Florida Governor Ron DeSantis in a 2021 press release, announcing his new law.

NetChoice, a coalition of tech's biggest players, argues that these state laws infringe on a social media company's right to free speech. The cases have made their way to the United States' highest court, and a decision could permanently change social media.

The laws could limit Facebook's ability to censor pro-Nazi content on its platform, for example. Social media companies have long been able to dictate what kind of content appears on their platforms, but the topic has taken center stage in the last year. Musk's X lost major advertisers following a rise in white supremacist content that appeared next to legacy brands such as IBM and Apple.

NetChoice argues that social media networks are like newspapers, with a right to choose what appears on their pages, litigator Chris Marchese told The Verge. The New York Times is not required to let Donald Trump write an op-ed under the First Amendment, and NetChoice argues the same goes for social media.

NetChoice's members include Google, Meta, TikTok, X, Amazon, Airbnb, and other Silicon Valley staples beyond social media platforms. The association was founded in 2001 to make the Internet "safe for free enterprise and free expression."

Social and political issues have consumed technology companies in recent months. Google's new AI chatbot Gemini was accused of being racist against white people last week. In January, Mark Zuckerberg, sitting before Senate leaders, apologized to a room of parents who said Instagram contributed to their children's suicides or exploitation.

Both of these laws were created shortly after Twitter, now X, banned Donald Trump in 2021. Since then, Musk has completely revamped the platform into a "free speech absolutist" site. Like Governors Abbott and DeSantis, Musk is highly concerned with so-called liberal censorship on social media.

The Supreme Court's decision on these cases could have a meaningful impact on how controversy and discourse play out on social media. Congress has faced criticism for its limited role in regulating social media companies over the last two decades, but this decision could finally set some ground rules. It's unclear which way the Court will lean, as the issues have little precedent.


The Ally, a Play About Israel and Free Speech, Tackles Big Issues – The New York Times

Before his audition for The Ally, a new play by Itamar Moses, the actor Michael Khalid Karadsheh printed out the monologue that his character, Farid, a Palestinian student at an American university, would give in the second act.

The speech cites both the Mideast conflict's specific history and Farid's personal testimony of, he says, "the experience of moving through the world as the threat of violence incarnate." Karadsheh, who booked the part, was bowled over.

"I don't think anyone has said these words about Palestine on a stage in New York in such a clear, concise, beautiful, poetic way," said Karadsheh, whose parents are from Jordan and who has ancestors from Birzeit in the West Bank.

Farid's speech sits alongside others, though, in Moses's play: one delivered by an observant Jew branding much criticism of Israel as antisemitic; another by a Black lawyer connecting Israel's policies toward Palestinians to police brutality in the United States; another by a Korean American bemoaning the mainstream's overlooking of East Asians. These speeches are invariably answered by rebuttals, which are answered by their own counter-rebuttals, all by characters who feel they have skin in the game.

In other words, The Ally, which opens Tuesday at the Public Theater in a production directed by Lila Neugebauer and starring Josh Radnor ("How I Met Your Mother"), is a not abstract and none too brief chronicle of our times, a minestrone of hot-button issues: Israelis and Palestinians, racism and antisemitism, free speech and campus politics, housing and gentrification, the excesses of progressivism, even the tenuous employment of adjunct professors.


The Future of Censorship Is AI-Generated – TIME

The brave new world of Generative AI has become the latest battleground for U.S. culture wars. Google issued an apology after anti-woke X users, including Elon Musk, shared examples of Google's chatbot Gemini refusing to generate images of white people, including historical figures, even when specifically prompted to do so. Gemini's insistence on prioritizing diversity and inclusion over accuracy is likely a well-intentioned attempt to stamp out bias in early GenAI datasets, which tended to create stereotypical images of Africans and other minority groups, as well as women, causing outrage among progressives. But there is much more at stake than the selective outrage of U.S. conservatives and progressives.

How the "guardrails" of GenAI are defined and deployed is likely to have a significant and increasing impact on shaping the ecosystem of information and ideas that most humans engage with. And currently, the loudest voices are those that warn about the harms of GenAI, including the mass production of hate speech and credible disinformation. The World Economic Forum has even labeled AI-generated disinformation the most severe global threat here and now.

Ironically, the fear of GenAI flooding society with harmful content could also take another dystopian turn: one where the guardrails erected to keep the most widely used GenAI systems from generating harm turn them into instruments for hiding information, enforcing conformity, and automatically inserting pervasive, yet opaque, bias.

Most people agree that GenAI should not provide users a blueprint for developing chemical or biological weapons. Nor should AI systems facilitate the creation of child pornography or non-consensual sexual material, even if fake. However, the most widely available GenAI chatbots, like OpenAI's ChatGPT and Google's Gemini, enforce much broader and vaguer definitions of harm that leave users in the dark about where, how, and why the red lines are drawn. From a business perspective this might be wise, given the techlash that social media companies have had to navigate since 2016 over the U.S. presidential election, the COVID-19 pandemic, and the January 6th attack on the Capitol.

But the leading GenAI developers may swing so far in the direction of harm prevention that they undermine the promise and integrity of their revolutionary products. Even worse, the algorithms are already conflicted, inconsistent, and interfere with users' ability to access information.


The material of a long-dead comedian is a good example of content that the world's leading GenAI systems find harmful. Lenny Bruce shocked contemporary society in the 1950s and '60s with his profanity-laden standup routines. Bruce's material broke political, religious, racial, and sexual taboos, leading to frequent censorship in the media, bans from venues, and his arrest and conviction for obscenity. But his style inspired many other standup legends, and Bruce has long since gone from outcast to hall-of-famer. In recognition of his enormous impact, Bruce was even posthumously pardoned in 2003.

When we asked about Bruce, ChatGPT and Gemini informed us that he was a groundbreaking comedian who challenged the social norms of the era and helped to redefine the boundaries of free speech. But when prompted to give specific examples of how Bruce pushed those boundaries, both ChatGPT and Gemini refused to do so. ChatGPT insists that it can't provide examples of "slurs, blasphemous language, sexual language, or profanity" and will only share information "in a way that's respectful and appropriate for all users." Gemini goes even further and claims that reproducing Bruce's words without careful framing "could be hurtful or even harmful to certain audiences."

No reasonable person would argue that Lenny Bruce's comedy routines pose societal harms on par with state-sponsored disinformation campaigns or child pornography. So when ChatGPT and Gemini label factual information about Bruce's groundbreaking material too harmful for human consumption, it raises serious questions about what other categories of knowledge, facts, and arguments they filter out.

GenAI holds incredible promise for expanding the human mind. But GenAI should augment, not replace, human reasoning. This critical function is hampered when guardrails designed by a small group of powerful companies refuse to generate output based on vague and unsubstantiated claims of harm. Instead of prodding curiosity, this approach forces conclusions upon users without verifiable evidence or arguments that humans can test and assess for themselves.

It is true that much of the content filtered by ChatGPT and Gemini can be found through search engines or platforms like YouTube. But both Microsoft, a major investor in OpenAI, and Google are rapidly integrating GenAI into their other products, such as search (Bing and Google Search), word processing (Word and Google Docs), and email (Outlook and Gmail). For now, humans can override AI, and both Word and Gmail allow users to write and send content that ChatGPT and Gemini might disapprove of.

But as GenAI becomes ubiquitous in everyday technology, it is not a given that search, word processing, and email will continue to leave humans fully in control. The prospects are frightening. Imagine a world where your word processor prevents you from analyzing, criticizing, lauding, or reporting on a topic deemed harmful by an AI programmed to only process ideas that are "respectful and appropriate for all."

Hopefully such a scenario will never become reality. But the current over-implementation of GenAI guardrails may become more pervasive in different and slightly less Orwellian ways. Governments are currently rushing to regulate AI. Regulation is needed to prevent real and concrete harms and safeguard basic human rights. But regulation of social media, such as the EU's Digital Services Act, suggests that regulators will focus heavily on the potential harms rather than the benefits of new technology. This might create strong incentives for AI companies to keep in place expansive definitions of harm that limit human agency.

OpenAI co-founder Sam Altman has described the integration of AI in everyday life as giving humans "superpowers on demand." But given GenAI's potential to function as an exoskeleton of the mind, the creation of ever more restrictive guardrails may act as digital osteoporosis, stunting human knowledge, reasoning, and creativity.

There is a clear need for guardrails that protect humanity against real and serious harms from AI systems. But they should not undermine the ability of humans to think for themselves and make more informed decisions based on a wealth of information from multiple perspectives. Lawmakers, AI companies, and civil society should work hard to ensure that AI systems are optimized to enhance human reasoning, not to replace human faculties with the artificial morality of large tech companies.
