Neil Young’s Spotify tiff is a reminder that tech giants always win – Euronews

The opinions expressed in this article are those of the author and do not represent in any way the editorial position of Euronews.

As a listener, you might not care. But as an artist, it can be a tough pill to swallow to know that an algorithm, as opposed to human preference, might be behind your success or failure, Jonah Prousky writes.

Neil Young and Joni Mitchell begrudgingly returned their music to Spotify last month, two years after leaving the platform in protest of its largest podcaster, Joe Rogan.

According to Young, Rogan was using the platform to spread misinformation about the COVID-19 pandemic.

"They can have Rogan or Young. Not both," wrote Young to his manager at Warner Music Group.

It turns out, Spotify can have both.

And, no matter what you think of Young's protest (or boycott, or whatever it was), his clash with Spotify is a reminder that tech giants have a funny way of getting what they want, and resistance from artists is usually futile.

Many creators have long been frustrated with platforms like Spotify and YouTube due to the algorithms they employ, which in part drive views and streams, and by extension, pay.

Most creators, however, don't have the clout to issue ultimatums, nor the money to leave these platforms.

While some artists on Spotify make a decent living, there is a far, far greater volume of artists, literally millions of them, who are struggling to make ends meet from their streaming royalties, according to Rolling Stone.

Also, without an established audience of one's own, artists are pretty much beholden to Spotify and YouTube for views.

According to Forbes, Spotify holds a dominant 30.5% of the music streaming market, more than double its nearest competitor, Apple Music, which has a 13.7% share. YouTube is virtually unrivalled.

"Who cares," you might say, "Spotify is beloved. And hasn't the company done a lot to democratise music?"

It's true, the company cut out a lot of the red tape associated with the legacy music business by giving new artists a direct line (and business model) for reaching listeners.

That ethos is even enshrined in the company's mission statement, which is to "unlock the potential of human creativity by giving a million creative artists the opportunity to live off their art and billions of fans the opportunity to enjoy and be inspired by it."

The company has done much to advance that mission. It's capable of launching music careers in ways that never would have been possible in decades past. An artist's streams, and by extension earnings, can skyrocket almost overnight if their songs make it onto one of the platform's most-listened-to playlists.

It can quite literally be the difference between driving Uber and making music on the side and earning $200,000 (€187,880) in streaming royalties.

So any attempt to criticise the platform ought to be mindful of what it's done for some musicians. But, in many ways, the platform's algorithm has homogenised music tastes around a small number of top artists, making it harder for new musicians to gain traction.

Algorithms", wrote Scott Timberg in a column for Salon, "are about driving you closer and closer to what you already know. And instead of taking you toward what you want to listen to, they direct you toward slight variations of what youre already consuming.

What people are already consuming is just a small subset of Spotify's artist base, whose tunes gobble up our collective attention.
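
Timberg's description tracks how similarity-based recommenders generally work: represent each track as a feature vector, build a profile from what a listener already plays, and surface the unheard tracks that sit closest to that profile. The sketch below is a minimal illustration of that general idea, not Spotify's actual (proprietary) system; the track names and feature values are invented for the example.

```python
import numpy as np

# Hypothetical audio-feature vectors (e.g., tempo, acousticness, energy), scaled 0-1.
# The numbers are invented for illustration, not real data.
catalog = {
    "indie_folk_a": np.array([0.30, 0.85, 0.25]),
    "indie_folk_b": np.array([0.35, 0.80, 0.30]),
    "ambient_jazz": np.array([0.15, 0.60, 0.40]),
    "stadium_rock": np.array([0.75, 0.10, 0.90]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def recommend(listened, catalog, k=2):
    """Rank unheard tracks by similarity to the average of what was already played."""
    profile = np.mean([catalog[t] for t in listened], axis=0)
    scores = {t: cosine(profile, vec) for t, vec in catalog.items() if t not in listened}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# A listener who only plays indie folk is steered toward near-identical material first.
print(recommend(["indie_folk_a"], catalog))  # ['indie_folk_b', 'ambient_jazz']
```

Optimising purely for similarity is what produces the "slight variations" effect Timberg describes: nothing in the objective rewards surfacing an artist the listener has never streamed.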

"In 2013, the top 1% of artists accounted for over three-quarters of all revenue from recorded music sales. In that year, 20% of songs on Spotify had never been streamed," wrote Ludovic Hunter-Tilney for the Financial Times.

Maybe that's always been the case, you'll wonder. I mean, anyone who's seen The X Factor knows that not every artist is worthy of our attention. But the decision of what and who to listen to used to be a human one.

As a listener, you might not care, especially if you think the algorithm has a good handle on your taste. But as an artist, it can be a tough pill to swallow to know that an algorithm, as opposed to human preference, might be behind your success or failure.

So, say you're a musician or content creator who feels the algorithm has treated you unfavourably. What are you going to do, leave? Boycott?

Well, some are. A growing wave of artists and content creators are leaving Spotify and YouTube, often for platforms like Substack and Patreon, where their earnings aren't beholden to the algorithm.

Platforms like Substack and Patreon allow creators to own their audience, since earnings on these platforms aren't tied to views; rather, audience members pay creators directly and the platforms take a small cut.

Still, that move is really only viable for established artists like Young and Mitchell who have audiences.

So, if you're just starting out as a musician or content creator, you really have no choice but to dig in your heels and hope the algorithm likes your stuff.

Jonah Prousky is a Canadian freelance writer based in London. His work has appeared in several leading publications including the Canadian Broadcasting Corporation (CBC), Toronto Star, and Calgary Herald.

At Euronews, we believe all views matter. Contact us at view@euronews.com to send pitches or submissions and be part of the conversation.

Continue reading here:

Neil Young's Spotify tiff is a reminder that tech giants always win - Euronews

How Supreme Court arguments over social media laws and free speech defined social media itself – Quartz

The Supreme Court heard arguments Monday for two lawsuits about how social media giants should or should not be able to regulate speech on their platforms. The justices went back and forth with state solicitors general and their opposing party, making what may seem like far-fetched comparisons between social media and everything from bookstores to parade organizers and wedding planners.


The two cases in question, one from Florida and one from Texas, were brought by NetChoice, a trade association that represents social media sites like Meta's Facebook, X (formerly Twitter), TikTok, and more. NetChoice said two state laws in Florida and Texas that ban companies from censoring content on their platforms are actually forms of censorship themselves. Paul Clement, the attorney for NetChoice, argued that the laws violate the First Amendment because they compel speech, forcing platforms to host posts that violate their policies.

At the heart of NetChoice's argument is that social media platforms are like newspapers, so editorializing content is their First Amendment right.

But Florida solicitor general Henry Whitaker said social media is more like a telephone company (pdf): "If Verizon asserted a First Amendment right to cancel disfavored subscribers at a whim, that claim would fail."

"The design of the First Amendment is to prevent the suppression of speech, not to enable it. That is why the telephone company and the delivery service have no First Amendment right to use their services as a chokepoint to silence those they disfavor," he said.

Texas solicitor general Aaron Nielson had a similar argument (pdf), but likened social media to a public square. "[I]f platforms that passively host the speech of billions of people are themselves the speakers and can discriminate, there will be no public square to speak of."

One concern of Justice Amy Coney Barrett is that the state laws would consider algorithms to be editors, meaning that states could ban how algorithms are applied by online sites or other businesses that sell content. Florida solicitor general Whitaker said algorithms are just a means of sites organizing content, not editorializing it.

That led to more concern, though. "Could Florida enact a law telling bookstores that they have to put everything out by alphabetical order?" Coney Barrett asked.

Whitaker said no; the state laws target censorship by social media sites, not how they organize their content.

But NetChoice's Clement argued that algorithms are editors: "These algorithms don't spring from the ether. They are essentially computer programs designed by humans to try to do some of this editorial function." That means that a Supreme Court ruling allowing the state laws to remain would open the door for lawsuits against how algorithms function.
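
Clement's point that ranking algorithms embody human editorial choices can be made concrete with a toy example. The sketch below is purely illustrative; the weights and the "flagged" field are invented and do not reflect any platform's real code, but they show how an ordering rule a human wrote ends up deciding what readers see first.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    flagged: bool  # True if the post matched a policy filter a human defined

# Editorial judgments expressed as code: what to boost, what to bury.
ENGAGEMENT_WEIGHT = 1.0    # invented weight rewarding popular posts
DOWNRANK_PENALTY = 1000.0  # invented penalty that effectively buries flagged posts

def score(post: Post) -> float:
    """Higher score for engagement, heavy penalty if a human-written policy flagged the post."""
    s = ENGAGEMENT_WEIGHT * post.likes
    if post.flagged:
        s -= DOWNRANK_PENALTY
    return s

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=score, reverse=True)

feed = rank_feed([
    Post("cat video", likes=120, flagged=False),
    Post("policy-violating rant", likes=500, flagged=True),
    Post("local news", likes=80, flagged=False),
])
print([p.text for p in feed])  # the flagged post sinks to the bottom despite its likes
```

Whether a rule like this counts as protected editorial expression or regulable conduct is precisely what the two cases ask the court to decide.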

"We're not quite sure who it covers," Justice Ketanji Brown Jackson told Whitaker about the Florida law.

So Whitaker said the Florida law would apply to sites like Etsy and Uber, meaning those sites couldn't ban user-generated content unless they provide a thorough rationale. Meanwhile, Nielson said the Texas state law, which is narrower than Florida's in scope, wouldn't apply to platforms outside of classic social media sites.

Read the original here:

How Supreme Court arguments over social media laws and free speech defined social media itself - Quartz

U.S. Supreme Court to hear Texas and Florida cases about free speech and social media platforms – Texas Standard

The U.S. Supreme Court will hear arguments today in two cases related to some of the worlds biggest social media platforms.

Considered by many to be two of the hottest free speech cases of the internet age, one case is from Texas, the other from Florida. And though there are slight differences between the two state laws being challenged here, the cases appear to turn on a single question: do social media companies have the right to independently decide what content appears on their platforms, amplifying or removing content as they see fit?

The social media companies say their First Amendment free speech rights are being violated by the Texas and Florida laws. The states say those social media companies aren't entitled to First Amendment free speech protection. And it may come down to whether a majority of the court sees social media as more like a newspaper or more like a telephone company.

Charles "Rocky" Rhodes, a professor of law at South Texas College of Law in Houston, said both of these laws are on hold and have not yet gone into effect because of pending court cases.

"They were a response to some of the social media platforms de-platforming Donald Trump and other politicians in the wake of the Jan. 6 riots at the Capitol," Rhodes said. "And there was a concern from Texas and from Florida that [these politicians] were being targeted because of their conservative beliefs."

"And so the idea of both of these laws was to try to keep social media platforms from banning individuals or discriminating against individuals based on the viewpoints of their speech. And it also placed some very onerous burdens on social media companies with respect to disclosure requirements of their terms and their policies with respect to data management and content, and the use policies that they would be using."


The plaintiff in the case is NetChoice, an industry association that includes most of the big platforms we all think of: Facebook, X (formerly Twitter), YouTube, etc.

"They're making the play that when they are deciding which messages to amplify and which messages that they want to remove from their platform, that they are acting as the modern editor of a newspaper, and there is good precedent for the United States Supreme Court saying that a state can't tell a newspaper what to print," Rhodes said.

"They're arguing that the same principle applies to them, that they are allowed to make editorial decisions on their private platform. And this is something that people have to keep in mind: that the social media companies, as big and important as they are, are not the government. They are actually privately owned."

Texas and Florida, however, say these companies are acting as common carriers and therefore do not have a claim to free speech.

"They're trying to say that social media companies are a modern equivalent of what used to be a very familiar idea of the common carrier, that they don't have the ability to discriminate with respect to their service. They have to accept everyone," Rhodes said. "And the social media companies come back and say, well, common carriers were different because they never engaged in their own expressive activities."

"Common carriers did sometimes transmit the speech of others, like a telegraph would be the old example, or telephone ... But they did not actually engage in their own expressive activities. And the social media companies are claiming that we do because we are trying to communicate messages. We're creating news feeds for individuals. We're trying to increase, of course, advertising streams ... that we are engaged in expressive activities in a way that your internet service provider or in a way that your telephone company is not."

As this case goes forward, Rhodes said the states' arguments are rooted in political ideology.

"The Texas law has a specific exemption for companies under 50 million users. So it wouldn't cover conservative sites like Parler," he said. "The Florida law had exemptions for Disney and for Universal that were then taken out once Disney and Universal started criticizing Florida [political leaders]. A big part of the underlying motivation for these laws was the political concern that conservatives thought that their voices were being removed from the site and the marketplace of ideas."

View original post here:

U.S. Supreme Court to hear Texas and Florida cases about free speech and social media platforms - Texas Standard

Generative AI, Free Speech, & Public Discourse: Why the Academy Must Step Forward | TechPolicy.Press – Tech Policy Press

On Tuesday, Columbia Engineering and the Knight First Amendment Institute at Columbia University co-hosted a well-attended symposium, "Generative AI, Free Speech, & Public Discourse." The event combined presentations about technical research relevant to the subject with addresses and panels discussing the implications of AI for democracy and civil society.

While a range of topics were covered across three keynotes, a series of seed funding presentations, and two panels (one on empirical and technological questions and a second on legal and philosophical questions), a number of notable recurring themes emerged, some by design and others more organically:

This event was part of one partnership amongst others in an effort that Columbia University president Minouche Shafik and engineering school dean Shih-Fu Chang referred to as "AI+x," where the school is seeking to engage with various other parts of the university outside of computer engineering to better explore the potential impacts of current developments in artificial intelligence. (This event was also a part of Columbia's Dialogue Across Difference initiative, which was established as part of a response to campus tensions around the Israel-Gaza conflict.) From its founding, the Knight Institute has focused on how new technologies affect democracy, requiring collaboration with experts in those technologies.

Speakers on the first panel highlighted sectors where they have already seen potential for positive societal impact of AI, outside of the speech issues that the symposium was focussed on. These included climate science, drug discovery, social work, and creative writing. Columbia engineering professor Carl Vondrick suggested that current large language models are optimized for social media and search, a legacy of their creation by corporations that focus on these domains, and the panelists noted that only by working directly with diverse groups can their needs for more customized models be understood. Princeton researcher Arvind Narayanan proposed that domain experts play a role in evaluating models as, in his opinion, the current approach of benchmarking using standardized tests is seriously flawed.

During the conversation between Jameel Jaffer, Director of the Knight Institute, and Harvard Kennedy School security technologist Bruce Schneier, general principles for successful interdisciplinary work were discussed, like humility, curiosity and listening to each other; gathering early in the process; making sure everyone is taken seriously; and developing a shared vocabulary to communicate across technical, legal, and other domains. Jaffer recalled that some proposals have a lot more credibility in the eyes of policymakers when they are interdisciplinary. Cornell Tech law professor James Grimmelmann, who specializes in helping lawyers and technologists understand each other, remarked that these two groups are particularly well-equipped to work together, once they can figure out what the other needs to know.

President Shafik declared that if a responsible approach to AI's impact on society requires a "+x," Columbia (surely along with other large research universities) has lots of x's. This positions universities as ideal voices for the public good, to balance out the influence of the tech industry that is developing and controlling the new generation of large language models.

Stanford's Tatsunori Hashimoto, who presented his work on watermarking generative AI text outputs, emphasized that the vendors of these models are secretive, and so the only way to develop a public technical understanding of them is to build them within the academy, and take on the same tasks as the commercial engineers, like working on alignment fine-tuning and performing independent evaluations. One relevant and striking finding by his group was that the reinforcement learning from human feedback (RLHF) process tends to push models towards the more liberal opinions common amongst highly-educated Americans.
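
Text watermarking of the kind Hashimoto's group studies can be illustrated with the "green list" idea common in the research literature: at each generation step a pseudorandom subset of the vocabulary, seeded by the preceding token, gets a small sampling boost, and a detector later checks whether a passage contains implausibly many of those favoured tokens. The sketch below is a simplified illustration of that general technique rather than Hashimoto's specific method; the toy vocabulary, bias strength, and expected detection rate are all invented for the example.

```python
import hashlib
import random

VOCAB = ["the", "model", "writes", "text", "with", "a", "hidden", "signal", "in", "it"]
GREEN_FRACTION = 0.5  # invented: half the vocabulary is "green" at each step
BIAS = 4.0            # invented: extra sampling weight given to green tokens

def green_list(prev_token: str) -> set:
    """Pseudorandom subset of the vocabulary, deterministically seeded by the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * GREEN_FRACTION)))

def sample_next(prev_token: str, rng: random.Random) -> str:
    """Toy generation step: uniform weights, nudged upward for green tokens."""
    greens = green_list(prev_token)
    weights = [1.0 + (BIAS if tok in greens else 0.0) for tok in VOCAB]
    return rng.choices(VOCAB, weights=weights, k=1)[0]

def green_rate(tokens: list) -> float:
    """Fraction of tokens drawn from their predecessor's green list; about 0.5 for unwatermarked text."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:]) if tok in green_list(prev))
    return hits / max(len(tokens) - 1, 1)

rng = random.Random(0)
tokens = ["the"]
for _ in range(200):
    tokens.append(sample_next(tokens[-1], rng))
print(round(green_rate(tokens), 2))  # well above 0.5: statistical evidence of the watermark
```

A detector that knows the seeding rule, but not the original prompt, can run this check on any suspect passage, which is what makes the approach attractive for identifying machine-generated text at scale.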

The engineering panel developed a wishlist of infrastructure resources that universities (and others outside of the tech industry) need to be able to study how AI can be used to benefit and not harm society, such as compute resources, common datasets, separate syntax models so that vetted content datasets can be added for specific purposes, and student access to models. In the second panel, Camille François, a lecturer at the Columbia School of International and Public Affairs and presently a senior director of trust & safety at Niantic Labs, highlighted the importance of having spaces, presumably including university events such as the one at Columbia, to discuss how AI developments are impacting civil discourse. On a critical note, Knight Institute executive director Katy Glenn Bass also pointed out that universities often do not value cross-disciplinary work to the same degree as typical research, and this is an obstacle to progress in this area, given how essential collaboration across disciplines is.

Proposals for regulation were made throughout the symposium, a number of which are listed below, but the keynote by Bruce Schneier was itself an argument for government intervention. Schneier's thesis was, in brief, that corporation-controlled development of generative AI has the potential to undermine the trust that society needs to thrive, as chatbot assistants and other AI systems may present as interpersonally trustworthy, but in reality are essentially designed to drive profits for corporations. To restore trust, it is incumbent on governments to impose safety regulations, much as they do for airlines. He proposed a regulatory agency for the AI and robotics industry, and the development of public AI models, created under political accountability and available for academic and new for-profit uses, enabling a freer market for AI innovation.

Specific regulatory suggestions included:

A couple of cautions were also voiced: Narayanan warned that the "Liar's Dividend" could be weaponized by authoritarian governments to crack down on free expression, and François noted the focus on watermarking and deepfakes at the expense of unintended harms, such as chatbots giving citizens incorrect voting information.

There was surprisingly little discussion during the symposium of how generative AI specifically influences public discourse, which Jaffer defined in his introductory statement as "acts of speaking and listening that are part of the process of democracy and self-governance." Rather, much of the conversation was about online speech generally, and how it can be influenced by this technology. As such, an earlier focus of online speech debates, social media, came up a number of times, with clear parallels in terms of concern over corporate control and a need for transparency.

Hashimoto referenced the notion that social media causes feedback loops that greatly amplify certain opinions. LLMs can develop data feedback loops which may cause a similar phenomenon that is very difficult to identify and unpick without substantial research. As chatbots become more personalized, suggested Vondrick, they may also create feedback on an individual user level, directing them to more and more of the type of content that they have already expressed an affinity for, akin to the social media filter bubble hypothesis.

Another link to social media was drawn in the last panel, during which both Grimmelmann and François drew on their expertise in content moderation. They agreed that the most present danger to discourse from generative AI is inauthentic content and behavior overwhelming the platforms that we rely on, and worried that we may not yet have the tools and infrastructure to counter it. (François described a key tension between the "Musk effect" pushing disinvestment in content moderation and the "Brussels effect" encouraging a ramping up in on-platform enforcement via the DSA.) At the same time, trust and safety approaches like red-teaming and content policy development are proving key to developing LLMs responsibly. The correct lesson to draw from the failures to regulate social media, proposed Grimmelmann, was the danger of giving up on antitrust enforcement, which could be of great value when current AI foundation models are developed and controlled by a few (and in several cases the same) corporations.

One final theme was a framing of the current moment as one of transition. Even though we are grappling with how to adapt to realistic, readily available synthetic content at scale, there will be a point in the future, perhaps even for today's young children, when this will be intuitively understood and accounted for, or at least when media literacy education or tools (like watermarking) will have caught up.

Several speakers referenced prior media revolutions. Narayanan was one of several who discussed the printing press, pointing out that even this was seen as a crisis of authority: no longer could the written word be assumed to be trusted. Wikipedia was cited by Columbia Engineering professor Kathy McKeown as an example of media that was initially seen as untrustworthy, but whose benefits, shortcomings, and suitable usage are now commonly understood. François noted that use of generative AI is far from binary and that we have not yet developed good frameworks to evaluate the range of applications. Grimmelmann mentioned both Wikipedia and the printing press as examples of technologies where no one could have accurately predicted how things would shake out in the end.

As the Knight Institute's Glenn Bass stated explicitly, we should not assume that generative AI is harder to work through than previous media crises, or that we are worse equipped to deal with it. However, two speakers flagged that the tech industry should not be given free rein: USC Annenberg's Mike Ananny warned that those with vested interests may attempt to prematurely push for stabilization and closure, and we should treat this with suspicion; and Princeton's Narayanan noted that this technology is producing a temporary societal upheaval and that its costs should be distributed fairly. Returning to perhaps the dominant takeaways from the event, these comments again implied a role for the academy and for the government in guiding the development of, adoption of, and adaptation to the emerging generation of generative AI.

Read more:

Generative AI, Free Speech, & Public Discourse: Why the Academy Must Step Forward | TechPolicy.Press - Tech Policy Press

Why Online Free Speech Is Now Up to the Supreme Court – Bloomberg

Conspiracy theories, election lies and Covid misinformation before the 2020 US presidential election led social media companies to implement rules policing online speech and suspending some users, including former President Donald Trump. That practice, known as content moderation, will be put to the test after two Republican-led states, Florida and Texas, passed laws in 2021 to stop what they believed were policies censoring conservatives. The fate of those social media laws now rests with the US Supreme Court, which could fundamentally reshape how platforms handle speech online in the run-up to the 2024 election and beyond.

The central issue is whether the laws violate the free speech rights of social media platforms by limiting the companies' editorial control. The laws apply to companies including Meta Platforms Inc.'s Facebook, Alphabet Inc.'s Google, X Corp. (formerly Twitter) and Reddit Inc. The justices will scrutinize provisions of the new laws that require the companies to carry content that violates their internal guidelines and to provide a rationale to users whose posts are taken down.

View original post here:

Why Online Free Speech Is Now Up to the Supreme Court - Bloomberg

Supreme Court to hear landmark case on social media, free speech – University of Southern California

Today, the U.S. Supreme Court will hear oral arguments in a pair of cases that could fundamentally change how social media platforms moderate content online. The justices will consider the constitutionality of laws introduced by Texas and Florida targeting what they see as the censorship of conservative viewpoints on social media platforms.

The central issue is whether platforms like Facebook and X should have sole discretion over what content is permitted on their platforms. A decision is expected by June. USC experts are available to discuss.

"Depending on the ruling, companies may face stricter regulations or be allowed more autonomy in controlling their online presence. Tighter restrictions would require marketers to exercise greater caution in content creation and distribution, prioritizing transparency and adherence to guidelines to avoid legal repercussions. Alternatively, a ruling in favor of greater moderation powers could potentially raise consumer concerns about censorship and brand authenticity," said Kristen Schiele, an associate professor of clinical marketing at the USC Marshall School of Business.

"Regardless of the verdict, companies will need to adapt their strategies to align with advancing legal standards and consumer expectations in the digital landscape. Stricter regulations will require a more thorough screening of content to ensure compliance. Marketers may need to invest more resources to understand and adhere to the evolving legislation, which would lead to shifts in budget allocation and strategy development. In response, the industry will most likely see new content moderation technologies and platforms emerge to help companies navigate legal challenges and still create effective marketing campaigns," she said.

Erin Miller is an expert on theories of speech and free speech rights, and especially their application to mass media. She also writes on issues of moral and criminal responsibility. Her teaching areas include First Amendment theory and criminal procedure. Miller is an assistant professor of law at the USC Gould School of Law.

Contact: emiller@law.usc.edu

###

Jef Pearlman is a clinical associate professor of law and director of the Intellectual Property & Technology Law Clinic at the USC Gould School of Law.

Contact: jef@law.usc.edu

###

Karen North is a recognized expert in the field of digital and social media, with interests spanning personal and corporate brand building, digital election meddling, reputation management, product development, and safety and privacy online. North is a clinical professor of communication at the USC Annenberg School for Communication and Journalism.

Contact: knorth@usc.edu

###

Wendy Wood is an expert in the nature of habits. Wood co-authored a study exploring how fake news spreads on social media, which found that platforms have a larger role to play than individual users in stopping the spread of misinformation online.

Contact: wendy.wood@usc.edu

###

Emilio Ferrara is an expert in computational social sciences who studies socio-technical systems and information networks to unveil the communication dynamics that govern our world. Ferrara is a professor of computer science and communication at the USC Viterbi School of Engineering and USC Annenberg School for Communication and Journalism.

Contact: emiliofe@usc.edu

###


See original here:

Supreme Court to hear landmark case on social media, free speech - University of Southern California

An Argument for Free Speech, the Lifeblood of Democracy – Tufts Now

You devote the first part of the book to Oliver Wendell Holmes Jr. and his journey into skepticism about universal morality. To whom is that relevant today?

Many of today's students have a keen thirst for social justice, which I admire. When Holmes was their age, he shared that thirst, dropping out of college to enlist in the Union Army in a war against slavery, in which he was nearly killed several times.

He became very skeptical of people who believe they have unique access to universal, absolute truth, who view their adversaries as evil incarnate. That, he believed, leads ultimately to violence.

All of us today need to approach public debate with a bit of humility, recognizing that none of us is infallible and that rigid moral certitude leads down a dangerous path.

We know from centuries of experience, in many countries, that censorship inevitably backfires. It discredits the censors, who are seen as patronizing elites. It demeans listeners who are told they can't handle the truth. It makes martyrs and heroes out of the censored and drives their speech underground, where it's harder to rebut.

Suffragettes, civil rights leaders, and LGBTQ+ activists all have relied on free speech to get their messages out. Censorship alienates the public, generates distrust, fosters social division, and sparks political instability.

It's not that some speech isn't harmful; it's that trying to suppress it causes greater harm.

Not all hateful speech is protected. Incitement to violence, fighting words, defamation, and true threats are all often hateful, yet that speech is not protected. But other hateful speech is protected, for several reasons.

Hatred is a viewpoint. It's for the individual to think and feel as he or she wishes; it's only when the individual crosses the line between thought and action to incite violence or defame or threaten someone that the state can intervene.

Hate speech laws are also invariably vague and overbroad, leading to arbitrary and abusive enforcement. In the real world, speech rarely gets punished because it hurts dominant majorities. It gets punished because it hurts disadvantaged minorities.

The ultimate problem with banning falsehoods is that to do so you'd need an official Ministry of Truth, which could come up with an endless list of officially banned falsehoods. Not only would that list inevitably be self-serving, but it could be wrong.

Even when it comes to clear falsehoods, there are reasons to leave them up. [Former President Donald] Trump claimed, for example, that the size of the crowd at his inauguration was larger than [former President Barack] Obama's, which was indisputably false. But the statement had the effect of calling into question not only Trump's veracity but also his mental soundness, which is important for voters to assess.

They were wrong to apply a norm of international human rights law in banning him, a supposed prohibition against glorifying violence. That's a vague, overly broad standard that can pick up everything from praising Medal of Honor winners to producing Top Gun.

We're dealing here with an American president speaking from the White House to the American people, so I say the proper standard should have been the U.S. First Amendment and whether Trump intended to incite imminent violence and whether that violence was likely. Under that test, I think it's a close case.

Justice Louis Brandeis [who served on the Supreme Court from 1916 to 1939] said that "the fitting remedy for evil counsels is good ones."

If someone counsels drinking bleach to cure COVID, the remedy is not to suppress it; it's to point out why that's wrong. But over and over, the government's remedy for speech it didn't like was to strongarm social media platforms to take it down.

The government wouldn't have lost so much credibility if it had only said, "This is our best guess based on available evidence." Instead, it spoke ex cathedra on masks, lockdowns, school closings, vaccine efficacy, infection rates, myocarditis, social distancing, you name it (claims that often turned out to be untenable), and then it bullied the platforms to censor prominent experts who took issue with its misinformation.

The remedy for falsehoods is more speech, not enforced silence. If someone thinks a social media post contains altered imagery or audio, the initial solution is simply to say that and let the marketplace of ideas sort it out.

Obviously counter-speech isn't always the answer: You still run into eleventh-hour deepfakes that there's no time to rebut. People do have privacy rights, and interference with elections undercuts democracy.

The trick is to write legislation that catches malign fakery but doesn't also pick up satire and humor that is obviously bogus. That's not easy. Well-intended but sloppy laws often trigger serious unintended consequences.

See the original post here:

An Argument for Free Speech, the Lifeblood of Democracy - Tufts Now

Supreme Court justices appear skeptical of GOP states in major internet free speech case – Washington Examiner

The Supreme Court appeared skeptical of arguments Monday by the states of Florida and Texas that they are justified in regulating social media content moderation in a landmark case with major implications for speech on the internet.

The court heard oral arguments for two major speech-related cases on Monday: NetChoice v. Moody and NetChoice v. Paxton. The technology industry group NetChoice sued the states of Texas and Florida over laws imposed by Republicans meant to hold social media platforms accountable for banning users based on viewpoint.

Florida's law would allow residents to take legal action and the state to fine companies if they remove political candidates from social media platforms. The Texas law would require platforms to be content-neutral and allow the state's attorney general and residents to sue platforms for removing content or blocking accounts. The court pressed the states to provide a justification for restricting speech. The justices, though, also asked questions aimed at determining the extent of Big Tech's power over speech on the internet.

NetChoice v. Moody

Florida Solicitor General Henry Whitaker was the first to appear before the court to argue in NetChoice v. Moody. He said that platforms had to be neutral when it comes to content moderation and that the law merely regulates the conduct of a platform rather than the content. He also alleged that platforms such as Facebook and Google need to be treated as common carriers. Being defined as a common carrier, a term initially used for public transportation services and utilities but expanded to include radio stations and telephone services, would subject platforms to additional restrictions, including anti-discrimination regulations.

Multiple members of the court appeared skeptical of Florida's law, noting that it was very broad and affected more platforms than some claimed it would. "[Florida's law is] covering almost everything," Justice Sonia Sotomayor said. "The one thing I know about the internet is that its variety is infinite."

Justice Samuel Alito noted there is also no list of platforms covered by Florida's statutes. This broadness makes it challenging to deal with the case's particulars, Justice Clarence Thomas argued. "We're not talking about anything specific," Thomas said. "Now we're just speculating as to what the law means." The e-commerce platform Etsy was brought up multiple times by the court as an example of a platform that would be inadvertently affected by Florida's law.

Paul Clement, NetChoice's representative, responded in his arguments by saying that Florida's law violated the First Amendment multiple times over. He also tried to draw a distinction between content moderation decisions made by government entities and those made by private entities. "There are things that if the government does, it's a First Amendment problem, and if a private speaker does it, we recognize that as protected activity," Clement argued.

The Biden administration's Solicitor General Elizabeth Prelogar seemed to affirm Clement's arguments, arguing in favor of NetChoice and limiting Florida's power over speech.

NetChoice v. Paxton

The court reconvened a short time after to hear arguments about Texas's law. Clement returned to represent NetChoice, arguing that Texas's law requiring neutrality on the platform would make social media less attractive to users and advertisers since it would require platforms to host both anti-suicide and pro-suicide content as well as pro-Semitic and antisemitic content.

He also emphasized to the justices that a social media company was more like a parade or newspaper than a common carrier, trying to focus on the state of speech on the platform.

Aaron Nielson, Texas's solicitor general, emphasized that social media platforms are a lot like telegraphs, and that this is why the state should be able to restrict the sorts of censorship that platforms engage in.

Nielson was questioned multiple times about how the state would handle its viewpoint-neutral emphasis. When asked how platforms could take viewpoint-neutral approaches to subjects such as terrorism, Nielson said platforms could simply remove the subject entirely. "Instead of saying that you can have anti-al Qaeda but not the pro-al Qaeda, if you just want to say, 'Nobody is talking about al Qaeda here,' they can turn that off," Nielson argued.

Court conclusions

The court appeared divided on the extent to which content moderation was allowed. On one hand, the justices saw government-enforced moderation as questionable, particularly if it focused on content. On the other hand, they criticized the power exerted by Big Tech companies. Justice Neil Gorsuch brought up the example of private messaging services such as Gmail deciding to delete communications because of their viewpoint, a matter that multiple justices raised with Clement.

The court appeared bothered by the two cases being facial challenges, a legal term for cases in which a party claims that a law is unconstitutional in all its applications and should be voided. This approach offers little flexibility for the Supreme Court, since the court could not limit the law's effect to only a specific form of speech but leave other parts of the law intact.


Section 230, a part of the Communications Decency Act that protects platforms from being held accountable for content posted by third parties, was also brought up by the justices multiple times. The justices tried to weigh how that law would interact with the states' attempts to block speech, as well as NetChoice's arguments in favor of the platforms. Thomas argued that NetChoice's argument that platforms had editorial control undermined its defense under Section 230.

The court is expected to release a decision on both cases sometime before July. The court will only be ruling on the preliminary injunction, which means that the decision will come more quickly than in other cases and will determine whether the lower courts' blocking of the laws is upheld or overturned.

Read the rest here:

Supreme Court justices appear skeptical of GOP states in major internet free speech case - Washington Examiner

JPM2024: Big Tech Poised to Disrupt Biopharma with AI-Based Drug Discovery – BioSpace


2024 will continue to see Big Tech companies enter the artificial intelligence-based drug discovery space, potentially disrupting the biopharma industry. That was the consensus of panelists at a Tuesday session on AI and machine learning held by the Biotech Showcase, co-located with the 42nd J.P. Morgan Healthcare Conference.

The JPM conference got a reminder of Big Tech's inroads into AI-based drug discovery with Sunday's announcement that Google parent Alphabet's digital biotech company Isomorphic Labs signed two large deals worth nearly $3 billion with Eli Lilly and Novartis.

"Big Tech is coming for AI and it's coming in a big way," said panel moderator Beth Rogozinski, CEO of Oncoustics, who noted that the AI boom has seen the rise of the "Magnificent 7," a new grouping of mega-cap tech stocks composed of the seven largest U.S.-listed companies: tech giants Amazon, Apple, Alphabet, Microsoft, Meta Platforms, Nvidia and Tesla.

Last year, the Magnificent 7's combined market value surged almost 75% to a whopping $12 trillion, demonstrating their collective financial power.

"Six of the seven have AI and healthcare initiatives," Rogozinski told the panel. "They're all coming for this industry."

However, Atomwise CEO Abraham Heifets made the case that with Big Tech getting into biopharma there is a mismatch of business models, with the Isomorphic Labs deals looking, in his words, like "traditional tech mentality." Heifets contends that it's unclear whether the "physics of the business" will support the risk models in the industry, adding that the influence of small- to mid-size companies focused on AI-based drug discovery should not be underestimated.

Google DeepMind's AlphaFold is the foundation of Isomorphic Labs' platform. The problem, according to ArrePath CTO Kurt Thorn, is that it's easy for these technologies to have fast followings only to see their market shares wane over time. "If you look at AlphaFold, which was a breakthrough when it came out, within two or three years afterwards there were two or three alternatives."

Thorn concluded that it's not clear that the market sizes are large enough to amortize a large AI platform for drug discovery across an entire industry.

Rogozinski emphasized that switching costs are a potential barrier as Big Tech tries to get companies to transition to such drug discovery platforms.

Vivodyne CEO Andrei Georgescu commented that drug discovery and development is a difficult and complex process that is not a function of how big your team is or how many people you have behind the bench. The key to the success of AI in biopharma is in the generation and curation of datasets, according to Georgescu, who said the industry is facing a bottleneck on "the complexity of the data and the applicability of the data to the outcomes that we want to confirm."

Providing some levity and perspective to Tuesday's AI session, Moonwalk Biosciences CEO Alex Aravanis told the audience he was late to arrive as a panelist due to an accident on the freeway involving a Tesla self-driving vehicle. "So, clearly, they need more data," Aravanis said.

Marc Cikes, managing director of the Debiopharm Innovation Fund, told BioSpace that while he has been heartened to see the rise of AI and machine learning usage in biopharma, the forecast remains murky in 2024.

"The impact of AI for drug discovery is still largely unknown," Cikes said. "The public market valuation of the few AI-drug discovery companies is significantly down versus their peak price, and a large chunk of the high-value deals announced between native AI companies and large pharmas are essentially based on future milestone payments which may never materialize."

Greg Slabodkin is the News Editor at BioSpace. You can reach him at greg.slabodkin@biospace.com. Follow him on LinkedIn.

See the original post here:

JPM2024: Big Tech Poised to Disrupt Biopharma with AI-Based Drug Discovery - BioSpace