
Category Archives: Big Tech

Big Tech is a state actor with constitutional obligations – The Hill

Posted: March 31, 2021 at 3:14 am

Readers of the political press are familiar with Big Tech's actions to censor the social media speech of former President Trump and several Republican congressmen, and with its purges of thousands of conservative social media accounts. Since these actions were taken by private parties against private parties, it is generally assumed that the Constitution does not apply and that Big Tech, with congressional immunity from suit, can regulate the internet activities of private parties as it wishes.

When Big Tech uses the powers authorized by section 230 of the Communications Decency Act (CDA) to restrict access to materials on the internet that it considers objectionable, it is acting for the state ("State Action"). As a state actor, Big Tech must provide the same constitutional protections as government provides.

In a prior article, I argued that section 230 was an unconstitutional delegation of authority by Congress to private parties. The seminal case supporting this position is Carter v. Carter Coal, a 1936 U.S. Supreme Court case invalidating the delegation of government power to private coal producers to regulate other coal producers. The court characterized such action as "legislative delegation in its most obnoxious form." The holding has not been challenged for 85 years.

Unfortunately, Congress continues to ignore its unlawful delegation while Big Tech continues to regulate speech in the social marketplace as if the delegation were valid. Due to the significant impact on free speech, this controversy should be quickly resolved. There are three possible outcomes: Congress rewrites the statute; the court declares section 230 constitutional or unconstitutional; or courts provide due process rights for objectionable speakers deprived of free speech by state actors. The first two options are years in the future. Affording due process can be immediate.

When are actions by private parties State Action?

There are two situations in which the actions of private parties are deemed State Action: (1) when there is a close relationship between the actions of the private party and what government seeks to have accomplished; or (2) when the private party performs a traditional government function.

Constitutional protections are mandated when private parties are state actors

While State Action is a factual matter, the Supreme Court, in Skinner v. Railway Labor Executives' Assn. (Labor Assn.), ruled on a situation similar to the actions of Big Tech. In Skinner, the government authorized, but did not compel, private railroads to drug test employees as part of accident investigations. Railroads voluntarily conducted the tests. The Labor Assn. sought to enjoin the railroads from conducting drug tests, claiming unlawful searches in violation of the Fourth Amendment. The Supreme Court held that while the railroads' program was a private initiative, the tests, encouraged by the government, cannot be viewed as private action outside the reach of constitutional protections, i.e., it is state action.

As in Skinner, section 230 of the CDA did not compel Big Tech to restrict materials it deemed objectionable. Moreover, as in Skinner, the government's grant of section 230 immunity and power to restrict materials produced a close relationship between Big Tech and government that encouraged Big Tech to actively implement the government's goals, i.e., state action.

Another case, Marsh v. Alabama, involved a company-owned town that operated like any other town, except that it prohibited the distribution of certain religious literature. The U.S. Supreme Court held that when private parties exercise powers traditionally reserved for the state, they perform a public function and are thus bound to respect constitutional rights, the same as government.

The private party that owned the company town in Marsh, like the private parties operating the internet, regulated speech. When Big Tech controls speech in the public square, it exercises state regulatory power. And, as in Marsh, it must respect the constitutional rights of those in the square.

Courts have the power to immediately protect objectionable free speech

The actions of Big Tech are State Actions reviewable by courts that can balance the property interests of private parties against the free speech and due process rights of objectionable speakers.

Determining the process due a litigant depends on the situation. If only property rights are involved and other administrative processes are available to protect those rights, a hearing is generally not required before the deprivation occurs. However, when fundamental liberties, e.g., speech, are involved, courts must provide hearings before the deprivation of rights occurs.

Litigants cannot seek monetary damages due to Big Tech's immunity from civil liability. But they can seek a hearing for injunctive relief and discovery of why their free speech is being denied, before losing their right to speak in the public square.

William L. Kovacs is the author of Reform the Kakistocracy: Rule by the Least Able or Least Principled Citizens and a former senior vice president at the U.S. Chamber of Commerce. The author has no financial or lobbying interest in this issue.


Virginia’s new big tech-backed data privacy law is the nation’s second. Critics say it doesn’t go far enough. – Prince William Times


When Virginia's General Assembly first took up legislation billed as a major step toward giving regular people more control over their data in an increasingly online world, some of the first testimony lawmakers heard came from tech giants like Microsoft and Amazon.

Both companies said they were in full support of Virginia's effort to become just the second state in America to pass its own data privacy bill, an early marker in a debate still unfolding in other states and at the national level.

Supporters of Virginia's Consumer Data Protection Act, approved by the General Assembly this year and already signed by Gov. Ralph Northam, say the fact that Virginia was able to pass such significant legislation without a major fight is a testament to the quality of the bill, which lays out new consumer protections while largely shielding companies from a flood of data-related lawsuits.

Noting that an estimated 70% of internet traffic flows through servers in Virginia, Sen. Dave Marsden, D-37th, of Fairfax, said Virginia's legislation could be a good starting place for a national privacy bill.

"We really need to take a leadership role here," Marsden, one of the bill's main sponsors, said in an interview. "We are a technology state."

During testimony on the bill, a Microsoft representative said it could help earn the public's trust in technology, while an Amazon rep said addressing privacy concerns is critical to meet customers' high expectations.

The Future of Privacy Forum, a data privacy think tank supported by corporate benefactors such as Google, Amazon, Facebook and Twitter as well as the Bill and Melinda Gates Foundation and the Robert Wood Johnson Foundation, hailed the passage of the Virginia bill as a significant milestone on a national issue.

"In the absence of a comprehensive federal privacy law, we are encouraged to see Virginia lawmakers and other states continue to establish and improve legal protections for personal information," Future of Privacy Forum CEO Jules Polonetsky said in a news release after Northam approved the legislation earlier this month.

Though the bill passed with broad, bipartisan support, some critics say the momentum was the result of an industry-friendly proposal that avoided hot-button issues that have derailed similar efforts elsewhere, most notably the question of whether ordinary Virginians should have the right to sue companies profiting from the sale and use of their data.

Near the end of the 2021 legislative session last month, Sen. Scott Surovell, D-36th, who represents parts of Fairfax, Prince William and Stafford counties, warned colleagues they might face questions over their votes in a few years, when the law is in place and "the worms start to crawl out."

"It is property relating to you. And if anybody should have a right to do something about it, it's the person who is generating the information," Surovell said. "The only person who can fight for your rights under this is the attorney general. And I just believe that's fundamentally wrong."

The Virginia Trial Lawyers Association and some consumer-rights advocates agreed.

"As is typical, Virginia has taken a business-first perspective that codifies business-designed obstacles to consumers having meaningful control of their personal information," Irene Leech, president of the Virginia Citizens Consumer Council, said in a news release that called for the bill to be vetoed.

California is the only other state to pass comprehensive data privacy legislation. Its law grants consumers a limited right to sue companies who misuse data.

Policymakers are paying close attention to privacy bills emerging across the country, with the expectation that compliance concerns and the desire to avoid a patchwork of state laws will eventually encourage states to find a uniform regulatory model.

The Virginia law wont take effect until the beginning of 2023, and the bill includes the establishment of a work group that will continue studying the issue and potentially recommend future changes.

As approved, here's what the law covers:

What kind of data are we talking about?

The law defines personal data as any information that is "linked or reasonably linkable to an identified or identifiable natural person," covering a broad swathe of consumer activity that might give companies a valuable profile of someone's purchasing habits, interests, location and financial status.

The legislation has exemptions for certain types of data already regulated by other areas of the law, including data dealing with health care, creditworthiness, driver's licenses and education.

It also creates a category of sensitive information, such as data dealing with race, ethnicity, religion, health conditions and citizenship status, genetics or biometrics, precise geolocation data, and data collected from children. Companies wouldn't be allowed to process sensitive data without first getting the consumer's consent.

The law creates separate rules for aggregate or de-identified data that's not inherently linked to a specific person.

The new rules would apply only to companies that keep data on more than 100,000 consumers in a given calendar year, or to data brokers (companies that either take publicly available information and sell it or buy data from one company to sell to another) that get more than half their gross revenue from the sale of personal data. Nonprofits would not be subject to the law.
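The applicability test described above can be sketched as a small predicate. This is a minimal illustration of the article's summary of the thresholds, not the statutory text; the function name and parameters are invented for the example.

```python
# Hypothetical sketch of the applicability thresholds as the article
# describes them; not a restatement of the statute itself.

def cdpa_applies(consumers_in_year: int,
                 data_sale_revenue: float,
                 gross_revenue: float,
                 is_nonprofit: bool) -> bool:
    """Return True if the described Virginia CDPA thresholds are met."""
    if is_nonprofit:
        # Nonprofits would not be subject to the law.
        return False
    if consumers_in_year > 100_000:
        # Companies keeping data on more than 100,000 consumers a year.
        return True
    # "Data broker" prong: more than half of gross revenue comes
    # from the sale of personal data.
    return gross_revenue > 0 and data_sale_revenue / gross_revenue > 0.5
```

Under this reading, a small firm holding data on 50,000 consumers would still be covered if, say, 60% of its gross revenue came from selling personal data.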

What rights does the law give consumers?

Under the new rules, Virginia consumers would have the right to access certain data a company has on them and request that data be corrected or deleted. If a consumer asks a company for a copy of their data, it would have to be provided in a portable format that would presumably allow the consumer to take it to another business or a competing social media platform.

Consumers will also have the right to opt out of having their data used for targeted advertising, sold to other companies or compiled into a personal profile used to analyze or predict behavior.

Companies whose business models rely on such data would have to create data protection assessments for targeted advertising, profiling and other sensitive uses of data. Those assessments would have to weigh the benefits of a certain data practice against potential risks to consumers.

Those assessments would not be available to the general public, because the law states they are exempt from the Virginia Freedom of Information Act. However, the records would be available to the state in an enforcement action.

Parents and legal guardians would be able to invoke data protections on behalf of children under 13, but teenagers would have to act independently to assert control over their data.

How will those rights be protected?

Enforcement of the law will be handled exclusively by the Attorney General's Office, a setup that prohibits individual consumers from suing companies if they feel their data privacy rights have been violated.

While the attorney general could take companies to court, potentially securing civil penalties of $7,500 per data violation that would go into a new Consumer Privacy Fund, businesses would have a chance to get out of legal trouble by correcting whatever they had done wrong. The law requires the attorney general to give a company a written notice detailing its potential violations at least 30 days before bringing any action in court.

In that time, a company whose practices are in question could halt any court action by giving the attorney general's office a written statement promising any violations have been addressed and agreeing to stop breaking the law. The attorney general's office could proceed with enforcement only if the company is still failing to comply.
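The notice-and-cure mechanics described above amount to a simple timing rule, sketched below. This is an illustrative model only; the function and its parameters are invented, and the 30-day figure comes from the article's description.

```python
from datetime import date, timedelta

# The law, as described, requires 30 days' written notice before any
# court action, and a cure by the company halts enforcement.
CURE_PERIOD = timedelta(days=30)

def may_file_suit(notice_sent: date, today: date, cured: bool) -> bool:
    """The attorney general may proceed only after the 30-day notice
    window has elapsed and the company has not cured the violation."""
    return today >= notice_sent + CURE_PERIOD and not cured
```

So a notice sent on the first of one month could not ripen into a lawsuit until roughly the start of the next, and a timely written cure statement would stop the action entirely.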

Several tech companies applauded the passage of the Virginia law and said they hope to see similar steps in other states.

Sen. Mark Warner, D-Va., who has prioritized data privacy issues at the federal level, called Virginia's law an important first step but said he'd like to see stronger protections making it easier for Virginia citizens to invoke their privacy rights.

Warner specifically highlighted the need to rein in so-called dark patterns, manipulative online tactics used to obtain more customer data, and the possibility of new internet standards empowering customers to control their data through web browser settings.

Marsden defended the initial structure of the bill, saying the debate over including a private right to sue was a "kill switch" for similar efforts in other states. Running enforcement through the attorney general's office, he said, will prevent courts from being inundated with data privacy lawsuits.

"We're just trying to sort through it and I think we came up with a really good bill," he said.


Congress sending mixed messages on antitrust and Big Tech – The Whittier Daily News


Congress has been sending contradictory signals about the duties owed by internet-related firms with monopoly power.

Last October, the House Judiciary Committee issued a massive report on monopolistic practices by Apple, Google, Facebook, and Amazon. The federal antitrust statutes have long required firms with market power not to exclude competitors. The report proposed going beyond those rules to reach how internet monopolists should treat their customers, especially content providers.

"[M]arket power online has corresponded with a significant decline in the availability of trustworthy sources of news," the report states. Through dominating key communication platforms, Google and Facebook have "[created] an uneven playing field in which news publishers are beholden to their decisions."

This observation applies to Amazon's activities as well, in making providers of news and commentary beholden to [Amazon's] decisions to deny access to the internet.

An extension of antitrust from rules of fair play with competitors to treating customers fairly is not unprecedented. European antitrust law has long included that element. American antitrust investigations against each of the high tech companies identified in the Judiciary Committee Report have already been launched.

The Federal Trade Commission opened an inquiry into Amazon in 2019, in which the California Attorney General's office is also participating. The FTC has focused on Amazon's use of its access to information about consumers to favor its own products when they compete with other vendors using Amazon's marketplace and, in some cases, to squeeze out those competitors.

President Biden will soon appoint a successor to the former FTC Chair, bringing that independent commission under the control of a majority of three Democrats, out of the five commissioners. Federal antitrust enforcement is split between the Justice Department and the FTC. Both agencies will now reflect the new President's antitrust priorities.

On the issue of high tech monopolization, however, there has not been much partisanship. Both the Justice Department and the FTC commenced their investigations of Apple, Amazon, and Google under Republican leadership.

That bipartisanship might now begin to fracture over the question of whether the Sherman Act's prohibition of monopolization extends to conduct allegedly practiced by Amazon and other media powerhouses to exclude conservative news sources and commentary.

The immediate irritant is Parler, a content and commentary site catering to conservatives. Amazon terminated Parler's access to Amazon's web services. Apple and Google did the same. In each case, the reason was Parler's having carried posts associated with the January 6 attack on the U.S. Capitol that the web service providers asserted encouraged violence, with many of the criminals allegedly communicating during the attack using Parler.

Amazon, Apple, and Google maintain that such posts violate their contractual terms of use by which Parler was bound.

Parler had sued Amazon under federal antitrust laws for having used Amazon's monopoly power to exclude it from the internet. It is difficult to find an internet host other than the big three. Two weeks ago, Parler dropped its antitrust lawsuit, having cobbled together an alternative route onto the web. Though off for a month, Parler is now back on the internet.

The House Oversight Committee has just demanded that Parler identify the funding sources which enabled it to circumvent the exclusion from the internet effectuated by Amazon, Apple, and Google.

That request seems to approve Big Tech's decision to ban Parler, an action that Congress could not have itself ordered because of the First Amendment, but which it applauds when implemented by a monopolist.

The House Judiciary Committee, by contrast, wants to extend the antitrust laws' prohibitions to encompass how these monopolists deal with content providers. Because of January 6, Parler might not be the best champion for that principle, but other credible providers of conservative viewpoints, like Prager University, have made the same complaint.

If we extend the reach of antitrust laws to how monopolists treat their customers, protecting diversity of opinion is a commendable basis for doing so.

Tom Campbell is a professor of law and a professor of economics at Chapman University. He served five terms in the U.S. Congress, including service on the Judiciary Committee. He has taught antitrust law for over thirty years.


Big Tech’s data watchdog in Europe is facing accusations of bureaucracy and lethargy – CNBC



DUBLIN The EU's landmark privacy rules were hailed as a success when launched in 2018, but some believe they have placed too much weight on individual authorities and have led to sluggish activity and more bureaucracy.

TikTok recently came under the jurisdiction of Ireland's Data Protection Commission, adding to a hefty workload for the Irish regulator.

With several major tech firms, including Facebook, Google and Twitter, holding their European headquarters in Dublin, the DPC has become Europe's most high-profile data watchdog in enforcing GDPR, the region's data privacy rules.

The regulation, with its possibility for big fines, is seen as the most robust piece of data protection law in history. But the DPC's elevated status since it came into effect has raised questions around how well resourced it is to handle such a large and important workload.

The DPC's annual report for 2020 outlined that it handled 10,151 cases in total that year, an increase of 9%. Meanwhile, the authority is in the middle of a high-profile legal case with Facebook over data transfers to the U.S.

In December, more than 2 years after GDPR came into effect, the DPC issued its first GDPR financial penalty against a major U.S. tech company when Twitter was fined 450,000 euros ($535,594).

The length of the investigation and the sum of money drew criticism from Max Schrems and other data protection advocates.

Noyb, the organization founded by Schrems, is a frequent critic of the DPC. Romain Robert, a senior lawyer at Noyb, said that the organization has been frustrated by the enforcement of GDPR by most data protection authorities in Europe.

"The expectations towards the DPC are really disappointing. We don't see that many decisions," Robert told CNBC.

Graham Doyle, the deputy commissioner at the DPC, told CNBC that investigations, especially cross-border probes into big tech firms, take some time.

"I've been saying this since May 2018, trying to manage expectations, do not be expecting these big headline fines (immediately). It's going to take time," Doyle said.

"There is this focus on the pace at which investigations go and a belief that just because you have more people, it means things will happen quicker. That's not necessarily the case. In some areas it will help but in others it means that you can do more simultaneously," Doyle said.

In the country's last budget, the DPC received 19.1 million euros in funding from the Irish government, up from 16.9 million euros the year before. The agency has close to 150 employees and will be at 200 by the end of the year.

Doyle countered calls for swift decisions to be made once complaints are filed.

"That's not taking into account fair procedures, that's just making an assumption," he said.

GDPR established the one-stop-shop mechanism, which allows companies operating across the EU to report to one member state's data protection authority. It is under this mechanism that TikTok and several others report to the DPC.

It means the Irish watchdog is often the lead investigator on cross-border investigations, such as the probe into Twitter and several open investigations into Facebook and its services.

"Absolutely it is the case that the one-stop-shop has meant that the Irish DPC has become the de facto lead regulator for many of the big tech platforms," Doyle said.

Johannes Caspar, the chief of Hamburg's data protection authority, has been vocal on the effectiveness of this approach.


"The one-stop-shop procedure has shown massive deficits as it leads to inefficiency, bureaucratic structures and to massive differences between law enforcement in purely national and EU-wide procedures," Caspar told CNBC.

He said the procedures for carrying out cross-border inquiries can be "extremely bureaucratic." It can lead to domestic investigations carrying on swiftly but the large banner investigations moving at a slower pace.

"Effective protection of the rights and freedoms of data subjects, but also fair competition in the digital market, cannot be achieved in this way," he said.

As GDPR's third birthday approaches in May, the DPC has a "strong pipeline" of major decisions that will be published in 2021, Doyle said.

One of those is an investigation into Facebook-owned WhatsApp over how data is shared between the messaging app and its owner. The probe is expected to yield a fine between 30 million euros and 50 million euros, marking the first massive fine from the DPC in the GDPR age.

"I would counter the argument that is being put forward in terms of the pace of investigations. We've made ground-breaking steps in terms of the GDPR in cross-border investigations. It's a new piece of legislation that's only in almost three years," Doyle said.

For Noyb's Robert, it's still not enough. He said that, with a few notable exceptions (such as French authority CNIL's 50 million-euro sanction on Google), many of the continent's data protection authorities have been acting too slowly.

"A lot of people are focusing on the DPC but some of the other DPAs (Data Protection Authorities) are really disappointing as well," he said, pointing to the Luxembourg authority, which has Amazon under its umbrella but has not taken any action.

He added there is a need for an objective analysis of all DPAs' resources, budgets and workloads to get a true sense of how GDPR is performing.


Congress is fed up with Big Tech. Now what? – MarketWatch


The insurrection at the U.S. Capitol in January appears to have been the last straw for Congress, which is poised to step in to try to put the brakes on Big Tech over its inability to stop misinformation and hate speech on its platforms.

In a nearly six-hour virtual hearing of the House Energy and Commerce Committee on Thursday, many lawmakers exhibited little or no patience with the chief executives of Alphabet Inc., Facebook Inc. and Twitter Inc. and the companies' attempts to self-regulate.

Trump supporters stormed the Capitol complex on Jan. 6 as the 2020 election results were being ratified, falsely claiming the election was rigged, leaving five people dead. The terrifying incident was referred to or mentioned by several committee members, who blamed social-media companies as venues where the insurrectionists made their plans.

"The attack started and was nourished on your platforms," said Rep. Frank Pallone, D-N.J., the committee's chairman, adding that the companies have in the past just shrugged off billion-dollar fines.

"Your business model itself has become the problem," he said.

How this committee and other legislative bodies in Congress plan to attack the business models of the social-media giants and Google's YouTube is another question, one that was not really answered Thursday. But there were references to at least two pieces of legislation in progress.

One is the Protecting Americans from Dangerous Algorithms Act, which was reintroduced this week and seeks to narrowly amend Section 230 of the Communications Decency Act of 1996, which gives companies immunity from liability for comments or actions made on their platforms. The revised bill, co-authored by Rep. Anna Eshoo, D-Calif., and Rep. Tom Malinowski, D-N.J., seeks to hold large social-media platforms liable if their algorithms amplify misinformation that leads to offline violence.

And Rep. Jan Schakowsky, D-Ill., said she will introduce the Online Consumer Protection Act to hold tech companies more accountable and weaken their liability shield, because "the witnesses here today have demonstrated time and time again that self-regulation has not worked. They must be held accountable."

Read more: Section 230 and the fallout Big Tech may see after the Capitol insurrection

Many of the committee members tried, often in vain, to get simple yes or no answers from Facebook's Mark Zuckerberg, Twitter's Jack Dorsey and Alphabet's Sundar Pichai on a variety of questions about potential reform and what the companies have done since their last rounds of grilling. One of the most interesting and apt comparisons likened social media to Big Tobacco, which is now viewed as having created addictive products in cigarettes through nicotine.

"Big Tech is handing our children a lit cigarette and they become hooked for life," said Rep. Doris Matsui, D-Calif., who pointedly asked each executive whether they agreed that their companies made money from users' addictive social media habits. All three said no to that question.

It's still not clear, though, if the current proposed legislation or any other new bills in the works will successfully wend their way through both the House and the Senate.

One thing was clear, though: Congress has had enough from Big Tech. How that shakes out over the next year is anyone's guess, whether it's through bills that seek to limit Section 230 or ongoing antitrust investigations. But the message from Washington on Thursday was clear: The gloves are off.


No one agrees on how to fix Big Tech – Yahoo Tech


Well, that was depressing.

Nobody should expect anything of value to emerge from these congressional committee hearings. They aren't designed for it. These tech company CEOs have had between six and 16 years of practice at not saying anything controversial, trained by the finest lawyers money can buy. The infinite panel of politicians generally weren't asking probing questions, but instead ranted and raved about things they often didn't seem to understand.

Broadly speaking, the hearing was meant to cover social media's role in promoting extremism, but the subtext was to see if these social platforms still deserve legal protections. Section 230 (s230) is a general immunity afforded to websites that host content produced by other people. Put (very) simply, you can't, as a platform, be held liable for content someone publishes on your site that you didn't know about. Essentially, no matter what its users write, Facebook won't be held accountable for it.

"The time for self-regulation is over. It's time we legislate to hold you accountable." - Representative Frank Pallone

Fortunately, we don't need yesterday's hearing to understand what Facebook, Google and Twitter are thinking about. The real meat was in the opening statements released the day before the hearing. Of course, it requires a measure of Kremlinology to dissect what exactly is being said, but the CEOs' responses are all telling nonetheless.

Zuckerberg says, rightly, that people are arguing for contradictory reasons that the law is doing more harm than good. (This is, essentially, because it's the one legal protection that benefits Big Tech rather than the wider business world, so it's the lawmakers' only direct weapon beyond writing new laws.) He advocates for Congress making s230 protection conditional on companies' ability to meet best practices to combat the spread of this content, adding that platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it.


Essentially, you only get the benefit of s230 protection if you have robust moderation tools and policies. He adds that platforms should not be held liable if a particular piece of content evades their detection, which is impractical for platforms with billions of posts per day. In order to talk about adequacy, Zuckerberg recommends something that Facebook is increasingly relying upon these days: definitions of an adequate system could be proportionate to platform size and set by a third party, a body that ensures that practices are fair and clear for companies to understand.

Of course, the system Zuckerberg is advocating is essentially the status quo, at least for the social network itself. It can already demonstrate that it has a system in place for identifying unlawful content, and it has its own third party, the Facebook Oversight Board, to which it could refer such constitutional questions. What this tweak to the rules would do, however, is potentially box out any challengers looking to set up a rival social platform.

Professor Jeff Kosseff, a Section 230 expert and author of the book The Twenty-Six Words That Created the Internet, said in an interview with Engadget early last year about s230 reform that weakening those protections may even be good for Facebook. There are other platforms, he said, and Facebook probably won't be harmed as much by repeal: it is big enough, and wealthy enough, that it can afford to deal with the flurry of early litigation, and it already has a substantial moderation operation.

What it will do, however, is force rival platforms that may lack the resources available to a company with close to $29 billion in annual profit to fight wars on many fronts. Last year, Kosseff pointed to Yelp as an example, saying that if a restaurant gets a one-star review, under s230 Yelp doesn't have to do anything. If, however, the aggrieved restaurateur gets involved, Yelp will either need to go to court (an expensive, time-consuming ordeal) or delete the review (thereby reducing its usefulness). A cynic would, at this point, mention that Facebook has its own restaurant listings, which host their own reviews.

Sundar Pichai, meanwhile, is the only CEO who speaks out in favor of s230 as it currently stands, without any alteration. This is likely because Google is able to evade much of the criticism that Facebook receives, thanks in part to the way its ownership of YouTube is structured. Certainly, most of the questions directed at YouTube on Thursday were about its Kids product rather than its content moderation. (Earlier this week, The Washington Post asked why YouTube CEO Susan Wojcicki isn't given more regulatory scrutiny, given YouTube's role in spreading conspiracy theories and extremist content more generally.)

At Thursday's hearing, Dorsey let his frustration show more than either of his counterparts. He treated the proceedings with a degree of contempt and, as it wore on, made less effort to hide his feelings. Dorsey's responses came across as if he felt many of the questions directed to him were beneath him. That wasn't helped by the fact that the CEO was tweeting during the hearing, and liked several responses that were critical of the committee's questions. At one point, he said "I don't think we should be the arbiters of truth," before sharply adding that he didn't think the government should be either.

Dorsey's statement is the only one that doesn't mention s230 by name or make a proposal for its reform. His interest has moved toward Bitcoin, blockchain, and decentralization more generally, so perhaps he no longer sees the value in US legislation. His Bluesky project essentially places the power of the platform into the hands of its users, and of others who wish to get involved. That means there's less need for Twitter to find an acceptable way of reforming s230, since it's looking to build something entirely distinct.

You could suggest that each company's statement on s230 is a reflection of its general values and attitude. Facebook wants to tweak the law to potentially weaken competitors, Google is hoping not to make waves but won't shout for the status quo too loudly, while Twitter is already mentally elsewhere. Unfortunately for Zuckerberg, Pichai and Dorsey, none of those positions is likely to sate politicians who understand that something needs to change, but aren't sure what.

See the original post here:

No one agrees on how to fix Big Tech - Yahoo Tech


The Simplest Way to Rein In Facebook and Big Tech – Crooked


The scenes from the U.S. Capitol on January 6 were surreal for the insurrectionists too, I imagine.

After all, consider what they thought they were looking at: The theft of the presidency before their eyes, with the complicity of the traitorous vice president.

The motley mob looked familiar, only out of place, like a far-right Facebook group superimposed on a Capitol green screen. All the regulars were there: Proud Boys and Oath Keepers, QAnon adherents and Boogaloo Bois, anti-vaxxers, anti-maskers, and anti-Semites.

As one news anchor after another expressed shock, I kept involuntarily muttering, then yelling, the same thing: "What did you think was gonna happen?"

For months, President Trump and his allies told millions of devout followers that the election, and the country, had been hijacked. As the Big Lie cascaded across social media, his supporters began openly organizing in Facebook Groups to #StopTheSteal, swapping pictures of weaponry and finalizing plans to invade Washington.

For years before that, Facebook and YouTube had helped nurture extremist movements like QAnon, whose adherents eagerly await the execution of prominent Democrats in the reckoning they call The Storm, and Boogaloo, which agitates for a second Civil War.

For the better part of a decade, Trump and the tech giants at the center of our information ecosystem have been fueling tribalism, spreading conspiracy theories, radicalizing people, and ultimately eroding the shared reality upon which democracy depends.

Trump, of course, did it purely to advance his own interests. But here's what you need to understand about the tech giants: So did they.

The insurrection has rightfully been seen as an outgrowth of dark forces that Trump accelerated for political gain: racism, truth decay, political sectarianism. Yet it's worth examining the violent siege also as an outgrowth of an unchecked industry with a financial incentive to exploit our worst instincts; an industry that has been an accelerant of those festering evils in its own right.

The world's most profitable corporations don't become what they are without unflinching commitment to a business model. And while social media platforms may shape the global public discourse, they're also free for us, which is to say, we are not their customers. We are only valuable insofar as our attention and data serve their actual business: surveillance advertising.

Surveillance advertising is the practice of tracking and profiling individuals and groups, and then microtargeting ads at them based on their behavioral history, relationships, and identity.

Facebook, which took in a record $86 billion in revenue in 2020, made 98 percent of that revenue from selling these ads. Combined with Google (which owns YouTube), the two tech giants control over 60 percent of the entire U.S. digital-ad market, and their dominance rests on mountains of data they've extracted from users.

These companies track our every move to build digital dossiers: everything you've ever clicked, what you've done on other apps and sites, even your offline behavior. Then they curate the content you see, not just ads, but your newsfeeds, recommendations, notifications, and trends, using profit-driven algorithms that determine what will keep you clicking, so they can serve you more ads and collect more of your data.

The algorithms disproportionately amplify hate and disinformation, because that's what generates the most engagement. They introduce us to increasingly extreme content and communities to capture our attention. The more of our attention they capture, the more of it they can auction off to advertisers, who bid in real time to microtarget us, impression by impression. That's the business model.
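The per-impression auction described above can be sketched in a few lines. This is a hypothetical illustration of the common "second-price" real-time bidding design, in which the highest bidder wins the impression but pays the runner-up's bid; the advertiser names and dollar amounts are invented, not any platform's actual figures or API.

```python
# Minimal sketch of a second-price ad auction, as commonly used in
# real-time bidding. All names and numbers here are illustrative.

def run_auction(bids):
    """bids: dict of advertiser -> bid in dollars. Returns (winner, price)."""
    if not bids:
        return None, 0.0
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, top_bid = ranked[0]
    # Winner pays the runner-up's bid (or their own bid if unopposed).
    price = ranked[1][1] if len(ranked) > 1 else top_bid
    return winner, price

# One impression, three advertisers bidding on a user's behavioral profile.
winner, price = run_auction({"shoes_ad": 2.50, "crypto_ad": 1.75, "news_ad": 0.90})
print(winner, price)  # shoes_ad 1.75
```

This runs once per impression, which is why richer behavioral profiles translate directly into higher bids and more revenue per user.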

Echo chambers, radicalization, and viral conspiracy theories are features of these platforms, not bugs. Each user, and our entire discourse, is constantly manipulated to pad Big Tech's bottom line. Your doom-scrolling is their doom-profiteering.

The real costs of the business, however, are incalculable, as we saw on January 6. The deadly siege didn't begin in the heat of the moment, but after a process of nurturing collective delusion. Make no mistake: thousands of rioters truly believed they were the heroic patriots at the Capitol that day. Many proudly posted incriminating content during and after their siege, posts that surely generated exuberant engagement well before being sent to the FBI.

We cannot deplatform our way out of this information crisis. This is not a content problem, but a deep structural one. And if we dont fix it, the consequences will only worsen.

While more than enough lawmakers agree that change is necessary, they are divided across three main schools of thought: antitrust action ("break 'em up"), Section 230 reform ("hold 'em liable"), and privacy legislation ("keep 'em out"). Each of these approaches has merit, yet there is no consensus around which is best.

Ideally, Congress would advance on all of these fronts, in recognition that no single action is likely to be sufficient to rebuild a democratic culture steeped in shared reality.

But given that the surveillance-advertising business model sits at the nexus of these divergent paths, one step that should appeal to all factions would be a ban on surveillance advertising. It's a unifying call to action, one that four in five Americans support, and that has united a sweeping coalition of national and international advocacy organizations. Silicon Valley's own congresswoman, Rep. Anna Eshoo (D-CA), is apparently on board too, teasing forthcoming legislation at Thursday's Big Tech hearing.

There are plenty of ways to advertise effectively that aren't designed in ways that cause profound harm to society. It turns out placing ads next to relevant articles or search terms works just fine. That was the norm until quite recently, and until quite recently, America had enjoyed an unbroken streak of coup-free elections.

In 2018, Mark Zuckerberg explained: "One of the biggest issues social networks face is that, when left unchecked, people will engage disproportionately with more sensationalist and provocative content ... At scale it can undermine the quality of public discourse ... This engagement pattern seems to exist no matter where we draw the lines, so we need to change this incentive and not just remove content."

For once, Zuck was right: we need to change the incentives. Let's start by banning surveillance advertising.

Jesse Lehrich is a co-founder of Accountable Tech and former foreign policy spokesman for Hillary Clinton.

Originally posted here:

The Simplest Way to Rein In Facebook and Big Tech - Crooked


Ayn Rand-inspired ‘myth of the founder’ puts tremendous power in hands of Big Tech CEOs like Zuckerberg posing real risks to democracy – The…


Coinbase's plan to go public in April highlights a troubling trend among tech companies: Its founding team will maintain voting control, making it mostly immune to the wishes of outside investors.

The best-known U.S. cryptocurrency exchange is doing this by creating two classes of shares. One class will be available to the public. The other is reserved for the founders, insiders and early investors, and will wield 20 times the voting power of regular shares. That will ensure that after all is said and done, the insiders will control 53.5% of the votes.

Coinbase will join dozens of other publicly traded tech companies, many with household names such as Google, Facebook, DoorDash, Airbnb and Slack, that have issued two types of shares in an effort to retain control for founders and insiders. The reason this is becoming increasingly popular has a lot to do with Ayn Rand, one of Silicon Valley's favorite authors, and the myth of the founder her writings have helped inspire.

Engaged investors and governance experts like me generally loathe dual-class shares because they undermine executive accountability by making it harder to rein in a wayward CEO. I first stumbled upon this method executives use to limit the influence of pesky outsiders while working on my doctoral dissertation on hostile takeovers in the late 1980s.

But the risks of this trend are greater than simply entrenching bad management. Today, given the role tech companies play in virtually every corner of American life, it poses a threat to democracy as well.

Dual-class voting structures have been around for decades.

When Ford Motor Co. went public in 1956, its founding family used the arrangement to maintain 40% of the voting rights. Newspaper companies like The New York Times and The Washington Post often use the arrangement to protect their journalistic independence from Wall Street's insatiable demands for profitability.

In a typical dual-class structure, the company will sell one class of shares to the public, usually called class A shares, while founders, executives and others retain class B shares with enough voting power to maintain majority voting control. This allows the class B shareholders to determine the outcome of matters that come up for a shareholder vote, such as who is on the company's board.
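The arithmetic behind this arrangement is simple to work out. The sketch below uses hypothetical share counts and a 20:1 voting ratio like the one reported for Coinbase; it shows how insiders holding only a small minority of the shares can still command a comfortable majority of the votes.

```python
# Illustrative dual-class voting arithmetic. Share counts are invented;
# the 20x supervoting ratio mirrors the one described in the article.

def voting_control(insider_shares, public_shares, votes_per_insider_share=20):
    """Fraction of total votes controlled by class B (insider) holders,
    assuming class A shares carry one vote each."""
    insider_votes = insider_shares * votes_per_insider_share
    total_votes = insider_votes + public_shares
    return insider_votes / total_votes

# Insiders own just 10% of shares outstanding, yet control most votes.
share = voting_control(insider_shares=10_000_000, public_shares=90_000_000)
print(f"{share:.1%}")  # 69.0%
```

With a 20:1 ratio, insiders need under 5% of the shares outstanding to retain a voting majority, which is why these structures survive round after round of dilution.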

Advocates see a dual-class structure as a way to fend off short-term thinking. In principle, this insulation from investor pressure can allow the company to take a long-term perspective and make tough strategic changes even at the expense of short-term share price declines. Family-controlled businesses often view it as a way to preserve their legacy, which is why Ford remains a family company after more than a century.

It also makes a company effectively immune from hostile takeovers and the whims of activist investors.

But this insulation comes at a cost for investors, who lose a crucial check on management.

Indeed, dual-class shares essentially short-circuit almost all the other means that limit executive power. The board of directors, elected by shareholder vote, is the ultimate authority within the corporation that oversees management. Voting for directors and proposals on the annual ballot are the main methods shareholders have to ensure management accountability, other than simply selling their shares.

Recent research shows that the value and stock returns of dual-class companies are lower than those of other businesses, and they're more likely to overpay their CEO and waste money on expensive acquisitions.

Companies with dual-class shares rarely made up more than 10% of public listings in a given year until the 2000s, when tech startups began using them more frequently, according to data collected by University of Florida business professor Jay Ritter. The dam began to break after Facebook went public in 2012 with a dual-class stock structure that kept founder Mark Zuckerberg firmly in control; he alone controls almost 60% of the company.

In 2020, over 40% of tech companies that went public did so with two or more classes of shares with unequal voting rights.

This has alarmed governance experts, some investors and legal scholars.

If the dual-class structure is bad for investors, then why are so many tech companies able to convince them to buy their shares when they go public?

I attribute it to Silicon Valley's mythology of the founder, what I would dub an Ayn Rand theory of corporate governance that credits founders with superhuman vision and competence that merit deference from lesser mortals. Rand's novels, most notably Atlas Shrugged, portray an America in which titans of business hold up the world by creating innovation and value but are beset by "moochers" and "looters" who want to take or regulate what they have created.

Perhaps unsurprisingly, Rand has a strong following among tech founders, whose creative genius may be threatened by any form of outside regulation. Elon Musk, Coinbase founder Brian Armstrong and even the late Steve Jobs all have recommended Atlas Shrugged.

Her work is also celebrated by the venture capitalists who typically finance tech startups, many of whom were founders themselves.

The basic idea is simple: Only the founder has the vision, charisma and smarts to steer the company forward.

It begins with a powerful founding story. Michael Dell and Zuckerberg created their multibillion-dollar companies in their dorm rooms. Founding partner pairs Steve Jobs and Steve Wozniak and Bill Hewlett and David Packard built their first computer companies in the garage: Apple and Hewlett-Packard, respectively. Often the stories are true, but sometimes, as in Apple's case, less so.

And from there, founders face a gantlet of rigorous testing: recruiting collaborators, gathering customers and, perhaps most importantly, attracting multiple rounds of funding from venture capitalists. Each round serves to further validate the founders leadership competence.

The Founders Fund, a venture capital firm that has backed dozens of tech companies, including Airbnb, Palantir and Lyft, is one of the biggest proselytizers for this myth, as it makes clear in its manifesto.

"The entrepreneurs who make it have a near-messianic attitude and believe their company is essential to making the world a better place," it asserts. True to its stated belief, the fund says it has never removed a single founder, which is why it has been a big supporter of dual-class share structures.

Another venture capitalist who seems to favor giving founders extra power is Netscape founder Marc Andreessen. His venture capital firm Andreessen Horowitz is Coinbases biggest investor. And most of the companies in its portfolio that have gone public also used a dual-class share structure, according to my own review of their securities filings.

Giving founders voting control disrupts the checks and balances needed to keep business accountable and can lead to big problems.

WeWork founder Adam Neumann, for example, demanded unambiguous authority to fire or overrule any director or employee. As his behavior became increasingly erratic, the company hemorrhaged cash in the lead-up to its ultimately canceled initial public offering.

Investors forced out Uber's Travis Kalanick in 2017, but not before he's said to have created a workplace culture that allegedly allowed sexual harassment and discrimination to fester. When Uber finally went public in 2019, it shed its dual-class structure.

There is some evidence that founder-CEOs are less gifted at management than other kinds of leaders, and their companies' performance can suffer as a consequence.

But investors who buy shares in these companies know the risks going in. There's much more at stake than their money.

What happens when powerful, unconstrained founders control the most powerful companies in the world?

The tech sector is increasingly laying claim to central command posts of the U.S. economy. Americans' access to news and information, financial services, social networks and even groceries is mediated by a handful of companies controlled by a handful of people.

Recall that in the wake of the Jan. 6 Capitol insurrection, the CEOs of Facebook and Twitter were able to eject former President Donald Trump from his favorite means of communication, virtually silencing him overnight. And Apple, Google and Amazon cut off Parler, the right-wing social media platform used by some of the insurrectionists to plan their actions. Not all of these companies have dual-class shares, but this illustrates just how much power tech companies have over America's political discourse.

One does not have to disagree with their decision to see that a form of political power is becoming increasingly concentrated in the hands of companies with limited outside oversight.


Read more:

Ayn Rand-inspired 'myth of the founder' puts tremendous power in hands of Big Tech CEOs like Zuckerberg posing real risks to democracy - The...


The tough questions Congress should ask Big Tech CEOs this week – Yahoo Tech


Jack Dorsey, Sundar Pichai, and Mark Zuckerberg will face the House Commerce Committee on Thursday to answer questions about disinformation and misinformation on their platforms.(AP Photo/Jose Luis Magana, LM Otero, Jens Meyer)

Wednesday, March 24, 2021

This article was first featured in Yahoo Finance Tech, a weekly newsletter highlighting our original content on the industry.

Facebook (FB) CEO Mark Zuckerberg, Google (GOOG, GOOGL) CEO Sundar Pichai, and Twitter (TWTR) CEO Jack Dorsey will face another Congressional grilling on Thursday about lies on their platforms.

The hearing before the House Energy and Commerce Committee bills itself as addressing social media's role in promoting extremism and misinformation. In reality, lawmakers will likely target Washington's favorite bogeyman: Section 230 of the Communications Decency Act.

Section 230, considered the cornerstone of the modern internet, protects websites from liability for third-party content posted to their sites and allows them to moderate that content freely. While the law's fans say it allows the internet to function as a free marketplace of ideas, it was also passed back when Mark Zuckerberg was in middle school. It's worth re-examining in light of how the internet has changed since 1996.

But if the hearing resembles two other Big Tech hearings in Congress from the past year, it will devolve into political theater. That's because the law is blamed for, impossibly, doing two things at once. Many Republicans say sites use it to censor conservative content they don't agree with. Others on the opposite side of the aisle, however, say sites use the law to allow disinformation and anger to run rampant, ultimately benefiting their bottom lines.

Facebook CEO Mark Zuckerberg testifies remotely during a Senate Judiciary Committee hearing titled, "Breaking the News: Censorship, Suppression, and the 2020 Election" on Capitol Hill on November 17, 2020 in Washington, DC. (Photo by Hannah McKay-Pool/Getty Images)

If either side is going to have an actual discussion about the law, they're going to have to ask the CEOs difficult questions. They will have to probe the CEOs about how exactly they moderate their content, and ask them questions that force them to acknowledge their roles in real-world violence. Members of Congress will also have to ask themselves just what they want to change about Section 230.


Congress needs to kick things off by asking the CEOs serious questions about the algorithms their sites use to recommend content to their users.

Both Facebook and Google's YouTube have been criticized for using algorithms that guide users to more divisive and extreme content. According to The Wall Street Journal, Facebook ignored its own internal studies showing that the company's algorithm aggravated polarization on the platform.

In October, Facebook announced that it was suspending algorithmic recommendations for political groups ahead of the 2020 election. It made the change permanent following the Capitol attack.

Google CEO Sundar Pichai appears on a screen as he speaks remotely during a hearing before the Senate Commerce Committee on Capitol Hill, Wednesday, Oct. 28, 2020, in Washington. (Michael Reynolds/Pool via AP)

During prior hearings, the CEOs have fallen back on the popular refrain that their algorithms function as black boxes. They say they feed the algorithms information about users, and the algorithms spit out content. But a better look at how those algorithms work or the kind of content they favor would provide Congress with a greater understanding of why disinformation, misinformation, and hate speech spread across these platforms.

The way Big Tech moderates platforms can often seem arbitrary, even though they supposedly outline rules of conduct in their terms of service. Facebook, in particular, has faced criticism for allowing hate speech that seemingly violates its terms of service.

India McKinney, director of federal affairs at the Electronic Frontier Foundation, wants lawmakers to probe CEOs about how they make these decisions.

"They're not altruistic decisions ... and they're very clear about this," McKinney said. "Their mission is to make money for their shareholders. The questions are really around transparency, and why the businesses make the decisions they make."

Twitter CEO Jack Dorsey appears on a screen as he speaks remotely during a hearing before the Senate Commerce Committee on Capitol Hill, in Washington. (Michael Reynolds/Pool Photo via AP, File)

Experts have been asking about the decision-making processes behind companies' moderation practices for years, and for good reason. Facebook, for instance, has reportedly softened its stance on moderating content from professional agitators like Alex Jones, and it's crucial to understand how those companies make those decisions.

Social media sites have been widely accused of allowing former President Trump's supporters to plan and coordinate their Jan. 6 attack on the Capitol. Congress will need to bluntly ask the CEOs if they believe their sites play a role in real-world violence, and to what degree.

Facebook has also been linked to a slew of international incidents of violence, including attacks on Myanmar's Muslim Rohingya population, while misinformation on Facebook's WhatsApp has been blamed for gang killings in India.

More recently, hate speech and disinformation about the coronavirus have coalesced on social media sites like Facebook and Twitter, as well as fringe sites like 4chan, which, as The New York Times explains, has led to real-world violence against Asian Americans.

Even if Congress gets answers out of the CEOs, they'll still have to determine exactly what they want to change about Section 230.

"The core question that Congress needs to answer is to define for itself what the problem is, and then ask the services what they can do to fix that problem," Eric Goldman, associate dean for research and professor at Santa Clara University School of Law, told Yahoo Finance. "Since Congress doesn't have a good sense about what problems it wants to fix, it can never elicit information to answer its questions."

If Congress can't find common ground on how to fix Section 230, this hearing and others won't lead to any real change, no matter how many probing questions lawmakers ask.

By Daniel Howley, tech editor. Follow him at @DanielHowley

Go here to read the rest:

The tough questions Congress should ask Big Tech CEOs this week - Yahoo Tech


Why Are We Still Doing These Big Tech Hearings? Mother Jones – Mother Jones



On Thursday afternoon, the House Energy and Commerce Committee sat three executives of the most powerful companies in the world down and scolded them.

For roughly five hours, members of the House told Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai, and Twitter CEO Jack Dorsey that their companies were up to no good. A few lawmakers got in a sick dunk or two. And then the legislators promised the tech lords would soon face the consequences. Again.

If you pay attention to Congress a bit, you'll know that these kinds of hearings happen kind of often now. Lawmakers pillory the CEOs in (decreasingly) viral clips in the hearing. They threaten to bring down the hammer. And then no one brings down the hammer.

Today, hearing co-chair Rep. Jan Schakowsky (D-Ill.) promised impending legislation to regulate the companies. Committee chair Frank Pallone (D-N.J.) said that the time has come to hold online platforms accountable for their part in the rise of disinformation and extremism. But no bills that would do anything like that are really expected to pass soon.

This isn't exactly a big secret. Previous hearings have produced these headlines: "Two words describe the Senate's latest Big Tech hearing: Worthless and petty" (CNN); "How Congress missed another chance to hold big tech accountable" (The Verge); and, bluntest of all, "Why is Congress so dumb?" (a Washington Post opinion piece).

In the beginning, it made sense. For years, tech companies weren't called before Congress even as they became increasingly powerful. In 2018, when Mark Zuckerberg was set to testify on Capitol Hill for the first time concerning Russian interference during the 2016 election, it was a massive deal. At the time, I had a bum foot, and I asked my doctor about dates for surgery in the hope that we could find a day that would still allow me to cover it. As I explained my plan for medical leave following the surgery and covering the hearing, my editor nodded knowingly. "This is the Super Bowl of your beat," he said. I hobbled on crutches into a packed Senate room to cover it one day before my scheduled surgery. It felt like something big was about to happen, and as a tech reporter I felt like I couldn't miss it.

It turns out I could have, because for the next two and a half years that same hearing happened, with some differences in personnel and slightly different details, over and over. The formerly primetime event became a recurring daytime soap opera.

Several months go by, something bad happens, another hearing is announced. Sometimes legislation gets drafted. And it doesn't go anywhere. Work is done, papers are pushed around, very little is achieved, but everyone is very busy. It's almost as if everyone involved is more interested in creating an illusion of progress than in achieving any actual progress.

What worked in 2018 no longer seems to make any sense. Making tech companies feel pressured to make changes sounds nice, except they aren't handling the supposedly pressing issues that their lawmaker antagonists say they must handle immediately: disinformation that's damaging to public health, exacerbating violent extremism, and furthering voter suppression.

The hearings, in theory, could be good for holding people to account without legislation. But, as of now, they almost seem like a stand-in for creating legislation, instead of a tool of accountability paired with meaningful bills. They even allow the CEOs to get away with misleading lawmakers if they want to, like when Zuckerberg gave Rep. Alexandria Ocasio-Cortez (D-N.Y.) incorrect information about how Facebook picked the Daily Caller to be one of its fact-checking partners. Or companies give belated answers to specific questions that executives couldn't answer off the top of their heads.

The hearings also just give some lawmakers a chance to publicly gripe about anti-conservative bias, an argument that they're usually forced to make within the confines of the conservative media echo chamber, which is where it belongs until anyone can produce data suggesting that it is true, and not the product of sloppy moderation that also affects the left.

The farce is compounded by the fact that the lawmakers' staffs will continue to leave for six-figure-salary jobs on tech companies' government affairs teams (the dignified, rebranded term for lobbying), and use the connections they forged on Capitol Hill to help ensure that the status quo is preserved as much as possible.

It's possible that with a Democratic majority in both chambers, and President Biden tapping high-profile technology critics, something finally gives. But if they're interested in doing something, policymakers and regulators can just go do it. Hold the hearings if you must. But it's starting to feel like getting in the dunks is the only point, unless you actually do something.

See the article here:

Why Are We Still Doing These Big Tech Hearings? Mother Jones - Mother Jones

