{"id":185214,"date":"2017-03-29T11:03:00","date_gmt":"2017-03-29T15:03:00","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/the-future-of-free-speech-trolls-anonymity-and-fake-news-online-pew-research-centers-internet-and-american-life-project\/"},"modified":"2017-03-29T11:03:00","modified_gmt":"2017-03-29T15:03:00","slug":"the-future-of-free-speech-trolls-anonymity-and-fake-news-online-pew-research-centers-internet-and-american-life-project","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/freedom-of-speech\/the-future-of-free-speech-trolls-anonymity-and-fake-news-online-pew-research-centers-internet-and-american-life-project\/","title":{"rendered":"The Future of Free Speech, Trolls, Anonymity and Fake News Online &#8211; Pew Research Center&#8217;s Internet and American Life Project"},"content":{"rendered":"<p>Many experts fear uncivil and manipulative behaviors on the internet will persist and may get worse. This will lead to a splintering of social media into AI-patrolled and regulated safe spaces separated from free-for-all zones. Some worry this will hurt the open exchange of ideas and compromise privacy.<\/p>\n<p>The internet supports a global ecosystem of social interaction. Modern life revolves around the network, with its status updates, news feeds, comment chains, political advocacy, omnipresent reviews, rankings and ratings. For its first few decades, this connected world was idealized as an unfettered civic forum: a space where disparate views, ideas and conversations could constructively converge. Its creators were inspired by the optimism underlying Stewart Brand’s WELL in 1985, Tim Berners-Lee’s World Wide Web and Electronic Frontier Foundation co-founder John Perry Barlow’s 1996 “Declaration of Independence of Cyberspace.”
They expected the internet to create a level playing field for information sharing and communal activity among individuals, businesses, other organizations and government actors.<\/p>\n<p>“One of the biggest challenges will be finding an appropriate balance between protecting anonymity and enforcing consequences for the abusive behavior that has been allowed to characterize online discussions for far too long.” – Bailey Poland<\/p>\n<p>Since the early 2000s, the wider diffusion of the network, the dawn of Web 2.0 and social media’s increasingly influential impacts, and the maturation of strategic uses of online platforms to influence the public for economic and political gain have altered discourse. In recent years, prominent internet analysts and the public at large have expressed increasing concerns that the content, tone and intent of online interactions have undergone an evolution that threatens its future and theirs. Events and discussions unfolding over the past year highlight the struggles ahead. Among them:<\/p>\n<p>To illuminate current attitudes about the potential impacts of online social interaction over the next decade, Pew Research Center and Elon University’s Imagining the Internet Center conducted a large-scale canvassing of technology experts, scholars, corporate practitioners and government leaders. Some 1,537 responded to this effort between July 1 and Aug. 12, 2016 (prior to the late-2016 revelations about potential manipulation of public opinion via hacking of social media). They were asked:<\/p>\n<p>“In the next decade, will public discourse online become more or less shaped by bad actors, harassment, trolls, and an overall tone of griping, distrust, and disgust?”
<\/p>\n<p>In response to this question, 42% of respondents indicated that they expect “no major change” in the online social climate in the coming decade, and 39% said they expect the online future will be “more shaped” by negative activities. Those who said they expect the internet to be “less shaped” by harassment, trolling and distrust were in the minority; some 19% said this. Respondents were asked to elaborate on how they anticipate online interaction progressing over the next decade. (See “About this canvassing of experts” for further details about the limits of this sample.)<\/p>\n<p>Participants were also asked to explain their answers in a written elaboration and asked to consider the following prompts: 1) How do you expect social media and digital commentary will evolve in the coming decade? 2) Do you think we will see a widespread demand for technological systems or solutions that encourage more inclusive online interactions? 3) What do you think will happen to free speech? And 4) What might be the consequences for anonymity and privacy?<\/p>\n<p>While respondents expressed a range of opinions from deep concern to disappointment to resignation to optimism, most agreed that people – at their best and their worst – are empowered by networked communication technologies. Some said the flame wars and strategic manipulation of the zeitgeist might just be getting started if technological and human solutions are not put in place to bolster diverse civil discourse.<\/p>\n<p>A number of respondents predicted online reputation systems and much better security and moderation solutions will become near ubiquitous in the future, making it increasingly difficult for bad actors to act out disruptively.
Some expressed concerns that such systems – especially those that remove the ability to participate anonymously online – will result in an altered power dynamic between government\/state-level actors, the elites and regular citizens.<\/p>\n<p>Anonymity, a key affordance of the early internet, is an element that many in this canvassing said enables bad behavior and facilitates uncivil discourse in shared online spaces. The purging of user anonymity is seen as possibly leading to a more inclusive online environment – and also as setting the stage for governments and dominant institutions to even more freely employ surveillance tools to monitor citizens, suppress free speech and shape social debate.<\/p>\n<p>Most experts predicted that the builders of open social spaces on global communications networks will find it difficult to support positive change in cleaning up the real-time exchange of information and sharing of diverse ideologies over the next decade, as millions more people around the world become connected for the first time, and as many among the billions already online compete in an arms race of sorts to hack and subvert corrective systems.<\/p>\n<p>Those who believe the problems of trolling and other toxic behaviors can be solved say the cure might also be quite damaging. “One of the biggest challenges will be finding an appropriate balance between protecting anonymity and enforcing consequences for the abusive behavior that has been allowed to characterize online discussions for far too long,” explained expert respondent Bailey Poland, author of “Haters: Harassment, Abuse, and Violence Online.”
<\/p>\n<p>The majority in this canvassing were sympathetic to those abused or misled in the current online environment, while expressing concerns that the most likely solutions will allow governments and big businesses to employ surveillance systems that monitor citizens, suppress free speech and shape discourse via algorithms, allowing those who write the algorithms to sculpt civil debate.<\/p>\n<p>Susan Etlinger, an industry analyst at Altimeter Group, walked through a future scenario of tit-for-tat, action-reaction that ends in what she calls a “Potemkin internet.” She wrote: “In the next several years we will see an increase in the type and volume of bad behavior online, mostly because there will be a corresponding increase in digital activity. Cyberattacks, doxing, and trolling will continue, while social platforms, security experts, ethicists, and others will wrangle over the best ways to balance security and privacy, freedom of speech, and user protections. A great deal of this will happen in public view. The more worrisome possibility is that privacy and safety advocates, in an effort to create a more safe and equal internet, will push bad actors into more-hidden channels such as Tor. Of course, this is already happening, just out of sight of most of us. The worst outcome is that we end up with a kind of Potemkin internet in which everything looks reasonably bright and sunny, which hides a more troubling and less transparent reality.”
<\/p>\n<p>One other point of context for this non-representative sample of a particular population: While the question we posed was not necessarily aimed at getting people’s views about the role of political material in online social spaces, it inevitably drew commentary along those lines, because this survey was fielded in the midst of a bitter, intense election in the United States where one of the candidates, in particular, was a provocative user of Twitter.<\/p>\n<p>Most participants in this canvassing wrote detailed elaborations explaining their positions. Their well-considered comments provide insights about hopeful and concerning trends. They were allowed to respond anonymously, and many chose to do so.<\/p>\n<p>These findings do not represent all points of view possible, but they do reveal a wide range of striking observations. Respondents collectively articulated four key themes that are introduced and briefly explained below and then expanded upon in more-detailed sections.<\/p>\n<p>The following section presents a brief overview of the most evident themes extracted from the written responses, including a small selection of representative quotes supporting each point. Some responses are lightly edited for style or due to length.<\/p>\n<p>While some respondents saw issues with uncivil behavior online on somewhat of a plateau at the time of this canvassing in the summer of 2016, and a few expect solutions will cut hate speech, misinformation and manipulation, the vast majority shared at least some concerns that things could get worse; thus two of the four overarching themes of this report start with the phrase “Things will stay bad.”<\/p>\n<p>“The individual’s voice has a much higher perceived value than it has in the past. As a result, there are more people who will complain online in an attempt to get attention, sympathy, or retribution.” – Anonymous software engineer<\/p>\n<p>A number of expert respondents observed that negative online discourse is just the latest example of the many ways humans have exercised social vitriol for millennia. Jerry Michalski, founder at REX, wrote, “I would very much love to believe that discourse will improve over the next decade, but I fear the forces making it worse haven’t played out at all yet. After all, it took us almost 70 years to mandate seatbelts. And we’re not uniformly wise about how to conduct dependable online conversations, never mind debates on difficult subjects. In that long arc of history that bends toward justice, particularly given our accelerated times, I do think we figure this out. But not within the decade.”<\/p>\n<p>Vint Cerf, Internet Hall of Fame member, Google vice president and co-inventor of the Internet Protocol, summarized some of the harmful effects of disruptive discourse:<\/p>\n<p>“The internet is threatened with fragmentation,” he wrote. “People feel free to make unsupported claims, assertions, and accusations in online media. As things now stand, people are attracted to forums that align with their thinking, leading to an echo effect. This self-reinforcement has some of the elements of mob (flash-crowd) behavior. Bad behavior is somehow condoned because ‘everyone is doing it.’ It is hard to see where this phenomenon may be heading. Social media bring every bad event to our attention, making us feel as if they all happened in our back yards – leading to an overall sense of unease. The combination of bias-reinforcing enclaves and global access to bad actions seems like a toxic mix. It is not clear whether there is a way to counter-balance their socially harmful effects.”
<\/p>\n<p>An anonymous respondent commented, “The tone of discourse online is dictated by fundamental human psychology and will not easily be changed.” This statement reflects the attitude of expert internet technologists, researchers and pundits, most of whom agree that it is the people using the network, not the network itself, that are the root of the problem.<\/p>\n<p>Paul Jones, clinical professor and director of ibiblio.org at the University of North Carolina, Chapel Hill, commented, “The id unbound from the monitoring and control by the superego is both the originator of communication and the nemesis of understanding and civility.”<\/p>\n<p>John Cato, a senior software engineer, wrote, “Trolling for arguments has been an internet tradition since Usenet. Some services may be able to mitigate the problem slightly by forcing people to use their real identities, but wherever you have anonymity you will have people who are there just to make other people angry.”<\/p>\n<p>And an anonymous software engineer explained why the usual level of human incivility has been magnified by the internet, noting, “The individual’s voice has a much higher perceived value than it has in the past. As a result, there are more people who will complain online in an attempt to get attention, sympathy, or retribution.”<\/p>\n<p>Michael Kleeman, formerly with the Boston Consulting Group, Arthur D. Little and Sprint, now senior fellow at the Institute on Global Conflict and Cooperation at the University of California, San Diego, explained: “Historically, communities of practice and conversation had other, often physical, linkages that created norms of behavior. And actors would normally be identified, not anonymous. Increased anonymity, coupled with an increase in less-than-informed input, with no responsibility by the actors, has tended and will continue to create less open and honest conversations and more one-sided and negative activities.”<\/p>\n<p>“Trolls now know that their methods are effective and carry only minimal chance of social stigma and essentially no other punishment.” – Anonymous respondent<\/p>\n<p>An expert respondent who chose not to be identified commented, “People are snarky and awful online in large part because they can be anonymous.” And another such respondent wrote, “Trolls now know that their methods are effective and carry only minimal chance of social stigma and essentially no other punishment. If Gamergate can harass and dox any woman with an opinion and experience no punishment as a result, how can things get better?”<\/p>\n<p>Anonymously, a professor at the Massachusetts Institute of Technology (MIT) commented, “We see a dark current of people who equate free speech with the right to say anything, even hate speech, even speech that does not sync with respected research findings. They find in unmediated technology a place where their opinions can have a multiplier effect, where they become the elites.”<\/p>\n<p>Some leading participants in this canvassing said the tone of discourse will worsen in the next decade due to inequities and prejudice, noting wealth disparity, the hollowing out of the middle class, and homophily (the tendency of people to bond with those similar to themselves, and thus also at times to shun those seen as “the other”).<\/p>\n<p>“Unfortunately, I see the present prevalence of trolling as an expression of a broader societal trend across many developed nations, towards belligerent factionalism in public debate, with particular attacks directed at women as well as ethnic, religious, and sexual minorities.”
– Axel Bruns<\/p>\n<p>Cory Doctorow, writer, computer science activist-in-residence at MIT Media Lab and co-owner of Boing Boing, offered a bleak assessment, writing, “Thomas Piketty, etc., have correctly predicted that we are in an era of greater social instability created by greater wealth disparity, which can only be solved through either the wealthy collectively opting for a redistributive solution (which feels unlikely) or everyone else compelling redistribution (which feels messy, unstable, and potentially violent). The internet is the natural battleground for whatever breaking point we reach to play out, and it’s also a useful surveillance, control, and propaganda tool for monied people hoping to forestall a redistributive future. The Chinese internet playbook – the 50c army, masses of astroturfers, libel campaigns against enemies of the state, paranoid war-on-terror rhetoric – has become the playbook of all states, to some extent (see, e.g., the HBGary leak that revealed the U.S. Air Force was putting out procurement tenders for ‘persona management’ software that allowed their operatives to control up to 20 distinct online identities, each). That will create even more inflammatory dialogue, flamewars, polarized debates, etc.”<\/p>\n<p>And an anonymous professor at MIT remarked, “Traditional elites have lost their credibility because they have become associated with income inequality and social injustice. This dynamic has to shift before online life can play a livelier part in the life of the polity. I believe that it will, but slowly.”
<\/p>\n<p>Axel Bruns, a professor at the Queensland University of Technology’s Digital Media Research Centre, said, “Unfortunately, I see the present prevalence of trolling as an expression of a broader societal trend across many developed nations, towards belligerent factionalism in public debate, with particular attacks directed at women as well as ethnic, religious, and sexual minorities.”<\/p>\n<p>As billions more people are connected online and technologies such as AI chatbots, the Internet of Things, and virtual and augmented reality continue to mature, complexity is always on the rise. Some respondents said well-intentioned attempts to raise the level of discourse are less likely to succeed in a rapidly changing and widening information environment.<\/p>\n<p>“As more people get internet access – and especially smartphones, which allow people to connect 24\/7 – there will be increased opportunities for bad behavior.” – Jessica Vitak<\/p>\n<p>Matt Hamblen, senior editor at Computerworld, commented, “[By 2026] social media and other forms of discourse will include all kinds of actors who had no voice in the past; these include terrorists, critics of all kinds of products and art forms, amateur political pundits, and more.”<\/p>\n<p>An anonymous respondent wrote, “Bad actors will have means to do more, and more significant bad actors will be automated as bots are funded in extra-statial ways to do more damage – because people are profiting from this.”<\/p>\n<p>Jessica Vitak, an assistant professor at the University of Maryland, commented, “Social media’s affordances, including increased visibility and persistence of content, amplify the volume of negative commentary. As more people get internet access – and especially smartphones, which allow people to connect 24\/7 – there will be increased opportunities for bad behavior.”
<\/p>\n<p>Bryan Alexander, president of Bryan Alexander Consulting, added, “The number of venues will rise with the expansion of the Internet of Things and when consumer-production tools become available for virtual and mixed reality.”<\/p>\n<p>Many respondents said power dynamics push trolling along. The business model of social media platforms is driven by advertising revenues generated by engaged platform users. The more raucous and incendiary the material, at times, the more income a site generates. The more contentious a political conflict is, the more likely it is to be an attention-getter. Online forums lend themselves to ever-more hostile arguments.<\/p>\n<p>Frank Pasquale, professor of law at the University of Maryland and author of “Black Box Society,” commented, “The major internet platforms are driven by a profit motive. Very often, hate, anxiety and anger drive participation with the platform. Whatever behavior increases ad revenue will not only be permitted, but encouraged, excepting of course some egregious cases.”<\/p>\n<p>“It’s a brawl, a forum for rage and outrage. ... The more we come back, the more money they make off of ads and data about us. So the shouting match goes on.” – Andrew Nachison<\/p>\n<p>Kate Crawford, a well-known internet researcher studying how people engage with networked technologies, observed, “Distrust and trolling is happening at the highest levels of political debate, and the lowest. The Overton Window has been widened considerably by the 2016 U.S. presidential campaign, and not in a good way. We have heard presidential candidates speak of banning Muslims from entering the country, asking foreign powers to hack former White House officials, retweeting neo-Nazis. Trolling is a mainstream form of political discourse.”
<\/p>\n<p>Andrew Nachison, founder at We Media, said, “It’s a brawl, a forum for rage and outrage. It’s also dominated by social media platforms on the one hand and content producers on the other that collude and optimize for quantity over quality. Facebook adjusts its algorithm to provide a kind of quality – relevance for individuals. But that’s really a ruse to optimize for quantity. The more we come back, the more money they make off of ads and data about us. So the shouting match goes on. I don’t know that the prevalence of harassment and bad actors will change – it’s already bad – but if the overall tone is lousy, if the culture tilts negative, if political leaders popularize hate, then there’s good reason to think all of that will dominate the digital debate as well.”<\/p>\n<p>Several of the expert respondents said that because algorithmic solutions tend to reward that which keeps us agitated, it is especially damaging that the pre-internet news organizations that once employed fairly objective and well-trained (if not well-paid) armies of arbiters as democratic shapers of the defining climate of social and political discourse have fallen out of favor, replaced by creators of clickbait headlines read and shared by short-attention-span social sharers.<\/p>\n<p>“It is in the interest of the paid-for media and most political groups to continue to encourage echo-chamber thinking and to consider pragmatism and compromise as things to be discouraged.” – David Durant<\/p>\n<p>David Clark, a senior research scientist at MIT and Internet Hall of Famer, commented that he worries over the loss of character in the internet community. “It is possible, with attention to the details of design that lead to good social behavior, to produce applications that better regulate negative behavior,” he wrote.
“However, it is not clear what actor has the motivation to design and introduce such tools. The application space on the internet today is shaped by large commercial actors, and their goals are profit-seeking, not the creation of a better commons. I do not see tools for public discourse being good money makers, so we are coming to a fork in the road – either a new class of actor emerges with a different set of motivations, one that is prepared to build and sustain a new generation of tools, or I fear the overall character of discourse will decline.”<\/p>\n<p>An anonymous principal security consultant wrote, “As long as success – and in the current climate, profit as a common proxy for success – is determined by metrics that can be easily improved by throwing users under the bus, places that run public areas online will continue to do just that.”<\/p>\n<p>Steven Waldman, founder and CEO of LifePosts, said, “It certainly sounds noble to say the internet has democratized public opinion. But it’s now clear: It has given voice to those who had been voiceless because they were oppressed minorities and to those who were voiceless because they are crackpots. It may not necessarily be bad actors – i.e., racists, misogynists, etc. – who win the day, but I do fear it will be the more strident. I suspect there will be ventures geared toward counter-programming against this, since many people are uncomfortable with it. But venture-backed tech companies have a huge bias toward algorithmic solutions that have tended to reward that which keeps us agitated. Very few media companies now have staff dedicated to guiding conversations online.”<\/p>\n<p>John Anderson, director of journalism and media studies at Brooklyn College, wrote, “The continuing diminution of what Cass Sunstein once called ‘general-interest intermediaries’ such as newspapers, network television, etc. means we have reached a point in our society where wildly different versions of reality can be chosen and customized by people to fit their existing ideological and other biases. In such an environment there is little hope for collaborative dialogue and consensus.”<\/p>\n<p>David Durant, a business analyst at the U.K. Government Digital Service, argued, “It is in the interest of the paid-for media and most political groups to continue to encourage echo-chamber thinking and to consider pragmatism and compromise as things to be discouraged. While this trend continues, the ability for serious, civilized conversations about many topics will remain very hard to achieve.”<\/p>\n<p>The “weaponization” of social media and the capture of online belief systems, also known as “narratives,” emerged from obscurity in 2016 due to the perceived impact of social media use by terror organizations and political factions. Accusations of Russian influence via social media on the U.S. presidential election brought to public view the ways in which strategists of all stripes are endeavoring to influence people through the sharing of often false or misleading stories, photos and videos. “Fake news” moved to the forefront of ongoing discussions about the displacement of traditional media by social platforms. Earlier, in the summer of 2016, participants in this canvassing submitted concerns about misinformation in online discourse creating distorted views.<\/p>\n<p>“There’s money, power, and geopolitical stability at stake now; it’s not a mere matter of personal grumpiness from trolls.”
– Anonymous respondent<\/p>\n<p>Anonymously, a futurist, writer, and author at Wired explained, “New levels of cyberspace sovereignty and heavy-duty state and non-state actors are involved; there’s money, power, and geopolitical stability at stake now. It’s not a mere matter of personal grumpiness from trolls.”<\/p>\n<p>Karen Blackmore, a lecturer in IT at the University of Newcastle, wrote, “Misinformation and anti-social networking are degrading our ability to debate and engage in online discourse. When opinions based on misinformation are given the same weight as those of experts and propelled to create online activity, we tread a dangerous path. Online social behaviour, without community-imposed guidelines, is subject to many potentially negative forces. In particular, social online communities such as Facebook also function as marketing tools, where sensationalism is widely employed and community members who view this dialogue as their news source gain a very distorted view of current events and community views on issues. This is exacerbated with social network and search engine algorithms effectively sorting what people see to reinforce worldviews.”<\/p>\n<p>Laurent Schüpbach, a neuropsychologist at University Hospital in Zurich, focused his entire response about negative tone online on burgeoning acts of economic and political manipulation, writing, “The reason it will probably get worse is that companies and governments are starting to realise that they can influence people’s opinions that way. And these entities sure know how to circumvent any protection in place. Russian troll armies are a good example of something that will become more and more common in the future.”<\/p>\n<p>David Wuertele, a software engineer at Tesla Motors, commented, “Unfortunately, most people are easily manipulated by fear. Negative activities on the internet will exploit those fears, and disproportionate responses will also attempt to exploit those fears. Soon, everyone will have to take off their shoes and endure a cavity search before boarding the internet.”<\/p>\n<p>Most respondents said it is likely that the coming decade will see a widespread move to more-secure services, applications, and platforms and more robust user-identification policies. Some said people born into the social media age will adapt. Some predict that more online systems will require clear identification of participants. This means that online social forums could splinter into various formats, some of which are highly protected and monitored and others of which retain the free-for-all character of today’s platforms.<\/p>\n<p>Some experts in this canvassing say progress is already being made on some fronts toward better technological and human solutions.<\/p>\n<p>“The future Web will give people much better ways to control the information that they receive, which will ultimately make problems like trolling manageable.” – David Karger<\/p>\n<p>Galen Hunt, a research manager at Microsoft Research NExT, replied, “As language-processing technology develops, technology will help us identify and remove bad actors, harassment, and trolls from accredited public discourse.”<\/p>\n<p>Stowe Boyd, chief researcher at Gigaom, observed, “I anticipate that AIs will be developed that will rapidly decrease the impact of trolls. Free speech will remain possible, although AI filtering will make a major dent on how views are expressed, and hate speech will be blocked.”<\/p>\n<p>Marina Gorbis, executive director at the Institute for the Future, added, “I expect we will develop more social bots and algorithmic filters that would weed out some of the trolls and hateful speech. I expect we will create bots that would promote beneficial connections and potentially insert context-specific data\/facts\/stories that would benefit more positive discourse. Of course, any filters and algorithms will create issues around what is being filtered out and what values are embedded in algorithms.”<\/p>\n<p>Jean Russell of Thrivable Futures wrote, “First, conversations can have better containers that filter for real people who consistently act with decency. Second, software is getting better and more nuanced in sentiment analysis, making it easier for software to augment our filtering out of trolls. Third, we are at peak identity crisis, and a new wave of people want to cross the gap in dialogue to connect with others before the consequences of being tribal get worse (Brexit, Trump, etc.).”<\/p>\n<p>David Karger, a professor of computer science at MIT, said, “My own research group is exploring several novel directions in digital commentary. In the not too distant future all this work will yield results. Trolling, doxxing, echo chambers, click-bait, and other problems can be solved. We will be able to ascribe sources and track provenance in order to increase the accuracy and trustworthiness of information online. We will create tools that increase people’s awareness of opinions differing from their own and support conversations with and learning from people who hold those opinions. The future Web will give people much better ways to control the information that they receive, which will ultimately make problems like trolling manageable (trolls will be able to say what they want, but few will be listening).”<\/p>\n<p>“Technology will mediate who and what we see online more and more, so that we are drawn more toward communities with similar interests than those who are dissimilar.”
Lindsay Kenzig    <\/p>\n<p>    Facebook, Twitter, Instagram, Google and other platform providers already shape, and thus limit, what the public views via the implementation of algorithms. As people have become disenchanted with uncivil discourse on open platforms, they stop using them or close their accounts, sometimes moving to smaller online communities of people with similar needs or ideologies. Some experts expect that these trends will continue and that even more partitions, divisions and exclusions may emerge as measures are taken to clean things up. For instance, it is expected that the capabilities of AI-based bots dispatched to assist with information sorting, security, and regulation of the tone and content of discourse will continue to be refined.  <\/p>\n<p>    Lindsay Kenzig, a senior design researcher, said, “Technology will mediate who and what we see online more and more, so that we are drawn more toward communities with similar interests than those who are dissimilar. There will still be some places where you can find those with whom to argue, but they will be more concentrated into only a few locations than they are now.”  <\/p>\n<p>    Valerie Bock, of VCB Consulting, commented, “Spaces where people must post under their real names and where they interact with people with whom they have multiple bonds regularly have a higher quality of discourse. In response to this reality, we’ll see some consolidation as it becomes easier to shape commercial interactive spaces to the desired audience. There will be free-for-all spaces and more-tightly-moderated walled gardens, depending on the sponsors’ strategic goals. There will also be private spaces maintained by individuals and groups for specific purposes.”  
<\/p>\n<p>    Lisa Heinz, a doctoral student at Ohio University, commented, “Humanity’s reaction to negative forces will likely contribute more to the ever-narrowing filter bubble, which will continue to create an online environment that lacks inclusivity by its exclusion of opposing viewpoints. An increased demand for systemic internet-based AI will create bots that will begin to interact, as proxies for the humans that train them, with humans online in real time and with what would be recognized as conversational language, not the word-parroting bot behavior we see on Twitter now. When this happens, we will see bots become part of the filter-bubble phenomenon as a sort of mental bodyguard that prevents an intrusion of people and conversations of which individuals want no part. The unfortunate aspect of this iteration of the filter bubble is that while free speech itself will not be affected, people will project their voices into the chasm, but few will hear them.”  <\/p>\n<p>    Bob Frankston, internet pioneer and software innovator, wrote, “I see negative activities having an effect, but the effect will likely be from communities that shield themselves from the larger world. We’re still working out how to form and scale communities.”  <\/p>\n<p>    The expert comments in response to this canvassing were recorded in the summer of 2016; by early 2017, after many events (Brexit, the U.S. election, others mentioned earlier in this report) surfaced concerns about civil discourse, misinformation and impacts on democracy, an acceleration of activity tied to solutions emerged. Facebook, Twitter and Google announced some new efforts toward technological approaches; many conversations about creating new methods of support for public affairs journalism began to be undertaken; and consumer bubble-busting tools including Outside Your Bubble and Escape Your Bubble were introduced.  
<\/p>\n<p>    Some participants in this canvassing said they expect the already-existing continuous arms-race dynamic will expand, as some people create and apply new measures to ride herd over online discourse while others constantly endeavor to thwart them.  <\/p>\n<p>    Cathy Davidson, founding director of the Futures Initiative at the Graduate Center of the City University of New York, said, “We’re in a spy-vs.-spy internet world where the faster that hackers and trolls attack, the faster companies (Mozilla, thank you!) plus for-profits come up with ways to protect against them, and then the hackers develop new strategies against those protections, and so it goes. I don’t see that ending. I would not be surprised at more publicity in the future, as a form of cyber-terror. That’s different from trolls, more geo-politically orchestrated to force a national or multinational response. That is terrifying if we do not have sound, smart, calm leadership.”  <\/p>\n<p>    Sam Anderson, coordinator of instructional design at the University of Massachusetts, Amherst, said, “It will be an arms race between companies and communities that begin to realize (as some online games companies like Riot have) that toxic online communities will lower their long-term viability and potential for growth. This will war with incentives for short-term gains that can arise out of bursts of angry or sectarian activity (Twitter’s character limit inhibits nuance, which increases reaction and response).”  <\/p>\n<p>    A share of respondents said greater regulation of speech and technological solutions to curb harassment and trolling will result in more surveillance, censorship and cloistered communities. They worry this will change people’s sharing behaviors online, limit exposure to diverse ideas and challenge freedom.  
<\/p>\n<p>    While several respondents indicated that there is no longer a chance of anonymity online, many say privacy and choice are still options, and they should be protected.  <\/p>\n<p>      “Terrorism and harassment by trolls will be presented as the excuses, but the effect will be dangerous for democracy.”      Richard Stallman    <\/p>\n<p>    Longtime internet civil libertarian Richard Stallman, Internet Hall of Fame member and president of the Free Software Foundation, spoke to this fear. He predicted, “Surveillance and censorship will become more systematic, even in supposedly free countries such as the U.S. Terrorism and harassment by trolls will be presented as the excuses, but the effect will be dangerous for democracy.”  <\/p>\n<p>    Rebecca MacKinnon, director of Ranking Digital Rights at New America, wrote, “I’m very concerned about the future of free speech given current trends. The demands for governments and companies to censor and monitor internet users are coming from an increasingly diverse set of actors with very legitimate concerns about safety and security, as well as concerns about whether civil discourse is becoming so poisoned as to make rational governance based on actual facts impossible. I’m increasingly inclined to think that the solutions, if they ever come about, will be human\/social\/political\/cultural and not technical.”  <\/p>\n<p>    James Kalin of Virtually Green wrote, “Surveillance capitalism is increasingly grabbing and mining data on everything that anyone says, does, or buys online. The growing use of machine learning to process the data will drive ever more subtle and pervasive manipulation of our purchasing, politics, cultural attributes, and general behavior. On top of this, the data is being stolen routinely by bad actors who will also be using machine learning to steal or destroy things we value as individuals: our identities, privacy, money, reputations, property, elections, you name it. I see a backlash brewing, with people abandoning public forums and social network sites in favor of intensely private black forums and networks.”  <\/p>\n<p>    A number of respondents said they expect governments or other authorities will begin implementing regulation or other reforms to address these issues, most indicating that the competitive instincts of platform providers do not work in favor of the implementation of appropriate remedies without some incentive.  <\/p>\n<p>      “My fear is that because of the virtually unlimited opportunities for negative use of social media globally we will experience a rising worldwide demand for restrictive regulation.”      Paula Hooper Mayhew    <\/p>\n<p>    Michael Rogers, author and futurist at Practical Futurist, predicted governments will assume control over identifying internet users. He observed, “I expect there will be a move toward firm identities, even legal identities issued by nations, for most users of the Web. There will as a result be public discussion forums in which it is impossible to be anonymous. There would still be anonymity available, just as there is in the real world today. But there would be online activities in which anonymity was not permitted. Clearly this could have negative free-speech impacts in totalitarian countries but, again, there would still be alternatives for anonymity.”  <\/p>\n<p>    Paula Hooper Mayhew, a professor of humanities at Fairleigh Dickinson University, commented, “My fear is that because of the virtually unlimited opportunities for negative use of social media globally we will experience a rising worldwide demand for restrictive regulation. This response may work against support of free speech in the U.S.”  <\/p>\n<p>    Marc Rotenberg, executive director of the Electronic Privacy Information Center (EPIC), wrote, “The regulation of online communications is a natural response to the identification of real problems, the maturing of the industry, and the increasing expertise of government regulators.”  <\/p>\n<p>    John Markoff, senior writer at The New York Times, commented, “There is growing evidence that the Net is a polarizing force in the world. I don’t claim to completely understand the dynamic, but my surmise is that it is actually building more walls than it is tearing down.”  <\/p>\n<p>    Marcus Foth, a professor at Queensland University of Technology, said, “Public discourse online will become less shaped by bad actors, because the majority of interactions will take place inside walled gardens. Social media platforms hosted by corporations such as Facebook and Twitter use algorithms to filter, select, and curate content. With less anonymity and less diversity, the two biggest problems of the Web 1.0 era have been solved from a commercial perspective: fewer trolls who can hide behind anonymity. Yet what are we losing in the process? Algorithmic culture creates filter bubbles, which risk an opinion polarisation inside echo chambers.”  <\/p>\n<p>    Emily Shaw, a U.S. civic technologies researcher for mySociety, predicted, “Since social networks are the most likely future direction for public discourse, a million (self-)walled gardens are more likely to be the outcome than is an increase in hostility, because that’s what’s more commercially profitable.”  <\/p>\n<p>    Experts predict increased oversight and surveillance, left unchecked, could lead to dominant institutions and actors using their power to suppress alternative news sources, censor ideas, track individuals, and selectively block network access. 
This, in turn, could mean publics might never know what they are missing out on, since information will be filtered, removed, or concealed.  <\/p>\n<p>      “The fairness and freedom of the internet’s early days are gone. Now it’s run by big data, Big Brother, and big profits.”      Thorlaug Agustsdottir    <\/p>\n<p>    Thorlaug Agustsdottir of Iceland’s Pirate Party said, “Monitoring is and will be a massive problem, with increased government control and abuse. The fairness and freedom of the internet’s early days are gone. Now it’s run by big data, Big Brother, and big profits. Anonymity is a myth; it only exists for end-users who lack lookup resources.”  <\/p>\n<p>    Joe McNamee, executive director at European Digital Rights, said, “In the context of a political environment where deregulation has reached the status of ideology, it is easy for governments to demand that social media companies do more to regulate everything that happens online. We see this with the European Union’s code of conduct with social media companies. This privatisation of the regulation of free speech (in a context of huge, disproportionate, asymmetrical power due to the data stored and the financial reserves of such companies) raises existential questions for the functioning of healthy democracies.”  <\/p>\n<p>    Randy Bush, Internet Hall of Fame member and research fellow at Internet Initiative Japan, wrote, “Between troll attacks, chilling effects of government surveillance and censorship, etc., the internet is becoming narrower every day.”  <\/p>\n<p>    Dan York, senior content strategist at the Internet Society, wrote, “Unfortunately, we are in for a period where the negative activities may outshine the positive activities until new social norms can develop that push back against the negativity. It is far too easy right now for anyone to launch a large-scale public negative attack on someone through social media and other channels, and often to do so anonymously (or hiding behind bogus names). This can then be picked up by others and spread. The mob mentality can be easily fed, and there is little fact-checking or source-checking these days before people spread information and links through social media. I think this will cause some governments to want to step in to protect citizens and thereby potentially endanger both free speech and privacy.”  <\/p>\n<p>    This section features responses by several more of the many top analysts who participated in this canvassing. Following this wide-ranging set of comments on the topic will be a much more expansive set of quotations directly tied to the set of four themes.  <\/p>\n<p>See the original post here:<br \/>\n<a target=\"_blank\" href=\"http:\/\/www.pewinternet.org\/2017\/03\/29\/the-future-of-free-speech-trolls-anonymity-and-fake-news-online\/\" title=\"The Future of Free Speech, Trolls, Anonymity and Fake News Online - Pew Research Center's Internet and American Life Project\">The Future of Free Speech, Trolls, Anonymity and Fake News Online - Pew Research Center's Internet and American Life Project<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> Many experts fear uncivil and manipulative behaviors on the internet will persist and may get worse. This will lead to a splintering of social media into AI-patrolled and regulated safe spaces separated from free-for-all zones.  
<a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/freedom-of-speech\/the-future-of-free-speech-trolls-anonymity-and-fake-news-online-pew-research-centers-internet-and-american-life-project\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":3,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[162383],"tags":[],"class_list":["post-185214","post","type-post","status-publish","format-standard","hentry","category-freedom-of-speech"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/185214"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/3"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=185214"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/185214\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=185214"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=185214"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=185214"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}