{"id":1115022,"date":"2023-05-30T00:13:57","date_gmt":"2023-05-30T04:13:57","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/opinion-beyond-the-matrix-theory-of-the-human-mind-the-new-york-times\/"},"modified":"2023-05-30T00:13:57","modified_gmt":"2023-05-30T04:13:57","slug":"opinion-beyond-the-matrix-theory-of-the-human-mind-the-new-york-times","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/opinion-beyond-the-matrix-theory-of-the-human-mind-the-new-york-times\/","title":{"rendered":"Opinion | Beyond the Matrix Theory of the Human Mind &#8211; The New York Times"},"content":{"rendered":"<p>Imagine I told you in 1970 that I was going to invent a wondrous tool. This new tool would make it possible for anyone with access (and most of humanity would have access) to quickly communicate and collaborate with anyone else. It would store nearly the sum of human knowledge and thought up to that point, and all of it would be searchable, sortable and portable. Text could be instantly translated from one language to another, news would be immediately available from all over the world, and it would take no longer for a scientist to download a journal paper from 15 years ago than to flip to an entry in the latest issue.<\/p>\n<p>What would you have predicted this leap in information and communication and collaboration would do for humanity? How much faster would our economies grow?<\/p>\n<p>Now imagine I told you that I was going to invent a sinister tool. (Perhaps, while telling you this, I would cackle.) As people used it, their attention spans would degrade, as the tool would constantly shift their focus, weakening their powers of concentration and contemplation. 
This tool would show people whatever it is they found most difficult to look away from, which would often be what was most threatening about the world, from the worst ideas of their political opponents to the deep injustices of their society. It would fit in their pockets and glow on their night stands and never truly be quiet; there would never be a moment when people could be free of the sense that the pile of messages and warnings and tasks needed to be checked.<\/p>\n<p>What would you have thought this engine of distraction, division and cognitive fracture would do to humanity?<\/p>\n<p>Thinking of the internet in these terms helps solve an economic mystery. The embarrassing truth is that productivity growth (how much more we can make with the same number of people and factories and land) was far faster for much of the 20th century than it is now. We average about half the productivity growth rate today that we saw in the 1950s and '60s. That means stagnating incomes, sluggish economies and a political culture that's more about fighting over what we have than distributing the riches and wonders we've gained. So what went wrong?<\/p>\n<p>You can think of two ways the internet could have sped up productivity growth. The first way was obvious: by allowing us to do what we were already doing and do it more easily and quickly. And that happened. You can see a bump in productivity growth from roughly 1995 to 2005 as companies digitized their operations. But it's the second way that was always more important: By connecting humanity to itself and to nearly its entire storehouse of information, the internet could have made us smarter and more capable as a collective.<\/p>\n<p>I don't think that promise proved false, exactly. 
Even in working on this article, it was true for me: The speed with which I could find information, sort through research and contact experts is marvelous. Even so, I doubt I wrote this faster than I would have in 1970. Much of my mind was preoccupied by the constant effort needed just to hold a train of thought in a digital environment designed to distract, agitate and entertain me. And I am not alone.<\/p>\n<p>Gloria Mark, a professor of information science at the University of California, Irvine, and the author of \"Attention Span,\" started researching the way people used computers in 2004. The average time people spent on a single screen was 2.5 minutes. \"I was astounded,\" she told me. \"That was so much worse than I'd thought it would be.\" But that was just the beginning. By 2012, Mark and her colleagues found the average time on a single task was 75 seconds. Now it's down to about 47.<\/p>\n<p>This is an acid bath for human cognition. Multitasking is mostly a myth. We can focus on one thing at a time. \"It's like we have an internal whiteboard in our minds,\" Mark said. \"If I'm working on one task, I have all the info I need on that mental whiteboard. Then I switch to email. I have to mentally erase that whiteboard and write all the information I need to do email.\" And just like on a real whiteboard, there can be a residue in our minds. We may still be thinking of something from three tasks ago.<\/p>\n<p>The cost is in more than just performance. Mark and others in her field have hooked people to blood pressure machines and heart rate monitors and measured chemicals in the blood. The constant switching makes us stressed and irritable. I didn't exactly need experiments to prove that (I live that, and you probably do, too), but it was depressing to hear it confirmed.    
<\/p>\n<p>Which brings me to artificial intelligence. Here I'm talking about the systems we are seeing now: large language models like OpenAI's GPT-4 and Google's Bard. What these systems do, for the most part, is summarize information they have been shown and create content that resembles it. I recognize that sentence can sound a bit dismissive, but it shouldn't: That's a huge amount of what human beings do, too.<\/p>\n<p>Already, we are being told that A.I. is making coders and customer service representatives and writers more productive. At least one chief executive plans to add ChatGPT use in employee performance evaluations. But I'm skeptical of this early hype. It is measuring A.I.'s potential benefits without considering its likely costs, the same mistake we made with the internet.<\/p>\n<p>I worry we're headed in the wrong direction in at least three ways.<\/p>\n<p>One is that these systems will do more to distract and entertain than to focus. Right now, the large language models tend to hallucinate information: Ask them to answer a complex question, and you will receive a convincing, erudite response in which key facts and citations are often made up. I suspect this will slow their widespread use in important industries much more than is being admitted, akin to the way driverless cars have been tough to roll out because they need to be perfectly reliable rather than just pretty good.<\/p>\n<p>A question to ask about large language models, then, is where does trustworthiness not matter? Those are the areas where adoption will be fastest. An example from media is telling, I think. CNET, the technology website, quietly started using these models to write articles, with humans editing the pieces. But the process failed. 
Forty-one of the 77 A.I.-generated articles proved to have errors the editors missed, and CNET, embarrassed, paused the program. BuzzFeed, which recently shuttered its news division, is racing ahead with using A.I. to generate quizzes and travel guides. Many of the results have been shoddy, but it doesn't really matter. A BuzzFeed quiz doesn't have to be reliable.<\/p>\n<p>A.I. will be great for creating content where reliability isn't a concern. The personalized video games and children's shows and music mash-ups and bespoke images will be dazzling. And new domains of delight and distraction are coming: I believe we're much closer to A.I. friends, lovers and companions becoming a widespread part of our social lives than society is prepared for. But where reliability matters (say, a large language model devoted to answering medical questions or summarizing doctor-patient interactions), deployment will be more troubled, as oversight costs will be immense. The problem is that those are the areas that matter most for economic growth.<\/p>\n<p>Marcela Martin, BuzzFeed's president, encapsulated my next worry nicely when she told investors, \"Instead of generating 10 ideas in a minute, A.I. can generate hundreds of ideas in a second.\" She meant that as a good thing, but is it? Imagine that multiplied across the economy. Someone somewhere will have to process all that information. What will this do to productivity?<\/p>\n<p>One lesson of the digital age is that more is not always better. More emails and more reports and more Slacks and more tweets and more videos and more news articles and more slide decks and more Zoom calls have not led, it seems, to more great ideas. \"We can produce more information,\" Mark said. \"But that means there's more information for us to process. Our processing capability is the bottleneck.\"<\/p>\n<p>Email and chat systems like Slack offer useful analogies here. Both are widely used across the economy. Both were initially sold as productivity boosters, allowing more communication to take place faster. And as anyone who uses them knows, the productivity gains, though real, are more than matched by the cost of being buried under vastly more communication, much of it junk and nonsense.<\/p>\n<p>The magic of a large language model is that it can produce a document of almost any length in almost any style, with a minimum of user effort. Few have thought through the costs that will impose on those who are supposed to respond to all this new text. One of my favorite examples of this comes from The Economist, which imagined NIMBYs (but really, pick your interest group) using GPT-4 to rapidly produce a 1,000-page complaint opposing a new development. Someone, of course, will then have to respond to that complaint. Will that really speed up our ability to build housing?<\/p>\n<p>You might counter that A.I. will solve this problem by quickly summarizing complaints for overwhelmed policymakers, much as the increase in spam is (sometimes, somewhat) countered by more advanced spam filters. Jonathan Frankle, the chief scientist at MosaicML and a computer scientist at Harvard, described this to me as the \"boring apocalypse\" scenario for A.I., in which \"we use ChatGPT to generate long emails and documents, and then the person who received it uses ChatGPT to summarize it back down to a few bullet points, and there is tons of information changing hands, but all of it is just fluff. We're just inflating and compressing content generated by A.I.\"    
<\/p>\n<p>When we spoke, Frankle noted the magic of feeding a 100-page Supreme Court document into a large language model and getting a summary of the key points. But was that, he worried, a good summary? Many of us have had the experience of asking ChatGPT to draft a piece of writing and seeing a fully formed composition appear, as if by magic, in seconds.<\/p>\n<p>My third concern is related to that use of A.I.: Even if those summaries and drafts are pretty good, something is lost in the outsourcing. Part of my job is reading 100-page Supreme Court documents and composing crummy first drafts of columns. It would certainly be faster for me to have A.I. do that work. But the increased efficiency would come at the cost of new ideas and deeper insights.<\/p>\n<p>Our societywide obsession with speed and efficiency has given us a flawed model of human cognition that I've come to think of as the Matrix theory of knowledge. Many of us wish we could use the little jack from \"The Matrix\" to download the knowledge of a book (or, to use the movie's example, a kung fu master) into our heads, and then we'd have it, instantly. But that misses much of what's really happening when we spend nine hours reading a biography. It's the time inside that book spent drawing connections to what we know and having thoughts we would not otherwise have had that matters.<\/p>\n<p>\"Nobody likes to write reports or do emails, but we want to stay in touch with information,\" Mark said. \"We learn when we deeply process information. If we're removed from that and we're delegating everything to GPT (having it summarize and write reports for us), we're not connecting to that information.\"<\/p>\n<p>We understand this intuitively when it's applied to students. 
No one thinks that reading the SparkNotes summary of a great piece of literature is akin to actually reading the book. And no one thinks that if students have ChatGPT write their essays, they have cleverly boosted their productivity rather than lost the opportunity to learn. The analogy to office work is not perfect (there are many dull tasks worth automating so people can spend their time on more creative pursuits), but the dangers of overautomating cognitive and creative processes are real.<\/p>\n<p>These are old concerns, of course. Socrates questioned the use of writing (recorded, ironically, by Plato), worrying that if men learn this, \"it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves but by means of external marks.\" I think the trade-off here was worth it (I am, after all, a writer), but it was a trade-off. Human beings really did lose faculties of memory we once had.<\/p>\n<p>To make good on its promise, artificial intelligence needs to deepen human intelligence. And that means human beings need to build A.I., and build the workflows and office environments around it, in ways that don't overwhelm and distract and diminish us. We failed that test with the internet. Let's not fail it with A.I.    
<\/p>\n<p>Read the original post:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/www.nytimes.com\/2023\/05\/28\/opinion\/artificial-intelligence-thinking-minds-concentration.html\" title=\"Opinion | Beyond the Matrix Theory of the Human Mind - The New York Times\">Opinion | Beyond the Matrix Theory of the Human Mind - The New York Times<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Imagine I told you in 1970 that I was going to invent a wondrous tool. This new tool would make it possible for anyone with access (and most of humanity would have access) to quickly communicate and collaborate with anyone else. It would store nearly the sum of human knowledge and thought up to that point, and all of it would be searchable, sortable and portable. <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-intelligence\/opinion-beyond-the-matrix-theory-of-the-human-mind-the-new-york-times\/\">Continue reading <span 
class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187742],"tags":[],"class_list":["post-1115022","post","type-post","status-publish","format-standard","hentry","category-artificial-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1115022"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=1115022"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1115022\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1115022"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1115022"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1115022"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}