{"id":1122385,"date":"2024-02-22T19:59:30","date_gmt":"2024-02-23T00:59:30","guid":{"rendered":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/uncategorized\/why-despite-all-the-hype-we-hear-ai-is-not-one-of-us-walter-bradley-center-for-natural-and-artificial-intelligence\/"},"modified":"2024-02-22T19:59:30","modified_gmt":"2024-02-23T00:59:30","slug":"why-despite-all-the-hype-we-hear-ai-is-not-one-of-us-walter-bradley-center-for-natural-and-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-general-intelligence\/why-despite-all-the-hype-we-hear-ai-is-not-one-of-us-walter-bradley-center-for-natural-and-artificial-intelligence\/","title":{"rendered":"Why, Despite All the Hype We Hear, AI Is Not One of Us &#8211; Walter Bradley Center for Natural and Artificial Intelligence"},"content":{"rendered":"<p><p>    Artificial Intelligence (AI) systems are inferencing    systems. They make decisions     based on information. Thats not a particularly    controversial point: inference is central to thinking. If AI    performs the right types of inference, at the right time, on    the right problem, we should view them as thinking machines.  <\/p>\n<p>    The problem is, AI currently performs the wrong type of    inference, on problems selected precisely because this type of    inference works well. Ive called this Big Data AI, because    the problems AI currently solves can only be cracked if very    large repositories of data are available to solve them. ChatGPT    is no exception  in fact, it drives the point home. Its a    continuation of previous innovations of Big Data AI taken to an    extreme. The AI scientists dream of general intelligence,    often referred to as Artificial General Intelligence (AGI),    remains as elusive as ever.  
<\/p>\n<p>    Computer scientists who were not specifically trained on    mathematical or philosophical logic probably dont think in    terms of inference. Still, it pervades everything we do. In a    nutshell, inference in the scientific sense is: given what I    know already, and what I see or observe around me, what is    proper to conclude? The conclusion is known as the    inference, and for any cognitive system its    ubiquitous.  <\/p>\n<p>    For humans, inferring something is like a condition of being    awake; we do it constantly, in conversation (what does she    mean?), when walking down a street (do I turn here?), and    indeed in having any thought where theres an implied question    at all. If you try to pay attention to your thoughts for one    day  one hour  youll quickly discover you cant count the    number of inferences your brain is making. Inference is    cognitive intelligence. Cognitive intelligence is inference.  <\/p>\n<p>    What difference have 21st-century    innovations made?  <\/p>\n<p>    In the last decade, the computer science community innovated    rapidly, and dramatically. These innovations are genuine and    importantmake no mistake. In 2012, a team at the University of    Toronto led by neural network guru Geoffrey    Hinton roundly defeated all competitors at a popular photo    recognition competition called     ImageNet. The task was to recognize images from a dataset    curated from fifteen million high resolution images on Flickr    and representing twenty-two thousand classes, or varieties of    photos (caterpillars, trees, cars, terrier dogs, etc.).  <\/p>\n<p>    The system, dubbed AlexNet, after Hintons graduate student    Alex Krizhevsky, who largely developed it, used a souped-up    version of an old technology: the artificial neural network    (ANN), or just neural network. Neural networks were developed    in rudimentary form in the 1950s, when AI had just begun. 
They had been gradually refined and improved over the decades, though they were generally thought to be of little value for much of AI's history.<\/p>\n<p>Moore's Law gave them a boost. As many know, Moore's Law isn't a law but an observation, made by Intel co-founder and CEO Gordon Moore in 1965: the number of transistors on a microchip doubles roughly every two years (the other part is that the cost of computers is also halved during that time). Neural networks are computationally expensive on very large datasets, and the catch-22 for many years was that very large datasets are the only datasets they work well on.<\/p>\n<p>But by the 2010s, the roughly accurate Moore's Law had made deep neural networks, known at that time as convolutional neural networks (CNNs), computationally practical. CPUs were swapped for the more mathematically powerful GPUs, also used in computer game engines, and suddenly CNNs were not just an option but the go-to technology for AI. Though all the competitors at ImageNet contests used some version of machine learning, a subfield of AI that is specifically inductive because it learns from prior examples or observations, the CNNs were found wholly superior once the hardware was in place to support the gargantuan computational requirements.<\/p>\n<p>The second major innovation occurred just two years later, when a well-known limitation of neural networks in general was solved, or at least partially solved: the limitation of overfitting. Overfitting happens when the neural network fits its training data too closely and doesn't adequately generalize to its unseen, or test, data. Overfitting is bad; it means the system isn't really learning the underlying rule or pattern in the data. It's like someone memorizing the answers to the test without really understanding the questions.
The overfitting problem bedeviled early attempts at using neural networks for problems like image recognition (CNNs are also used for face recognition, machine translation between languages, autonomous navigation, and a host of other useful tasks).<\/p>\n<p>In 2014, Geoff Hinton and his team developed a technique known as \"dropout,\" which helped solve the overfitting problem. While the public consumed the latest smartphones and argued, flirted, and chatted away on myriad social networks and technologies, real innovations on an old AI technology were taking place, all made possible by the powerful combination of talented scientists and engineers and increasingly powerful computing resources.<\/p>\n<p>There was a catch, however.<\/p>\n<p>Black Boxes and Blind Inferences<\/p>\n<p>Actually, there were two catches. One, it takes quite an imaginative computer scientist to believe that the neural network knows what it's classifying or identifying. It's a bunch of math in the background, and relatively simple math at that: mostly matrix multiplication, a technique learned by any undergraduate math student. There are other mathematical operations in neural networks, but it's still not string theory. It's the computation of the relatively simple math equations that counts, along with the overall design of the system. Thus, neural networks were performing cognitive feats while not really knowing they were performing anything at all.<\/p>\n<p>This brings us to the second problem, which ended up spawning an entire field itself, known as \"Explainable AI.\"<\/p>\n<p>Next: Because AIs don't know why they make decisions, they can't explain them to programmers.
<\/p>\n<p><!-- Auto Generated --><\/p>\n<p>Read the original here:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow noopener\" href=\"https:\/\/mindmatters.ai\/2024\/02\/why-despite-all-the-hype-we-hear-ai-is-not-one-of-us\/\" title=\"Why, Despite All the Hype We Hear, AI Is Not One of Us - Walter Bradley Center for Natural and Artificial Intelligence\">Why, Despite All the Hype We Hear, AI Is Not One of Us - Walter Bradley Center for Natural and Artificial Intelligence<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p> Artificial Intelligence (AI) systems are inferencing systems. They make decisions based on information. Thats not a particularly controversial point: inference is central to thinking.  <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/artificial-general-intelligence\/why-despite-all-the-hype-we-hear-ai-is-not-one-of-us-walter-bradley-center-for-natural-and-artificial-intelligence\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1214666],"tags":[],"class_list":["post-1122385","post","type-post","status-publish","format-standard","hentry","category-artificial-general-intelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1122385"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?po
st=1122385"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/1122385\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=1122385"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=1122385"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/tags?post=1122385"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}