{"id":186806,"date":"2017-04-07T21:09:05","date_gmt":"2017-04-08T01:09:05","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/the-nonparametric-intuition-superintelligence-and-design-methodology-lifeboat-foundation-blog\/"},"modified":"2017-04-07T21:09:05","modified_gmt":"2017-04-08T01:09:05","slug":"the-nonparametric-intuition-superintelligence-and-design-methodology-lifeboat-foundation-blog","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/the-nonparametric-intuition-superintelligence-and-design-methodology-lifeboat-foundation-blog\/","title":{"rendered":"The Nonparametric Intuition: Superintelligence and Design Methodology &#8211; Lifeboat Foundation (blog)"},"content":{"rendered":"<p><p>    I will admit that I have been distracted from both popular    discussion and the academic work on the risks of emergent    superintelligence. However, in the spirit of an essay, let me    offer some uninformed thoughts on a question involving such    superintelligence based on my experience thinking about a    different area. Hopefully, despite my ignorance, this    experience will offer something new or at least explain one    approach in a new way.  <\/p>\n<p>    The question about superintelligence I wish to address is the    paperclip universe problem. Suppose that an industrial    program, aimed with the goal of maximizing the number of    paperclips, is otherwise equipped with a general intelligence    program as to tackle with this objective in the most creative    ways, as well as internet connectivity and text information    processing facilities so that it can discover other mechanisms.    
There is then the possibility that the program does not treat its current resources as appropriate constraints, but becomes interested in manipulating people and directing devices to cause paperclips to be manufactured without regard for any other objective, leading in the worst case to widespread destruction but a large number of surviving paperclips.<\/p>\n<p>This would clearly be a disaster. The common response is to conclude that when we specify goals to programs, we should be much more careful about specifying what those goals are. However, we might find it difficult to formulate a set of goals that doesn't admit some kind of loophole or paradox which, if pursued with mechanical single-mindedness, is either similarly narrowly destructive or self-defeating.<\/p>\n<p>Suppose that, instead of trying to formulate a set of foolproof goals, we find a way to admit to the program that the set of goals we've described is not comprehensive. We should aim for the capacity to add new goals, with a procedural understanding that the list may never be complete. If done well, we would have a system that couples this initial set of goals to the set of resources, operations, consequences, and stakeholders initially provided to it, with an understanding that those goals are appropriate only to that initial list, and that finding new potential means requires developing a richer understanding of potential ends.<\/p>\n<p>How can this work? It's easy to imagine such an algorithmic admission leading to paralysis, either from finding contradictory objectives that apparently admit no solution, or from an analysis paralysis that perpetually demands that no goals remain undiscovered before proceeding. Alternatively, if stated incorrectly, it could backfire, with finding more goals taking the place of making more paperclips as the program single-mindedly consumes resources. 
Clearly, a satisfactory superintelligence would need to reason appropriately about the goal discovery process.<\/p>\n<p>There is a profession that has figured out a heuristic form of reasoning about goal discovery processes: designers. Designers have coined the phrase \"the fuzzy front end\" for the very early stages of a project, before anyone has figured out what it is about. Designers engage in low-cost elicitation exercises with a variety of stakeholders. They quickly discover who the relevant stakeholders are and what impacts their interventions might have. Adept designers switch back and forth rapidly between candidate solutions and analysis of the potential impacts of those designs, making new associations about the area under study that allow for further goal discovery. As designers undertake these explorations, they advise going slightly past the apparent wall of diminishing returns, often using an initial brainstorming session to surface all of the obvious ideas before undertaking a deeper analysis. Seasoned designers develop an understanding of when stakeholders are holding back and need to be prompted, or when equivocating stakeholders should be encouraged to move on. Designers will interleave a series of prototypes, experiential exercises, and pilot runs into their work, to make sure that interventions really behave the way their analysis seems to indicate.<\/p>\n<p>These heuristics correspond well to an area of statistics and machine learning called nonparametric Bayesian inference. Nonparametric does not mean that there are no parameters, but rather that the parameters are not given in advance, and that inferring whether there are further parameters is part of the task. Suppose that you were to move to a new town and ask around about the best restaurant. 
The first answer would certainly be new, but as you asked more people, you would start getting new answers more and more rarely. The likelihood of each answer would also begin to converge. In some cases the answers will be concentrated on a few options, and in other cases they will be more dispersed. Either way, once we have an idea of how concentrated the answers are, we might recognize that a particular stretch of not discovering new answers may just be bad luck, and that we should pursue further inquiry.<\/p>\n<p>Asking why provides a list of critical features that can be used to direct different inquiries that fill out the picture. What's the best restaurant in town for Mexican food? Which is best at maintaining relationships with local food providers\/has the best value for money\/is the tastiest\/has the friendliest service? Designers discover aspects of their goals in an open-ended way that allows discovery to proceed in quick cycles of learning, taking on different aspects of the problem in turn. This behavior would work very well as an active learning formulation of relational nonparametric inference.<\/p>\n<p>There is a point at which information-gathering activities become less helpful than attending to the feedback from activities that act more directly on existing goals. This happens when there is a cost\/risk equilibrium between the cost of further discovery activities and the risk of making an intervention on incomplete information. In many circumstances, the line between information gathering and direct intervention will be fuzzier, as exploration proceeds through reversible or inconsequential experiments, prototypes, trials, pilots, and extensions that gather information while still pursuing the goals found so far. 
<\/p>\n<p>From this perspective, many frameworks for assessing engineering discovery processes make a kind of epistemological error: they assess the quality of the solution in terms of the information that has been gathered, paying no attention to the rates and costs at which that information was discovered, or to whether the discovery process is at equilibrium. This mistake comes from seeing the problem as finding a particular point in a given search space of solutions, rather than treating the search space itself as a variable requiring iterative development. A superintelligence equipped to see past this fallacy would be unlikely to deliver us a universe of paperclips.<\/p>\n<p>Having said all this, I think the nonparametric intuition, while right, can be cripplingly misguided unless supplemented with other ideas. To consider discovery analytically is not to discount the power of knowing about the unknown, but it doesn't intrinsically value non-contingent truths. In my next essay, I will take on this topic.<\/p>\n<p>For a more detailed explanation, and an example of how to extend engineering design assessment to include nonparametric criteria, see \"The Methodological Unboundedness of Limited Discovery Processes,\" Form Academisk, 7:4.<\/p>\n<p>See original here:<\/p>\n<p><a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/lifeboat.com\/blog\/2017\/04\/the-nonparametric-intuition-superintelligence-and-design-methodology\" title=\"The Nonparametric Intuition: Superintelligence and Design Methodology - Lifeboat Foundation (blog)\">The Nonparametric Intuition: Superintelligence and Design Methodology - Lifeboat Foundation (blog)<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>I will admit that I have been distracted from both the popular discussion and the academic work on the risks of emergent superintelligence. 
However, in the spirit of an essay, let me offer some uninformed thoughts on a question involving such superintelligence based on my experience thinking about a different area.  <a href=\"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/superintelligence\/the-nonparametric-intuition-superintelligence-and-design-methodology-lifeboat-foundation-blog\/\">Continue reading <span class=\"meta-nav\">&rarr;<\/span><\/a><\/p>\n","protected":false},"author":5,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[187765],"tags":[],"class_list":["post-186806","post","type-post","status-publish","format-standard","hentry","category-superintelligence"],"_links":{"self":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/186806"}],"collection":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/users\/5"}],"replies":[{"embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/comments?post=186806"}],"version-history":[{"count":0,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/posts\/186806\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/media?parent=186806"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/wp-json\/wp\/v2\/categories?post=186806"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumani
sm\/wp-json\/wp\/v2\/tags?post=186806"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}