{"id":196017,"date":"2017-06-01T22:39:56","date_gmt":"2017-06-02T02:39:56","guid":{"rendered":"http:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/hpe-and-the-machine-potentially-the-next-big-it-blockbuster-but-one-helluva-gamble-diginomica\/"},"modified":"2017-06-01T22:39:56","modified_gmt":"2017-06-02T02:39:56","slug":"hpe-and-the-machine-potentially-the-next-big-it-blockbuster-but-one-helluva-gamble-diginomica","status":"publish","type":"post","link":"https:\/\/www.euvolution.com\/prometheism-transhumanism-posthumanism\/mind-uploading\/hpe-and-the-machine-potentially-the-next-big-it-blockbuster-but-one-helluva-gamble-diginomica\/","title":{"rendered":"HPE and `The Machine&#8217;  potentially the next big IT blockbuster, but one helluva gamble &#8211; Diginomica"},"content":{"rendered":"<p><p>    So HP has now got as    far as announcing its first prototype of `The Machine, first    talked about towards the back end of last year. The beast is    real and, if the numbers surrounding it are to be believed (and    who am I to argue) it represents a significant step forward in    resources available and performance.  <\/p>\n<p>    For example, the prototype features 160 TBytes of memory spread    across 40 separate nodes connected using photonics links. And    as its architecture is designed squarely around in-memory    processing models, that means it is all available, all of the    time. According to the company, this allows the equivalent of    allowing simultaneous work on some 160 million books, or five    times the number of books in the US Library of Congress.  <\/p>\n<p>    But this is only a prototype and these numbers are, in the    great scheme of what HPE envisages for The Machine, really only    chicken feed. If its dreams come true, we are now staring at an    architecture that can easily scale to an Exabyte-scale,    single-memory system as it stands. Out into the future the    company is already talking mind-boggling numbers: how about    4,096 Yottabytes? (where a Yottabyte equals1024 bytes).    That, the company reckons, is the equivalent of 250,000 times    the entire digital universe that exists todayin a box.  <\/p>\n<p>      This is a new class of memory technology based on large,      persistent memory pools that can stretch right out to the      edge.    <\/p>\n<p>    That is the basic outline of it given by Andrew Wheeler, the    Deputy Director of the HPE Labs team that has developed the    architecture and the prototype. The interesting factor here is    that HPE has set out to develop an inclusive architecture,    rather than an exclusive buy-all-or-nothing approach. So when    it comes to working out at the edge, the devices used can be    whatever is extant and\/or appropriate for the specific task in    hand at that point.  <\/p>\n<p>    The system is based on an enhanced version of Linux, so the    ability to run Linux may even be the only requirement made on    such devices. So, while the prototype has been built on devices    developed by Cavium and based on ARM architectures, this does    not mean that everything out at the end needs to be based on    that same device.  <\/p>\n<p>      The premise of our Intelligenrt Edge design is that users      will want to do analytics processing as close to where the      data is generated. Take an application like video processing;      users wont want to be pushing all that data to some central      location for processing. That is just not sustainable or cost      effective. 
"This is a new class of memory technology based on large, persistent memory pools that can stretch right out to the edge."

That is the basic outline given by Andrew Wheeler, the Deputy Director of the HPE Labs team that has developed the architecture and the prototype. The interesting factor here is that HPE has set out to develop an inclusive architecture, rather than an exclusive, buy-all-or-nothing approach. So when it comes to working out at the edge, the devices used can be whatever is extant and/or appropriate for the specific task in hand at that point.

The system is based on an enhanced version of Linux, so the ability to run Linux may even be the only requirement made of such devices. So, while the prototype has been built on devices developed by Cavium and based on ARM architectures, this does not mean that everything out at the edge needs to be based on that same device.

"The premise of our Intelligent Edge design is that users will want to do analytics processing as close as possible to where the data is generated. Take an application like video processing; users won't want to be pushing all that data to some central location for processing. That is just not sustainable or cost effective. The question then is just: what is the processor relevant to getting the job done?"

So the idea is to do as much processing as close to the point of generation as possible. Ask the local device whether someone carrying a red backpack was spotted in a given time frame, rather than sending all the data to a central location and then processing it. It is only the results that are actually important and need uploading.

This does create another problem that Wheeler's team has been doing a lot of work on. Communicating between the core and the edge requires agents capable of ensuring that instructions are interpreted correctly, that relevant standards are adhered to, and that returned data is in a form that can be used immediately.
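HPE has published no API for that interaction, so purely as an illustration of the pattern Wheeler describes (send the question to the edge, bring back only the answer), here is a minimal Python sketch. Every name in it, from EdgeNode to the red-backpack query, is hypothetical.

```python
# Hypothetical sketch of the "query the edge, upload only results" pattern.
# None of these names come from HPE; they only illustrate the idea that raw
# video stays on the edge node and only compact result records travel back.

from dataclasses import dataclass
from datetime import datetime
from typing import List


@dataclass
class Sighting:
    camera_id: str
    timestamp: datetime
    confidence: float


class EdgeNode:
    """Stands in for a camera gateway doing local, in-memory analytics."""

    def __init__(self, node_id: str):
        self.node_id = node_id
        self._frames = []          # raw frames are buffered locally, never uploaded

    def query_sightings(self, description: str,
                        start: datetime, end: datetime) -> List[Sighting]:
        # In a real system this would run a local vision model over the
        # buffered frames; here we only show the shape of the interaction.
        return [s for s in self._run_local_model(description)
                if start <= s.timestamp <= end]

    def _run_local_model(self, description: str) -> List[Sighting]:
        return []                  # placeholder for on-node inference


def find_red_backpack(nodes: List[EdgeNode], start: datetime, end: datetime):
    # The core fans the question out and receives only small result records,
    # not terabytes of video.
    results = []
    for node in nodes:
        results.extend(node.query_sightings("person with red backpack", start, end))
    return results
```

The design point it illustrates is simply that the raw frames never leave the node; only the answers do.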
The primary goal, however, is to have an analytics space that is sufficiently large to hold both current and historical data at a scale that is currently not possible to achieve, and to get real-time results out of it. And because it is in-memory processing, all the latency introduced by taking data from disk to memory, memory to processor, processor to cache, back to processor (and repeat several times) and finally out to memory and then to disk is eliminated.

The next steps towards a real product include building up the growing set of hardware and software technologies that can now be engineered into 'products' and High Performance Computing road maps.

"The second step, having moved from simulations to emulators running on SuperDomes, and on to where we are now with this prototype, is that we now need to select the partners and customers that we want to land actual workloads on, to further increase our understanding. This will help us determine what will be the first real instantiation of what we would call 'The Machine'. I can tell you right now we have a pretty clear line of sight on how it can address problems in High Performance Computing and analytics work."

An obvious target here is SAP and its growing range of HANA-based applications. Wheeler agreed that HP has a long history of running SAP applications, and estimated it currently runs some 70% of all HANA-based applications. He would confirm nothing, of course, but it seems unlikely that SAP, and some of its customers, will fail to make that list of test subjects.

There are still so many questions to be answered about 'The Machine', some of which may yet kill it. For example, when asked about addressing Yottabytes of memory that is simultaneously processing in real time, his response was a classic of the scientific milieu.

"We have found some operating system issues with this in getting to the 160 TByte level. But we do have a conceptual handle on what is required to get to the Yottabyte level."

The big question, of course, is 'when', and while Wheeler was understandably reticent to give any indication, the signs are that the short version of the answer is 'not any time soon'. This, in turn, raises areas of speculation, some of quite a serious nature.

For example, while SAP has garnered some reasonable traction with its HANA in-memory processing technology, it is interesting that not too many others have really piled in behind it. This begs the question as to whether the technology is really only good for certain types of brute-force analytic applications.

That would explain why others, even those playing in the analytics space, are none too fussed about following the SAP lead.

Or is it a case that there are times when technologies and use cases have to coincide? It is not uncommon for early iterations of technologies to appear and then fade away, because the tech itself is not quite ready, or the functional need has not yet developed amongst users. Later, however, the time, the technology advances and the user need can all be right. Example? The mobile phone: it started out as a housebrick you could make and receive telephone calls on. But when it gained a camera and an internet connection, and could slip into a pocket, it became an extension of the 'self'.

Where is 'The Machine' on this scale? As a prototype it is difficult to say, and it is even more difficult to suggest when might be a good time for HPE to be ready with a product. Some of the answer will not even be in HPE's hands, for it will depend upon how well the legacy technologies hold out. Current commodity processors are really only pumped-up versions of the Intel 4004 processor chip introduced in 1971, and they work within a basic systems architecture first described by John von Neumann back in 1945.

Fundamentally, current 'stuff', just about all of it, is definably old. But it works, and generally works well. Is it time to replace it? Quite possibly, and it is possible to see much of current tech development work as just trusses, Band-Aids and other surgical appliances designed to keep those aged architectures hanging together.

But it is also possible to see just what HPE has riding on the future success of this in-memory architecture. The company has divested itself of its big systems/SI capabilities, as well as much of its middleware/software activities. It seems determined to be a leading technology developer and provider, with a strong emphasis on hardware to boot. Yet that, inevitably, puts it up against leaner, faster, lower-cost competition that might not have the same depth of experience and expertise, but will have more daring and far less risk aversion than HPE.

It is reasonable to suppose, therefore, that 'The Machine' will not appear as a product before three years have passed, more likely five. A lot of tech water will have passed under the bridge in that time, and it is quite possible that one of the small, smart companies will come up with an analytical tech that sits between what exists now and what HPE can eventually bring forth. If that is good enough, it might be the death of 'The Machine', and even of HPE.

If not, maybe CIOs need to start thinking, fantasising, about what they might want to achieve if they could analyse anything against any number of other anythings, in real time. Give it five years and it might be available.

For reasons I cannot defend by any justification other than that there lies the direction in which my knee doth jerk, I think 'The Machine' prototype marks the birth of the next big technology blockbuster.
But I also think HPE now has a tiger by the tail, and with the departure of so many other businesses which were, while maybe not desperately profitable, potentially resilient alternatives for the company, that tiger may well bite. The company now seems increasingly exposed as a mainly hardware tech business playing high-roller poker with an unknown, high-risk tech development as its stake.

See the original article here: http://diginomica.com/2017/06/01/hpe-machine-potentially-next-big-blockbuster-one-helluva-gamble/