Sealand The Mystery Solved – Part One – YouTube

Welcome to one of the few OFFICIAL short documentary programmes about the world's smallest Principality, the Principality of Sealand.

Go to part two: http://www.youtube.com/watch?v=cktpmy...

Commissioned by the Principality of Sealand itself and produced in 2003, this programme provides a fascinating insight into the operation, business, security, quality of life and many other aspects of this unique island fortress at that time.

Since its inception in the mid-sixties of the last century, Sealand has gained de facto recognition as a legitimate Principality in the eyes of the diverse and far-reaching international community of states.

Formerly known as The Roughs Tower, Sealand's superstructure comprises a man-made former World War 2 navy fortress that was sunk some 7 nautical miles off Harwich, in the North Sea off the coast of the UK.

Sitting firmly on its permanent pontoon base, it has its own territory that was declared and officially sealed in 1967, just before the British Government could change its legislation on UK territorial waters limits in its hurried yet failed attempt to deny Sealand any form of legitimate status!

The approximate location of the Principality is Latitude 51.53 N Longitude 01.28 E, in the Rough Shoals.

Although Sealand suffered a damaging fire in the middle of the last decade, and the opportunity was taken to refurbish much of the island afterwards, little has changed in Sealand since this programme was produced. The introduction of environmentally friendly wind-turbine equipment for supplementary power generation is one of the few significant enhancements made there in recent years.

So, come and take a look at this altogether unique, fascinating, remote and otherwise isolated place; a place like no other!


Liberty Taproom – Reading, PA – Yelp


We like coming here for a casual night out, and it's nice enough for a relaxed date night, but we can also bring the kids without feeling out of place. Went last week and had the chicken wild rice soup and Bavarian pretzel to start; both were amazing. Stromboli was good. Pittsburgh salad was very good as well; the steak came out very rare (asked for medium), but they were kind enough to bring a whole other salad with steak cooked medium for us. Kids' meals were standard. We also got wings, which seemed almost breaded, like fried chicken, rather than the buffalo wings we're used to. Very crispy but missed that true wing flavor. Love the beer list; wines also decent.

A great place for drinks and food and to relax from your day. Definitely a place for family and friends to get together anytime; the staff there are nice and friendly.

I have a mixed review. My brother and I visited yesterday and I had both a good and bad experience. The service was great; one bartender was a guy who obviously lifts weights [don't remember names] and the other was the young woman who has purple hair. Both wonderful! The problem was that some of my songs on the jukebox were being mysteriously skipped. I asked, "Why are my songs being skipped?" A guy next to me said management [owner?] is skipping them. I don't think it was patrons, as I saw no way in the app to skip someone else's song. I said to the guy next to me, "If this happens again I'm going to rip them a new one," and he said, "Please don't, that's my aunt," at least that's what I thought I heard. The food was great, the beer, the service, BUT if management is messing with song selections I find that to be some serious BS.

Good service and good food. It was extremely loud inside with the music during dinner time, so we had to ask to be moved outside to be able to even hear each other speak. We got beer and appetizers and everything was delicious! Will definitely go again!

Been here twice now and the best things here are the atmosphere and beer selection. The atmosphere is divey... very good for people watching. Had wings my first visit; they were OK, but nothing special... Second visit I was excited to try the JB Burger... with bacon and a fried egg... and was very disappointed... the burger itself was packed way too tight, like a hockey puck... and cooked through, even though I asked for medium... It was very dry and bland. The poor thing never had a chance; it was packed so tight I'm sure I could have thrown it to my buddy like a frisbee... A hand-formed patty should be on the brink of falling apart in your hands, not so dense you could use it as a coaster. The egg may have saved it, but it came on top of the cheese, which is a very thick slab that won't allow any yolk to penetrate... rookie mistake, I should have flipped the burger upside down and put the egg on the beef... but the real disappointment was the bacon... I pulled it off and tossed it; it seemed like it didn't even hit the grill. Very soft and chewy, not the crunchy and flavorful texture I want on a burger. I'm not sure who would want that on a burger; it was really gross! The parking lot makes life easy; just tell them you want crispy bacon and you should be OK.

Pet friendly! Extensive craft beer selection, outdoor patio where you can bring your dog, and a menu that at least tries to think outside the box. And that counts for something!

Let's talk about the dogs for a moment - this can either be a pro or a con. If dogs frighten you, then take heed - you may run in to some here. Now, most people won't even be aware that you're ALLOWED to bring your dog and sit on the patio, but in summer months, don't be surprised to see at least one or two canines on the deck. Locals know about the pet-friendly nature of this establishment, and they take full advantage of it.

The food gets an "A" for effort - there are some dishes on the menu that you just won't find in many other Berks County establishments. And in that regard, I like where their head's at. But at the same time, if you're going to leave your comfort zone, you'd better know what you're doing. Take, for example, the lobster roll. A New England mainstay, this iconic sandwich is often imitated, never duplicated. Hell, even McDonald's has rolled out their take this summer (https://www.thrillist.com/news/nation/lobster-rolls-return-to-some-mcdonalds-locations-for-2016).

There's a lobster roll on the menu here, and it's an abomination. The roll itself was a limp, soggy mess, but what should have been the star - ya know, the lobster - was a chalky, flavorless mess. Is there imitation lobster? I know there's imitation crab ("krab"), but at least the texture is somewhat similar. Perhaps humanity hasn't perfected imitation lobster just yet. Or maybe this was actual lobster but aged entirely too long (hint: aging lobster isn't a thing). Lobster meat should be a sumptuous delight. The lobster here has scarred me for life.

Ok, that's it for the negativity. Some standouts? The beer selection. This place earns every right to have "taproom" in its name. Craft beer enthusiasts will be in for a treat.

I also like their unique take on classic dishes. Take the pierogi appetizer, for example. No, it's not a dozen pathetically sized frozen pierogies thrown on a plate with a side of sour cream. Try to imagine a giant potato pancake, folded in half, topped with sauteed peppers, melty cheese, and smothered in jalapeno ranch and fresh chunky salsa. Pierogies, meet the Southwest.

Really pleasurable experience overall. The chefs can sometimes be a little overambitious for their own good, but I do appreciate their attempt to spruce up timeless classics.

I go most Mondays for wing night; they have excellent wings, although they are always inconsistent in how much sauce is used on them - sometimes they are almost dry and other times they are drenched. If they found that perfect medium they would be outstanding. Every employee I've met has been very nice and the bartender is 10/10.

We enjoyed our time at the Liberty Taproom. They have an extensive list of craft beers, so for those who love their craft beers this is the place to be. Their food is better than most here in Berks County. We had the Crab Pretzel, Poke Tuna and Cheese Steak. Out of the 3, I would recommend the Crab Pretzel because it is something different that you don't see at other restaurants. It is LOADED with real crab meat. I think it pairs pretty well with a dark beer. 🙂

The bartenders were nice but the clientele... THAT'S... what sets this place apart. My first time there, a group of guys bought me two shots (and no, it's not what you think). I'm noticeably gay but everyone at the bar was welcoming, friendly... down to earth. I will certainly be back (the DJ was killing it) and I will recommend it to my peeps, coworkers, etc. I love that it is close to home. Two thumbs up for Liberty!!!

So many delicious options. Some of my favorites are the tuna dishes and the hamburger. It is a great place for both lunch and dinner. Not only is the food good, but they have a wide selection of beers.

A great place to eat any time of the day or week! We come here often, mostly for the wings and beer, but recently strayed and tried their burgers, and we were both impressed to say the least. Their burgers are big and juicy, with plenty of various toppings to choose from like shoestring onions, blue cheese and maple bacon jam (maple bacon jam!). Way better than some of the chain burger places like Red Robin. Rotating beer selection is top notch, with approx 40 beers on tap, always something new, plus their large beer room for take out. Live music on the weekends, nice outdoor patio area, and plenty of TVs to watch all your local sports. A favorite, highly recommended. Usually gets busy on the weekends and many week nights, so call ahead for reservations.

Monday is all-you-can-eat wings night for $13. Combine that with their good selection of reasonably priced craft beers and you've got yourself a fun night. The bartenders are knowledgeable, attentive and friendly. The one downside with wing night is they started me off with a serving of 10 (you can mix and match) but later in the night they cut the serving down to 4. Kind of sucks if you want to sample the flavors. The bacon Sriracha was killer IMO.

I discovered Liberty Tap Room in the last year or so and have been back many times since. I have always had a positive experience. I always ask the bartenders or waitstaff what IPA is popular at the moment and I've always been happy with their recommendations. My glass is never empty, and the waitstaff are friendly and efficient. I think the wings are fried... but they are tasty. Try the boom boom sauce! I love the Liberty sandwich, which I think is grilled gruyere cheese and short rib... and anything else on the menu I have tried has been delicious. I like the atmosphere at the bar... a very casual, local favorite. If you haven't been there, try it. If you love beer, they have quite a selection.

They got a new chef a few months ago and what an improvement!! I had a pizza and it was delicious!! Despite having fried chicken and bacon on it, it was not at all greasy. The crust was cooked perfectly. I also had the giant pierogi, which was awesome, and my boyfriend's chicken cheese steak was excellent as well (fresh cut fries!). The beer selection is one of the best around. Prices are reasonable and they fill growlers too. We have been here a lot more frequently thanks to the new chef! Great food and great beer.

I grew up in the Reading area and was in town for a high school class reunion. Before I even suggested that we do a pre-reunion gathering here, a few people had already recommended this place for craft beer lovers like myself. So we had our little gathering here out on the patio, and I returned to check out the inside on July 4th.

The beer selection is amazing. The taps change a bit, so make sure you have the current menu. They have a wide variety of beer on tap in every style imaginable. In addition they have a bottle room, from which you can select various bottles to drink on-site or to take home. If you can't find a beer that interests you here, you are too picky.

The food is really good here. On my first visit, I had the blackened fish tacos, which were spicy. Just wish that maybe the fish could have been cut up. But the sauce and the toppings were great and provided a cool contrast to the spiciness of the fish. It also came with a salad that was beautifully presented. The next visit I had their wings, which I got a whiff of from the gentleman sitting next to me. The hot sauce isn't very hot to me... maybe for Berks County standards it is, but I've had hotter, so I asked for the XXX Hot on the side. That was worth it. Good flavor in a rather chunky sauce.

Inside they have 3 TVs on each side of the bar, and there are TVs scattered throughout the restaurant. The restaurant does seem a little dark. They also had an acoustic musician playing on Friday night. Outside, they have a really nice patio with its own bar and beer taps. Service was good, whether on the patio or inside.

If you are into craft beer, this is definitely a place to check out.

Great food, great service, great atmosphere. We actually tried this place due to Yelp and it was well worth the trip from our side of town. The wings are excellent; they make PJ Wantabees look and taste worse than we all know they do lol. They won't be giving Liberty Tavern's wings away at the wing bowl because they don't have to lol; they are too good to have to give away for free and they don't need the publicity! We can not find one thing on the menu that is not excellent! The beer was somewhat of a surprise, as it was less than cold, but we like our beer extra cold so perhaps it was just us. This is definitely the best place to watch sports, eat great food, and drink beer within the Lancaster, Reading, Pottstown area!

Great spot. Loved the original liberty in Allentown so was very excited to move to the area and find this one here! It's like a bigger and better version! Good food great beer selection!! Job well done!

Please try their Dirty Bird pizza! Otherwise the menu is stacked, you won't leave lacking anything in the savory department.

Great idea: 32 brews on tap with a focus on craft beer! What could possibly go wrong?

Nice building and a nice setup and vibe going on inside. The rectangular bar surrounding all those taps allows for plenty of seats at the bar. We only wheeled by because I love craft beer AND I heard there was a chicken wing special.

On the beer side: out of 32 taps, two were colored water (Miller Lite or similar) and virtually all the others were heavy dark beers - a result of the previous day's "Annual Dark Beer Day" - and were only available at a high price in a sippy-cup-size "goblet." C'mon guys, Dark Beer Day was YESTERDAY: normal beer drinkers might want something a little lighter. Note that the only non-dark beer was served in a 12 oz glass: a little small given that it was not particularly strong ("Hop Hands")... looks like a beer, only smaller. Looks like only the dishwater gets a full-size pint. It's beer, guys: put it in a beer glass.

The chicken wing special on Mondays is "AYCE," which is an acronym for All You Can Eat (should have been obvious but I'd never seen it before). Turns out that deal is not shareable, so we just ordered a dozen between the two of us. The wings were a huge disappointment. Dry, like they'd been pre-cooked and reheated. We asked for "regular buffalo style" - their idea of which is a heavily-vinegared pepper sauce with no detectable Red Hot or butter flavor. Maybe they're better when they're not "AYCE?"

Only one of the three bartenders was good at his job. One of the others walked around looking at the floor instead of the customers and the other was bumbling and focused on conversation with regulars.

I love places like this, and most of the places I've been to are well-run and do a great job, which is why we drove out of our way to come here. I hate to leave a poor review, but we had a mediocre experience and that falls on them, not me.

Lewis the bartender: short hair and a mustache, Caucasian. My wife, on a quiet night on the patio. No service, so she walks in. Wife - "Hi, I'm sitting on the patio." Bartender - "And." W - "I'd like to order some drinks." BT - "And." W - "Uhhhh, a martini." BT - "And." Service at its piss-poor bottom end. Music so loud inside she had to yell "keep the tab open" while we sit outside, and it sounds like a club inside. Gave 2 stars for this visit cause historically we have had good service, indicative of their good reviews. NOT today. Went to speak to a manager and of course none was on duty. Ordered our last drink, a Tito's and cranberry, and it tasted like water. This bartender with this description worked behind the bar at 12:30 AM on May 25, 2016. We got here May 24th around 11 pm. Sad. It's all about customer service and respect to the ladies.


Molecular nanotechnology – Wikipedia

Molecular nanotechnology (MNT) is a technology based on the ability to build structures to complex, atomic specifications by means of mechanosynthesis.[1] This is distinct from nanoscale materials. Based on Richard Feynman's vision of miniature factories using nanomachines to build complex products (including additional nanomachines), this advanced form of nanotechnology (or molecular manufacturing[2]) would make use of positionally-controlled mechanosynthesis guided by molecular machine systems. MNT would involve combining physical principles demonstrated by biophysics, chemistry, other nanotechnologies, and the molecular machinery of life with the systems engineering principles found in modern macroscale factories.

While conventional chemistry uses inexact processes obtaining inexact results, and biology exploits inexact processes to obtain definitive results, molecular nanotechnology would employ original definitive processes to obtain definitive results. The desire in molecular nanotechnology would be to balance molecular reactions in positionally-controlled locations and orientations to obtain desired chemical reactions, and then to build systems by further assembling the products of these reactions.

A roadmap for the development of MNT is an objective of a broadly based technology project led by Battelle (the manager of several U.S. National Laboratories) and the Foresight Institute.[3] The roadmap was originally scheduled for completion by late 2006, but was released in January 2008.[4] The Nanofactory Collaboration[5] is a more focused ongoing effort involving 23 researchers from 10 organizations and 4 countries that is developing a practical research agenda[6] specifically aimed at positionally-controlled diamond mechanosynthesis and diamondoid nanofactory development. In August 2005, a task force consisting of 50+ international experts from various fields was organized by the Center for Responsible Nanotechnology to study the societal implications of molecular nanotechnology.[7]

One proposed application of MNT is so-called smart materials. This term refers to any sort of material designed and engineered at the nanometer scale for a specific task. It encompasses a wide variety of possible commercial applications. One example would be materials designed to respond differently to various molecules; such a capability could lead, for example, to artificial drugs which would recognize and render inert specific viruses. Another is the idea of self-healing structures, which would repair small tears in a surface naturally in the same way as self-sealing tires or human skin.

An MNT nanosensor would resemble a smart material, involving a small component within a larger machine that would react to its environment and change in some fundamental, intentional way. A very simple example: a photosensor might passively measure the incident light and discharge its absorbed energy as electricity when the light passes above or below a specified threshold, sending a signal to a larger machine. Such a sensor would supposedly cost less and use less power than a conventional sensor, and yet function usefully in all the same applications; for example, turning on parking lot lights when it gets dark.
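As a loose illustration of the threshold behaviour just described, here is a toy control-logic sketch (ours, not from the source; the class name, lux values, and thresholds are all hypothetical). It adds a little hysteresis, a standard trick so the output does not flicker when the light level hovers near the threshold:

# Toy sketch of a threshold-triggered light sensor with hysteresis.
# All names and numbers are illustrative, not from the article.
class ThresholdPhotosensor:
    def __init__(self, on_below=20.0, off_above=30.0):
        # Signal turns ON when light drops below `on_below` (lux) and turns
        # OFF again only once the light rises above `off_above` (lux).
        self.on_below = on_below
        self.off_above = off_above
        self.signal = False  # True = "tell the larger machine to act"

    def read(self, lux):
        if not self.signal and lux < self.on_below:
            self.signal = True    # dusk: crossed below the lower threshold
        elif self.signal and lux > self.off_above:
            self.signal = False   # dawn: crossed back above the upper threshold
        return self.signal

sensor = ThresholdPhotosensor()
for lux in [80, 40, 25, 18, 22, 35, 90]:
    print(lux, "->", "lights on" if sensor.read(lux) else "lights off")

The gap between the two thresholds is the design choice that matters: a single cutoff would toggle the parking-lot lights on and off repeatedly around dusk.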

While smart materials and nanosensors both exemplify useful applications of MNT, they pale in comparison with the complexity of the technology most popularly associated with the term: the replicating nanorobot.

MNT nanofacturing is popularly linked with the idea of swarms of coordinated nanoscale robots working together, a popularization of an early proposal by K. Eric Drexler in his 1986 discussions of MNT, but superseded in 1992. In this early proposal, sufficiently capable nanorobots would construct more nanorobots in an artificial environment containing special molecular building blocks.

Critics have doubted both the feasibility of self-replicating nanorobots and the feasibility of control if self-replicating nanorobots could be achieved: they cite the possibility of mutations removing any control and favoring reproduction of mutant pathogenic variations. Advocates address the first doubt by pointing out that the first macroscale autonomous machine replicator, made of Lego blocks, was built and operated experimentally in 2002.[8] While there are sensory advantages present at the macroscale compared to the limited sensorium available at the nanoscale, proposals for positionally controlled nanoscale mechanosynthetic fabrication systems employ dead reckoning of tooltips combined with reliable reaction sequence design to ensure reliable results, hence a limited sensorium is no handicap; similar considerations apply to the positional assembly of small nanoparts. Advocates address the second doubt by arguing that bacteria are (of necessity) evolved to evolve, while nanorobot mutation could be actively prevented by common error-correcting techniques. Similar ideas are advocated in the Foresight Guidelines on Molecular Nanotechnology,[9] and a map of the 137-dimensional replicator design space[10] recently published by Freitas and Merkle provides numerous proposed methods by which replicators could, in principle, be safely controlled by good design.

However, the concept of suppressing mutation raises the question: How can design evolution occur at the nanoscale without a process of random mutation and deterministic selection? Critics argue that MNT advocates have not provided a substitute for such a process of evolution in this nanoscale arena where conventional sensory-based selection processes are lacking. The limits of the sensorium available at the nanoscale could make it difficult or impossible to winnow successes from failures. Advocates argue that design evolution should occur deterministically and strictly under human control, using the conventional engineering paradigm of modeling, design, prototyping, testing, analysis, and redesign.

In any event, technical proposals for MNT since 1992 have not included self-replicating nanorobots, and recent ethical guidelines put forth by MNT advocates prohibit unconstrained self-replication.[9][11]

One of the most important applications of MNT would be medical nanorobotics or nanomedicine, an area pioneered by Robert Freitas in numerous books[12] and papers.[13] The ability to design, build, and deploy large numbers of medical nanorobots would, at a minimum, make possible the rapid elimination of disease and the reliable and relatively painless recovery from physical trauma. Medical nanorobots might also make possible the convenient correction of genetic defects, and help to ensure a greatly expanded lifespan. More controversially, medical nanorobots might be used to augment natural human capabilities. One study has reported that conditions like tumors, arteriosclerosis, blood clots leading to stroke, accumulation of scar tissue, and localized pockets of infection could possibly be addressed by employing medical nanorobots.[14][15]

Another proposed application of molecular nanotechnology is "utility fog"[16] in which a cloud of networked microscopic robots (simpler than assemblers) would change its shape and properties to form macroscopic objects and tools in accordance with software commands. Rather than modify the current practices of consuming material goods in different forms, utility fog would simply replace many physical objects.

Yet another proposed application of MNT would be phased-array optics (PAO).[17] However, this appears to be a problem addressable by ordinary nanoscale technology. PAO would use the principle of phased-array millimeter technology but at optical wavelengths. This would permit the duplication of any sort of optical effect but virtually. Users could request holograms, sunrises and sunsets, or floating lasers as the mood strikes. PAO systems were described in BC Crandall's Nanotechnology: Molecular Speculations on Global Abundance in the Brian Wowk article "Phased-Array Optics."[18]

Molecular manufacturing is a potential future subfield of nanotechnology that would make it possible to build complex structures at atomic precision.[19] Molecular manufacturing requires significant advances in nanotechnology, but once achieved could produce highly advanced products at low costs and in large quantities in nanofactories weighing a kilogram or more.[19][20] When nanofactories gain the ability to produce other nanofactories production may only be limited by relatively abundant factors such as input materials, energy and software.[20]
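To make the scaling concrete, a back-of-the-envelope illustration (our arithmetic, not a figure from the cited sources): if each nanofactory can build one copy of itself per replication cycle, the factory count doubles every cycle,

\[ N(n) = N_0 \cdot 2^{n}, \]

so a single unit would yield \(2^{30} \approx 10^{9}\) factories after 30 cycles. Production then grows exponentially until feedstock, energy, or software, rather than manufacturing capacity itself, becomes the binding constraint.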

The products of molecular manufacturing could range from cheaper, mass-produced versions of known high-tech products to novel products with added capabilities in many areas of application. Some applications that have been suggested are advanced smart materials, nanosensors, medical nanorobots and space travel.[19] Additionally, molecular manufacturing could be used to cheaply produce highly advanced, durable weapons, which is an area of special concern regarding the impact of nanotechnology.[20] Being equipped with compact computers and motors, these could be increasingly autonomous and have a large range of capabilities.[20]

According to Chris Phoenix and Mike Treder from the Center for Responsible Nanotechnology as well as Anders Sandberg from the Future of Humanity Institute molecular manufacturing is the application of nanotechnology that poses the most significant global catastrophic risk.[20][21] Several nanotechnology researchers state that the bulk of risk from nanotechnology comes from the potential to lead to war, arms races and destructive global government.[20][21][22] Several reasons have been suggested why the availability of nanotech weaponry may with significant likelihood lead to unstable arms races (compared to e.g. nuclear arms races): (1) A large number of players may be tempted to enter the race since the threshold for doing so is low;[20] (2) the ability to make weapons with molecular manufacturing will be cheap and easy to hide;[20] (3) therefore lack of insight into the other parties' capabilities can tempt players to arm out of caution or to launch preemptive strikes;[20][23] (4) molecular manufacturing may reduce dependency on international trade,[20] a potential peace-promoting factor;[24] (5) wars of aggression may pose a smaller economic threat to the aggressor since manufacturing is cheap and humans may not be needed on the battlefield.[20]

Since self-regulation by all state and non-state actors seems hard to achieve,[25] measures to mitigate war-related risks have mainly been proposed in the area of international cooperation.[20][26] International infrastructure may be expanded, giving more sovereignty to the international level. This could help coordinate efforts for arms control.[27] International institutions dedicated specifically to nanotechnology (perhaps analogously to the International Atomic Energy Agency, IAEA) or general arms control may also be designed.[26] One may also jointly make differential technological progress on defensive technologies, a policy that players should usually favour.[20] The Center for Responsible Nanotechnology also suggests some technical restrictions.[28] Improved transparency regarding technological capabilities may be another important facilitator for arms control.[29]

Grey goo is another catastrophic scenario; it was proposed by Eric Drexler in his 1986 book Engines of Creation,[30] has been analyzed by Freitas in "Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations",[31] and has been a theme in mainstream media and fiction.[32][33] This scenario involves tiny self-replicating robots that consume the entire biosphere, using it as a source of energy and building blocks. Nanotech experts including Drexler now discredit the scenario. According to Chris Phoenix, "So-called grey goo could only be the product of a deliberate and difficult engineering process, not an accident".[34] With the advent of nano-biotech, a different scenario called green goo has been forwarded. Here, the malignant substance is not nanobots but rather self-replicating biological organisms engineered through nanotechnology.

Nanotechnology (or molecular nanotechnology, to refer more specifically to the goals discussed here) will let us continue the historical trends in manufacturing right up to the fundamental limits imposed by physical law. It will let us make remarkably powerful molecular computers. It will let us make materials over fifty times lighter than steel or aluminium alloy but with the same strength. We'll be able to make jets, rockets, cars or even chairs that, by today's standards, would be remarkably light, strong, and inexpensive. Molecular surgical tools, guided by molecular computers and injected into the blood stream, could find and destroy cancer cells or invading bacteria, unclog arteries, or provide oxygen when the circulation is impaired.

Nanotechnology will replace our entire manufacturing base with a new, radically more precise, radically less expensive, and radically more flexible way of making products. The aim is not simply to replace today's computer chip making plants, but also to replace the assembly lines for cars, televisions, telephones, books, surgical tools, missiles, bookcases, airplanes, tractors, and all the rest. The objective is a pervasive change in manufacturing, a change that will leave virtually no product untouched. Economic progress and military readiness in the 21st Century will depend fundamentally on maintaining a competitive position in nanotechnology.

[35]

Despite the current early developmental status of nanotechnology and molecular nanotechnology, much concern surrounds MNT's anticipated impact on economics[36][37] and on law. Whatever the exact effects, MNT, if achieved, would tend to reduce the scarcity of manufactured goods and make many more goods (such as food and health aids) manufacturable.

MNT should make possible nanomedical capabilities able to cure any medical condition not already cured by advances in other areas. Good health would be common, and poor health of any form would be as rare as smallpox and scurvy are today. Even cryonics would be feasible, as cryopreserved tissue could be fully repaired.

Molecular nanotechnology is one of the technologies that some analysts believe could lead to a technological singularity. Some feel that molecular nanotechnology would have daunting risks.[38] It conceivably could enable cheaper and more destructive conventional weapons. Also, molecular nanotechnology might permit weapons of mass destruction that could self-replicate, as viruses and cancer cells do when attacking the human body. Commentators generally agree that, in the event molecular nanotechnology were developed, its self-replication should be permitted only under very controlled or "inherently safe" conditions.

A fear exists that nanomechanical robots, if achieved, and if designed to self-replicate using naturally occurring materials (a difficult task), could consume the entire planet in their hunger for raw materials,[39] or simply crowd out natural life, out-competing it for energy (as happened historically when blue-green algae appeared and outcompeted earlier life forms). Some commentators have referred to this situation as the "grey goo" or "ecophagy" scenario. K. Eric Drexler considers an accidental "grey goo" scenario extremely unlikely and says so in later editions of Engines of Creation.

In light of this perception of potential danger, the Foresight Institute, founded by Drexler, has prepared a set of guidelines[40] for the ethical development of nanotechnology. These include the banning of free-foraging self-replicating pseudo-organisms on the Earth's surface, at least, and possibly in other places.

The feasibility of the basic technologies analyzed in Nanosystems has been the subject of a formal scientific review by the U.S. National Academy of Sciences, and has also been the focus of extensive debate on the internet and in the popular press.

In 2006, the U.S. National Academy of Sciences released the report of a study of molecular manufacturing as part of a longer report, A Matter of Size: Triennial Review of the National Nanotechnology Initiative.[41] The study committee reviewed the technical content of Nanosystems, and in its conclusion stated that no current theoretical analysis can be considered definitive regarding several questions of potential system performance, and that optimal paths for implementing high-performance systems cannot be predicted with confidence. It recommended experimental research to advance knowledge in this area; the committee's conclusion is quoted in full below.

A section heading in Drexler's Engines of Creation reads[42] "Universal Assemblers", and the following text speaks of multiple types of assemblers which, collectively, could hypothetically "build almost anything that the laws of nature allow to exist." Drexler's colleague Ralph Merkle has noted that, contrary to widespread legend,[43] Drexler never claimed that assembler systems could build absolutely any molecular structure. The endnotes in Drexler's book explain the qualification "almost": "For example, a delicate structure might be designed that, like a stone arch, would self-destruct unless all its pieces were already in place. If there were no room in the design for the placement and removal of a scaffolding, then the structure might be impossible to build. Few structures of practical interest seem likely to exhibit such a problem, however."

In 1992, Drexler published Nanosystems: Molecular Machinery, Manufacturing, and Computation,[44] a detailed proposal for synthesizing stiff covalent structures using a table-top factory. Diamondoid structures and other stiff covalent structures, if achieved, would have a wide range of possible applications, going far beyond current MEMS technology. An outline of a path was put forward in 1992 for building a table-top factory in the absence of an assembler. Other researchers have begun advancing tentative, alternative proposed paths [5] for this in the years since Nanosystems was published.

In 2004 Richard Jones wrote Soft Machines (Nanotechnology and Life), a book for lay audiences published by Oxford University Press. In this book he describes radical nanotechnology (as advocated by Drexler) as a deterministic/mechanistic idea of nano-engineered machines that does not take into account nanoscale challenges such as wetness, stickiness, Brownian motion, and high viscosity. He also explains what soft nanotechnology, or more appropriately biomimetic nanotechnology, is: the way forward, if not the best way, to design functional nanodevices that can cope with all the problems at the nanoscale. One can think of soft nanotechnology as the development of nanomachines that use the lessons learned from biology on how things work, chemistry to precisely engineer such devices, and stochastic physics to model the system and its natural processes in detail.

Several researchers, including Nobel Prize winner Dr. Richard Smalley (1943–2005),[45] attacked the notion of universal assemblers, leading to a rebuttal from Drexler and colleagues,[46] and eventually to an exchange of letters.[47] Smalley argued that chemistry is extremely complicated, reactions are hard to control, and that a universal assembler is science fiction. Drexler and colleagues, however, noted that Drexler never proposed universal assemblers able to make absolutely anything, but instead proposed more limited assemblers able to make a very wide variety of things. They challenged the relevance of Smalley's arguments to the more specific proposals advanced in Nanosystems. Smalley also argued that nearly all of modern chemistry involves reactions that take place in a solvent (usually water), because the small molecules of a solvent contribute many things, such as lowering binding energies for transition states. Since nearly all known chemistry requires a solvent, Smalley felt that Drexler's proposal to use a high-vacuum environment was not feasible. However, Drexler addresses this in Nanosystems by showing mathematically that well-designed catalysts can provide the effects of a solvent and can fundamentally be made even more efficient than a solvent/enzyme reaction could ever be. It is noteworthy that, contrary to Smalley's opinion that enzymes require water, "Not only do enzymes work vigorously in anhydrous organic media, but in this unnatural milieu they acquire remarkable properties such as greatly enhanced stability, radically altered substrate and enantiomeric specificities, molecular memory, and the ability to catalyse unusual reactions."[48]

For the future, some means have to be found for MNT design evolution at the nanoscale which mimics the process of biological evolution at the molecular scale. Biological evolution proceeds by random variation in ensemble averages of organisms combined with culling of the less-successful variants and reproduction of the more-successful variants, and macroscale engineering design also proceeds by a process of design evolution from simplicity to complexity, as set forth somewhat satirically by John Gall: "A complex system that works is invariably found to have evolved from a simple system that worked. . . . A complex system designed from scratch never works and can not be patched up to make it work. You have to start over, beginning with a system that works." [49] A breakthrough in MNT is needed which proceeds from the simple atomic ensembles which can be built with, e.g., an STM to complex MNT systems via a process of design evolution. A handicap in this process is the difficulty of seeing and manipulating at the nanoscale compared to the macroscale, which makes deterministic selection of successful trials difficult; in contrast, biological evolution proceeds via the action of what Richard Dawkins has called the "blind watchmaker",[50] comprising random molecular variation and deterministic reproduction/extinction.

At present (in 2007) the practice of nanotechnology embraces both stochastic approaches (in which, for example, supramolecular chemistry creates waterproof pants) and deterministic approaches, wherein single molecules (created by stochastic chemistry) are manipulated on substrate surfaces (created by stochastic deposition methods) by deterministic methods comprising nudging them with STM or AFM probes and causing simple binding or cleavage reactions to occur. The dream of a complex, deterministic molecular nanotechnology remains elusive. Since the mid-1990s, thousands of surface scientists and thin film technocrats have latched on to the nanotechnology bandwagon and redefined their disciplines as nanotechnology. This has caused much confusion in the field and has spawned thousands of "nano"-papers in the peer-reviewed literature. Most of these reports are extensions of the more ordinary research done in the parent fields.

The feasibility of Drexler's proposals largely depends, therefore, on whether designs like those in Nanosystems could be built in the absence of a universal assembler to build them and would work as described. Supporters of molecular nanotechnology frequently claim that no significant errors have been discovered in Nanosystems since 1992. Even some critics concede[51] that "Drexler has carefully considered a number of physical principles underlying the 'high level' aspects of the nanosystems he proposes and, indeed, has thought in some detail" about some issues.

Other critics claim, however, that Nanosystems omits important chemical details about the low-level 'machine language' of molecular nanotechnology.[52][53][54][55] They also claim that much of the other low-level chemistry in Nanosystems requires extensive further work, and that Drexler's higher-level designs therefore rest on speculative foundations. Recent such further work by Freitas and Merkle [56] is aimed at strengthening these foundations by filling the existing gaps in the low-level chemistry.

Drexler argues that we may need to wait until our conventional nanotechnology improves before solving these issues: "Molecular manufacturing will result from a series of advances in molecular machine systems, much as the first Moon landing resulted from a series of advances in liquid-fuel rocket systems. We are now in a position like that of the British Interplanetary Society of the 1930s which described how multistage liquid-fueled rockets could reach the Moon and pointed to early rockets as illustrations of the basic principle."[57] However, Freitas and Merkle argue [58] that a focused effort to achieve diamond mechanosynthesis (DMS) can begin now, using existing technology, and might achieve success in less than a decade if their "direct-to-DMS approach is pursued rather than a more circuitous development approach that seeks to implement less efficacious nondiamondoid molecular manufacturing technologies before progressing to diamondoid".

To summarize the arguments against feasibility: First, critics argue that a primary barrier to achieving molecular nanotechnology is the lack of an efficient way to create machines on a molecular/atomic scale, especially in the absence of a well-defined path toward a self-replicating assembler or diamondoid nanofactory. Advocates respond that a preliminary research path leading to a diamondoid nanofactory is being developed.[6]

A second difficulty in reaching molecular nanotechnology is design. Hand design of a gear or bearing at the level of atoms might take a few to several weeks. While Drexler, Merkle and others have created designs of simple parts, no comprehensive design effort for anything approaching the complexity of a Model T Ford has been attempted. Advocates respond that it is difficult to undertake a comprehensive design effort in the absence of significant funding for such efforts, and that despite this handicap much useful design-ahead has nevertheless been accomplished with new software tools that have been developed, e.g., at Nanorex.[59]

In the latest report, A Matter of Size: Triennial Review of the National Nanotechnology Initiative,[41] put out by the National Academies Press in December 2006 (roughly twenty years after Engines of Creation was published), no clear way forward toward molecular nanotechnology could yet be seen, as per the conclusion on page 108 of that report: "Although theoretical calculations can be made today, the eventually attainable range of chemical reaction cycles, error rates, speed of operation, and thermodynamic efficiencies of such bottom-up manufacturing systems cannot be reliably predicted at this time. Thus, the eventually attainable perfection and complexity of manufactured products, while they can be calculated in theory, cannot be predicted with confidence. Finally, the optimum research paths that might lead to systems which greatly exceed the thermodynamic efficiencies and other capabilities of biological systems cannot be reliably predicted at this time. Research funding that is based on the ability of investigators to produce experimental demonstrations that link to abstract models and guide long-term vision is most appropriate to achieve this goal." This call for research leading to demonstrations is welcomed by groups such as the Nanofactory Collaboration who are specifically seeking experimental successes in diamond mechanosynthesis.[60] The "Technology Roadmap for Productive Nanosystems"[61] aims to offer additional constructive insights.

It is perhaps interesting to ask whether or not most structures consistent with physical law can in fact be manufactured. Advocates assert that to achieve most of the vision of molecular manufacturing it is not necessary to be able to build "any structure that is compatible with natural law." Rather, it is necessary to be able to build only a sufficient (possibly modest) subset of such structures, as is true, in fact, of any practical manufacturing process used in the world today, and is true even in biology. In any event, as Richard Feynman once said, "It is scientific only to say what's more likely or less likely, and not to be proving all the time what's possible or impossible."[62]

There is a growing body of peer-reviewed theoretical work on synthesizing diamond by mechanically removing/adding hydrogen atoms [63] and depositing carbon atoms [64][65][66][67][68][69] (a process known as mechanosynthesis). This work is slowly permeating the broader nanoscience community and is being critiqued. For instance, Peng et al. (2006)[70] (in the continuing research effort by Freitas, Merkle and their collaborators) reports that the most-studied mechanosynthesis tooltip motif (DCB6Ge) successfully places a C2 carbon dimer on a C(110) diamond surface at both 300K (room temperature) and 80K (liquid nitrogen temperature), and that the silicon variant (DCB6Si) also works at 80K but not at 300K. Over 100,000 CPU hours were invested in this latest study. The DCB6 tooltip motif, initially described by Merkle and Freitas at a Foresight Conference in 2002, was the first complete tooltip ever proposed for diamond mechanosynthesis and remains the only tooltip motif that has been successfully simulated for its intended function on a full 200-atom diamond surface.

The tooltips modeled in this work are intended to be used only in carefully controlled environments (e.g., vacuum). Maximum acceptable limits for tooltip translational and rotational misplacement errors are reported in Peng et al. (2006): tooltips must be positioned with great accuracy to avoid bonding the dimer incorrectly. Peng et al. (2006) reports that increasing the handle thickness from 4 support planes of C atoms above the tooltip to 5 planes decreases the resonance frequency of the entire structure from 2.0 THz to 1.8 THz. More importantly, the vibrational footprints of a DCB6Ge tooltip mounted on a 384-atom handle and of the same tooltip mounted on a similarly constrained but much larger 636-atom "crossbar" handle are virtually identical in the non-crossbar directions. Additional computational studies modeling still bigger handle structures are welcome, but the ability to precisely position SPM tips to the requisite atomic accuracy has been repeatedly demonstrated experimentally at low temperature,[71][72] and even at room temperature,[73][74] constituting a basic existence proof for this capability.

Further research[75] to consider additional tooltips will require time-consuming computational chemistry and difficult laboratory work.

A working nanofactory would require a variety of well-designed tips for different reactions, and detailed analyses of placing atoms on more complicated surfaces. Although this appears a challenging problem given current resources, many tools will be available to help future researchers: Moore's law predicts further increases in computer power, semiconductor fabrication techniques continue to approach the nanoscale, and researchers grow ever more skilled at using proteins, ribosomes and DNA to perform novel chemistry.


Black Holes Bolster Case For Quantum Physics’ Spooky Action …

Dust partially obscures a distant quasar in this artist's illustration. (Credit: NASA/ESA/G. Bacon, STScI)

With the help of two extremely bright quasars located more than 7 billion light-years away, researchers recently bolstered the case for quantum entanglement, a phenomenon Einstein described as "spooky action at a distance," by eliminating one classical alternative: the freedom-of-choice loophole.

Of the many mind-boggling facets of quantum mechanics, one of the most intriguing is the idea of quantum entanglement. This occurs when two particles are inextricably linked together no matter their separation from one another. Although these entangled particles are not physically connected, they are still able to share information with each other instantaneously, seemingly breaking one of the most hard-and-fast rules of physics: no information can be transmitted faster than the speed of light.

As far-out as the idea seems, quantum entanglement has been proven time and time again over the years. When researchers create two entangled particles and independently measure their properties, they find that the outcome of one measurement influences the observed properties of the other particle.

But what if the apparent relationship between particles is not due to quantum entanglement, but instead is a result of some hidden, classical law of physics? In 1964, physicist John Bell addressed this question by calculating a theoretical limit beyond which correlations can only be explained by quantum entanglement, not classical physics. However, as is often the case, there are loopholes.
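For concreteness (the formula is ours; the article does not state it), the form of Bell's limit most often used in experiments like this one is the CHSH inequality. With two measurement settings per side, \(a, a'\) and \(b, b'\), and measured correlations \(E\), any classical local-hidden-variable model must satisfy

\[ S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2, \]

whereas quantum mechanics allows entangled particles to reach \(|S| = 2\sqrt{2} \approx 2.83\) (Tsirelson's bound). A measured value of \(|S|\) above 2 is therefore the signature that no classical account fits, up to the loopholes discussed next.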

One of the most stubborn of these loopholes is the so-called freedom-of-choice loophole, which suggests a hidden classical variable can influence how an experimenter decides to measure seemingly entangled particles. This causes the particles to appear quantumly correlated even when they are not.

To help constrain the impact of the freedom-of-choice loophole, the authors of the new study used extremely distant quasars (exceptionally bright and energetic galactic cores) to decide which properties of entangled particles to measure. By allowing the quasars' light to choose what properties to measure, the researchers effectively removed the freedom-of-choice loophole from the experiment. This is because the quasars are located 7.8 and 12.2 billion light-years away, so their observed light was emitted billions of years before the researchers even conceived of the experiment.

"If some conspiracy is happening to simulate quantum mechanics by a mechanism that is actually classical, that mechanism would have had to begin its operations somehow knowing exactly when, where, and how this experiment was going to be done at least 7.8 billion years ago," said co-author Alan Guth, a physics professor at MIT, in a press release. "That seems incredibly implausible, so we have very strong evidence that quantum mechanics is the right explanation."

In fact, according to Guth, the probability that a classical process could explain their results is about 1 in 100 billion billion (1 in 10^20).

"The Earth is about 4.5 billion years old, so any alternative mechanism different from quantum mechanics that might have produced our results by exploiting this loophole would've had to be in place long before there even was a planet Earth," adds co-author David Kaiser of MIT. "So we've pushed any alternative explanations back to very early in cosmic history."

This graphic shows the experimental setup used to test the freedom-of-choice loophole. Researchers produce two entangled photons (middle) and shoot them in opposite directions toward detectors located at each telescope. The telescopes then use ancient quasar light to determine which properties of the photons to measure. (Credit: D. Rauch et al. [Phys. Rev. Lett. 121, 080403])

To carry out the study, the team utilized two 4-meter-wide telescopes, the William Herschel Telescope and the Telescopio Nazionale Galileo, located just over half a mile (1 kilometer) apart on a mountain in La Palma, Spain. Both telescopes were trained on different quasars located billions of light-years away.

Meanwhile, at a station between these two telescopes, the researchers generated pairs of seemingly entangled photons (particles of light) and beamed one member of each pair to a detector at each telescope. As the entangled photons traveled to the detectors, the telescopes analyzed light from the quasars and determined whether that light was more red or more blue than a baseline.

Depending on the measurement, the entangled-photon detectors automatically adjusted the angle of their polarizers, which are devices that measure the orientations of photon electric fields. This allowed the researchers to test whether the photon pairs were truly linked to one another, or if they were just faking it.
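The decision rule itself is simple; here is a toy sketch (ours, with hypothetical names and wavelengths; the real experiment makes this choice in dedicated hardware on nanosecond timescales):

def choose_polarizer_setting(quasar_wavelength_nm, baseline_nm=700.0):
    # The color of an incoming quasar photon, fixed when the light was
    # emitted billions of years ago, picks one of the two polarizer settings.
    return "setting_1" if quasar_wavelength_nm > baseline_nm else "setting_2"

# Hypothetical stream of quasar photon wavelengths driving the detector.
for wavelength_nm in [650.2, 710.8, 699.9, 820.4]:
    print(wavelength_nm, "nm ->", choose_polarizer_setting(wavelength_nm))

Because the deciding bit was determined billions of years before the experiment, any classical mechanism correlating it with the photon source would have to predate the light's emission.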

Over the course of two 15-minute experiments (each utilizing a different pair of quasars), the researchers measured over 17,000 and over 12,000 pairs of entangled photons, respectively. According to the study, the results show it is extremely unlikely that a classical mechanism is responsible for the strong correlations observed between the photon pairs, meaning the photon pairs truly were quantumly entangled.

This experiment is not the first time the freedom-of-choice loophole has been tested. Just last February, the same team of researchers led a similar study that used 600-year-old starlight to determine which photon properties to measure. And although the new research uses light that is nearly 8 billion years older, there still remains a small window of time for the freedom-of-choice loophole to slip through. In order to close this window completely, the researchers are already planning to look back even further in cosmic time, concentrating on the earliest light in the universe: photons from the cosmic microwave background.

"It is fun to think about new types of experiments we can design in the future, but for now, we are very pleased that we were able to address this particular loophole so dramatically," said Kaiser. "Our experiment with quasars puts extremely tight constraints on various alternatives to quantum mechanics. As strange as quantum mechanics may seem, it continues to match every experimental test we can devise."

This article originally appeared on Astronomy.com.


The Posthuman Project – yts.am

I really wanted this to be good. I had a long day and stayed up watching this on Amazon Prime. I was so dissatisfied that I had to say something.

I have no idea who wrote this script. They should have at least sat down and done a table read of the dialogue. Most of the lines had horrible delivery, the jokes (there were none) didn't land, and most of the dialogue was way too long or unnecessary. In the first act of the film we are introduced to the protagonist, but he doesn't carry the film. There's literally nothing of interest about him or the cast throughout 100% of the film. His supporting cast has more uninteresting dialogue, background, and action than the lead. It took half of the movie to find out what happened to the lead character's legs, his father, and his relationship with his girlfriend.

Nothing holds your interest through the first 20 minutes of the movie. When they do get their powers, the lead character isn't even noticeable anymore, and he does nothing of interest except lose his powers at the end of the film, which made absolutely no sense. Maybe it wasn't the intent of the writer or the director, but if they were to go back and review the film, they basically took the main character's powers away. The protagonist at the end of the film (at graduation) states that the last blast fixed him, and that his uncle, his father, and his uncle's facilities have vanished.

So, he lost his lame healing powers after recovering from a gunshot to the head, overpowering his uncle with the help of his brother, and getting blasted with the Zero power gun again. Everyone else apparently kept their abilities. I was like, what a waste of time!

The other horrible thing was that the camera kept floating (the jitters) back and forth. (smh)

It was hard to believe that they shot this movie with a RED digital camera. I'm like, haven't you ever heard of a Steadicam, post-production, storyboards, editing filters, or even lighting? This movie looked like they wrote something, shot something, added lame special effects, and that was it. They couldn't have gone over the script, they didn't re-shoot scenes, they didn't work out any of the action or fight scenes, and they didn't work out the plot or the effects.

The production was just horrible. I've seen better filming on YouTube, shot with an iPhone / iPad, no budget, made with a lot of imagination and real planning. http://filmriot.squarespace.com

It is a great thing that they put something together but by no means is it any good. It is like making a beautiful and very poisonous chocolate cake. It looks good but it's just gonna kill you in the end.

The rest is here:

The Posthuman Project - yts.am

John McAfee: I keep a gun in my hand while showering …

John McAfee, who made a fortune developing antivirus software, revealed on Twitter that he is always armed to the teeth, even in bed.

John McAfee: I keep a gun in my hand while showering, sitting on toilet

John McAfee wants you to know that he never goes anywhere unarmed.

The controversial tech mogul, who claims to have been targeted by numerous bad actors in the past, is protected around the clock by his own security team, but should they be compromised, he'd be more than ready.

In a Twitter post this week, he revealed just how ready.

His sidearm never leaves his hand, he wrote.

Not when he sleeps. Not in the shower. Not when he's sitting on the toilet. Not even when he makes love.

"Puts a kink in foreplay but some women love it."

My sidearm is with me 24/7. More: my sidearm is ALWAYS in my hand when I'm on the toilet, sleeping and making love. Puts a kink in foreplay but some women love it. Accidental discharge? No. Some intentional when idiots tried to rush me. 2 women spontaneously orgasmed. Go figure. pic.twitter.com/LuLvdSuMka

And in a follow-up tweet, he shared that the Glock in the photo was just part of his on-person arsenal:

"I carry three sidearms. Hip, shoulder, ankle," he said.

Why pack so much firepower? Well, because a lot of folks are out to get him, he says: the U.S. government, violent cartels, corrupt Belizean officials, food poisoners, and "six women" who tried to shoot him.

In the last few months, the cybersecurity pioneer and cryptocurrency evangelist has claimed to be on the run from the Securities and Exchange Commission, which forced him into hiding. His life was in danger, he said.

Then again, it always seems to be in danger.

Irrational??? Google me. On the run in the Belizean jungle for weeks while 17,000 armed men were trying to kill me. Assistant Prime Minister of Belize hired hitman Eddie McKoy to kill me. Five failed attempts by the Sinaloa Cartel. 6 different women tried to shoot me. Google me.

In June, he was hospitalized. A photo showed him lying in bed with tubes sprouting out of his body. Something he ate, drank or inhaled was spiked with poison by "incompetent enemies," he said.

He has previously maintained that violent cartels have attempted to kidnap him on several occasions over the last few years, most recently in September. The cartels want to abduct him, he said, because of his connection to the murder of his neighbor in Belize in 2012. (He was a "person of interest" in the investigation.)

When he escaped from Belize to Guatemala, he says he was chased through the jungle by "17,000 armed men."

It's worth noting that the entire armed forces of Belize number 2,100 personnel.

Naturally, McAfee's tweets prompted a flurry of comments. Here are a few:

Disagree. The Remington 870 SP Marine Magnum is peak shower performance. pic.twitter.com/uB06QwQdNt

Gonna shoot him through Twitter?

My wife always says accidental discharge doesn't matter and it happens to everyone

Thank you for your support:)

Only six women have tried to shoot you???!!!!

we are truly blessed to live in the timeline where McAfee turned into a Cyberpunk 2020 character

---

Read Mike Moffitt's latest stories and send him news tips at mmoffitt@sfchronicle.com.


Read more from the original source:

John McAfee: I keep a gun in my hand while showering ...

We’re Almost Able to Cool Antimatter. Here’s Why That’s a Big Deal.

Researchers from CERN have just moved one step closer to cooling antimatter, which should make it easier for us to study the mysterious substance.

A COOL NEW LEAD. We’re still figuring out what the heck antimatter even is, but scientists are already getting ready to fiddle with it. Physicists at the European Organization for Nuclear Research (CERN) are one step closer to cooling antimatter using lasers, a milestone that could help us crack its many mysteries.

They published their research on Wednesday in the journal Nature.

BIG BANG, BIGGER MYSTERY. Antimatter is essentially the opposite of “normal” matter. While protons have a positive charge, their antimatter equivalents, antiprotons, have the same mass but a negative charge. Electrons and their corresponding antiparticles, positrons, likewise have the same mass — the only difference is their charge (negative for electrons, positive for positrons).

When a particle meets its antimatter equivalent, the two annihilate, canceling each other out. In theory, the Big Bang should have produced equal amounts of matter and antimatter, in which case the two would have simply annihilated one another.

But that’s not what happened — the universe seems to have way more matter than antimatter. Researchers have no idea why that is, and because antimatter is very difficult to study, they haven’t had much recourse for figuring it out. And that’s why CERN researchers are trying to cool antimatter off, so they can get a better look.

MAGNETS AND LASERS. Using a tool called the Antihydrogen Laser Physics Apparatus (ALPHA), the researchers combined antiprotons with positrons to form antihydrogen atoms. Then, they magnetically trapped hundreds of these atoms in a vacuum and zapped them with laser pulses. This caused the antihydrogen atoms to undergo something called the Lyman-alpha transition.

“The Lyman-alpha transition is the most basic, important transition in regular hydrogen atoms, and to capture the same phenomenon in antihydrogen opens up a new era in antimatter science,” one of the researchers, Takamasa Momose, said in a university press release.

According to Momose, this phase change is a critical first step toward cooling antihydrogen. Researchers have long used lasers to cool other atoms to make them easier to study. If we can do the same for antimatter atoms, we’ll be better able to study them. Scientists can take more accurate measurements, and they might even be able to solve another long-unsettled mystery: figuring out how antimatter interacts with gravity.

For now, the team plans to continue working toward that goal of cooling antimatter. If they’re successful, they might be able to help unravel mysteries with answers critical to our understanding of the universe.

READ MORE: Canadian Laser Breakthrough Has Physicists Close to Cooling Down Antimatter [The University of British Columbia]

More on antihydrogen: Physicists Have Captured the First Spectral Fingerprints of Antimatter

The post We’re Almost Able to Cool Antimatter. Here’s Why That’s a Big Deal. appeared first on Futurism.

Go here to see the original:
We’re Almost Able to Cool Antimatter. Here’s Why That’s a Big Deal.

It’s Really Easy for Hackers to Take Control of Robots

A team of researchers from Brown University took control of a University of Washington robot to prove that hacking robots is far too easy.

NOT CREEPY AT ALL. We’re finding robots in more places in our daily lives, but it’s still pretty weird to hear them speak. And when one utters “Hello from the hackers,” well, we’re pretty solidly into nightmare territory.

The robot that spoke these words was Herb2, a bot built by researchers at the University of Washington. The people directing the robot to talk? They were clear across the country at Brown University.

The Brown team was using Herb2 to illustrate a point: hacking robots is far too easy.

They published their research on hacking robots on the non-peer-reviewed preprint server arXiv.

HUNTING ROBOTS. To save valuable time and resources, many roboticists like to use something called the Robot Operating System (ROS), an open-source collection of software libraries and tools useful for building robots. Because the ROS platform is fairly widely used and lacks any security features, it was the perfect target for the Brown team’s research.

They used a tool called ZMap to scan the internet for hosts running ROS. Their search turned up about 100 internet-connected ROS systems and determined that about 10 percent of them were actual robots (and not robot simulations). They figured this out through a bit of detective work, looking for identifiers indicating the connected bot had hardware, such as the phrases “camera_info,” “gripper,” and “sound_play.”
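
That detection step lends itself to a short sketch. The following is not the Brown team's actual tooling, just an illustration of the idea: the default ROS master listens on TCP port 11311 and exposes an XML-RPC API, so once a scan turns up such a host, its published topic names can be checked for hardware-sounding identifiers. The host address, helper name, and keyword list are assumptions for illustration.

```python
# Hypothetical sketch of "is this a real robot?" detection, assuming a
# reachable ROS master on the standard port. getPublishedTopics is part
# of the standard ROS Master XML-RPC API.
import xmlrpc.client

HARDWARE_HINTS = ("camera_info", "gripper", "sound_play")  # examples from the paper

def looks_like_real_robot(host: str) -> bool:
    master = xmlrpc.client.ServerProxy(f"http://{host}:11311")
    # Returns (status code, status message, [[topic_name, topic_type], ...])
    code, _msg, topics = master.getPublishedTopics("/probe", "")
    if code != 1:  # 1 means success in the ROS Master API
        return False
    names = " ".join(topic for topic, _type in topics)
    return any(hint in names for hint in HARDWARE_HINTS)

if __name__ == "__main__":
    print(looks_like_real_robot("192.0.2.10"))  # placeholder address
```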

When the Brown team came across a vulnerable robot, they simply notified its owners that the bot wasn’t secure — except in the case of Herb2. Instead, they asked the bot’s creators for permission to prove that they could hack it.

Once permission was granted, they instructed the robot to utter its eerie greeting. The paper gets into a lot of technical detail about this, but basically all you need to know is that it wasn’t all that hard.

FAIR WARNING. The ease with which the Brown team took over Herb2 should serve as a wake-up call to roboticists everywhere, but at least some experts worry that it won’t.

“No one’s really thinking about security on these types of things,” computer scientist George Clark told Wired. “Everyone’s just putting things out there trying to rush to market, especially in a research type of environment. My worry is how this carries over to a more industrial or consumer market.”

We’re definitely heading toward an era in which robots are far more prevalent. And if we don’t want them suddenly acting as puppets for malicious actors, we’re going to need to pay a lot more attention to their security.

READ MORE: The Serious Security Problem Looming Over Robotics [Wired]

More on hacking: Watch as a World-Renowned Hacker Shows You How It’s Done

The post It’s Really Easy for Hackers to Take Control of Robots appeared first on Futurism.

Original post:
It’s Really Easy for Hackers to Take Control of Robots

Facial Recognition Tech Catches Traveler With Fake Papers

A pilot program using facial recognition tech at airports catches a man from the Republic of Congo presenting a passport that didn't belong to him.

CAN’T BEAT THE SYSTEM. Well, that didn’t take long. During just its third day in action, a facial recognition system used by Washington Dulles International Airport (IAD) caught its first imposter. While that’s a clear win for proponents of the tech, it might also be a major blow to the privacy of the average airline passenger.

On Monday, 14 airports in the U.S. launched a pilot program to test the effectiveness of a biometric scanning system during the security and boarding processes. Passengers simply stand in front of a camera that takes their photo. The system then compares that photo to the one on the person’s passport to confirm their identity.
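
To make the matching step concrete, here is a minimal sketch of how this kind of comparison typically works, assuming some face-embedding model maps each photo to a fixed-length vector. The function names and the threshold value are illustrative assumptions, not details of the CBP system.

```python
# Toy face-matching step: compare the embedding of the live photo to the
# embedding of the passport photo and flag a mismatch below a threshold.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(live_photo_vec: np.ndarray, passport_vec: np.ndarray,
                threshold: float = 0.6) -> bool:
    # threshold is a made-up value; real systems tune this carefully
    return cosine_similarity(live_photo_vec, passport_vec) >= threshold
```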

On Thursday, U.S. Customs and Border Protection (CBP) announced that the facial recognition tech at IAD caught a man trying to enter the U.S. using fake identification papers.

The 26-year-old man, who was traveling from Sao Paulo, Brazil, presented a French passport to a CBP official, but the system determined that the man’s face didn’t match the photo on the passport. Officials later discovered that the man was concealing his authentic Republic of Congo identification card in his shoe.

The U.S. attorney’s office decided not to charge the man, CNET reports, and he left the country Wednesday night.

AN ETHICAL DIVIDE. Depending on which side of the facial recognition debate you land on, you probably have pretty different feelings about that.

For proponents, catching the traveler shows that a technology purportedly designed to keep us safe is doing its job.

“Terrorists and criminals continually look for creative methods to enter the U.S. including using stolen genuine documents,” said Casey Durst, CBP’s Director of the Baltimore Field Office, in a news release. “The new facial recognition technology virtually eliminates the ability for someone to use a genuine document that was issued to someone else.”

BIG BROTHER, AIRPORT EDITION. However, showing that facial recognition works could worry those who see it as a stepping stone along the path to a Big Brother-esque dystopia. If it works at IAD, other airports might decide to implement their own systems, and it’s not hard to imagine other locations — jobs, schools, public spaces — following suit.

This is problematic for a couple of reasons. According to an NBC report, officials say the airport system is 99 percent accurate, but other tests of facial recognition technology have revealed that the systems aren’t always as accurate as advertised. Furthermore, they often include racial and gender biases, producing higher error rates for people of color and women than for white men. Sure, the IAD system can help officials identify imposters, but how many other law-abiding travelers might it falsely flag?
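
One way to make that worry concrete is a quick base-rate calculation. This naively reads the quoted 99 percent accuracy as a 1 percent false-match rate, which is a simplification, and the daily passenger count is a made-up round number.

```python
# Back-of-the-envelope false-flag estimate under the assumptions above.
daily_passengers = 50_000          # hypothetical traffic at a large airport
false_match_rate = 0.01            # naive reading of "99 percent accurate"
falsely_flagged = daily_passengers * false_match_rate
print(f"~{falsely_flagged:.0f} travelers falsely flagged per day")  # ~500
```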

Catching one person with fake papers doesn’t prove that a facial recognition system works, of course, and the technology is still being tested in a pilot program. But if the technology works as it’s supposed to, this could be the first of many cases in which travelers that might have otherwise made it past human immigration officers are caught at the border.

READ MORE: New Facial Recognition Tech Catches First Impostor at D.C. Airport [NBC News]

More on facial recognition: Amazon Rekognition Falsely Matched 28 Members of Congress to Mugshots

The post Facial Recognition Tech Catches Traveler With Fake Papers appeared first on Futurism.

Original post:
Facial Recognition Tech Catches Traveler With Fake Papers

A Tech Company Wants to Implant GPS Trackers in Dementia Patients

A Wisconsin vending machine company is developing a GPS implant for people suffering from Alzheimer's disease and dementia.

NOW WITH GPS. Three Square Market (32M) is no longer content to simply microchip its employees. The Wisconsin-based company now wants to put trackable implants into people with dementia.

32M’s CEO Todd Westby announced the plan on CNBC’s “Closing Bell” on Wednesday. Westby said the company is working on a voice-activated, body-heat-powered chip that can monitor a person’s vital signs and track them via GPS. It plans to beta test the chips in 2019 and will seek Food and Drug Administration (FDA) approval for them.

TARGET AUDIENCE. The company’s president, Patrick McMullan, added that patients who suffer from dementia would be a primary target for the GPS implant. “Without question it’s a worthy cause, and it’s a product in demand,” he said.

There’s definitely a demand. According to the World Health Organization (WHO), an estimated 47 million people worldwide suffer from dementia; by 2030, the WHO expects that figure to reach 75 million. Six of every 10 of these dementia sufferers will wander at some point, traveling from a safe location out into the world where they might get lost.

Clearly, we need a way to quickly track down these wanderers, but is a GPS implant really the best option?

IS IT OVERKILL? We already have bracelets and shoes that dementia patients can wear to ensure they’re found quickly if they wander. SafetyNet Tracking Systems’ bracelet might actually be even more useful than the implant because it doesn’t rely on GPS, which can be spotty in remote areas or inside buildings. Instead, the bracelet emits a radio frequency signal that law enforcement can pick up. Sure, it requires a battery, but that only needs to be changed every six months.

There’s also the issue of personal autonomy — for people in the throes of a disease that makes them lose their memory, can dementia sufferers really consent to (and fully understand) receiving a GPS implant? Would we even need their permission if the person is under the guardianship of a family member or doctor?

Ultimately, while a GPS implant might seem like a logical solution to a growing problem, we might want to reserve our implants for the people who know exactly what they’re getting into, like 32M’s employees.

READ MORE: Wisconsin Company Known for Microchipping Employees Plans GPS Tracking Chip for Dementia Patients [CNBC]

More on Three Square Market (32M): This U.S. Company Is Offering to Put Microchips in Their Employees

The post A Tech Company Wants to Implant GPS Trackers in Dementia Patients appeared first on Futurism.

More here:
A Tech Company Wants to Implant GPS Trackers in Dementia Patients

We’re About to Get Many More Meat Alternatives

Included in Y Combinator's 2018 batch of companies is the Good Food Institute (GFI), a non-profit focused on accelerating the adoption of clean meat.

CLEAN MEAT. When it comes to betting on startups, Y Combinator knows how to pick some winners. Since it launched in 2005, the accelerator has provided seed funding and advice to several companies that went on to become household names, including Airbnb, Reddit, and Dropbox.

Now, the company is ready to sink its teeth into a new challenge: help make plant-based and clean meat — edible protein grown from animal stem cells — mainstream.

NOT YOUR AVERAGE STARTUP. On Wednesday, Y Combinator hosted the second day of demos for its Summer 2018 startups, and one of those, the Good Food Institute (GFI), was pretty unique. It’s not a company per se — GFI is a nonprofit think tank that will itself act as an accelerator for startups in the plant-based and clean meat sector. It’s basically accelerator inception.

GFI has its hands in pretty much anything that has to do with clean meat. It’s working with colleges to design clean meat-focused curriculums, launching a conference on the subject, and funding open-source research so that startups in the space can learn from each other’s work.

GFI also lobbies government organizations and traditional meat manufacturers to secure funding for clean meat startups. The institute has even launched three startups of its own.

A POWERFUL PARTNER. GFI is hopeful that the partnership with Y Combinator will help bring attention to both its startups and the industry as a whole. Y Combinator, meanwhile, simply sees clean meat as the future.

“YC got interested in [clean meat] because of the innovation in this industry,” Gustaf Alstromer, a partner at Y Combinator, told Fast Company. “We think that if it works, this will revolutionize the entire meat industry, and we think that it will probably be entrepreneurs and startups that build the companies that produce that meat.”

As we’ve seen in the past, Y Combinator has a pretty solid track record of detecting industry trends, so if its decision to back GFI is any indication, we’re probably going to see a lot more clean meat in the future.

READ MORE: Y Combinator Is Funding a Nonprofit That Advocates for Meat Alternatives [Fast Company]

More on clean meat: In the Future, the Meat You Eat Won’t Come From Living Organisms

The post We’re About to Get Many More Meat Alternatives appeared first on Futurism.

View post:
We’re About to Get Many More Meat Alternatives

The World’s First Digital Teacher Just Debuted in New Zealand

A NEW KIND OF TEACHER. It’s back to school, and you know what that means — time to fire up the computer that teaches you!

That’s what primary school students in New Zealand have to look forward to, anyways. They’ll soon be the first students in the world to learn from an artificially intelligent (AI) digital avatar.

MEET WILL. Auckland energy company Vector teamed up with AI company Soul Machines to create the avatar, which goes by the name Will. The AI is now part of Vector’s Be Sustainable with Energy program, which it offers free of charge to the schools it supplies with electricity.

Will’s there to teach children about energy use. Students interact with Will — essentially just a face on a screen — via their desktop, tablet, or mobile device. He teaches them about different forms of renewable energy, such as solar and wind. Will can then ask the students questions about what they’ve learned to ensure the lessons stick.

According to Vector’s Chief Digital Officer, Nikhil Ravishankar, students seem particularly taken by Will. “What was fascinating to me was the reaction of the children to Will. The way they look at the world is so creative and different, and Will really captured their attention,” he said in a news release.

He went on to add, “Using a digital human is a very compelling method to deliver new information to people, and I have a lot of hope in this technology as a means to deliver cost-effective, rich, educational experiences into the future.”

AN EDUCATION CRISIS. Ravishankar isn’t the only person who thinks robots — in the form of AI software programs like Will, or actual humanoid machines — will soon play a major role in education.

In February 2017, futurist Thomas Frey predicted that learning from bots will be commonplace by 2031. Meanwhile, British education expert Anthony Seldon thinks robots will replace human teachers by 2027.

Even if human teachers get to keep their jobs, working in tandem with robots could solve many of the problems currently facing the world’s education system.

Many nations, particularly in the developing world, don’t have nearly enough teachers. Robots like Will could help fill that gap. Compared to the cost of paying a human teacher, these systems are also far cheaper, and they can adjust to each individual student’s learning style to help them reach their potential.

While digital teachers could provide a host of benefits, they still aren’t as advanced as they need to be. Will is only well-versed on one topic — renewable energy — while quality teachers are typically far more well-rounded. Social interaction between teachers and students is also critical to a quality education, and digital teachers most certainly lag behind their human counterparts in this realm.

Still, Will might be the first digital teacher to hit the classroom, but he almost certainly won’t be the last.

READ MORE: World-First Digital Teacher in NZ Schools [Newsroom]

More on AI teaching: Your Next Teacher Could Be a Robot

The post The World’s First Digital Teacher Just Debuted in New Zealand appeared first on Futurism.

More:
The World’s First Digital Teacher Just Debuted in New Zealand

AI Couldn’t Beat a Team of Professional Gamers at DOTA 2, but It Held Its Own

During a major tournament for the strategy game DOTA 2, human gamers emerged victorious over OpenAI Five, a team of five neural networks.

AI DEFEAT. AI may be able to best humans at games like Go, chess, and poker. But for the cooperative strategy video game DOTA 2, AI still can’t beat the pros — at least not yet.

That’s the major takeaway from The International, a tournament for the computer strategy game DOTA 2. During the tournament, research group OpenAI had its OpenAI Five, a team comprising five neural networks, play against professional DOTA 2 players.

It was a best-of-three competition, but the team from paiN Gaming and an all-star team of pros from China confirmed humanity’s superiority over the AI team in just two matches.

TWO TOURNEY TWEAKS. It’s not that the AI is bad at the game. In fact, back in June, OpenAI Five beat a team of five human amateurs at DOTA 2.

But this match was different in a few significant ways. Previously, each member of the AI team had its own invulnerable courier, a key component of the game that delivers supplies to players. For The International, each team had only one courier to share — the OpenAI Five had just a few days to adjust to that prior to the match.

Players couldn’t choose their own heroes this time around, either — DOTA 2 experts picked the characters for each player in the tournament, no matter if they were human or AI.

BACK ON TOP. Though the humans emerged victorious, OpenAI Five did hold its own. Each match lasted between 45 and 51 minutes — the humans didn’t simply destroy the AI team right out of the gate. With a bit more time to adjust to the changes from the June match, the AI team might have fared a bit better, too.

For now, the OpenAI researchers plan to continue improving the system, so maybe it’ll be ready for a rematch in the future.

READ MORE: AI Isn’t Good Enough to Beat the Best ‘Dota 2’ Players Just Yet [Engadget]

More on OpenAI: The Digest: Five AI Algorithms Worked Together to Beat Humans at a Strategy Game

The post AI Couldn’t Beat a Team of Professional Gamers at DOTA 2, but It Held Its Own appeared first on Futurism.

Go here to read the rest:
AI Couldn’t Beat a Team of Professional Gamers at DOTA 2, but It Held Its Own

Elon Musk Claims a Tesla Semi Prototype Drove “Across the Country Alone”

According to CEO Elon Musk, Tesla's Semi prototype is now able to autonomously travel across the country using the Supercharger network.

NOT ALL DOOM AND GLOOM. Tesla might be sucking the life force out of CEO Elon Musk, but at least the company’s got some good news to show for it.

On Saturday, Electrek tweeted a picture of one of Tesla’s autonomous Semi prototypes at the headquarters of trucking company J.B. Hunt. In response, Musk tweeted, “What’s cool is that it was driven across the country alone (no escort or any accompanying vehicles), using the existing Tesla Supercharger network and an extension cord.”

He later added (jokingly, we assume), “The extension cord was 1000 miles long, but still.”

SEEING SEMIS ON THE REG. This wasn’t the first sighting of Tesla’s Semi prototype. The vehicle has been making its way across the country, popping up in Missouri, Oklahoma, Texas, and California.

During some of those stops, witnesses managed to catch a glimpse of the prototype’s charging system. Based on their reports to Electrek, the truck charges via several extension cords, each plugged into its own supercharger stall (these are the chargers used to power Tesla’s other vehicles). This should cut down on charging times, though Tesla plans to eventually create Megachargers that could power the Semis much more quickly. The reports, however, don’t detail exactly how — or by whom — the trucks were plugged in.

THE FUTURE OF TRUCKING. Tesla’s autonomous Semi has the potential to radically transform the future of trucking. According to reports, the vehicles will cost less to operate than their diesel counterparts, and because they’re electric, they’d be far better for the environment. Trucking is actually an industry that needs more workers, so the vehicles aren’t expected to put anyone out of a job, either.

If Tesla’s Semi prototype did in fact traverse the nation solo, we might not have much longer to wait for the autonomous revolution to hit the trucking industry.

READ MORE: Tesla Semi Made It ‘Across the Country Alone’ With Only Supercharger Network and an Extension Cord, Says Elon Musk [Electrek]

More on Tesla’s semi: Watch Live: Elon Musk Is Finally Unveiling Tesla’s Electric Semi

The post Elon Musk Claims a Tesla Semi Prototype Drove “Across the Country Alone” appeared first on Futurism.

See the original post here:
Elon Musk Claims a Tesla Semi Prototype Drove “Across the Country Alone”

Should Evil AI Research Be Published? Five Experts Weigh In.

A rhetorical question for you. Let’s say you’re an AI scientist, and you’ve found the holy grail of your field — you figured out how to build an artificial general intelligence (AGI). That’s a truly intelligent computer that could pass as human in terms of cognitive ability or emotional intelligence. AGI would be creative and find links between disparate ideas — things no computer can do today.

That’s great, right? Except for one big catch: your AGI system is evil or could only be used for malicious purposes.

So, now a conundrum. Do you publish your white paper and tell the world exactly how to create this unrelenting force of evil? Do you file a patent so that no one else (except for you) could bring such an algorithm into existence? Or do you sit on your research, protecting the world from your creation but also passing up on the astronomical paycheck that would surely arrive in the wake of such a discovery?

Yes, this is a rhetorical question — for now. But some top names in the world of AI are already thinking about their answers. On Friday, speakers at the “AI Race and Societal Impacts” panel of The Joint Multi-Conference on Human-Level Artificial Intelligence in Prague gave their best responses after the question was posed by an audience member.

Here’s how five panelists, all experts on the future of AI, responded.

Hava Siegelmann, Program Manager at DARPA

Siegelmann urged the hypothetical scientist to publish their work immediately. Siegelmann had earlier told Futurism that she believes there is no evil technology, but there are people who would misuse it. If that AGI algorithm was shared with the world, people might be able to find ways to use it for good.

But after Siegelmann answered, the audience member who posed the hypothetical question clarified that, for the purposes of the thought experiment, we should assume that no good could ever possibly come from the AGI.

Irakle Eridze, Senior Strategy and Policy Advisor at UNICRI, United Nations

Easy one: “Don’t publish it!”

Eridze otherwise stayed out of the fray for this specific question, but throughout the conference he highlighted the importance of setting up strong ethical benchmarks on how to develop and deploy AGI. Apparently, deliberately releasing an evil super-intelligent entity into the world would go against those standards.

Alexey Turchin, author and finalist in GoodAI’s “Solving the AI Race” challenge

Turchin believes there are responsible ways to handle such an AI system. Think about a grenade, he said — one should not hand it to a small child, but maybe a trained soldier could be trusted with it.

But Turchin’s example is more revealing than it may initially appear. A hand grenade is a weapon created explicitly to cause death and destruction no matter who pulls the pin, so it’s difficult to imagine a so-called responsible way to use one. It’s not clear whether Turchin intended his example to be interpreted this way, but he urged the AI community to make sure dangerous algorithms were left only in the most trustworthy hands.

Tak Lo, a partner at Zeroth.ai, an accelerator that invests in AI startups

Lo said the hypothetical computer scientist should sell the evil AGI to him. That way, they wouldn’t have to shoulder the ethical burden of such a powerful and scary AI — instead, they could just pass it to Lo and he would take it from there. Lo was likely (at least half-)kidding, and the audience laughed. Earlier that day, Lo had said that private capital and investors should be used to push AI forward, and he may have been poking fun at his own obviously capitalistic stance. Still, someone out there would absolutely try to buy such an AGI system, should it arrive.

But what Lo suggests, in jest or not, is one of the most likely results, should this actually come to pass. While hobbyists can develop truly valuable and innovative algorithms, much of the top talent in the AI field is scooped up by large companies who then own the products of their labor. The other likely scenario is that the scientist would publish their paper on an open-access preprint server like arXiv to help promote transparency.

Seán Ó hÉigeartaigh, Executive Director of the Cambridge Center for the Study of Existential Risk

Ó hÉigeartaigh agreed with Eridze: you shouldn’t publish it. “You don’t just share that with the world! You have to think about the kind of impact you will have,” he said.

And with that, the panel ended. Everyone went on their merry way, content that this evil AGI was safe in the realm of the hypothetical.

In the “real world,” though, ethics often end up taking a back seat to more earthly concerns like money and prestige. Companies like Facebook, Google, and Amazon regularly publish facial recognition or other surveillance systems, often selling them to police forces or militaries, which use them to monitor everyday people. Academic scientists are trapped in the “publish or perish” cycle — publish a study, or risk losing your position. So ethical concerns are often relegated to a paper’s conclusion, as a factor for someone else to sort out at some vague point in the future.

For now, though, it’s unlikely that anyone will come up with AGI — much less evil AGI — anytime soon. But the panelists’ wide-ranging answers suggest that we are still far from sorting out what should be done with unethical, dangerous science.

More about evil AI thought experiments: Grimes, Elon Musk, and the Supposedly Trauma-Inducing A.I. Theory That Brought Them Together

The post Should Evil AI Research Be Published? Five Experts Weigh In. appeared first on Futurism.

Read more:
Should Evil AI Research Be Published? Five Experts Weigh In.

Japan Wants to Be the First Place Where Flying Cars Are Standard Because of Course

LOOKING AHEAD. The Japanese government sees flying cars as a panacea for some of the nation’s traffic issues — the vehicles could decrease congestion, boost tourism, and increase access to remote areas.

So, naturally, the nation wants to be the world leader in developing the vehicles. Now it has a dream team of companies on board to help it reach its goal, according to a statement released by the trade ministry in Tokyo on Friday.

TEAM FLYING CARS. Twenty-one companies and organizations have joined a Japanese government-led group designed to lay out the roadmap to flying car adoption in Japan.

Amongst those are some of the biggest players in the space, including Uber, Boeing, and Airbus. Delegates from each group member will meet on August 29 to figure out a plan that will get flying cars to Japan in the next decade.

A GOVERNMENT LEADER. While not everyone (see: Elon Musk) is on board with the idea of flying cars, if the futuristic transportation is ever going to take off, it’ll likely need a government leading the charge, and Japan appears ready to step up on that front.

“It’s necessary for the government to take a lead and coordinate on setting safety standards,” Yasuo Hashimoto, a researcher at Tokyo-based Japan Aviation Management Research, told Bloomberg. “They are trying to set a tone for the industry ahead of other countries.”

We’ll see if Japan is able to meet its ambitious goal of serving as the world leader in flying cars. But it certainly won’t fail because it lacks the right partners in its corner.

READ MORE: Uber and Airbus Enlist in Japan’s Flying-Car Plan [Bloomberg]

More on flying cars: Elon Musk: Flying Cars Are Definitely Not the Future of Transport

The post Japan Wants to Be the First Place Where Flying Cars Are Standard Because of Course appeared first on Futurism.

Originally posted here:
Japan Wants to Be the First Place Where Flying Cars Are Standard Because of Course

Why The Need For Stablecoins Has Never Been More Important Than Now

This article has been edited for brevity. Read Nevin’s full article here.

2018 is the year of the stablecoin. Leaders of the crypto ecosystem have been aware of the need for a stablecoin for years, and the financial incentive to create the winning currency is immense. When considering whether any given stablecoin will work, it’s very helpful to look at the three main approaches that have been proposed so far, and translate them into the language of monetary economics.

Traditional Asset-Backed Stablecoins

A traditional asset-backed stablecoin is simple to describe, and only gets complicated in practice. The issuer sells tokens for $1 each, and holds all of the dollars from those sales in reserve in a bank account. Any time someone wants to, they can redeem tokens for dollars with the issuer. This design is best with a trustworthy audit mechanism. But in practice, there are two central problems with traditional asset-backed stablecoins: counterparty risk and risk of government intervention.

These two core problems create a fundamental tension:

  • The better the audit mechanism, the more credible the promise to use reserves to maintain the buy wall and thus defend the exchange rate peg, but the easier it is for the government to locate and freeze the assets.
  • The worse the audit mechanism, the harder it is for the government to locate and freeze the assets, but also the harder it is for the market to assess the issuer’s solvency and ability to maintain the buy wall.

The traditional asset-backed stablecoin approach doesn’t serve what has been the main use case for BTC: free transfer of money across all borders. In order for a currency to supplant the US dollar and become the new reserve currency, free transfer across borders is essential.
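
To make the issue-and-redeem mechanics described above concrete, here is a toy sketch of a fully reserved issuer. It is an illustration of the design discussed in this subsection, not any real issuer's implementation; the class and method names are made up.

```python
# Toy fully reserved issuer: one dollar held in reserve per token
# outstanding, so $1 redemptions are always covered. Counterparty and
# seizure risk are precisely the risk that this invariant silently
# stops holding.
class AssetBackedStablecoin:
    def __init__(self):
        self.reserves_usd = 0.0   # dollars held in the bank account
        self.supply = 0.0         # tokens outstanding

    def issue(self, usd_in: float) -> float:
        """Sell newly minted tokens at $1 each."""
        self.reserves_usd += usd_in
        self.supply += usd_in
        return usd_in             # tokens minted

    def redeem(self, tokens: float) -> float:
        """Buy tokens back at $1 each out of reserves."""
        assert tokens <= self.supply
        self.supply -= tokens
        self.reserves_usd -= tokens
        return tokens             # dollars paid out
```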

Collateralized Debt Stablecoins

A collateralized debt stablecoin offers asset holders a way to take out loans backed by their cryptoassets, which then become the collateral in the system. Users can deposit a cryptoasset into a smart contract, which then mints a new stablecoin for them. The stablecoin must be of lesser value than the collateral, so let’s say the user deposits $2 worth of ETH and receives a stablecoin that has a target price of $1. The user can now go spend that stablecoin on goods and services, without having to give up their position in ETH. When they want their ETH back in their control to spend or trade, they have to repay their stablecoin loan to withdraw it. A common use case for this is going margin-long on cryptoassets: the user can sell $1 stablecoin for ETH, and then is holding $3 worth of ETH instead of just $2 worth. If ETH appreciates, they’ll earn more, and if it depreciates, they’ll lose more.
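
A minimal sketch of the deposit, mint, and repay cycle just described may help. The 200 percent minimum collateralization ratio mirrors the $2-of-ETH-per-$1-stablecoin example in the text; all names and numbers are illustrative, not parameters of any particular protocol.

```python
# Toy collateralized debt position (CDP).
MIN_COLLATERAL_RATIO = 2.0  # collateral value must be >= 2x the debt

class CDP:
    def __init__(self, eth_deposited: float, eth_price_usd: float):
        self.eth = eth_deposited
        self.debt = 0.0  # stablecoins minted against the collateral
        self.eth_price_usd = eth_price_usd

    def mint(self, stablecoins: float):
        collateral_value = self.eth * self.eth_price_usd
        if (self.debt + stablecoins) * MIN_COLLATERAL_RATIO > collateral_value:
            raise ValueError("position would be undercollateralized")
        self.debt += stablecoins

    def repay_and_withdraw(self, stablecoins: float) -> float:
        """Repay the loan; once the debt is zero, the ETH is released."""
        self.debt -= min(stablecoins, self.debt)
        return self.eth if self.debt == 0 else 0.0

# The example from the text: deposit $2 worth of ETH, mint a $1 stablecoin.
pos = CDP(eth_deposited=1.0, eth_price_usd=2.0)
pos.mint(1.0)
```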

This design has the virtue of decentralization, and thus censorship resistance. But it doesn’t implement a strong and predictable currency peg. So we predict that users seeking stability will be unimpressed by the fluctuating price history that will likely develop and will look elsewhere; an approach like this is therefore also unlikely to yield the next reserve currency for the world.

Future Growth-Backed Stablecoins

A future growth-backed stablecoin offers speculators portions of the future growth in stablecoin market cap in exchange for providing the capital to peg a currency as needed. In the original design, there are two tokens: stablecoins and shares. When the price of the stablecoin is above the target price, the system mints more stablecoins and offers them in an auction. The currency used to buy stablecoins in the auction is the share token — so only share token holders can participate, and the highest bidders are the recipients of the newly minted stablecoins. The increase in stablecoin supply presumably reduces the market price back down to the target. When the price of the stablecoin is below the target price, the reverse happens — the system mints new shares, and auctions them off for stablecoins. By doing this, the system can reduce the supply of stablecoins and bring the price back up.
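
The expand-and-contract logic can be summarized in a few lines. This schematic collapses the share auctions into a single proportional supply response, so it is a cartoon of the design rather than a faithful model; in particular, it assumes share buyers always show up, which is exactly the failure mode discussed next.

```python
# Cartoon of the seigniorage-style supply adjustment described above.
def adjust_supply(market_price: float, target_price: float, supply: float) -> float:
    """Return the new stablecoin supply after one adjustment round."""
    if market_price > target_price:
        # Price too high: mint stablecoins and auction them for shares,
        # expanding supply to push the price back down to the target.
        return supply * (market_price / target_price)
    if market_price < target_price:
        # Price too low: mint shares, auction them for stablecoins, and
        # retire the stablecoins received, shrinking supply. This only
        # works while someone is still willing to buy shares.
        return supply * (market_price / target_price)
    return supply

print(adjust_supply(1.10, 1.00, 1_000_000))  # expands to 1,100,000
print(adjust_supply(0.90, 1.00, 1_000_000))  # contracts to 900,000
```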

The central problem with this design is that if at any point speculators lose interest in purchasing shares, the peg breaks, since no stablecoins can be taken out of circulation. If growth in the market cap of the stablecoin is perceived to be highly probable by the market, this would be unlikely to happen, since it would be clear to the market that there is always money to be made by purchasing shares at some price. But in the early stages of adoption, growth in the market cap of the stablecoin may sometimes not be perceived as highly probable.

If a future growth-backed stablecoin does reach this state of stable equilibrium, it has the benefit of being totally decentralized and thus censorship resistant, and able to scale up easily in response to increasing stablecoin demand. But because of the bootstrapping difficulty and risk of catastrophic failure, most designs of this type don’t appear to be safe approaches to building a world-wide cryptocurrency.

The Incredible Responsibility of Stablecoin Producers

Creating a stablecoin is very different from creating a normal crypto token. The people in the world who will benefit most from a stable, full-stack open currency are those with no access to any stable store of value right now, and who often don’t have a lot of money to start with. If a stablecoin reaches prominence among this demographic and then crashes, it will have done great harm to some of the most vulnerable people on the planet.

This is why it’s crucial for the industry to enter this new era with caution. To further compound this issue, in many cases investors and issuers could earn a profit from a stablecoin project reaching prominence even if it breaks later, so long as they liquidate at the right time. This misalignment of incentives means that stablecoin project founders will have to make choices that are not in their own economic interest in order to act responsibly.

All of this is what has motivated the Reserve team to begin putting resources towards educating the industry more broadly. If you would like to get involved in this effort, we are looking to bring on conference organizers, online forum moderators, writers, and careful thinkers to help us build the open currency movement the right way.

Curious? Email contact@reserve.org to learn more.


The preceding communication has been paid for by Reserve. This communication is for informational purposes only and does not constitute an offer or solicitation to sell shares or securities in Reserve or any related or associated company. None of the information presented herein is intended to form the basis for any investment decision, and no specific recommendations are intended. This communication does not constitute investment advice or solicitation for investment. Futurism expressly disclaims any and all responsibility for any direct or consequential loss or damage of any kind whatsoever arising directly or indirectly from: (i) reliance on any information contained herein, (ii) any error, omission or inaccuracy in any such information or (iii) any action resulting from such information.

This post does not reflect the views or the endorsement of the Futurism.com editorial staff.

The post Why The Need For Stablecoins Has Never Been More Important Than Now appeared first on Futurism.

Read more from the original source:
Why The Need For Stablecoins Has Never Been More Important Than Now

Dumber Humans — That’s Just One Effect of a More Polluted Future

Researchers determine that exposure to air pollution can cause a decline in a person's cognitive ability, with older men affected the most.

ANOTHER DOWNSIDE. We knew that air pollution damages our physical health. Turns out, it’s also wreaking havoc on our intelligence.

According to a new study by researchers from the U.S. and China, exposure to air pollution can have a “huge” impact on a person’s cognitive performance. They published their research in the journal Proceedings of the National Academy of Sciences on Monday.

POLLUTION IN, SMARTS OUT. To explore the connection between air pollution and cognitive ability, the researchers turned to the China Family Panel Studies, an annual survey of Chinese citizens that includes verbal and math testing for cognitive performance. They specifically focused on surveys taken in 2010 and 2014 from 162 randomly chosen Chinese counties. In total, their study targeted about 20,000 people.

Next, they used official air pollution records to calculate each individual’s cumulative exposure to dirty air in the time between surveys. From this, they were able to determine how air pollution affects a person’s cognitive ability.
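
As a rough illustration of what "cumulative exposure between surveys" might look like computationally: the sketch below averages daily air-quality readings over a window between two survey dates. The study's actual exposure index and weighting are not specified in this summary, and the data here are made up.

```python
# Illustrative exposure-window calculation, not the study's method.
import pandas as pd

readings = pd.DataFrame({
    "date": pd.date_range("2010-08-01", periods=4, freq="D"),
    "aqi": [80, 120, 95, 150],  # made-up daily air-quality values
})

def cumulative_exposure(daily: pd.DataFrame, start: str, end: str) -> float:
    window = daily[(daily["date"] >= start) & (daily["date"] <= end)]
    return float(window["aqi"].mean())  # mean exposure over the window

print(cumulative_exposure(readings, "2010-08-01", "2010-08-04"))
```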

The results weren’t encouraging, especially for older men. “Polluted air can cause everyone to reduce their level of education by one year, which is huge,” researcher Xi Chen told The Guardian. “But we know the effect is worse for the elderly, especially those over 64, and for men, and for those with low education. If we calculate [the loss] for those, it may be a few years of education.”

The researchers told NPR they aren’t certain why pollution has this impact. But they are pretty sure the pollution is causing the mental decline — that is, it’s not just a correlation between the two — and it could have something to do with how it’s affecting the brain’s white matter.

PRETTY CLEAR CUT. This isn’t the only study to note the link between polluted air and cognitive decline, but it is the first to look at people of all ages. It’s also the first to note differences between men and women, which could be due to supposed differences in the brains of males and females (whether they are even really different is a matter of debate).

It’s the most recent in a long line of studies that tells us, again: air pollution is bad for us. We all understand this, right? Noxious combustible compounds wafting densely in the air we breathe equals sad face?

And yet the United States is on a path that could actually lead to more pollution in the future, not less. The Environmental Protection Agency (EPA) is propping up the coal industry, even though coal-fired power plants have negative effects on the environment and public health. It’s also attempting to roll back rules that restrict vehicle emissions.

We could be on a path to a future in which we live on a dying planet in failing bodies with, according to this new research, failing minds to match.

READ MORE: Air Pollution Causes ‘Huge’ Reduction in Intelligence, Study Reveals [The Guardian]

More on air pollution: States Sue to Stop Trump’s EPA From Stripping Vehicle Emissions Rules

The post Dumber Humans — That’s Just One Effect of a More Polluted Future appeared first on Futurism.

Go here to read the rest:
Dumber Humans — That’s Just One Effect of a More Polluted Future