Leaders Who Pledged Not To Build Autonomous Killing Machines Are Ignoring The Real Problem

That major pledge against building autonomous killing machines is a great start, but it has some glaring holes in what it covers.

Last week, many of the major players in the artificial intelligence world signed a pledge to never build or endorse artificial intelligence systems that could run an autonomous weapon. The signatories included Google DeepMind’s cofounders, OpenAI cofounder Elon Musk, and a whole slew of prominent artificial intelligence researchers and industry leaders.

The pledge, put forth by AI researcher Max Tegmark’s Future of Life Institute, argues that any system that can target and kill people without human oversight is inherently immoral, and condemns any future AI arms race that may occur. By signing the pledge, these AI bigwigs join the governments of 26 nations including China, Pakistan, and the State of Palestine, all of which also condemned and banned lethal autonomous weapons.

So if you want to build a fighter drone that doesn’t need any human oversight before killing, you’ll have to do it somewhere other than these nations, and with partners other than those who signed the agreement.

Yes, banning killer robots is likely a good move for our collective future — children in nations ravaged by drone warfare have already started to fear the sky — but there’s a pretty glaring hole in what this pledge actually does.

Namely: there are more subtle and insidious ways to leverage AI against a nation’s enemies than strapping a machine gun to a robot’s arm, Terminator-style.

The pledge totally ignores the fact that cybersecurity means more than protecting yourself from an army of killer robots. As Mariarosaria Taddeo of the Oxford Internet Institute told Business Insider, AI could be used in international conflicts in more subtle but impactful ways. Artificial intelligence algorithms could prove effective at hacking or hijacking networks that are crucial for national security.

Already, as Taddeo mentioned, the UK National Health Service was held hostage by the North Korea-linked WannaCry virus and a Russian cyberattack took control of European and North American power grids. With sophisticated, autonomous algorithms at the helm, these cyberattacks could become more frequent and more devastating. And yet, because these autonomous weapons don’t go “pew pew pew,” the recent AI pledge doesn’t mention (or pertain to) them at all.

Of course, that doesn’t make the pledge meaningless. Not by a long shot. But just as important as the high-profile people and companies that agreed to not make autonomous killing machines are the names missing from the agreement. Perhaps most notable is the U.S. Department of Defense, which recently established its Joint Artificial Intelligence Center (JAIC) for the express purpose of getting ahead in any forthcoming AI arms race.

“Deputy Secretary of Defense Patrick M. Shanahan directed the DOD Chief Information Officer to standup the Joint Artificial Intelligence Center (JAIC) in order to enable teams across DOD to swiftly deliver new AI-enabled capabilities and effectively experiment with new operating concepts in support of DOD’s military missions and business functions,” Heather Babb, Department of Defense spokesperson, told Futurism.

“Plenty of people talk about the threat from AI; we want to be the threat,” Deputy Defense Secretary Patrick Shanahan wrote in a recent email to DoD employees, a DoD spokesperson confirmed to Futurism.

The JAIC sees artificial intelligence as a crucial tool for the future of warfare. Given the U.S.’s hawkish stance on algorithmic warfare, it’s unclear if a well-intentioned, incomplete pledge can possibly hold up.

More on pledges against militarized AI: Google: JK, We’re Going To Keep Working With The Military After All

The post Leaders Who Pledged Not To Build Autonomous Killing Machines Are Ignoring The Real Problem appeared first on Futurism.


Tesla Is Reportedly Asking Suppliers to Refund Payments so It Can Appear Profitable

Tesla's refund request to suppliers is raising eyebrows in the financial world, with some calling it retroactive negotiation.

RETROACTIVE NEGOTIATION. Tesla seems to have a weird understanding of the old adage “You have to spend money to make money.” In order to look like it’s making money, the company is asking for refunds on the money it’s already spent — even though the people paid delivered on their part of the deal.

On Sunday, The Wall Street Journal reported that it had obtained a memo Tesla sent to one of its suppliers last week. In the memo, Tesla requested a refund on a “meaningful amount” of the money it had paid the supplier since 2016. The author of the memo, one of Tesla’s global supply managers, wrote that the money was “essential” to Tesla’s ability to continue operating and asked that the supplier view the refund as an “investment” that would allow Tesla and the supplier to continue to grow their relationship.

Though the memo claimed that all suppliers were receiving such refund requests, at least some contacted by The WSJ knew nothing about it.

HOW BIZARRE. A Tesla spokesperson doesn’t seem to think Tesla’s refund request is all that noteworthy, telling The WSJ it’s a standard practice. Many of those outside the company, however, think it’s downright bizarre. “I have never heard of that,” finance expert Ron Harbour told Bloomberg. “Suppliers have been asked for reductions, but going back for them in arrears reeks of desperation.”

It’s also a pretty self-centered move, according to manufacturing consultant Dennis Virag. “It’s simply ludicrous, and it just shows that Tesla is desperate right now,” he told The WSJ. “They’re worried about their profitability, but they don’t care about their suppliers’ profitability.”

TESLA’S WOES. Tesla’s current financial woes center on its Model 3, with frequent production issues repeatedly pushing back deliveries of the vehicle. The company currently carries more than $10 billion in debt and has been beset by one controversy after another throughout 2018. Just last month, shareholders even held a vote to decide whether or not to let CEO Elon Musk retain his position as chairman (they ultimately decided to let him stay on in that role).

If the plan behind Tesla’s refund request was to increase faith in the company as it continues to navigate the troubled waters of Model 3 production, it appears to be backfiring; Tesla’s stock dropped by 4 percent Monday morning, even though the first reviews of the Model 3 have started rolling out and have been largely positive (including from the WSJ).

On August 1, Musk will update shareholders on Tesla’s Q2 financial results, so he has just about a week to get the bad taste of Tesla’s refund request out of shareholders’ mouths. If he can’t, it’s not hard to imagine his role as chairman once again in jeopardy.

READ MORE: Tesla Asks Suppliers for Cash Back to Help Turn a Profit [The Wall Street Journal]

More on Model 3 production: In an Effort to Speed up Production, Tesla Is Assembling Model 3s in a Giant Tent


The End of Moore’s Law, by Rodney Brooks

I have been working on an upcoming post about megatrends and how they drive tech. I had included the end of Moore’s Law to illustrate how the end of a megatrend might also have a big influence on tech, but that section got away from me, becoming much larger than the sections on each individual current megatrend. So I decided to break it out into a separate post and publish it first. Here it is.

Moore’s Law, concerning what we put on silicon wafers, is over after a solid fifty year run that completely reshaped our world. But that end unleashes lots of new opportunities.

Moore, Gordon E., “Cramming more components onto integrated circuits,” Electronics, Vol. 38, No. 8, April 19, 1965.

Electronics was a trade journal that published monthly, mostly, from 1930 to 1995. Gordon Moore’s four and a half page contribution in 1965 was perhaps its most influential article ever. That article not only articulated the beginnings, and it was the very beginnings, of a trend, but the existence of that articulation became a goal/law that has run the silicon-based circuit industry (which is the basis of every digital device in our world) for fifty years. Moore was a Caltech PhD, cofounder in 1957 of Fairchild Semiconductor, and head of its research and development laboratory from 1959. Fairchild had been founded to make transistors from silicon at a time when they were usually made from much slower germanium.

One can find many files on the Web that claim to be copies of the original paper, but I have noticed that some of them have the graphs redrawn and that they are sometimes slightly different from the ones that I have always taken to be the originals. Below I reproduce two figures from the original that as far as I can tell have only been copied from an original paper version of the magazine, with no manual/human cleanup.

The first one that I reproduce here is the money shot for the origin of Moore’s Law. There was however an equally important earlier graph in the paper which was predictive of the future yield over time of functional circuits that could be made from silicon. It had less actual data than this one, and as we’ll see, that is really saying something.

This graph is about the number of components on an integrated circuit. An integrated circuit is made through a process that is like printing. Light is projected onto a thin wafer of silicon in a number of different patterns, while different gases fill the chamber in which it is held. The different gases cause different light activated chemical processes to happen on the surface of the wafer, sometimes depositing some types of material, and sometimes etching material away. With precise masks to pattern the light, and precise control over temperature and duration of exposures, a physical two dimensional electronic circuit can be printed. The circuit has transistors, resistors, and other components. Lots of them might be made on a single wafer at once, just as lots of letters are printed on a single page at once. The yield is how many of those circuits are functional; small alignment or timing errors in production can screw up some of the circuits in any given print. Then the silicon wafer is cut up into pieces, each containing one of the circuits, and each is put inside its own plastic package with little legs sticking out as the connectors. If you have looked at a circuit board made in the last forty years you have seen it populated with lots of integrated circuits.

The number of components in a single integrated circuit is important. Since the circuit is printed it involves no manual labor, unlike earlier electronics where every single component had to be placed and attached by hand. Now a complex circuit which involves multiple integrated circuits only requires hand construction (later this too was largely automated) to connect up a much smaller number of components. And as long as one has a process which gets good yield, it is constant time to build a single integrated circuit, regardless of how many components are in it. That means fewer total integrated circuits that need to be connected by hand or machine. So, as Moore’s paper’s title references, “cramming” more components into a single integrated circuit is a really good idea.

The graph plots the logarithm base two of the number of components in an integrated circuit on the vertical axis against calendar years on the horizontal axis. Every notch upwards on the left doubles the number of components. So while 3 on that axis means 2^3 = 8 components, 13 means 2^13 = 8,192 components. That is a thousand fold increase from 1962 to 1972.

There are two important things to note here.

The first is that he is talking about components on an integrated circuit, not just the number of transistors. Generally there are many more components than transistors, though the ratio did drop over time as different fundamental sorts of transistors were used. But in later years Moore’s Law was often turned into purely a count of transistors.

The other thing is that there are only four real data points here in this graph which he published in 1965. In 1959 the number of components is 2^0 = 1, i.e., that is not about an integrated circuit at all, just about single circuit elements; integrated circuits had not yet been invented. So this is a null data point. Then he plots four actual data points, which we assume were taken from what Fairchild could produce, for 1962, 1963, 1964, and 1965, having 8, 16, 32, and 64 components. That is a doubling every year. It is an exponential increase in the true sense of exponential.
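That annual doubling, and the extrapolation of it out ten years, is easy to verify. A quick sketch, using the component counts quoted above:

```python
import math

# Fairchild's component counts from Moore's 1965 graph: doubling each year.
data = {1962: 8, 1963: 16, 1964: 32, 1965: 64}

# log2 of each count is the "notch" on Moore's vertical axis: 3, 4, 5, 6.
for year, components in sorted(data.items()):
    print(year, components, math.log2(components))

# Extrapolating the same doubling out to 1975, as Moore's paper did:
print(64 * 2 ** (1975 - 1965))  # 65536 components
```

Ten more doublings on top of 1965’s 64 components gives 65,536, which is the 2^16 that Moore’s extrapolated line reaches in 1975.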

What is the mechanism for this, how can this work? It works because it is in the digital domain, the domain of yes or no, the domain of 0 or 1.

In the last half page of the four and a half page article Moore explains the limitations of his prediction, saying that for some things, like energy storage, we will not see his predicted trend. Energy takes up a certain number of atoms and their electrons to store a given amount, so you can not just arbitrarily change the number of atoms and still store the same amount of energy. Likewise if you have a half gallon milk container you can not put a gallon of milk in it.

But the fundamental digital abstraction is yes or no. A circuit element in an integrated circuit just needs to know whether a previous element said yes or no, whether there is a voltage or current there or not. In the design phase one decides above how many volts or amps, or whatever, means yes, and below how many means no. And there needs to be a good separation between those numbers, a significant no man’s land compared to the maximum and minimum possible. But, the magnitudes do not matter.

I like to think of it like piles of sand. Is there a pile of sand on the table or not? We might have a convention about how big a typical pile of sand is. But we can make it work if we halve the normal size of a pile of sand. We can still answer whether or not there is a pile of sand there using just half as many grains of sand in a pile.

And then we can halve the number again. And the digital abstraction of yes or no still works. And we can halve it again, and it still works. And again, and again, and again.

This is what drives Moore’s Law, which in its original form said that we could expect to double the number of components on an integrated circuit every year for 10 years, from 1965 to 1975. That held up!

Variations of Moore’s Law followed; they were all about doubling, but sometimes doubling different things, and usually with slightly longer time constants for the doubling. The most popular versions were doubling of the number of transistors, doubling of the switching speed of those transistors (so a computer could run twice as fast), doubling of the amount of memory on a single chip, and doubling of the secondary memory of a computer: originally on mechanically spinning disks, but for the last five years in solid state flash memory. And there were many others.

Let’s get back to Moore’s original law for a moment. The components on an integrated circuit are laid out on a two dimensional wafer of silicon. So to double the number of components for the same amount of silicon you need to double the number of components per unit area. That means that the size of a component, in each linear dimension of the wafer, needs to go down by a factor of √2. In turn, that means that Moore was seeing the linear dimension of each component go down to 71% of what it was in a year, year over year.
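The geometry here can be checked directly: doubling the components per unit area means each linear dimension shrinks by a factor of the square root of two, i.e., to roughly 71% of its previous size. A minimal check:

```python
import math

# Doubling component density means each linear feature dimension
# shrinks by a factor of sqrt(2), i.e. to about 70.7% of its size.
shrink = 1 / math.sqrt(2)
print(round(shrink * 100, 1))  # 70.7 (percent)

# Check: shrinking both linear dimensions by sqrt(2) doubles the density.
density_gain = (1 / shrink) ** 2
print(round(density_gain, 6))  # 2.0
```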

But why was it limited to just a measly factor of two per year? Given the pile of sand analogy from above, why not just go to a quarter of the size of a pile of sand each year, or one sixteenth? It gets back to the yield one gets, the number of working integrated circuits, as you reduce the component size (most commonly called feature size). As the feature size gets smaller, the alignment of the projected patterns of light for each step of the process needs to get more accurate. Since √2 ≈ 1.41, it needs to get better by about 41% as you shrink the feature size by that factor. And because of impurities in the materials that are printed on the circuit, the material from the gasses that are circulating and that are activated by light, the gas needs to get more pure, so that there are fewer bad atoms in each component, now half the area of before. Implicit in Moore’s Law, in its original form, was the idea that we could expect the production equipment to get better by about 41% per year, for 10 years.

For various forms of Moore’s Law that came later, the time constant stretched out to 2 years, or even a little longer, for a doubling, but nevertheless the processing equipment has gotten that much better, time period over time period, again and again.

To see the magic of how this works, let’s just look at 25 doublings. The equipment has to operate with things 2^12.5 times smaller, i.e., roughly 5,793 times smaller. But we can fit 2^25 more components in a single circuit, which is 33,554,432 times more. The accuracy of our equipment has improved 5,793 times, but that has gotten a further acceleration of 5,793 on top of the original 5,793 times due to the linear to area impact. That is where the payoff of Moore’s Law has come from.
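Those numbers follow directly from the doubling arithmetic; a quick verification:

```python
import math

doublings = 25

# Linear accuracy must improve by sqrt(2) per doubling: 2**12.5 overall.
linear_improvement = math.sqrt(2) ** doublings
print(round(linear_improvement))  # 5793

# Component count gains a factor of 2 per doubling.
component_gain = 2 ** doublings
print(component_gain)  # 33554432

# The area payoff is the square of the linear improvement.
assert round(linear_improvement ** 2) == component_gain
```

So a roughly 5,793-fold improvement in equipment accuracy buys a 33-million-fold increase in components, because the improvement applies in both linear dimensions at once.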

In his original paper Moore only dared project out, and only implicitly, that the equipment would get better every year for ten years. In reality, with somewhat slowing time constants, that has continued to happen for 50 years.

Now it is coming to an end. But not because the accuracy of the equipment needed to give good yields has stopped improving. No. Rather it is because those piles of sand we referred to above have gotten so small that they only contain a single metaphorical grain of sand. We can’t split the minimal quantum of a pile into two any more.

Perhaps the most remarkable thing is Moore’s foresight into how this would have an incredible impact upon the world. Here is the first sentence of his second paragraph:

Integrated circuits will lead to such wonders as home computers (or at least terminals connected to a central computer), automatic controls for automobiles, and personal portable communications equipment.

This was radical stuff in 1965. So-called minicomputers were still the size of a desk, and to be useful usually had a few peripherals such as tape units, card readers, or printers, that meant they would be hard to fit into a home kitchen of the day, even with the refrigerator, oven, and sink removed. Most people had never seen a computer and even fewer had interacted with one, and those who had, had mostly done it by dropping off a deck of punched cards, and a day later picking up a printout from what the computer had done when humans had fed the cards to the machine.

The electrical systems of cars were unbelievably simple by today’s standards, with perhaps half a dozen on-off switches, and simple electromechanical devices to drive the turn indicators, windshield wipers, and the distributor which timed the firing of the spark plugs; every single function producing piece of mechanism in auto electronics was big enough to be seen with the naked eye. And personal communications devices were rotary dial phones, one per household, firmly plugged into the wall at all times. Or handwritten letters that needed to be dropped into the mailbox.

That sentence quoted above, given when it was made, is to me the bravest and most insightful prediction of technology future that we have ever seen.

By the way, the first computer made from integrated circuits was the guidance computer for the Apollo missions, one in the Command Module, and one in the Lunar Lander. The integrated circuits were made by Fairchild, Gordon Moore’s company. The first version had 4,100 integrated circuits, each implementing a single 3 input NOR gate. The more capable manned flight versions, which first flew in 1968, had only 2,800 integrated circuits, each implementing two 3 input NOR gates. Moore’s Law had its impact on getting to the Moon, even in the Law’s infancy.

In the original magazine article this cartoon appears:

At a fortieth anniversary of Moore’s Law at the Chemical Heritage Foundation in Philadelphia I asked Dr. Moore whether this cartoon had been his idea. He replied that he had nothing to do with it, and it was just there in the magazine in the middle of his article, to his surprise.

Without any evidence at all on this, my guess is that the cartoonist was reacting somewhat skeptically to the sentence quoted above. The cartoon is set in a department store, as back then US department stores often had a Notions department, although this was not something of which I have any personal experience as they are long gone (and I first set foot in the US in 1977). It seems that notions is another word for haberdashery, i.e., pins, cotton, ribbons, and generally things used for sewing. As still today, there is also a Cosmetics department. And plop in the middle of them is the “Handy Home Computers” department, with the salesman holding a computer in his hand.

I am guessing that the cartoonist was making fun of this idea, trying to point out the ridiculousness of it. It all came to pass in only 25 years, including being sold in department stores. Not too far from the cosmetics department. But the notions departments had all disappeared. The cartoonist was right in the short term, but blew it in the slightly longer term.

There were many variations on Moore’s Law, not just his original about the number of components on a single chip.

Amongst the many there was a version of the law about how fast circuits could operate, as the smaller the transistors were the faster they could switch on and off. There were versions of the law for how much RAM memory, main memory for running computer programs, there would be and when. And there were versions of the law for how big and fast disk drives, for file storage, would be.

This tangle of versions of Moore’s Law had a big impact on how technology developed. I will discuss three modes of that impact: competition, coordination, and herd mentality in computer design.

Competition

Memory chips are where data and programs are stored as they are run on a computer. Moore’s Law applied to the number of bits of memory that a single chip could store, and a natural rhythm developed of that number of bits going up by a multiple of four on a regular but slightly slowing basis. By jumping over just a doubling, the cost of the silicon foundries could be depreciated over a long enough time to keep things profitable (today a silicon foundry is about a $7B capital cost!), and furthermore it made sense to double the number of memory cells in each dimension to keep the designs balanced, again pointing to a step factor of four.

In the very early days of desktop PCs memory chips had 2^14 bits. The memory chips were called RAM (Random Access Memory; i.e., any location in memory took equally long to access, there were no slower or faster places), and a chip of this size was called a 16K chip, where K means not exactly 1,000, but instead 1,024 (which is 2^10). Many companies produced 16K RAM chips. But they all knew from Moore’s Law when the market would be expecting 64K RAM chips to appear. So they knew what they had to do to not get left behind, and they knew when they had to have samples ready for engineers designing new machines so that just as the machines came out their chips would be ready to be used, having been designed in. And they could judge when it was worth getting just a little ahead of the competition at what price. Everyone knew the game (and in fact all came to a consensus agreement on when the Moore’s Law clock should slow down just a little), and they all competed on operational efficiency.
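The factor-of-four rhythm is easy to sketch; the 256K and 1M endpoints below are my extension of the same rhythm, not figures from the text:

```python
# RAM generations stepped by a factor of four in bits per chip.
# "16K" means 2**14 bits: K here is 1,024 (2**10), and 16 = 2**4.
k = 1024
generations = [16 * k]          # start at the 16K chip
for _ in range(3):              # three more factor-of-four steps
    generations.append(generations[-1] * 4)

print([g // k for g in generations])  # [16, 64, 256, 1024] i.e. 16K, 64K, 256K, 1M
```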

Coordination

Technology Review talks about this in their story on the end of Moore’s Law. If you were the designer of a new computer box for a desktop machine, or any other digital machine for that matter, you could look at when you planned to hit the market and know what amount of RAM memory would take up what board space because you knew how many bits per chip would be available at that time. And you knew how much disk space would be available at what price and what physical volume (disks got smaller and smaller diameters just as they increased the total amount of storage). And you knew how fast the latest processor chip would run. And you knew what resolution display screen would be available at what price. So a couple of years ahead you could put all these numbers together and come up with what options and configurations would make sense by the exact time when you were going to bring your new computer to market.

The company that sold the computers might make one or two of the critical chips for their products but mostly they bought other components from other suppliers. The clockwork certainty of Moores Law let them design a new product without having horrible surprises disrupt their flow and plans. This really let the digital revolution proceed. Everything was orderly and predictable so there were fewer blind alleys to follow. We had probably the single most sustained continuous and predictable improvement in any technology over the history of mankind.

Herd mentality in computer design

But with this good came some things that might be viewed negatively (though I’m sure there are some who would argue that they were all unalloyed good). I’ll take up one of these as the third thing to talk about that Moore’s Law had a major impact upon.

A particular form of general purpose computer design had arisen by the time that central processors could be put on a single chip (see the Intel 4004 below), and soon those processors on a chip, microprocessors as they came to be known, supported that general architecture. That architecture is known as the von Neumann architecture.

A distinguishing feature of this architecture is that there is a large RAM memory which holds both instructions and data, made from the RAM chips we talked about above under coordination. The memory is organized into consecutive indexable (or addressable) locations, each containing the same number of binary bits, or digits. The microprocessor itself has a few specialized memory cells, known as registers, and an arithmetic unit that can do additions, multiplications, divisions (more recently), etc. One of those specialized registers is called the program counter (PC), and it holds an address in RAM for the current instruction. The CPU looks at the pattern of bits in that current instruction location and decodes them into what actions it should perform. That might be an action to fetch another location in RAM and put it into one of the specialized registers (this is called a LOAD), or to send the contents in the other direction (STORE), or to take the contents of two of the specialized registers, feed them to the arithmetic unit, and take their sum from the output of that unit and store it in another of the specialized registers. Then the central processing unit increments its PC and looks at the next consecutive addressable instruction. Some specialized instructions can alter the PC and make the machine go to some other part of the program, and this is known as branching. For instance, if one of the specialized registers is being used to count down how many elements of an array of consecutive values stored in RAM have been added together, right after the addition instruction there might be an instruction to decrement that counting register, and then branch back earlier in the program to do another LOAD and add if the counting register is still more than zero.
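As a sketch of that fetch-decode-execute cycle, here is a toy von Neumann machine in Python. The opcode names (LOAD, STORE, ADDM, DECJNZ, HALT) are made up for illustration, not any real instruction set, but the structure follows the description above: one RAM holding both program and data, a program counter walking through consecutive addresses, and a count-down-and-branch loop.

```python
# A toy von Neumann machine: one RAM holds both the program and its data.

def run(ram):
    reg = [0, 0]              # two specialized registers
    pc = 0                    # program counter: an address in RAM
    while True:
        op, a, b = ram[pc]    # fetch the current instruction and decode it
        if op == "LOAD":      # reg[a] <- RAM[b]
            reg[a] = ram[b]
        elif op == "STORE":   # RAM[b] <- reg[a]
            ram[b] = reg[a]
        elif op == "ADDM":    # reg[a] <- reg[a] + RAM[b]
            reg[a] += ram[b]
        elif op == "DECJNZ":  # decrement reg[a]; branch to address b if nonzero
            reg[a] -= 1
            if reg[a] != 0:
                pc = b
                continue
        elif op == "HALT":
            return ram
        pc += 1               # otherwise, on to the next consecutive instruction

# Program: add RAM[9] to an accumulator RAM[8] times, i.e. compute 3 * 5.
ram = [
    ("LOAD", 0, 7),    # 0: accumulator <- 0
    ("LOAD", 1, 8),    # 1: loop counter <- 3
    ("ADDM", 0, 9),    # 2: accumulator += 5
    ("DECJNZ", 1, 2),  # 3: count down; branch back to 2 while counter > 0
    ("STORE", 0, 10),  # 4: write the result back into RAM
    ("HALT", 0, 0),    # 5: stop
    0,                 # 6: (unused)
    0, 3, 5, 0,        # 7-10: data -- initial value, count, addend, result
]
print(run(ram)[10])    # 15
```

Note that the instructions and the data live in the same `ram` list, which is exactly the dual use of memory that distinguishes this architecture.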

That’s pretty much all there is to most digital computers. The rest is just hacks to make them go faster, while still looking essentially like this model. But note that the RAM is used in two ways by a von Neumann computer: to contain data for a program and to contain the program itself. We’ll come back to this point later.

With all the versions of Moore’s Law firmly operating in support of this basic model it became very hard to break out of it. The human brain certainly doesn’t work that way, so it seems that there could be powerful other ways to organize computation. But trying to change the basic organization was a dangerous thing to do, as the inexorable march of Moore’s Law based existing architecture was going to continue anyway. Trying something new would most probably set things back a few years. So brave big scale experiments like the Lisp Machine or Connection Machine, which both grew out of the MIT Artificial Intelligence Lab (and turned into at least three different companies), and Japan’s fifth generation computer project (which played with two unconventional ideas, data flow and logical inference) all failed, as before long the Moore’s Law doubling conventional computers overtook the advanced capabilities of the new machines, and software could better emulate the new ideas.

Most computer architects were locked into the conventional organizations of computers that had been around for decades. They competed on changing the coding of the instructions to make execution of programs slightly more efficient per square millimeter of silicon. They competed on strategies to cache copies of larger and larger amounts of RAM memory right on the main processor chip. They competed on how to put multiple processors on a single chip and how to share the cached information from RAM across multiple processor units running at once on a single piece of silicon. And they competed on how to make the hardware more predictive of what future decisions would be in a running program so that they could precompute the right next computations before it was clear whether they would be needed or not. But, they were all locked in to fundamentally the same way of doing computation. Thirty years ago there were dozens of different detailed processor designs, but now they fall into only a small handful of families, the X86, the ARM, and the PowerPC. The X86s are mostly desktops, laptops, and cloud servers. The ARM is what we find in phones and tablets. And you probably have a PowerPC adjusting all the parameters of your cars engine.

The one glaring exception to the lock in caused by Moore’s Law is that of Graphical Processing Units, or GPUs. These are different from von Neumann machines. Driven by the desire for better performance for video and graphics, and in particular gaming, the main processor getting better and better under Moore’s Law was just not enough to make real time rendering perform well as the underlying simulations got better and better. In this case a new sort of processor was developed. It was not particularly useful for general purpose computations but it was optimized very well to do additions and multiplications on streams of data, which is what is needed to render something graphically on a screen. Here was a case where a new sort of chip got added into the Moore’s Law pool much later than conventional microprocessors, RAM, and disk. The new GPUs did not replace existing processors, but instead got added as partners where graphics rendering was needed. I mention GPUs here because it turns out that they are useful for another type of computation that has become very popular over the last three years, and that is being used as an argument that Moore’s Law is not over. I still think it is and will return to GPUs in the next section.

As I pointed out earlier we can not halve a pile of sand once we are down to piles that are only a single grain of sand. That is where we are now; we have gotten down to just about one grain piles of sand. Gordon Moore’s Law in its classical sense is over. See The Economist from March of last year for a typically thorough, accessible, and thoughtful report.

I earlier talked about the feature size of an integrated circuit and how, with every doubling, that size is divided by √2. By 1971 Gordon Moore was at Intel, and they released their first microprocessor on a single chip, the 4004, with 2,300 transistors on 12 square millimeters of silicon and a feature size of 10 micrometers, written 10μm. That means that the smallest distinguishable aspect of any component on the chip was 1/100th of a millimeter.

Since then the feature size has regularly been reduced by a factor of √2, or reduced to about 71% of its previous size, doubling the number of components in a given area, on a clockwork schedule. The schedule clock has however slowed down. Back in the era of Moore's original publication the clock period was a year. Now it is a little over 2 years. In the first quarter of 2017 we are expecting to see the first commercial chips in mass market products with a feature size of 10 nanometers, written 10nm. That is 1,000 times smaller than the feature size of 1971, or 20 applications of the rule over 46 years. Sometimes the jump has been a little better than √2, and so we have actually seen only 17 jumps from 10μm down to 10nm. You can see them listed in Wikipedia. In 2012 the feature size was 22nm, in 2014 it was 14nm, now in the first quarter of 2017 we are about to see 10nm shipped to end users, and it is expected that we will see 7nm in 2019 or so. There are still active areas of research working on problems that are yet to be solved to make 7nm a reality, but industry is confident that it will happen. There are predictions of 5nm by 2021, but a year ago there was still much uncertainty over whether the engineering problems necessary to do this could be solved and whether they would be economically viable in any case.
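The scaling arithmetic here is easy to check directly: starting from the 4004's 10μm features and dividing by √2 per step, about 20 steps land at 10nm. A minimal sketch:

```python
import math

# Each application of the rule divides the linear feature size by sqrt(2),
# which doubles the number of components in a fixed area.
size = 10e-6  # the 4004's feature size: 10 micrometers, in meters
steps = 0
while size > 10e-9 * 1.0001:  # until we reach roughly 10 nanometers
    size /= math.sqrt(2)
    steps += 1

print(steps)                 # 20 applications of the rule
print(round(size * 1e9, 2))  # final feature size in nanometers, just under 10
```

Twenty steps is ten area doublings, and 2^10 = 1024 ≈ 1000, which is exactly the linear ratio between 10μm and 10nm.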

Once you get down to 5nm, a feature is only about 20 silicon atoms wide. If you go much below this the material starts to be dominated by quantum effects and classical physical properties really start to break down. That is what I mean by only one grain of sand left in the pile.

Today's microprocessors have a few hundred square millimeters of silicon, and 5 to 10 billion transistors. They have a lot of extra circuitry these days to cache RAM, predict branches, etc., all to improve performance. But getting bigger and faster comes with many costs. There is heat to be dissipated from all the energy used in switching so many signals in such a small amount of time, and the time for a signal to travel from one side of the chip to the other, ultimately limited by the speed of light (in reality, signals in copper travel somewhat slower), starts to be significant. The speed of light is approximately 300,000 kilometers per second, or 300,000,000,000 millimeters per second. So light, or a signal, can travel 30 millimeters (just over an inch, about the size of a very large chip today) in no less than one over 10,000,000,000 seconds, i.e., no less than one ten billionth of a second.
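That last figure is a one-line calculation, using the approximate vacuum speed of light from the text:

```python
c = 300_000_000     # speed of light: ~300,000 km/s, in meters per second
chip_width = 0.030  # 30 millimeters, in meters

travel_time = chip_width / c
print(travel_time)  # 1e-10 seconds: one ten-billionth of a second
```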

Today's fastest processors have a clock speed of 8.760GHz, which means that by the time the signal is getting to the other side of the chip, the place it came from has moved on to the next thing to do. This makes synchronization across a single microprocessor something of a nightmare; at best a designer can know ahead of time how late different signals from different parts of the processor will be, and try to design accordingly. So rather than push clock speed further (which is also hard), and rather than make a single microprocessor bigger with more transistors to do more stuff at every clock cycle, for the last few years we have seen large chips go to multicore, with two, four, or eight independent microprocessors on a single piece of silicon.

Multicore has preserved the operations-per-second version of Moore's Law, but at the cost that a single program is not sped up by that amount: one cannot simply smear a single program across multiple processing units. For a laptop or a smartphone that is trying to do many things at once that doesn't really matter, as there are usually enough different tasks that need to be done at once that farming them out to different cores on the same chip leads to pretty full utilization. But that will not hold, except for specialized computations, when the number of cores doubles a few more times. The speedup starts to disappear as silicon is left idle because there just aren't enough different things to do.
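The disappearing speedup described here is captured by Amdahl's law (not named in the text, but the standard formalization): whatever serial fraction a workload has caps the benefit, no matter how many cores are added. A sketch:

```python
def amdahl_speedup(parallel_fraction, cores):
    # The serial part, (1 - parallel_fraction), never gets faster,
    # so speedup can never exceed 1 / (1 - parallel_fraction).
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A workload that is 90% parallelizable saturates near 10x,
# leaving the extra cores idle:
for cores in (2, 4, 8, 64, 1024):
    print(cores, round(amdahl_speedup(0.9, cores), 2))
```

Doubling from 64 to 1024 cores barely moves the result, which is the "silicon left idle" effect in the paragraph above.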

Despite the arguments that I presented a few paragraphs ago about why Moore's Law is coming to a silicon end, many people argue that it is not, because we are finding ways around those constraints of small numbers of atoms by going to multicore and GPUs. But I think that is changing the definitions too much.

Here is a recent chart that Steve Jurvetson, cofounder of the VC firm DFJ (Draper Fisher Jurvetson), posted on his Facebook page. He said it is an update of an earlier chart compiled by Ray Kurzweil.

In this case the left axis is a logarithmically scaled count of the number of calculations per second per constant dollar. So this expresses how much cheaper computation has gotten over time. In the 1940s there are specialized computers, such as the electromechanical computers built to break codes at Bletchley Park. By the 1950s they become general purpose, von Neumann style computers and stay that way until the last few points.

The last two points are both GPUs, the GTX 450 and the NVIDIA Titan X. Steve doesn't label the few points before that, but in every earlier version of the diagram that I can find on the Web (and there are plenty of them), the points beyond 2010 are all multicore. First dual cores, and then quad cores, such as Intel's quad-core i7 (and I am typing these words on a 2.9GHz version of that chip, powering my laptop).

GPUs are there, and people are excited about them, because besides graphics they happen to be very good at another very fashionable computation. Deep learning, a form of something known originally as back propagation neural networks, has had a big technological impact recently. It is what has made speech recognition so fantastically better in the last three years that Apple's Siri, Amazon's Echo, and Google Home are useful and practical programs and devices. It has also made image labeling so much better than what we had five years ago, and there is much experimentation with using networks trained on lots of road scenes as part of situational awareness for self-driving cars. For deep learning there is a training phase, usually done in the cloud, on millions of examples. That produces a few million numbers which represent the network that is learned. Then when it is time to recognize a word or label an image, that input is fed into a program simulating the network by doing millions of multiplications and additions. Coincidentally GPUs just happen to be perfect for the way these networks are structured, and so we can expect more and more of them to be built into our automobiles. Lucky break for GPU manufacturers! While GPUs can do lots of computations they don't work well on just any problem. But they are great for deep learning networks, and those are quickly becoming the flavor of the decade.
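The inference step described here, feeding an input through a learned network as nothing but multiplications and additions, can be sketched with a single toy layer (all numbers below are made up for illustration):

```python
def dense_layer(inputs, weights, biases):
    # One layer of a learned network at inference time: for each output,
    # a weighted sum (multiplications and additions), then a nonlinearity.
    outputs = []
    for w_row, b in zip(weights, biases):
        total = b + sum(x * w for x, w in zip(inputs, w_row))
        outputs.append(max(0.0, total))  # ReLU
    return outputs

# A tiny made-up "network": 3 inputs, 2 outputs.
x = [1.0, 2.0, 3.0]
w = [[0.5, -1.0, 0.25],
     [1.0, 0.0, -0.5]]
b = [1.0, -0.2]
print(dense_layer(x, w, b))  # [0.25, 0.0]
```

A real network stacks many such layers with millions of weights, which is why hardware optimized for streamed multiply-adds fits so well.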

So while it is right to claim that we continue to see exponential growth, as in the chart above, exactly what is being measured has changed. That is a bit of a sleight of hand.

And I think that change will have big implications.

I think the end of Moore's Law, as I have defined the end, will bring about a golden new era of computer architecture. No longer will architects need to cower at the relentless improvements that they know others will get due to Moore's Law. They will be able to take the time to try new ideas out in silicon, now safe in the knowledge that a conventional computer architecture will not be able to do the same thing in just two or four years in software. And the new things they do may not be about speed. They might be about making computation better in other ways.

Machine learning runtime

We are seeing this with GPUs as runtime engines for deep learning networks. But we are also seeing some more specific architectures. For instance, for about a year Google has had their own chips called Tensor Processing Units (TPUs) that save power for deep learning networks by effectively reducing the number of significant digits that are kept around, since neural networks work quite well at low precision. Google has placed many of these chips in the computers in their server farms, or cloud, and is able to use learned networks in various search queries, at higher speed and lower electrical power consumption.
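The low-precision idea can be shown with a toy quantizer (an illustrative sketch, not Google's actual TPU arithmetic): weights are rounded to a coarse grid of small integers, and the worst-case error stays below half the grid spacing.

```python
def quantize(values, scale):
    # Round each value to the nearest multiple of `scale`,
    # storing it as a small integer: fewer significant digits kept around.
    return [round(v / scale) for v in values]

def dequantize(ints, scale):
    return [i * scale for i in ints]

weights = [0.731, -0.214, 0.055, -0.992]
scale = 1.0 / 127  # an int8-style grid, chosen here for illustration

q = quantize(weights, scale)
restored = dequantize(q, scale)
worst_error = max(abs(a - b) for a, b in zip(weights, restored))

print(q)                        # [93, -27, 7, -126]
print(worst_error < scale / 2)  # True: rounding loses very little
```

Small-integer multiply-adds take far less silicon and energy than full floating point, which is where the power savings come from.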

Special purpose silicon

Typical mobile phone chips now have four ARM processor cores on a single piece of silicon, plus some highly optimized special purpose processors on that same piece of silicon. These processors manage data flowing from cameras and optimize speech quality, and on some chips there is even a special highly optimized processor for detecting human faces. That is used in the camera application (you've probably noticed little rectangular boxes around people's faces as you are about to take a photograph) to decide what regions in an image should be most in focus and have the best exposure timing: the faces!

New general purpose approaches

We are already seeing the rise of special purpose architectures for very specific computations. But perhaps we will also see more general purpose architectures, with a different style of computation, making a comeback.

Conceivably the dataflow and logic models of the Japanese fifth generation computer project might now be worth exploring again. But as we digitize the world, the cost of bad computer security will threaten our very existence. So perhaps, if things work out, the unleashed computer architects can slowly start to dig us out of our current deplorable situation.

Secure computing

We all hear about cyber hackers breaking into computers, often half a world away, or sometimes now in a computer controlling the engine, and soon everything else, of a car as it drives by. How can this happen?

Cyber hackers are creative but many ways that they get into systems are fundamentally through common programming errors in programs built on top of the von Neumann architectures we talked about before.

A common case is exploiting something known as buffer overrun. A fixed size piece of memory is reserved to hold, say, the web address that one can type into a browser, or the Google query box. If all programmers wrote very careful code, then when someone typed in way too many characters, those past the limit would not get stored in RAM at all. But all too often a programmer has used a coding trick that is simple and quick to produce but does not check for overrun, and the typed characters get put into memory way past the end of the buffer, perhaps overwriting some code that the program might jump to later. This relies on the feature of von Neumann architectures that data and programs are stored in the same memory. So, if the hacker chooses some characters whose binary codes correspond to instructions that do something malicious to the computer, say setting up an account for them with a particular password, then later, as if by magic, the hacker will have a remotely accessible account on the computer, just as many other humans and programs legitimately do. Programmers shouldn't oughta make this mistake, but history shows that it happens again and again.
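The mechanics can be shown with a toy model (a hypothetical flat memory, not a real exploit): because code and data share one memory, an unchecked copy past a buffer's end overwrites whatever sits next, including an "instruction" the machine will act on later.

```python
# A toy von Neumann memory: instructions and data live in one flat array.
memory = [0] * 8
BUFFER_START, BUFFER_SIZE = 0, 4  # a 4-byte input buffer
NEXT_INSTRUCTION = 4              # a "code" cell sitting right after it

def careless_copy(data):
    # The bug: nothing checks len(data) against BUFFER_SIZE.
    for i, byte in enumerate(data):
        memory[BUFFER_START + i] = byte

memory[NEXT_INSTRUCTION] = 1      # opcode 1: the benign, intended behavior
careless_copy([7, 7, 7, 7, 99])   # five bytes into a four-byte buffer
print(memory[NEXT_INSTRUCTION])   # 99: an attacker-chosen "instruction"
```

One added bounds check in `careless_copy` would stop this, which is exactly the check harried programmers keep omitting.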

Another common way in is that in modern web services, the browser on a laptop, tablet, or smartphone and the computers in the cloud sometimes need to pass really complex things between them. Rather than the programmer having to know in advance all those complex possible things and handle messages for them, it is set up so that one or both sides can pass little bits of source code back and forth and execute them on the other computer. In this way capabilities that were never originally conceived of can start working later on in an existing system, without having to update the applications. It is impossible to be sure that a piece of code won't do certain things, so if the programmer decided to give a fully general capability through this mechanism, there is no way for the receiving machine to know ahead of time that the code is safe and won't do something malicious (this is a generalization of the halting problem; I could go on and on, but I won't here). So sometimes a cyber hacker can exploit this weakness and send a little bit of malicious code directly to some service that accepts code.

Beyond that, cyber hackers are always coming up with new inventive ways in; these have just been two examples to illustrate a couple of ways in which it is currently done.

It is possible to write code that protects against many of these problems, but code writing is still a very human activity, and there are just too many human-created holes that can leak, from too many code writers. One way to combat this is to have extra silicon that hides some of the low level possibilities of a von Neumann architecture from programmers, by only giving the instructions in memory a more limited set of possible actions.

This is not a new idea. Most microprocessors have some version of protection rings which let more and more untrusted code only have access to more and more limited areas of memory, even if they try to access it with normal instructions. This idea has been around a long time but it has suffered from not having a standard way to use or implement it, so most software, in an attempt to be able to run on most machines, usually only specifies two or at most three rings of protection. That is a very coarse tool and lets too much through. Perhaps now the idea will be thought about more seriously in an attempt to get better security when just making things faster is no longer practical.
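The ring idea reduces to a simple comparison; a minimal sketch (the resource names and ring numbers here are hypothetical):

```python
# Ring 0 is most trusted; larger ring numbers are less trusted.
# A resource tagged with ring N may only be touched by code
# running at ring N or lower (i.e. at least as trusted).
REQUIRED_RING = {"page_tables": 0, "device_driver": 1, "user_heap": 3}

def may_access(code_ring, resource):
    return code_ring <= REQUIRED_RING[resource]

print(may_access(3, "user_heap"))    # True: user code, user memory
print(may_access(3, "page_tables"))  # False: user code, kernel state
print(may_access(0, "user_heap"))    # True: ring 0 can touch anything
```

With only two or three rings in actual use, everything untrusted lands in the same bucket, which is why the text calls it a coarse tool.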

Another idea, which has mostly only been implemented in software, with perhaps one or two exceptions, is called capability based security, through capability based addressing. Programs are not given direct access to regions of memory they need to use, but instead are given unforgeable, cryptographically sound reference handles, along with a defined subset of things they are allowed to do with the memory. Hardware architects might now have the time to push through on making this approach completely enforceable, getting it right once in hardware so that mere human programmers, pushed to get new software out on a promised release date, cannot screw things up.
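A software sketch of the idea, assuming an HMAC stands in for whatever secret state real hardware would hold: a handle names a region plus a granted subset of rights, and cannot be forged or widened without the key. All names here are hypothetical.

```python
import hashlib
import hmac

SECRET = b"key held by the hardware"  # hypothetical; programs never see it

def grant(region, rights):
    # Issue a handle for `region` with a defined subset of rights
    # (e.g. "r" for read-only); the tag makes the handle unforgeable.
    msg = f"{region}:{rights}".encode()
    return (region, rights, hmac.new(SECRET, msg, hashlib.sha256).hexdigest())

def access(handle, requested):
    region, rights, tag = handle
    msg = f"{region}:{rights}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        raise PermissionError("forged or tampered handle")
    if requested not in rights:
        raise PermissionError("operation was never granted")
    return f"{requested} on {region}: ok"

h = grant("buffer42", "r")  # a read-only capability
print(access(h, "r"))       # allowed
# access(h, "w")                   # raises PermissionError: never granted
# access((h[0], "rw", h[2]), "w")  # raises: widened rights fail the check
```

Done in hardware, the checks in `access` would run on every memory reference, so a programming mistake could not escalate into arbitrary access.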

From one point of view, the Lisp Machines that I talked about earlier were built on a very specific and limited version of a capability based architecture. Underneath it all, those machines were von Neumann machines, but the instructions they could execute were deliberately limited. Through the use of something called typed pointers, at the hardware level, every reference to every piece of memory came with restrictions on what instructions could do with that memory, based on the type encoded in the pointer. And memory could only be referenced by a pointer to the start of a chunk of memory of a fixed size, set at the time the memory was reserved. So in the buffer overrun case, a buffer for a string of characters would not allow data to be written to or read from beyond the end of it. And instructions could only be referenced from another type of pointer, a code pointer. The hardware kept the general purpose memory partitioned at a very fine grain by the type of pointers granted to it when reserved. And to a first approximation the type of a pointer could never be changed, nor could the actual address in RAM be seen by any instructions that had access to a pointer.

There have been ideas out there for a long time on how to improve security through this use of hardware restrictions on the general purpose von Neumann architecture. I have talked about a few of them here. Now I think we can expect this to become a much more compelling place for hardware architects to spend their time, as the security of our computational systems becomes a major Achilles heel for the smooth running of our businesses, our lives, and our society.

Quantum computers

Quantum computers are at this time a largely experimental and very expensive technology. Between the need to cool them to physics-experiment-level ultracold temperatures, with the expense that entails, and the confusion over how much speedup they might give over conventional silicon based computers and for what class of problem, they are a large investment, high risk research topic at this time. I won't go into all the arguments (I haven't read them all, and frankly I do not have the expertise that would make me confident in any opinion I might form), but Scott Aaronson's blog on computational complexity and quantum computation is probably the best source for those interested. Claims on speedups either achieved or hoped to be achieved on practical problems range from a factor of 1 to thousands (and I might have that upper bound wrong). In the old days just waiting 10 or 20 years would let Moore's Law get you there. Instead we have seen well over a decade of sustained investment in a technology that people are still arguing over whether it can ever work. To me this is yet more evidence that the end of Moore's Law is encouraging new investment and new explorations.

Unimaginable stuff

Even with these various innovations around, triggered by the end of Moore's Law, the best things we might see may not yet be in the common consciousness. I think the freedom to innovate, without the overhang of Moore's Law, the freedom to take time to investigate curious corners, may well lead to a new garden of Eden in computational models. Five to ten years from now we may see a completely new form of computer arrangement, in traditional silicon (not quantum), that is doing things, and doing them faster, than we can today imagine. And with a further thirty years of development those chips might be doing things that would today be indistinguishable from magic, just as today's smartphone would have seemed like utter magic to the me of 50 years ago.

Many times the popular press, or people who should know better, refer to something that is increasing a lot as exponential. Something is only truly exponential if there is a constant ratio in size between any two points in time separated by the same amount. Here the ratio is 2, for any two points a year apart. The misuse of the term exponential growth is widespread and makes me cranky.
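The definition is easy to test numerically: an exponential sequence has a single ratio between evenly spaced points, while merely fast growth does not.

```python
exponential = [2 ** t for t in range(10)]   # doubles every step
quadratic = [t * t + 1 for t in range(10)]  # grows a lot, but not exponential

exp_ratios = {exponential[t + 1] / exponential[t] for t in range(9)}
quad_ratios = {quadratic[t + 1] / quadratic[t] for t in range(9)}

print(exp_ratios)        # {2.0}: one constant ratio, so truly exponential
print(len(quad_ratios))  # many distinct ratios: not exponential
```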

Why the Chemical Heritage Foundation for this celebration? Both of Gordon Moore's degrees (BS and PhD) were in physical chemistry!

For those who read my first blog, once again see Roy Amara's Law.

I had been a post-doc at the MIT AI Lab and loved using Lisp Machines there, but when I left and joined the faculty at Stanford in 1983, I realized that the more conventional Sun workstations, being developed there and at the spin-off company Sun Microsystems, would win out in performance very quickly. So I built a software based Lisp system (which I called TAIL (Toy AI Language) in a nod to the naming conventions of most software at the Stanford Artificial Intelligence Lab, e.g., BAIL, FAIL, SAIL, MAIL) that ran on the early Sun workstations, which themselves used completely generic microprocessors. By mid 1984 Richard Gabriel, I, and others had started a company called Lucid in Palo Alto to compete on conventional machines with the Lisp Machine companies. We used my Lisp compiler as a stopgap, but as is often the case with software, that was still the compiler used by Lucid eight years later, when it ran on 19 different makes of machines. I had moved back to MIT to join the faculty in late 1984, and eventually became the director of the Artificial Intelligence Lab there (and then CSAIL). But for eight years, while teaching computer science and developing robots by day, I also at night developed and maintained my original compiler as the workhorse of Lucid Lisp. Just as the Lisp Machine companies got swept away, so too eventually did Lucid. Whereas the Lisp Machine companies got swept away by Moore's Law, Lucid got swept away as the fashion in computer languages shifted to a winner-take-all world, for many years, of C.

Full disclosure. DFJ is one of the VCs who have invested in my company Rethink Robotics.

Excerpt from:

The End of Moore's Law, by Rodney Brooks

Libertarianism – Wikipedia


Libertarianism (from Latin: libertas, meaning “freedom”) is a collection of political philosophies and movements that uphold liberty as a core principle.[1] Libertarians seek to maximize political freedom and autonomy, emphasizing freedom of choice, voluntary association, and individual judgment.[2][3][4] Libertarians share a skepticism of authority and state power, but they diverge on the scope of their opposition to existing political and economic systems. Various schools of libertarian thought offer a range of views regarding the legitimate functions of state and private power, often calling for the restriction or dissolution of coercive social institutions.[5]

Left-libertarian ideologies seek to abolish capitalism and private ownership of the means of production, or else to restrict their purview or effects, in favor of common or cooperative ownership and management, viewing private property as a barrier to freedom and liberty.[6][7][8][9] In contrast, modern right-libertarian ideologies, such as minarchism and anarcho-capitalism, instead advocate laissez-faire capitalism and strong private property rights,[10] such as in land, infrastructure, and natural resources.

The first recorded use of the term “libertarian” was in 1789, when William Belsham wrote about libertarianism in the context of metaphysics.[11]

“Libertarian” came to mean an advocate or defender of liberty, especially in the political and social spheres, as early as 1796, when the London Packet printed on 12 February: “Lately marched out of the Prison at Bristol, 450 of the French Libertarians”.[12] The word was again used in a political sense in 1802 in a short piece critiquing a poem by “the author of Gebir” and has since been used with this meaning.[13][14][15]

The use of the word “libertarian” to describe a new set of political positions has been traced to the French cognate, libertaire, coined in a letter French libertarian communist Joseph Déjacque wrote to mutualist Pierre-Joseph Proudhon in 1857.[16][17][18] Déjacque also used the term for his anarchist publication Le Libertaire: Journal du Mouvement Social, which was printed from 9 June 1858 to 4 February 1861 in New York City.[19][20] In the mid-1890s, Sébastien Faure began publishing a new Le Libertaire while France's Third Republic enacted the lois scélérates (“villainous laws”), which banned anarchist publications in France. Libertarianism has frequently been used as a synonym for anarchism since this time.[21][22][23]

The term “libertarianism” was first used in the United States as a synonym for classical liberalism in May 1955 by writer Dean Russell, a colleague of Leonard Read and a classical liberal himself. He justified the choice of the word as follows: “Many of us call ourselves ‘liberals.’ And it is true that the word ‘liberal’ once described persons who respected the individual and feared the use of mass compulsions. But the leftists have now corrupted that once-proud term to identify themselves and their program of more government ownership of property and more controls over persons. As a result, those of us who believe in freedom must explain that when we call ourselves liberals, we mean liberals in the uncorrupted classical sense. At best, this is awkward and subject to misunderstanding. Here is a suggestion: Let those of us who love liberty trade-mark and reserve for our own use the good and honorable word ‘libertarian'”.[24]

Subsequently, a growing number of Americans with classical liberal beliefs in the United States began to describe themselves as “libertarian”. The person most responsible for popularizing the term “libertarian” was Murray Rothbard,[25] who started publishing libertarian works in the 1960s.

Libertarianism in the United States has been described as conservative on economic issues and liberal on personal freedom[26] (for common meanings of conservative and liberal in the United States) and it is also often associated with a foreign policy of non-interventionism.[27][28]

Although the word “libertarian” has been used to refer to socialists internationally, its meaning in the United States has deviated from its political origins.[29][30]

There is contention about whether left and right libertarianism “represent distinct ideologies as opposed to variations on a theme”.[31] All libertarians begin with a conception of personal autonomy from which they argue in favor of civil liberties and a reduction or elimination of the state.

Left-libertarianism encompasses those libertarian beliefs that claim the Earth’s natural resources belong to everyone in an egalitarian manner, either unowned or owned collectively. Contemporary left-libertarians such as Hillel Steiner, Peter Vallentyne, Philippe Van Parijs, Michael Otsuka and David Ellerman believe the appropriation of land must leave “enough and as good” for others or be taxed by society to compensate for the exclusionary effects of private property. Libertarian socialists (social and individualist anarchists, libertarian Marxists, council communists, Luxemburgists and DeLeonists) promote usufruct and socialist economic theories, including communism, collectivism, syndicalism and mutualism. They criticize the state for being the defender of private property and believe capitalism entails wage slavery.

Right-libertarianism[32] developed in the United States in the mid-20th century and is the most popular conception of libertarianism in that region.[33] It is commonly referred to as a continuation or radicalization of classical liberalism.[34][35] Right-libertarians, while often sharing left-libertarians’ advocacy for social freedom, also value the social institutions that enforce conditions of capitalism, while rejecting institutions that function in opposition to these on the grounds that such interventions represent unnecessary coercion of individuals and abrogation of their economic freedom.[36] Anarcho-capitalists[37][38] seek complete elimination of the state in favor of privately funded security services while minarchists defend “night-watchman states”, which maintain only those functions of government necessary to maintain conditions of capitalism and personal security.

Anarchism envisages freedom as a form of autonomy,[39] which Paul Goodman describes as “the ability to initiate a task and do it one’s own way, without orders from authorities who do not know the actual problem and the available means”.[40] All anarchists oppose political and legal authority, but collectivist strains also oppose the economic authority of private property.[41] These social anarchists emphasize mutual aid, whereas individualist anarchists extoll individual sovereignty.[42]

Some right-libertarians consider the non-aggression principle (NAP) to be a core part of their beliefs.[43][44]

Libertarians have been advocates and activists of civil liberties, including free love and free thought.[45][46] Advocates of free love viewed sexual freedom as a clear, direct expression of individual sovereignty and they particularly stressed women’s rights as most sexual laws discriminated against women: for example, marriage laws and anti-birth control measures.[47]

Free love appeared alongside anarcha-feminism and advocacy of LGBT rights. Anarcha-feminism developed as a synthesis of radical feminism and anarchism and views patriarchy as a fundamental manifestation of compulsory government. It was inspired by the late-19th-century writings of early feminist anarchists such as Lucy Parsons, Emma Goldman, Voltairine de Cleyre and Virginia Bolten. Anarcha-feminists, like other radical feminists, criticise and advocate the abolition of traditional conceptions of family, education and gender roles. Free Society (1895–1897 as The Firebrand, 1897–1904 as Free Society) was an anarchist newspaper in the United States that staunchly advocated free love and women's rights, while criticizing “comstockery”, the censorship of sexual information.[48] In recent times, anarchism has also voiced opinions and taken action around certain sex-related subjects such as pornography,[49] BDSM[50] and the sex industry.[50]

Free thought is a philosophical viewpoint that holds opinions should be formed on the basis of science, logic and reason in contrast with authority, tradition or other dogmas.[51][52] In the United States, free thought was an anti-Christian, anti-clerical movement whose purpose was to make the individual politically and spiritually free to decide on religious matters. A number of contributors to Liberty were prominent figures in both free thought and anarchism. In 1901, Catalan anarchist and free-thinker Francesc Ferrer i Guàrdia established “modern” or progressive schools in Barcelona in defiance of an educational system controlled by the Catholic Church.[53] Fiercely anti-clerical, Ferrer believed in “freedom in education”, i.e. education free from the authority of the church and state.[54] The schools' stated goal was to “educate the working class in a rational, secular and non-coercive setting”. Later in the 20th century, Austrian Freudo-Marxist Wilhelm Reich became a consistent propagandist for sexual freedom, going as far as opening free sex-counselling clinics in Vienna for working-class patients[55] as well as coining the phrase “sexual revolution” in one of his books from the 1940s.[56] During the early 1970s, the English anarchist and pacifist Alex Comfort achieved international celebrity for writing the sex manuals The Joy of Sex and More Joy of Sex.

Most left-libertarians are anarchists and believe the state inherently violates personal autonomy: “As Robert Paul Wolff has argued, since ‘the state is authority, the right to rule’, anarchism which rejects the State is the only political doctrine consistent with autonomy in which the individual alone is the judge of his moral constraints”.[41] Social anarchists believe the state defends private property, which they view as intrinsically harmful, while market-oriented left-libertarians argue that so-called free markets actually consist of economic privileges granted by the state. These latter libertarians advocate instead for freed markets, which are freed from these privileges.[57]

There is a debate amongst right-libertarians as to whether or not the state is legitimate: while anarcho-capitalists advocate its abolition, minarchists support minimal states, often referred to as night-watchman states. Libertarians take a skeptical view of government authority.[58][unreliable source?] Minarchists maintain that the state is necessary for the protection of individuals from aggression, theft, breach of contract and fraud. They believe the only legitimate governmental institutions are the military, police and courts, though some expand this list to include fire departments, prisons and the executive and legislative branches.[59] They justify the state on the grounds that it is the logical consequence of adhering to the non-aggression principle and argue that anarchism is immoral because it implies that the non-aggression principle is optional, that the enforcement of laws under anarchism is open to competition.[citation needed] Another common justification is that private defense agencies and court firms would tend to represent the interests of those who pay them enough.[60]

Anarcho-capitalists argue that the state violates the non-aggression principle (NAP) by its nature because governments use force against those who have not stolen or vandalized private property, assaulted anyone or committed fraud.[61][62] Linda & Morris Tannehill argue that no coercive monopoly of force can arise on a truly free market and that a government’s citizenry can not desert them in favor of a competent protection and defense agency.[63]

Left-libertarians believe that neither claiming nor mixing one’s labor with natural resources is enough to generate full private property rights[64][65] and maintain that natural resources ought to be held in an egalitarian manner, either unowned or owned collectively.[66]

Right-libertarians maintain that unowned natural resources “may be appropriated by the first person who discovers them, mixes his labor with them, or merely claims them – without the consent of others, and with little or no payment to them”. They believe that natural resources are originally unowned and therefore private parties may appropriate them at will without the consent of, or owing anything to, others.[67]

Left-libertarians (social and individualist anarchists, libertarian Marxists and left-wing market anarchists) argue in favor of socialist theories such as communism, syndicalism and mutualism (anarchist economics). Daniel Guérin writes that “anarchism is really a synonym for socialism. The anarchist is primarily a socialist whose aim is to abolish the exploitation of man by man. Anarchism is only one of the streams of socialist thought, that stream whose main components are concern for liberty and haste to abolish the State”.[68]

Right-libertarians are economic liberals of either the Austrian School or Chicago school and support laissez-faire capitalism.[69]

Wage labour has long been compared by socialists and anarcho-syndicalists to slavery.[70][71][72][73] As a result, the term “wage slavery” is often utilised as a pejorative for wage labor.[74] Advocates of slavery looked upon the “comparative evils of Slave Society and of Free Society, of slavery to human Masters and slavery to Capital”[75] and proceeded to argue that wage slavery was actually worse than chattel slavery.[76] Slavery apologists like George Fitzhugh contended that workers only accepted wage labour with the passage of time, as they became “familiarized and inattentive to the infected social atmosphere they continually inhale[d]”.[75]

According to Noam Chomsky, analysis of the psychological implications of wage slavery goes back to the Enlightenment era. In his 1791 book On the Limits of State Action, classical liberal thinker Wilhelm von Humboldt explained how “whatever does not spring from a man’s free choice, or is only the result of instruction and guidance, does not enter into his very nature; he does not perform it with truly human energies, but merely with mechanical exactness” and so when the labourer works under external control “we may admire what he does, but we despise what he is”.[77] For Marxists, labour-as-commodity, which is how they regard wage labour,[78] provides an absolutely fundamental point of attack against capitalism.[79] “It can be persuasively argued”, noted philosopher John Nelson, “that the conception of the worker’s labour as a commodity confirms Marx’s stigmatization of the wage system of private capitalism as ‘wage-slavery;’ that is, as an instrument of the capitalist’s for reducing the worker’s condition to that of a slave, if not below it”.[80] That this objection is fundamental follows immediately from Marx’s conclusion that wage labour is the very foundation of capitalism: “Without a class dependent on wages, the moment individuals confront each other as free persons, there can be no production of surplus value; without the production of surplus-value there can be no capitalist production, and hence no capital and no capitalist!”.[81]

Left-libertarianism (or left-wing libertarianism) names several related but distinct approaches to political and social theory that stress both individual freedom and social equality. In its classical usage, left-libertarianism is a synonym for anti-authoritarian varieties of left-wing politics, i.e. libertarian socialism, which includes anarchism and libertarian Marxism, among others.[82][83] Left-libertarianism can also refer to political positions associated with academic philosophers Hillel Steiner, Philippe Van Parijs and Peter Vallentyne that combine self-ownership with an egalitarian approach to natural resources.[84]

While maintaining full respect for personal property, left-libertarians are skeptical of or fully against private property, arguing that neither claiming nor mixing one’s labor with natural resources is enough to generate full private property rights[85][86] and maintain that natural resources (land, oil, gold and vegetation) should be held in an egalitarian manner, either unowned or owned collectively. Those left-libertarians who support private property do so under the condition that recompense is offered to the local community.[86] Many left-libertarian schools of thought are communist, advocating the eventual replacement of money with labor vouchers or decentralized planning.

On the other hand, left-wing market anarchism, which includes Pierre-Joseph Proudhon’s mutualism and Samuel Edward Konkin III’s agorism, appeals to left-wing concerns such as egalitarianism, gender and sexuality, class, immigration and environmentalism within the paradigm of a socialist free market.[82]

Right-libertarianism (or right-wing libertarianism) refers to libertarian political philosophies that advocate negative rights, natural law and a major reversal of the modern welfare state.[87] Right-libertarians strongly support private property rights and defend market distribution of natural resources and private property.[88] This position is contrasted with that of some versions of left-libertarianism, which maintain that natural resources belong to everyone in an egalitarian manner, either unowned or owned collectively.[89] Right-libertarianism includes anarcho-capitalism and laissez-faire, minarchist liberalism.[note 1]

Elements of libertarianism can be traced as far back as the ancient Chinese philosopher Lao-Tzu and the higher-law concepts of the Greeks and the Israelites.[90][91] In 17th-century England, libertarian ideas began to take modern form in the writings of the Levellers and John Locke. In the middle of that century, opponents of royal power began to be called Whigs, or sometimes simply “opposition” or “country” (as opposed to Court) writers.[92]

During the 18th century, classical liberal ideas flourished in Europe and North America.[93][94] Libertarians of various schools were influenced by classical liberal ideas.[95] For libertarian philosopher Roderick T. Long, both libertarian socialists and libertarian capitalists “share a common – or at least an overlapping – intellectual ancestry… both claim the seventeenth century English Levellers and the eighteenth century French encyclopedists among their ideological forebears; and (also)… usually share an admiration for Thomas Jefferson[96][97][98] and Thomas Paine”.[99]

John Locke greatly influenced both libertarianism and the modern world in his writings published before and after the English Revolution of 1688, especially A Letter Concerning Toleration (1667), Two Treatises of Government (1689) and An Essay Concerning Human Understanding (1690). In the text of 1689, he established the basis of liberal political theory: that people’s rights existed before government; that the purpose of government is to protect personal and property rights; that people may dissolve governments that do not do so; and that representative government is the best form to protect rights.[100] The United States Declaration of Independence was inspired by Locke in its statement: “[T]o secure these rights, Governments are instituted among Men, deriving their just powers from the consent of the governed. That whenever any Form of Government becomes destructive of these ends, it is the Right of the People to alter or to abolish it”.[101] Nevertheless, scholar Ellen Meiksins Wood says that “there are doctrines of individualism that are opposed to Lockean individualism… and non-Lockean individualism may encompass socialism”.[102]

According to Murray Rothbard, the libertarian creed emerged from the classical liberal challenges to an “absolute central State and a king ruling by divine right on top of an older, restrictive web of feudal land monopolies and urban guild controls and restrictions”, the mercantilism of a bureaucratic warfaring state allied with privileged merchants. The object of classical liberals was individual liberty in the economy, in personal freedoms and civil liberty, separation of state and religion, and peace as an alternative to imperial aggrandizement. He cites Locke’s contemporaries, the Levellers, who held similar views. Also influential were the English “Cato’s Letters” during the early 1700s, reprinted eagerly by American colonists who already were free of European aristocracy and feudal land monopolies.[101]

In January 1776, only two years after coming to America from England, Thomas Paine published his pamphlet Common Sense calling for independence for the colonies.[103] Paine promoted classical liberal ideas in clear, concise language that allowed the general public to understand the debates among the political elites.[104] Common Sense was immensely popular in disseminating these ideas,[105] selling hundreds of thousands of copies.[106] Paine would later write the Rights of Man and The Age of Reason and participate in the French Revolution.[103] Paine’s theory of property showed a “libertarian concern” with the redistribution of resources.[107]

In 1793, William Godwin wrote a libertarian philosophical treatise, Enquiry Concerning Political Justice and its Influence on Morals and Happiness, which criticized ideas of human rights and of society by contract based on vague promises. He took classical liberalism to its logical anarchic conclusion by rejecting all political institutions, law, government and apparatus of coercion as well as all political protest and insurrection. Instead of institutionalized justice, Godwin proposed that people influence one another to moral goodness through informal reasoned persuasion, including in the associations they joined as this would facilitate happiness.[108][109]

Modern anarchism sprang from the secular or religious thought of the Enlightenment, particularly Jean-Jacques Rousseau’s arguments for the moral centrality of freedom.[110]

As part of the political turmoil of the 1790s in the wake of the French Revolution, William Godwin developed the first expression of modern anarchist thought.[111][112] According to Peter Kropotkin, Godwin was “the first to formulate the political and economical conceptions of anarchism, even though he did not give that name to the ideas developed in his work”,[113] while Godwin attached his anarchist ideas to an early Edmund Burke.[114]

Godwin is generally regarded as the founder of the school of thought known as philosophical anarchism. He argued in Political Justice (1793)[112][115] that government has an inherently malevolent influence on society and that it perpetuates dependency and ignorance. He thought that the spread of the use of reason to the masses would eventually cause government to wither away as an unnecessary force. Although he did not accord the state moral legitimacy, he was against the use of revolutionary tactics for removing the government from power. Rather, Godwin advocated its replacement through a process of peaceful evolution.[112][116]

His aversion to the imposition of a rules-based society led him to denounce, as a manifestation of the people’s “mental enslavement”, the foundations of law, property rights and even the institution of marriage. Godwin considered the basic foundations of society as constraining the natural development of individuals to use their powers of reasoning to arrive at a mutually beneficial method of social organization. In each case, government and its institutions are shown to constrain the development of our capacity to live wholly in accordance with the full and free exercise of private judgment.

In France, various anarchist currents were present during the Revolutionary period, with some revolutionaries using the term anarchiste in a positive light as early as September 1793.[117] The enragés opposed revolutionary government as a contradiction in terms. Denouncing the Jacobin dictatorship, Jean Varlet wrote in 1794 that “government and revolution are incompatible, unless the people wishes to set its constituted authorities in permanent insurrection against itself”.[118] In his “Manifesto of the Equals”, Sylvain Maréchal looked forward to the disappearance, once and for all, of “the revolting distinction between rich and poor, of great and small, of masters and valets, of governors and governed”.[118]

Libertarian socialism, libertarian communism and libertarian Marxism are all phrases which activists with a variety of perspectives have applied to their views.[119] Anarchist communist philosopher Joseph Déjacque was the first person to describe himself as a libertarian.[120] Unlike mutualist anarchist philosopher Pierre-Joseph Proudhon, he argued that “it is not the product of his or her labor that the worker has a right to, but to the satisfaction of his or her needs, whatever may be their nature”.[121][122] According to anarchist historian Max Nettlau, the first use of the term “libertarian communism” was in November 1880, when a French anarchist congress employed it to more clearly identify its doctrines.[123] The French anarchist journalist Sébastien Faure started the weekly paper Le Libertaire (The Libertarian) in 1895.[124]

Individualist anarchism refers to several traditions of thought within the anarchist movement that emphasize the individual and their will over any kinds of external determinants such as groups, society, traditions and ideological systems.[125][126] An influential form of individualist anarchism called egoism[127] or egoist anarchism was expounded by one of the earliest and best-known proponents of individualist anarchism, the German Max Stirner.[128] Stirner’s The Ego and Its Own, published in 1844, is a founding text of the philosophy.[128] According to Stirner, the only limitation on the rights of the individual is their power to obtain what they desire,[129] without regard for God, state or morality.[130] Stirner advocated self-assertion and foresaw unions of egoists, non-systematic associations continually renewed by all parties’ support through an act of will,[131] which Stirner proposed as a form of organisation in place of the state.[132] Egoist anarchists argue that egoism will foster genuine and spontaneous union between individuals.[133] Egoism has inspired many interpretations of Stirner’s philosophy. It was re-discovered and promoted by German philosophical anarchist and LGBT activist John Henry Mackay. Josiah Warren is widely regarded as the first American anarchist,[134] and the four-page weekly paper he edited during 1833, The Peaceful Revolutionist, was the first anarchist periodical published.[135] For American anarchist historian Eunice Minette Schuster, “[i]t is apparent… that Proudhonian Anarchism was to be found in the United States at least as early as 1848 and that it was not conscious of its affinity to the Individualist Anarchism of Josiah Warren and Stephen Pearl Andrews… William B. Greene presented this Proudhonian Mutualism in its purest and most systematic form”.[136] Later, Benjamin Tucker fused Stirner’s egoism with the economics of Warren and Proudhon in his eclectic and influential publication Liberty.
From these early influences, individualist anarchism in different countries attracted a small yet diverse following of bohemian artists and intellectuals,[137] free love and birth control advocates (anarchism and issues related to love and sex),[138][139] individualist naturists and nudists (anarcho-naturism),[140][141][142] free thought and anti-clerical activists[143][144] as well as young anarchist outlaws in what became known as illegalism and individual reclamation[145][146] (European individualist anarchism and individualist anarchism in France). These authors and activists included Émile Armand, Han Ryner, Henri Zisly, Renzo Novatore, Miguel Giménez Igualada, Adolf Brand and Lev Chernyi.

In 1873, the follower and translator of Proudhon, the Catalan Francesc Pi i Margall, became President of Spain with a program which sought “to establish a decentralized, or “cantonalist,” political system on Proudhonian lines”.[147] According to Rudolf Rocker, Pi i Margall had “political ideas…much in common with those of Richard Price, Joseph Priestly [sic], Thomas Paine, Jefferson, and other representatives of the Anglo-American liberalism of the first period. He wanted to limit the power of the state to a minimum and gradually replace it by a Socialist economic order”.[148] On the other hand, Fermín Salvochea was a mayor of the city of Cádiz and a president of the province of Cádiz. He was one of the main propagators of anarchist thought in that area in the late 19th century and is considered to be “perhaps the most beloved figure in the Spanish Anarchist movement of the 19th century”.[149][150] Ideologically, he was influenced by Bradlaugh, Owen and Paine, whose works he had studied during his stay in England, and by Kropotkin, whom he read later.[149]

The revolutionary wave of 1917–1923 saw the active participation of anarchists in Russia and Europe. Russian anarchists participated alongside the Bolsheviks in both the February and October 1917 revolutions. However, the Bolsheviks in central Russia quickly began to imprison or drive underground the libertarian anarchists. Many fled to Ukraine.[151] There, in the Ukrainian Free Territory, they fought in the Russian Civil War against the White movement, monarchists and other opponents of revolution, and then against the Bolsheviks as part of the Revolutionary Insurrectionary Army of Ukraine led by Nestor Makhno, who established an anarchist society in the region for a number of months. Expelled American anarchists Emma Goldman and Alexander Berkman protested Bolshevik policy before they left Russia.[152]

The victory of the Bolsheviks damaged anarchist movements internationally as workers and activists joined Communist parties. In France and the United States, for example, members of the major syndicalist movements of the CGT and IWW joined the Communist International.[153] In Paris, the Dielo Truda group of Russian anarchist exiles, which included Nestor Makhno, issued a 1926 manifesto, the Organizational Platform of the General Union of Anarchists (Draft), calling for new anarchist organizing structures.[154][155]

The Bavarian Soviet Republic of 1918–1919 had libertarian socialist characteristics.[156][157] In Italy, from 1918 to 1921, the anarcho-syndicalist trade union Unione Sindacale Italiana grew to 800,000 members.[158]

In the 1920s and 1930s, with the rise of fascism in Europe, anarchists began to fight fascists in Italy,[159] in France during the February 1934 riots[160] and in Spain, where the CNT (Confederación Nacional del Trabajo) boycott of elections led to a right-wing victory and its later participation in voting in 1936 helped bring the Popular Front back to power. This led to an attempted coup by the ruling class and the Spanish Civil War (1936–1939).[161] The Gruppo Comunista Anarchico di Firenze held that during the early twentieth century, the terms libertarian communism and anarchist communism became synonymous within the international anarchist movement as a result of the close connection they had in Spain (anarchism in Spain), with libertarian communism becoming the prevalent term.[162]

Murray Bookchin wrote that the Spanish libertarian movement of the mid-1930s was unique because its workers’ control and collectives, which came out of a three-generation “massive libertarian movement”, divided the republican camp and challenged the Marxists. “Urban anarchists” created libertarian communist forms of organization which evolved into the CNT, a syndicalist union providing the infrastructure for a libertarian society. Also formed were local bodies to administer social and economic life on a decentralized libertarian basis. Much of the infrastructure was destroyed during the Spanish Civil War of the 1930s against authoritarian and fascist forces.[163]

The Iberian Federation of Libertarian Youth[164] (FIJL, Spanish: Federación Ibérica de Juventudes Libertarias), sometimes abbreviated as Libertarian Youth (Juventudes Libertarias), was a libertarian socialist[165] organisation created in 1932 in Madrid.[166] In February 1937, the FIJL organised a plenum of regional organisations (the second congress of the FIJL). From 16 to 30 October 1938, in Barcelona, the FIJL participated in a national plenum of the libertarian movement, also attended by members of the CNT and the Iberian Anarchist Federation (FAI).[167] The FIJL still exists today. When the republican forces lost the Spanish Civil War, the city of Madrid was turned over to the Francoist forces in 1939 by the last non-Francoist mayor of the city, the anarchist Melchor Rodríguez García.[168]

During the autumn of 1931, the “Manifesto of the 30” was published by militants of the anarchist trade union CNT. Among its signatories were the CNT General Secretary (1922–1923) Joan Peiró, Ángel Pestaña (CNT General Secretary in 1929) and Juan López Sánchez. They came to be known as treintistas and called for a “libertarian possibilism” which advocated achieving libertarian socialist ends through participation within the structures of contemporary parliamentary democracy.[169] In 1932, they established the Syndicalist Party, which participated in the 1936 Spanish general elections and became part of the leftist coalition of parties known as the Popular Front, obtaining two congressmen (Pestaña and Benito Pabón). In 1938, Horacio Prieto, general secretary of the CNT, proposed that the Iberian Anarchist Federation transform itself into a “Libertarian Socialist Party” and participate in the national elections.[170]

The Manifesto of Libertarian Communism was written in 1953 by Georges Fontenis for the Fédération Communiste Libertaire of France. It is one of the key texts of the anarchist-communist current known as platformism.[171] In 1968, in Carrara, Italy, the International of Anarchist Federations was founded during an international anarchist conference to advance libertarian solidarity. It wanted to form “a strong and organised workers movement, agreeing with the libertarian ideas”.[172][173] In the United States, the Libertarian League was founded in New York City in 1954 as a left-libertarian political organisation building on the Libertarian Book Club.[174][175] Members included Sam Dolgoff,[176] Russell Blackwell, Dave Van Ronk, Enrico Arrigoni[177] and Murray Bookchin.

In Australia, the Sydney Push was a predominantly left-wing intellectual subculture in Sydney from the late 1940s to the early 1970s which became associated with the label “Sydney libertarianism”. Well known associates of the Push include Jim Baker, John Flaus, Harry Hooton, Margaret Fink, Sasha Soldatow,[178] Lex Banning, Eva Cox, Richard Appleton, Paddy McGuinness, David Makinson, Germaine Greer, Clive James, Robert Hughes, Frank Moorhouse and Lillian Roxon. Amongst the key intellectual figures in Push debates were philosophers David J. Ivison, George Molnar, Roelof Smilde, Darcy Waters and Jim Baker, as recorded in Baker’s memoir Sydney Libertarians and the Push, published in the libertarian Broadsheet in 1975.[179] An understanding of libertarian values and social theory can be obtained from their publications, a few of which are available online.[180][181]

In 1969, French platformist anarcho-communist Daniel Guérin published an essay called “Libertarian Marxism?” in which he dealt with the debate between Karl Marx and Mikhail Bakunin at the First International and afterwards suggested that “[l]ibertarian marxism rejects determinism and fatalism, giving the greater place to individual will, intuition, imagination, reflex speeds, and to the deep instincts of the masses, which are more far-seeing in hours of crisis than the reasonings of the ‘elites’; libertarian marxism thinks of the effects of surprise, provocation and boldness, refuses to be cluttered and paralysed by a heavy ‘scientific’ apparatus, doesn’t equivocate or bluff, and guards itself from adventurism as much as from fear of the unknown”.[182] Libertarian Marxist currents often draw from Marx and Engels’ later works, specifically the Grundrisse and The Civil War in France.[183] They emphasize the Marxist belief in the ability of the working class to forge its own destiny without the need for a revolutionary party or state.[184] Libertarian Marxism includes such currents as council communism, left communism, Socialisme ou Barbarie, Lettrism/Situationism, operaismo/autonomism and the New Left.[185][unreliable source?] In the United States, from 1970 to 1981 there existed the publication Root & Branch,[186] which had as a subtitle “A Libertarian Marxist Journal”.[187] In 1974, the Libertarian Communism journal was started in the United Kingdom by a group inside the Socialist Party of Great Britain.[188] In 1986, the anarcho-syndicalist Sam Dolgoff started and led the publication Libertarian Labor Review in the United States,[189] which later renamed itself Anarcho-Syndicalist Review in order to avoid confusion with right-libertarian views.[190]

The indigenous anarchist tradition in the United States was largely individualist.[191] In 1825, Josiah Warren became aware of the social system of utopian socialist Robert Owen and began to talk with others in Cincinnati about founding a communist colony.[192] When this group failed to come to an agreement about the form and goals of their proposed community, Warren “sold his factory after only two years of operation, packed up his young family, and took his place as one of 900 or so Owenites who had decided to become part of the founding population of New Harmony, Indiana”.[193] Warren coined the phrase “cost the limit of price”[194] and “proposed a system to pay people with certificates indicating how many hours of work they did. They could exchange the notes at local time stores for goods that took the same amount of time to produce”.[195] He put his theories to the test by establishing an experimental labor-for-labor store called the Cincinnati Time Store, where trade was facilitated by labor notes. The store proved successful and operated for three years, after which it was closed so that Warren could pursue establishing colonies based on mutualism, including Utopia and Modern Times. “After New Harmony failed, Warren shifted his ideological loyalties from socialism to anarchism (which was no great leap, given that Owen’s socialism had been predicated on Godwin’s anarchism)”.[196] Warren is widely regarded as the first American anarchist[195] and the four-page weekly paper The Peaceful Revolutionist he edited during 1833 was the first anarchist periodical published,[135] an enterprise for which he built his own printing press, cast his own type and made his own printing plates.[135]

Catalan historian Xavier Diez reports that the intentional communal experiments pioneered by Warren were influential on European individualist anarchists of the late 19th and early 20th centuries, such as Émile Armand, and on the intentional communities they started.[197] Warren said that Stephen Pearl Andrews, an individualist anarchist and close associate, wrote the most lucid and complete exposition of Warren’s own theories in The Science of Society, published in 1852.[198] Andrews was formerly associated with the Fourierist movement, but converted to radical individualism after becoming acquainted with the work of Warren. Like Warren, he held the principle of “individual sovereignty” as being of paramount importance. Contemporary American anarchist Hakim Bey reports:

Steven Pearl Andrews… was not a fourierist, but he lived through the brief craze for phalansteries in America and adopted a lot of fourierist principles and practices… a maker of worlds out of words. He syncretized abolitionism in the United States, free love, spiritual universalism, Warren, and Fourier into a grand utopian scheme he called the Universal Pantarchy… He was instrumental in founding several ‘intentional communities,’ including the ‘Brownstone Utopia’ on 14th St. in New York, and ‘Modern Times’ in Brentwood, Long Island. The latter became as famous as the best-known fourierist communes (Brook Farm in Massachusetts & the North American Phalanx in New Jersey) – in fact, Modern Times became downright notorious (for ‘Free Love’) and finally foundered under a wave of scandalous publicity. Andrews (and Victoria Woodhull) were members of the infamous Section 12 of the 1st International, expelled by Marx for its anarchist, feminist, and spiritualist tendencies.[199]

For American anarchist historian Eunice Minette Schuster, “[i]t is apparent… that Proudhonian Anarchism was to be found in the United States at least as early as 1848 and that it was not conscious of its affinity to the Individualist Anarchism of Josiah Warren and Stephen Pearl Andrews. William B. Greene presented this Proudhonian Mutualism in its purest and most systematic form”.[200] William Batchelder Greene was a 19th-century mutualist individualist anarchist, Unitarian minister, soldier and promoter of free banking in the United States. Greene is best known for the works Mutual Banking, which proposed an interest-free banking system; and Transcendentalism, a critique of the New England philosophical school. After 1850, he became active in labor reform.[200] “He was elected vice-president of the New England Labor Reform League, the majority of the members holding to Proudhon’s scheme of mutual banking, and in 1869 president of the Massachusetts Labor Union”.[200] Greene then published Socialistic, Mutualistic, and Financial Fragments (1875).[200] He saw mutualism as the synthesis of “liberty and order”.[200] His “associationism… is checked by individualism… ‘Mind your own business,’ ‘Judge not that ye be not judged.’ Over matters which are purely personal, as for example, moral conduct, the individual is sovereign, as well as over that which he himself produces. For this reason he demands ‘mutuality’ in marriage – the equal right of a woman to her own personal freedom and property”.[200]

Poet, naturalist and transcendentalist Henry David Thoreau was an important early influence in individualist anarchist thought in the United States and Europe. He is best known for his book Walden, a reflection upon simple living in natural surroundings; and his essay Civil Disobedience (Resistance to Civil Government), an argument for individual resistance to civil government in moral opposition to an unjust state. In Walden, Thoreau advocates simple living and self-sufficiency among natural surroundings in resistance to the advancement of industrial civilization.[201] Civil Disobedience, first published in 1849, argues that people should not permit governments to overrule or atrophy their consciences and that people have a duty to avoid allowing such acquiescence to enable the government to make them the agents of injustice. These works influenced green anarchism, anarcho-primitivism and anarcho-pacifism,[202] as well as figures including Mohandas Gandhi, Martin Luther King, Jr., Martin Buber and Leo Tolstoy.[202] “Many have seen in Thoreau one of the precursors of ecologism and anarcho-primitivism represented today in John Zerzan. For George Woodcock this attitude can be also motivated by certain idea of resistance to progress and of rejection of the growing materialism which is the nature of American society in the mid-19th century”.[201] Zerzan included Thoreau’s “Excursions” in his edited compilation of anti-civilization writings, Against Civilization: Readings and Reflections.[203] Individualist anarchists such as Thoreau[204][205] do not speak of economics, but simply the right of disunion from the state and foresee the gradual elimination of the state through social evolution. Agorist author J. Neil Schulman cites Thoreau as a primary inspiration.[206]

Many economists since Adam Smith have argued that, unlike other taxes, a land value tax would not cause economic inefficiency.[207] It would be a progressive tax[208] – primarily paid by the wealthy – and increase wages, reduce economic inequality, remove incentives to misuse real estate and reduce the vulnerability that economies face from credit and property bubbles.[209][210] Early proponents of this view include Thomas Paine, Herbert Spencer, and Hugo Grotius,[84] but the concept was widely popularized by the economist and social reformer Henry George.[211] George believed that people ought to own the fruits of their labor and the value of the improvements they make, thus he was opposed to income taxes, sales taxes, taxes on improvements and all other taxes on production, labor, trade or commerce. George was among the staunchest defenders of free markets and his book Protection or Free Trade was read into the U.S. Congressional Record.[212] Yet he did support direct management of natural monopolies as a last resort, such as right-of-way monopolies necessary for railroads. George advocated for elimination of intellectual property arrangements in favor of government sponsored prizes for inventors.[213][not in citation given] Early followers of George’s philosophy called themselves single taxers because they believed that the only legitimate, broad-based tax was land rent. The term Georgism was coined later, though some modern proponents prefer the term geoism instead,[214] leaving the meaning of “geo” (Earth in Greek) deliberately ambiguous. The terms “Earth Sharing”,[215] “geonomics”[216] and “geolibertarianism”[217] are used by some Georgists to represent a difference of emphasis, or real differences about how land rent should be spent, but all agree that land rent should be recovered from its private owners.

Individualist anarchism found in the United States an important space for discussion and development within the group known as the “Boston anarchists”.[218] Even among the 19th-century American individualists there was no monolithic doctrine and they disagreed amongst each other on various issues including intellectual property rights and possession versus property in land.[219][220][221] Some Boston anarchists, including Benjamin Tucker, identified as socialists, which in the 19th century was often used in the sense of a commitment to improving conditions of the working class (i.e. “the labor problem”).[222] Lysander Spooner, besides his individualist anarchist activism, was also an anti-slavery activist and member of the First International.[223] Tucker argued that the elimination of what he called “the four monopolies” – the land monopoly, the money and banking monopoly, the monopoly powers conferred by patents and the quasi-monopolistic effects of tariffs – would undermine the power of the wealthy and big business, making possible widespread property ownership and higher incomes for ordinary people, while minimizing the power of would-be bosses and achieving socialist goals without state action. Tucker’s anarchist periodical, Liberty, was published from August 1881 to April 1908. The publication, emblazoned with Proudhon’s quote that liberty is “Not the Daughter But the Mother of Order”, was instrumental in developing and formalizing the individualist anarchist philosophy through publishing essays and serving as a forum for debate. Contributors included Benjamin Tucker, Lysander Spooner, Auberon Herbert, Dyer Lum, Joshua K. Ingalls, John Henry Mackay, Victor Yarros, Wordsworth Donisthorpe, James L. Walker, J. William Lloyd, Florence Finch Kelly, Voltairine de Cleyre, Steven T. Byington, John Beverley Robinson, Jo Labadie, Lillian Harman and Henry Appleton.[224] Later, Tucker and others abandoned their traditional support of natural rights and converted to an egoism modeled upon the philosophy of Max Stirner.[220] A number of natural rights proponents stopped contributing in protest and “[t]hereafter, Liberty championed egoism, although its general content did not change significantly”.[225] Several publications “were undoubtedly influenced by Liberty’s presentation of egoism. They included: I published by C.L. Swartz, edited by W.E. Gordak and J.W. Lloyd (all associates of Liberty); The Ego and The Egoist, both of which were edited by Edward H. Fulton. Among the egoist papers that Tucker followed were the German Der Eigene, edited by Adolf Brand, and The Eagle and The Serpent, issued from London. The latter, the most prominent English-language egoist journal, was published from 1898 to 1900 with the subtitle ‘A Journal of Egoistic Philosophy and Sociology'”.[225]

By around the start of the 20th century, the heyday of individualist anarchism had passed.[226] H. L. Mencken and Albert Jay Nock were the first prominent figures in the United States to describe themselves as libertarians;[227] they believed Franklin D. Roosevelt had co-opted the word “liberal” for his New Deal policies, which they opposed, and used “libertarian” to signify their allegiance to individualism.[citation needed] In 1914, Nock joined the staff of The Nation magazine, which at the time was supportive of liberal capitalism. A lifelong admirer of Henry George, Nock went on to become co-editor of The Freeman from 1920 to 1924, a publication initially conceived as a vehicle for the single tax movement, financed by the wealthy wife of the magazine’s other editor, Francis Neilson.[228] Critic H. L. Mencken wrote that “[h]is editorials during the three brief years of the Freeman set a mark that no other man of his trade has ever quite managed to reach. They were well-informed and sometimes even learned, but there was never the slightest trace of pedantry in them”.[229]

Executive Vice President of the Cato Institute, David Boaz, writes: “In 1943, at one of the lowest points for liberty and humanity in history, three remarkable women published books that could be said to have given birth to the modern libertarian movement”.[230] Isabel Paterson’s The God of the Machine, Rose Wilder Lane’s The Discovery of Freedom and Ayn Rand’s The Fountainhead each promoted individualism and capitalism. None of the three used the term libertarianism to describe their beliefs and Rand specifically rejected the label, criticizing the burgeoning American libertarian movement as the “hippies of the right”.[231] Rand’s own philosophy, Objectivism, is notedly similar to libertarianism and she accused libertarians of plagiarizing her ideas.[231] Rand stated:

All kinds of people today call themselves “libertarians,” especially something calling itself the New Right, which consists of hippies who are anarchists instead of leftist collectivists; but anarchists are collectivists. Capitalism is the one system that requires absolute objective law, yet libertarians combine capitalism and anarchism. That’s worse than anything the New Left has proposed. It’s a mockery of philosophy and ideology. They sling slogans and try to ride on two bandwagons. They want to be hippies, but don’t want to preach collectivism because those jobs are already taken. But anarchism is a logical outgrowth of the anti-intellectual side of collectivism. I could deal with a Marxist with a greater chance of reaching some kind of understanding, and with much greater respect. Anarchists are the scum of the intellectual world of the Left, which has given them up. So the Right picks up another leftist discard. That’s the libertarian movement.[232]

In 1946, Leonard E. Read founded the Foundation for Economic Education (FEE), an American nonprofit educational organization which promotes the principles of laissez-faire economics, private property, and limited government.[233] According to Gary North, former FEE director of seminars and a current Ludwig von Mises Institute scholar, FEE is the “granddaddy of all libertarian organizations”.[234] The initial officers of FEE were Leonard E. Read as President, Austrian School economist Henry Hazlitt as Vice-President and Chairman David Goodrich of B. F. Goodrich. Other trustees on the FEE board have included wealthy industrialist Jasper Crane of DuPont, H. W. Luhnow of William Volker & Co. and Robert Welch, founder of the John Birch Society.[236][237]

Austrian school economist Murray Rothbard was initially an enthusiastic partisan of the Old Right, particularly because of its general opposition to war and imperialism,[238] but long embraced a reading of American history that emphasized the role of elite privilege in shaping legal and political institutions. He was part of Ayn Rand’s circle for a brief period, but later harshly criticized Objectivism.[239] He praised Rand’s Atlas Shrugged and wrote that she “introduced me to the whole field of natural rights and natural law philosophy”, prompting him to learn “the glorious natural rights tradition”.[240](pp. 121, 132–134) He soon broke with Rand over various differences, including his defense of anarchism. Rothbard was influenced by the work of the 19th-century American individualist anarchists[241] and sought to meld their advocacy of free markets and private defense with the principles of Austrian economics.[242] This new philosophy he called anarcho-capitalism.

Karl Hess, a speechwriter for Barry Goldwater and primary author of the Republican Party’s 1960 and 1964 platforms, became disillusioned with traditional politics following the 1964 presidential campaign in which Goldwater lost to Lyndon B. Johnson. He parted with the Republicans altogether after being rejected for employment with the party, and began work as a heavy-duty welder. Hess began reading American anarchists largely due to the recommendations of his friend Murray Rothbard and said that upon reading the works of communist anarchist Emma Goldman, he discovered that anarchists believed everything he had hoped the Republican Party would represent. For Hess, Goldman was the source for the best and most essential theories of Ayn Rand without any of the “crazy solipsism that Rand was so fond of”.[243] Hess and Rothbard founded the journal Left and Right: A Journal of Libertarian Thought, which was published from 1965 to 1968, with George Resch and Leonard P. Liggio. In 1969, they edited The Libertarian Forum, which Hess left in 1971. Hess eventually put his focus on the small scale, stating that “Society is: people together making culture”. He deemed two of his cardinal social principles to be “opposition to central political authority” and “concern for people as individuals”. His rejection of standard American party politics was reflected in a lecture he gave during which he said: “The Democrats or liberals think that everybody is stupid and therefore they need somebody… to tell them how to behave themselves. The Republicans think everybody is lazy”.[244]

The Vietnam War split the uneasy alliance between growing numbers of American libertarians and conservatives who believed in limiting liberty to uphold moral virtues. Libertarians opposed to the war joined the draft resistance and peace movements, as well as organizations such as Students for a Democratic Society (SDS). In 1969 and 1970, Hess joined with others, including Murray Rothbard, Robert LeFevre, Dana Rohrabacher, Samuel Edward Konkin III and former SDS leader Carl Oglesby to speak at two “left-right” conferences which brought together activists from both the Old Right and the New Left in what was emerging as a nascent libertarian movement.[245] As part of his effort to unite right and left-libertarianism, Hess would join the SDS as well as the Industrial Workers of the World (IWW), of which he explained: “We used to have a labor movement in this country, until I.W.W. leaders were killed or imprisoned. You could tell labor unions had become captive when business and government began to praise them. They’re destroying the militant black leaders the same way now. If the slaughter continues, before long liberals will be asking, ‘What happened to the blacks? Why aren’t they militant anymore?'”.[246] Rothbard ultimately broke with the left, allying himself instead with the burgeoning paleoconservative movement.[247] He criticized the tendency of these left-libertarians to appeal to “‘free spirits,’ to people who don’t want to push other people around, and who don’t want to be pushed around themselves” in contrast to “the bulk of Americans,” who “might well be tight-assed conformists, who want to stamp out drugs in their vicinity, kick out people with strange dress habits, etc”.[248] This left-libertarian tradition has been carried to the present day by Samuel Edward Konkin III’s agorists, contemporary mutualists such as Kevin Carson and Roderick T. Long and other left-wing market anarchists.[249]

In 1971, a small group of Americans led by David Nolan formed the Libertarian Party,[250] which has run a presidential candidate every election year since 1972. Other libertarian organizations, such as the Center for Libertarian Studies and the Cato Institute, were also formed in the 1970s.[251] Philosopher John Hospers, a one-time member of Rand’s inner circle, proposed a non-initiation of force principle to unite both groups, but this statement later became a required “pledge” for candidates of the Libertarian Party and Hospers became its first presidential candidate in 1972.[citation needed] In the 1980s, Hess joined the Libertarian Party and served as editor of its newspaper from 1986 to 1990.

Modern libertarianism gained significant recognition in academia with the publication of Harvard University professor Robert Nozick’s Anarchy, State, and Utopia in 1974, for which he received a National Book Award in 1975.[252] In response to John Rawls’s A Theory of Justice, Nozick’s book supported a nightwatchman state on the grounds that it was an inevitable phenomenon which could arise without violating individual rights.[253]

In the early 1970s, Rothbard wrote that “[o]ne gratifying aspect of our rise to some prominence is that, for the first time in my memory, we, ‘our side,’ had captured a crucial word from the enemy… ‘Libertarians’… had long been simply a polite word for left-wing anarchists, that is for anti-private property anarchists, either of the communist or syndicalist variety. But now we had taken it over”.[254] Since the resurgence of neoliberalism in the 1970s, this modern American libertarianism has spread beyond North America via think tanks and political parties.[255][256]

A surge of popular interest in libertarian socialism occurred in western nations during the 1960s and 1970s.[257] Anarchism was influential in the Counterculture of the 1960s[258][259][260] and anarchists actively participated in the late-sixties student and worker revolts.[261] In 1968, the International of Anarchist Federations was founded during an international anarchist conference in Carrara, Italy by the three existing European federations of France, Italy and Iberia as well as the Bulgarian federation in French exile.[173][262] The uprisings of May 1968 also led to a small resurgence of interest in left communist ideas. Various small left communist groups emerged around the world, predominantly in the leading capitalist countries. A series of conferences of the communist left began in 1976, with the aim of promoting international and cross-tendency discussion, but these petered out in the 1980s without having increased the profile of the movement or its unity of ideas.[263] Left communist groups existing today include the International Communist Party, International Communist Current and the Internationalist Communist Tendency. The housing and employment crisis in most of Western Europe led to the formation of communes and squatter movements like that of Barcelona, Spain. In Denmark, squatters occupied a disused military base and declared the Freetown Christiania, an autonomous haven in central Copenhagen.

Around the turn of the 21st century, libertarian socialism grew in popularity and influence as part of the anti-war, anti-capitalist and anti-globalisation movements.[264] Anarchists became known for their involvement in protests against the meetings of the World Trade Organization (WTO), Group of Eight and the World Economic Forum. Some anarchist factions at these protests engaged in rioting, property destruction and violent confrontations with police. These actions were precipitated by ad hoc, leaderless, anonymous cadres known as black blocs; other organisational tactics pioneered in this time include security culture, affinity groups and the use of decentralised technologies such as the internet.[264] A significant event of this period was the confrontations at the WTO conference in Seattle in 1999.[264] For English anarchist scholar Simon Critchley, “contemporary anarchism can be seen as a powerful critique of the pseudo-libertarianism of contemporary neo-liberalism… One might say that contemporary anarchism is about responsibility, whether sexual, ecological or socio-economic; it flows from an experience of conscience about the manifold ways in which the West ravages the rest; it is an ethical outrage at the yawning inequality, impoverishment and disenfranchisement that is so palpable locally and globally”.[265] This might also have been motivated by “the collapse of ‘really existing socialism’ and the capitulation to neo-liberalism of Western social democracy”.[266]

Libertarian socialists in the early 21st century have been involved in the alter-globalization movement, squatter movement; social centers; infoshops; anti-poverty groups such as Ontario Coalition Against Poverty and Food Not Bombs; tenants’ unions; housing cooperatives; intentional communities generally and egalitarian communities; anti-sexist organizing; grassroots media initiatives; digital media and computer activism; experiments in participatory economics; anti-racist and anti-fascist groups like Anti-Racist Action and Anti-Fascist Action; activist groups protecting the rights of immigrants and promoting the free movement of people, such as the No Border network; worker co-operatives, countercultural and artist groups; and the peace movement.

In the United States, polls (circa 2006) find that the views and voting habits of between 10 and 20 percent (and increasing) of voting-age Americans may be classified as “fiscally conservative and socially liberal, or libertarian”.[267][268] This is based on pollsters and researchers defining libertarian views as fiscally conservative and socially liberal (based on the common United States meanings of the terms) and against government intervention in economic affairs and for expansion of personal freedoms.[267] Through 20 polls on this topic spanning 13 years, Gallup found that voters who are libertarian on the political spectrum ranged from 17–23% of the United States electorate.[269] However, a 2014 Pew Poll found that 23% of Americans who identify as libertarians have no idea what the word means.[270]

2009 saw the rise of the Tea Party movement, an American political movement known for advocating a reduction in the United States national debt and federal budget deficit by reducing government spending and taxes, which had a significant libertarian component[271] despite having contrasts with libertarian values and views in some areas, such as nationalism, free trade, social issues and immigration.[272] A 2011 Reason-Rupe poll found that among those who self-identified as Tea Party supporters, 41 percent leaned libertarian and 59 percent socially conservative.[273] The movement, named after the Boston Tea Party, also contains conservative[274] and populist elements[275] and has sponsored multiple protests and supported various political candidates since 2009. Tea Party activities have declined since 2010 with the number of chapters across the country slipping from about 1,000 to 600.[276][277] Mostly, Tea Party organizations are said to have shifted away from national demonstrations to local issues.[276] Following the selection of Paul Ryan as Mitt Romney’s 2012 vice presidential running mate, The New York Times declared that Tea Party lawmakers are no longer a fringe of the conservative coalition, but now “indisputably at the core of the modern Republican Party”.[278]

In 2012, anti-war presidential candidates (Libertarian Republican Ron Paul and Libertarian Party candidate Gary Johnson) raised millions of dollars and garnered millions of votes despite opposition to their obtaining ballot access by Democrats and Republicans.[279] The 2012 Libertarian National Convention, which saw Gary Johnson and James P. Gray nominated as the 2012 presidential ticket for the Libertarian Party, resulted in the most successful result for a third-party presidential candidacy since 2000 and the best in the Libertarian Party’s history by vote number. Johnson received 1% of the popular vote, amounting to more than 1.2 million votes.[280][281] Johnson has expressed a desire to win at least 5 percent of the vote so that the Libertarian Party candidates could get equal ballot access and federal funding, thus subsequently ending the two-party system.[282][283][284]

Since the 1950s, many American libertarian organizations have adopted a free market stance, as well as supporting civil liberties and non-interventionist foreign policies. These include the Ludwig von Mises Institute, Francisco Marroquín University, the Foundation for Economic Education, Center for Libertarian Studies, the Cato Institute and Liberty International. The activist Free State Project, formed in 2001, works to bring 20,000 libertarians to New Hampshire to influence state policy.[285] Active student organizations include Students for Liberty and Young Americans for Liberty.

A number of countries have libertarian parties that run candidates for political office. In the United States, the Libertarian Party was formed in 1972 and is the third largest[286][287] American political party, with over 370,000 registered voters in the 35 states that allow registration as a Libertarian[288] and has hundreds of party candidates elected or appointed to public office.[289]

Current international anarchist federations which sometimes identify themselves as libertarian include the International of Anarchist Federations, the International Workers’ Association, and International Libertarian Solidarity. The largest organised anarchist movement today is in Spain, in the form of the Confederación General del Trabajo (CGT) and the CNT. CGT membership was estimated to be around 100,000 for 2003.[290] Other active syndicalist movements include the Central Organisation of the Workers of Sweden and the Swedish Anarcho-syndicalist Youth Federation in Sweden; the Unione Sindacale Italiana in Italy; Workers Solidarity Alliance in the United States; and Solidarity Federation in the United Kingdom. The revolutionary industrial unionist Industrial Workers of the World, claiming 2,000 paying members, and the International Workers’ Association, an anarcho-syndicalist successor to the First International, also remain active. In the United States, there exists the Common Struggle Libertarian Communist Federation.

Criticism of libertarianism includes ethical, economic, environmental, pragmatic, and philosophical concerns.[291] It has also been argued that laissez-faire capitalism does not necessarily produce the best or most efficient outcome,[292] nor does its policy of deregulation prevent the abuse of natural resources. Furthermore, libertarianism has been criticized as utopian due to the lack of any such societies today.

Critics such as Corey Robin describe right-libertarianism as fundamentally a reactionary conservative ideology, united with more traditional conservative thought and goals by a desire to enforce hierarchical power and social relations:[293]

Conservatism, then, is not a commitment to limited government and liberty – or a wariness of change, a belief in evolutionary reform, or a politics of virtue. These may be the byproducts of conservatism, one or more of its historically specific and ever-changing modes of expression. But they are not its animating purpose. Neither is conservatism a makeshift fusion of capitalists, Christians, and warriors, for that fusion is impelled by a more elemental force – the opposition to the liberation of men and women from the fetters of their superiors, particularly in the private sphere. Such a view might seem miles away from the libertarian defense of the free market, with its celebration of the atomistic and autonomous individual. But it is not. When the libertarian looks out upon society, he does not see isolated individuals; he sees private, often hierarchical, groups, where a father governs his family and an owner his employees.

John Donahue argues that if political power were radically shifted to local authorities, parochial local interests would predominate at the expense of the whole and that this would exacerbate current problems with collective action.[294]

Michael Lind has observed that of the 195 countries in the world today, none have fully actualized a libertarian society:

If libertarianism was a good idea, wouldn’t at least one country have tried it? Wouldn’t there be at least one country, out of nearly two hundred, with minimal government, free trade, open borders, decriminalized drugs, no welfare state and no public education system?[295]

Lind has also criticised libertarianism, particularly the right-wing and free market variant of the ideology, as being incompatible with democracy and apologetic towards autocracy.[296]

Can Libertarianism Be a Governing Philosophy?

The discussion we are about to have naturally divides itself into two aspects:

First: Could libertarianism, if implemented, sustain a state apparatus and not devolve into autocracy or anarchy? By that I mean the lawless versions of autocracy and anarchy, not stable monarchy or emergent rule of law without a state. Second: even if the answer were Yes – or, “Yes, if . . .” – we would still need to know whether enough citizens desired a libertarian order that it could feasibly be voluntarily chosen. That is, I am ruling out the involuntary imposition of libertarianism by force as a governing philosophy.

I will address both questions, but want to assert at the outset that the first is the more important and more fundamental one. If the answer to it is No, there is no point in moving on to the second question. If the answer is Yes, it may be possible to change people’s minds about accepting a libertarian order.

The Destinationalists

As I have argued elsewhere[1], there are two main paths to deriving libertarian principles: destinations and directions. The destinationist approach shares the method of most other ethical paradigms: the enunciation of timeless moral and ethical precepts that describe the ideal libertarian society.

What makes for a distinctly libertarian set of principles are two precepts: the self-ownership principle and the non-aggression principle.

The extreme forms of these principles, for destinationists, can be hard for outsiders to accept. One example is noted by Matt Zwolinski, who cites opinion data gathered from libertarians by Liberty magazine and presented in its periodic Liberty Poll. One question frequently included in the survey was:

Suppose that you are on a friend’s balcony on the 50th floor of a condominium complex. You trip, stumble and fall over the edge. You catch a flagpole on the next floor down. The owner opens his window and demands you stop trespassing.

Zwolinski writes that in 1988, 84 percent of respondents to the flagpole question

said they believed that in such circumstances they should enter the owner’s residence against the owner’s wishes. 2% (one respondent) said that they should let go and fall to their death, and 15% said they should hang on and wait for somebody to throw them a rope. In 1999, the numbers were 86%, 1%, and 13%. In 2008, they were 89.2%, 0.9%, and 9.9%.

The interesting thing is that, while the answers to the flagpole question were almost unchanged over time, with a slight upward drift in those who would aggress by trespassing, support for the non-aggression principle itself plummeted. Writes Zwolinski:

Respondents were asked to say whether they agreed or disagreed with [the non-aggression principle]. In 1988, a full 90% of respondents said that they agreed. By 1999, however, the percentage expressing agreement had dropped by almost half to 50%. And by 2008, it was down to 39.7%.

If we take support for the non-aggression principle as a Rorschach test, it does not appear that most people, perhaps not even everyone who identifies as a libertarian, are fully convinced that the principle is an absolute, categorical moral principle.

The Directionalists

Of course, it could be true that many who identify now as libertarians, and those who might be attracted to libertarianism in the future, are directionalists. A directional approach holds that any policy action that increases the liberty and welfare of individuals is an improvement, and should be supported by libertarians, even if the policy itself violates either the self-ownership principle or the non-aggression principle.

A useful example here might be school vouchers. Instead of being a monopoly provider of public school education, the state might specialize in funding but leave the provision of education at least partly to private sector actors. The destinationist would object (and correctly) that the policy still involves the initiation of violence in collecting taxes involuntarily imposed on at least those individuals who would not pay without the threat of coercion. In contrast, the directionalist might support vouchers, since parents would at least be afforded more liberty in choosing schools for their children, and the system would be subject to more competition, thus holding providers responsible for the quality of education being delivered.

Here, then, is a slightly modified take on the central question: Would a hybrid version of libertarianism, one that advocated for the destination but accepted directional improvements, be a viable governing philosophy? Even with this amendment, allowing for directional improvements as part of the core governing philosophy, is libertarianism, to use a trope of the moment, sustainable? The reason this approach could be useful is that it correlates to one of the great divisions within the libertarian movement: the split between political anarchists, who believe that any coercive state apparatus is ultimately incompatible with liberty, and the minarchists, who believe that a limited government is desirable, even necessary, and that it is also possible.

Limiting Leviathan: Getting Power to Stay Where You Put It

For a state to be consistent with both the self-ownership principle and the non-aggression principle, there must be certain core rights to property, expression, and action that are inviolable. This inviolability extends even to situations where initiating force would greatly benefit most people, meaning that consequentialist considerations cannot outweigh the rights of individuals.

Where might such a state originate, and how could it be continually limited to only those functions for which it was originally justified? One common answer is a form of contractarianism. (Another is convention, which is beyond the scope of this essay. See Robert Sugden[2] and Gerald Gaus[3] for a review of some of the issues.) This is not to say that actual states are the results of explicitly contractual arrangements; rather, there is an “as if” element: rational citizens in a state of nature would have voluntarily consented to the limited coercion of a minarchist state, given the substantial and universal improvement in welfare that results from having a provider of public goods and a neutral enforcer of contracts. Without a state, claims the minarchist, these two functions – public goods provision and contract enforcement – are either impossible or so difficult as to make the move to create a coercive state universally welcome for all citizens.

Contractarianism is of course an enormous body of work in philosophy, ranging from Thomas Hobbes and Jean-Jacques Rousseau to David Gauthier and John Rawls. Our contractarians, the libertarian versions, start with James Buchanan and Jan Narveson. Buchanan's contractarianism is stark: rules start with us, and the justification for coercion is, but can only be, our consent to being coerced. It is not clear that Buchanan would accept the full justification of political authority by tacit contract, but Buchanan also claims that each group in society should start from where we are now, meaning that changes in the rules require something as close to unanimous consent as possible.[4]

Narveson's view is closer to the "necessary evil" claim for justifying government. We need a way to be secure from violence, and to be able to enter into binding agreements that are enforceable. He wrote in The Libertarian Idea (1988) that there is "no alternative that can provide reasons to everyone for accepting it, no matter what their personal values or philosophy of life may be," thus motivating this informal yet society-wide institution. He goes on to say:

Without resort to obfuscating intuitions, of self-evident rights and the like, the contractarian view offers an intelligible account both of why it is rational to want a morality and of what, broadly speaking, the essentials of that morality must consist in: namely, those general rules that are universally advantageous to rational agents. We each need morality, first because we are vulnerable to the depredations of others, and second because we can all benefit from cooperation with others. So we need protection, in the form of the ability to rely on our fellows not to engage in activities harmful to us; and we need to be able to rely on those with whom we deal. We each need this regardless of what else we need or value.

The problem, or so the principled political anarchist would answer, is that Leviathan cannot be limited unless for some reason Leviathan wants to limit itself.

One of the most interesting proponents of this view is Anthony de Jasay, an independent philosopher of political economy. Jasay would not dispute the value of credible commitments for contracts. His quarrel comes when contractarians invoke a founding myth. When I think of the Social Contract (the capitals signify how important it is!), I am reminded of that scene from Monty Python where King Arthur is talking to the peasants:

King Arthur: I am your king.

Woman: Well, I didn't vote for you.

King Arthur: You don't vote for kings.

Woman: Well how'd you become king then?

[holy music . . . ]

King Arthur: The Lady of the Lake, her arm clad in the purest shimmering samite held aloft Excalibur from the bosom of the water, signifying by divine providence that I, Arthur, was to carry Excalibur. That is why I am your king.

Dennis: [interrupting] Listen, strange women lyin' in ponds distributin' swords is no basis for a system of government. Supreme executive power derives from a mandate from the masses, not from some farcical aquatic ceremony.

According to Jasay, there are two distinct problems with contractarian justifications for the state. Each, separately and independently, is fatal for the project, in his view. Together they put paid to the notion that a libertarian could favor minarchism.

The first problem is the enforceable contracts justification. The second is the limiting Leviathan problem.

The usual statement of the first comes from Hobbes: "Covenants, without the sword, are but words." That means that individuals cannot enter into binding agreements without some third party to enforce the agreement. Since entering into binding agreements is a central precondition for mutually beneficial exchange and broad-scale market cooperation, we need a powerful, neutral enforcer. So, we all agree on that; the enforcer collects the taxes that we all agreed on and, in exchange, enforces all our contracts for us. (See John Thrasher[5] for some caveats.)

But wait. Jasay compares this to jumping over your own shadow. If contracts cannot be enforced save by coercion from a third party, how can the contract between citizens and the state be enforced? "[I]t takes courage to affirm that rational people could unanimously wish to have a sovereign contract enforcer bound by no contract," wrote Jasay in his book Against Politics (1997). By "courage" he does not intend a compliment. Either those who make this claim are contradicting themselves (since we can't have contracts, we'll use a contract to solve the problem) or the argument is circular (cooperation requires enforceable contracts, but these require a norm of cooperation).

Jasay put the question this way in "On Treating Like Cases Alike: Review of Politics by Principle, Not Interest," his 1999 essay in the Independent Review:

If man can no more bind himself by contract than he can jump over his own shadow, how can he jump over his own shadow and bind himself in a social contract? He cannot be both incapable of collective action and capable of it when creating the coercive agency needed to enforce his commitment. One can, without resorting to a bootstrap theory, accept the idea of an exogenous coercive agent, a conqueror whose regime is better than anything the conquered people could organize for themselves. Consenting to such an accomplished fact, however, can hardly be represented as entering into a contract, complete with a contracts ethical implications of an act of free will. [Emphasis in original]

In sum, the former claim, that contracts cannot be enforced, cannot then be used to conjure enforceable contracts out of a shadow. The latter claim, that people will cooperate on their own, means that no state is necessary in the first place. The conclusion Jasay reaches is that states, if they exist, may well be able to compel people to obey. The usual argument goes like this:

The state exists and enjoys the monopoly of the use of force for some reason, probably a historical one, that we need not inquire into. What matters is that without the state, society could not function tolerably, if at all. Therefore all rational persons would choose to enter into a social contract to create it. Indeed, we should regard the state as if it were the result of our social contract, hence indisputably legitimate.[6]

Jasay concludes that this argument must be false. As Robert Nozick famously put it in Anarchy, State, and Utopia (1974), tacit consent "isn't worth the paper it's not written on." We cannot confect a claim that states deserve our obedience based on consent. For consent is what true political authority requires: not that our compliance can be compelled, but that the state deserves our compliance. Ordered anarchy with no formal state is therefore a better solution, in Jasay's view, because consent is either not real or is not enough.

Of course, this is simply an extension of a long tradition in libertarian thought, dating at least to Lysander Spooner. As Spooner said:

If the majority, however large, of the people of a country, enter into a contract of government, called a constitution, by which they agree to aid, abet or accomplish any kind of injustice, or to destroy or invade the natural rights of any person or persons whatsoever, whether such persons be parties to the compact or not, this contract of government is unlawful and void, and for the same reason that a treaty between two nations for a similar purpose, or a contract of the same nature between two individuals, is unlawful and void. Such a contract of government has no moral sanction. It confers no rightful authority upon those appointed to administer it. It confers no legal or moral rights, and imposes no legal or moral obligation upon the people who are parties to it. The only duties, which any one can owe to it, or to the government established under color of its authority, are disobedience, resistance, destruction.[7]

Now for the other problem highlighted by Jasay, that of limiting Leviathan. Let us assume the best of state officials: that they genuinely intend to do good. We might make the standard Public Choice assumption that officials want to use power to benefit themselves, but let us put that aside; instead, officials genuinely want to improve the lives of their citizens.

This means a minarchist state is not sustainable. Officials, thinking of the society as a collective rather than as individuals with inviolable rights, will immediately discover opportunities to raise taxes, and create new programs and new powers that benefit those in need. In fact, it is precisely the failure of the Public Choice assumptions of narrow self-interest that ensure this outcome. It might be possible in theory to design a principal-agent system of bureaucratic contract that constrains selfish officials. But if state power attracts those who are willing to sacrifice the lives or welfare of some for the greater good, then minarchy is quickly breached and Leviathan swells without the possibility of constraint.

I hasten to add that it need not be true, for Jasay's claim to go through, that the concept of the greater good have any empirical content. It is enough that a few people believe, and can brandish the greater good like a truncheon, smashing rules and laws designed to stop the expansion of state power. No one who wants to do good will pass up a chance to do good, even if it means changing the rules. This process is much like that described by F. A. Hayek in "Why the Worst Get on Top" (see Chapter 10 of The Road to Serfdom) or Bertrand de Jouvenel's Power (1945).

So, again, we reach a contradiction: Either 1) minarchy is not possible, because it is overwhelmed by the desire to do good, or minarchy is not legitimate because it is based on a mythical tacit consent; or 2) no state, minarchist or otherwise, is necessary because people can limit their actions on their own. Citizens might conclude that such self-imposed limits on their own actions are morally required, and that reputation and competition can limit the extent of depredation and reward cooperation in settings with repeated interaction. Jasay would argue, then, that constitutions and parchment barriers are either unnecessary (if people are self-governing) or ineffective (if they are not). Leviathan either cannot exist or else it is illimitable.

But Thats Not Enough

What I have argued so far is that a destinationist libertarianism that is fully faithful to the self-ownership principle and the non-aggression principle could not be an effective governing philosophy. The only exception to this claim would be if libertarianism were universally believed, and people all agreed to govern themselves in the absence of a coercive state apparatus of any kind. Of course, one could object that even then something like a state would emerge, because of the economies of scale in the provision of defense, leading to a dominant protection network as described by Nozick. Whether that structure of service-delivery is necessarily a state is an interesting question, but not central to our current inquiry.

My own view is that libertarianism is, and in fact should be, a philosophy of governing that is robust and useful. But then I am a thoroughgoing directionalist. The state and its deputized coercive instruments have expanded the scope and intensity of their activities far beyond what people need to achieve cooperative goals, and beyond what they want in terms of immanent intrusions into our private lives.

Given the constant push and pull of politics, and the desire of groups to create and maintain rents for themselves, the task of leaning into the prevailing winds of statism will never be done. But it is a coherent and useful governing philosophy. When someone asks how big the state should be, there aren't many people who think the answer is zero. But that's not on the table, anyway. My answer is "smaller than it is now." Any policy change that grants greater autonomy (but also responsibility) to individual citizens, or that lessens government control over private action, is desirable; and libertarians are crucial for providing compelling intellectual justifications for why this is so.

In short, I don't advocate abandoning destinationist debates. The positing of an ideal is an important device for recruitment and discussion. But at this point we have been going in the wrong direction, for decades. It should be possible to find allies and fellow travelers. They may want to get off the train long before we arrive at the end of the line, but for many miles our paths toward smaller government follow the same track.

[1] Michael Munger, "Basic Income Is Not an Obligation, but It Might Be a Legitimate Choice," Basic Income Studies 6:2 (December 2011), 1-13.

[2] Robert Sugden, "Can a Humean Be a Contractarian?" in Perspectives in Moral Science, edited by Michael Baurmann and Bernd Lahno, Frankfurt School Verlag (2009), 11-23.

[3] Gerald Gaus, "Why the Conventionalist Needs the Social Contract (and Vice Versa)," Rationality, Markets and Morals 4, Frankfurt School Verlag (2013), 71-87.

[4] For more on the foundation of Buchanan's thought, see my forthcoming essay in the Review of Austrian Economics, "Thirty Years After the Nobel: James Buchanan's Political Philosophy."

[5] John Thrasher, "Uniqueness and Symmetry in Bargaining Theories of Justice," Philosophical Studies 167 (2014), 683-699.

[6] Anthony de Jasay, "Pious Lies: The Justification of States and Welfare States," Economic Affairs 24:2 (2004), 63-64.

[7] Lysander Spooner, The Unconstitutionality of Slavery (Boston: Bela Marsh, 1860), pp. 9-10.

Read more here:

Can Libertarianism Be a Governing Philosophy?

6 Reasons Why I Gave Up On Libertarianism - Return Of Kings

These days, libertarianism tends to be quite discredited. It is now associated with the goofy candidacy of Gary Johnson, having a rather narrow range of issues ("legalize weed! less taxes!"), cucking one's way to politics by sweeping all the embarrassing problems under the carpet, then surrendering to liberal virtue-signaling and endorsing anti-white diversity.

Now, everyone on the Alt-Right, manosphere und so weiter is laughing at those whose adhesion to a bunch of abstract premises leads them to endorse globalist capital, and now that Trump officially heads the State, we'd be better off if some private companies were nationalized rather than left to shadowy overlords.

To Americans, libertarianism has been a constant background presence. Its main icons, be they Ayn Rand, Murray Rothbard or Friedrich Hayek, were always read and discussed here and there, and never fell into oblivion although they barely had media attention. The academic and political standing of libertarianism may be marginal, but it has always been granted small platforms and resurrected from time to time in the public landscape, one of the most conspicuous examples being the Tea Party demonstrations.

To a frog like yours truly (Kek being now praised by thousands of well-meaning memers, I can embrace the frog moniker gladly), libertarianism does not have the same standing at all. In French universities, libertarian thinkers are barely discussed, even in classes that are supposed to tackle economics: for one hour spent talking about Hayek, Keynes easily enjoys ten, and the same goes when comparing the attention given to, respectively, Adam Smith and Karl Marx.

On a wider perspective, a lot of the contemporary French identity is built on Jacobinism, i.e. on crushing underfoot organic regional sociability in the name of a bureaucratized and Masonic republic. The artificial construction of France is exactly the kind of endeavour libertarianism loathes. No wonder the public choice school, for example, is barely studied here: pompous leftist teachers and mediocre civil servants are too busy gushing about themselves, sometimes hiding the emptiness of their lives behind a ridiculous epic narrative that turns social achievements into heroic feats, to give a fair hearing to pertinent criticism.

When I found out about libertarianism, I was already sick of the dominant "fifty shades of leftism" political culture. The gloomy mediocrity of small bureaucrats, including most school teachers, combined with their petty political righteousness, always repelled me. Thus, the discovery of laissez-faire advocates felt like stumbling on an entirely new scene of thought, and my initial feeling was vindicated when I found out about the naturalism often associated with it, something refreshing and intuitively more satisfying than the mainstream culture-obsessed, biology-denying view.

Libertarianism looked like it could solve everything. More entrepreneurship, more rights for those who actually create wealth and live by the good values of personal responsibility and work ethic, fewer parasites, be they bureaucrats or immigrants, no more repressive speech laws. Coincidentally, a new translation of Ayn Rand's Atlas Shrugged was published at this time: I devoured it, loving the sense of life, the heroism, the epic, the generally great and achieving ethos contained in it. Aren't John Galt and Hank Rearden more appealing than any corrupt politician or beta bureaucrat who pretends to be altruistic while backstabbing his own colleagues and parasitizing the country?

Now, although I still support small-scale entrepreneurship wholeheartedly, I would never defend naked libertarianism, and here is why.

Part of the Rothschild family, where nepotism and consanguinity keep the money in

Unity makes strength, and trust is much easier to cultivate in a small group where everyone truly belongs than in an anonymous great society. Some ethnic groups, especially whites, tend to be instinctively individualistic, with a lot of people favouring personal liberty over belonging, while others, especially Jews, tend to favor extended family business and nepotism.

On a short-term basis, mobile individuals can do better than those who are bound by many social obligations. In the long run, however, extended families manage to create an environment of trust and to concentrate capital. And whereas individuals may start cheating each other or scattering their wealth, having no proper economic network, families and tribes are able to invest heavily in some of their members and keep their wealth inside. This has been true for Jewish families, whether their members work as moneylenders or diamond dealers, for Asians investing in new restaurants or other business projects of their own, and for North Africans taking over pubs and small shops in France.

The latter example is especially telling. White bartenders, butchers, grocers and the like have been chased off French suburbs by daily North African and black violence. No one helped them, everyone being afraid of getting harassed as well and busy with their own affairs. (Yep, just like what happened and still happens in Rotherham.) As a result, these isolated, unprotected shop-owners sold their outlets for a cheap price and fled. North Africans always covered each other's violence and responded in groups to any hurdle, whereas whites lowered their heads and hoped not to be next on the list.

Atlas Shrugged was wrong. Loners get wrecked by groups. Packs of hyenas corner and eat the lone dog.

Libertarianism is not good for individuals in the long run: it turns them into asocial weaklings, soon to be legally enslaved by global companies or beaten by groups, be they made of nepotistic family members or thugs.

How the middle classes end up after jobs have been sent overseas and wages lowered

People often believe, thanks to Leftist media and cuckservative posturing, that libertarians are big bosses. This is mostly, if not entirely, false. Most libertarians are middle class guys who want more opportunities and less taxation, and believe that libertarianism will help them become successful entrepreneurs. They may be right in very specific circumstances: during the 2000s, small companies overturned the market for electronics, benefiting both their independent founders and society as a whole; but ultimately they got bought by giants like Apple and Google, who are much better off when backed by a corrupt State than on a truly free market.

Libertarianism is a fake alternative, just as impossible to realize as communism: far from putting everyone in their place, it leaves ample room for mafias, monopolies, and unemployment caused by mechanization and global competition. If one wants the middle classes to survive, one must protect the employment and relative independence of their members, bankers and billionaires be damned.

Spontaneous order helped by a weak government. I hope they at least smoke weed.

A good feature of libertarianism is that it usually goes along with a positive stance on biology and human nature, in contrast with the "everything is cultural and ought to be deconstructed" left. However, this stance often leads to an exaggerated optimism about human nature. In a society of laissez-faire, the libertarians say, people flourish and order appears spontaneously.

Well, this is plainly false. As all of the great religions say, after what Christians call the Fall, man is a sinner. If you let children flourish without moral standards and role models, they become spoiled, entitled, manipulative, emotionally fragile, and deprived of self-control. If you let women flourish without suspicion, you give free rein to their propensities for hypergamy, hysteria, self-entitlement, and everything we can witness in them today. If you let men do as they please, they become greedy, envious, and bullying. As a Muslim proverb says, people must be flogged to enter into paradise, and as Aristotle put it, virtues are trained dispositions, no matter the magnitude of innate talents and propensities.

Michelle The Man Obama and Lying Crooked at a Democrat meeting

When laissez-faire rules, some will succeed on the market more than others, due to differences in investment, work, and natural abilities. Some will succeed enough to be able to buy someone else's business: this is the natural consequence of differences in wealth and of greed. When corrupt politicians enter the game, things become worse, as they will usually help some large business owners to shield their positions against competitors, at the expense of most people, who then lose their independence and live off a wage.

In the end, what we get is a handful of very wealthy individuals who have managed to concentrate most capital and power levers in their hands, and a big crowd of low-wage employees ready to cut each other's throats for a small promotion, with females waiting in line to get notched by the one percent while finding the other ninety-nine percent boring.

Censorship by massive social pressure, monopoly over the institutions and crybullying is perfectly legal. What could go wrong?

On the surface, libertarianism looks good here, because it protects individual rights against left-hailing Statism and cuts off the welfare programs that have attracted dozens of millions of immigrants. Beneath, however, things are quite dire. Libertarianism enshrines the leftists' right to the free speech they abuse, allows the pressure tactics used by radicals, and lets freethinking individuals get singled out by SJWs as long as these do not resort to overt stealing or overt physical violence. As for the immigrants, libertarianism tends to oppose the very notion of non-private boundaries, thus leaving local cultures and identities defenseless against both greedy capitalists and subproletarian masses.

Supporting an ideology that allows the leftists to destroy society more or less legally equates to cucking, plain and simple. Desiring an ephemeral cohabitation with rabid ideological warriors is stupid. We should aim at a lasting victory, not at pretending to constrain them through useless means.

Am I the only one to find that Gary Johnson looks like a snail (Spongebob notwithstanding)?

In 2013, one of the rare French libertarian academics, Jean-Louis Caccomo, was forced into a mental ward at the request of his university president. He then spent more than a year being drugged. Mr. Caccomo had no real psychological problem: his confinement was part of a vicious strategy of pathologization and career destruction of the kind already used by the Soviets. French libertarians could have widely denounced the abuse. Nonetheless, most of them freaked out, and almost no one dared to actually defend him publicly.

Why should rational egoists team up and risk their careers to defend one of their own, after all? They would rather posture at confidential social events, rail at organic solidarity and protectionism, or troll the shit out of individuals of their own social milieu because "I've got the right to mock X, it's my right to free speech!" The few libertarian people I knew firsthand, and the few events I witnessed in that small milieu, were enough to give me serious doubts about libertarianism: how can a good political ideology breed such an unhealthy mindset?

Political ideologies are tools. They are not ends in themselves. Not all forms of government are fit for every people or every era. Political actors must know at least the most important ones to get some inspiration, but ultimately, said actors win on the ground, not in philosophical debates.

Individualism, mindless consumerism, careerism, and hedonism are part of the problem. Individual rights granted regardless of one's abilities, situation, and identity are a disaster. The time has come to overcome modernity, not stall in one of its false alternatives. The merchant caste must be regulated, though neither micromanaged nor hampered by a parasitic bureaucracy, nor denied its members' right to small-scale independence. Individual rights must be conditional, boundaries must be restored, minority identities based on anti-white male resentment must be crushed so they cannot devour sociability from the inside again, and the pater familias must assert himself anew.

Long live the State and protectionism, as long as they defend the backbone of society and healthy relationships between the sexes, and no quarter for those who think they have a right to wage grievance-mongering against us, whether they want to use the State or private companies. In the end, the socialism-libertarianism dichotomy is quite secondary.



The End of Moore's Law - Rodney Brooks

I have been working on an upcoming post about megatrends and how they drive tech. I had included the end of Moore's Law to illustrate how the end of a megatrend might also have a big influence on tech, but that section got away from me, becoming much larger than the sections on each individual current megatrend. So I decided to break it out into a separate post and publish it first. Here it is.

Moore's Law, concerning what we put on silicon wafers, is over after a solid fifty-year run that completely reshaped our world. But that end unleashes lots of new opportunities.

Moore, Gordon E., "Cramming more components onto integrated circuits," Electronics, Vol. 38, No. 8, April 19, 1965.

Electronics was a trade journal that published monthly, mostly, from 1930 to 1995. Gordon Moore's four-and-a-half-page contribution in 1965 was perhaps its most influential article ever. That article not only articulated the beginnings, and it was the very beginnings, of a trend, but the existence of that articulation became a goal/law that has run the silicon-based circuit industry (which is the basis of every digital device in our world) for fifty years. Moore was a Cal Tech PhD, cofounder in 1957 of Fairchild Semiconductor, and head of its research and development laboratory from 1959. Fairchild had been founded to make transistors from silicon at a time when they were usually made from much slower germanium.

One can find many files on the Web that claim to be copies of the original paper, but I have noticed that some of them have the graphs redrawn and that they are sometimes slightly different from the ones that I have always taken to be the originals. Below I reproduce two figures from the original that as far as I can tell have only been copied from an original paper version of the magazine, with no manual/human cleanup.

The first one that I reproduce here is the money shot for the origin of Moore's Law. There was, however, an equally important earlier graph in the paper, which was predictive of the future yield over time of functional circuits that could be made from silicon. It had less actual data than this one, and as we'll see, that is really saying something.

This graph is about the number of components on an integrated circuit. An integrated circuit is made through a process that is like printing. Light is projected onto a thin wafer of silicon in a number of different patterns, while different gases fill the chamber in which it is held. The different gases cause different light-activated chemical processes to happen on the surface of the wafer, sometimes depositing some types of material, and sometimes etching material away. With precise masks to pattern the light, and precise control over temperature and duration of exposures, a physical two-dimensional electronic circuit can be printed. The circuit has transistors, resistors, and other components. Lots of them might be made on a single wafer at once, just as lots of letters are printed on a single page at once. The yield is how many of those circuits are functional: small alignment or timing errors in production can screw up some of the circuits in any given print. Then the silicon wafer is cut up into pieces, each containing one of the circuits, and each is put inside its own plastic package with little legs sticking out as the connectors; if you have looked at a circuit board made in the last forty years you have seen it populated with lots of integrated circuits.

The number of components in a single integrated circuit is important. Since the circuit is printed, it involves no manual labor, unlike earlier electronics where every single component had to be placed and attached by hand. Now a complex circuit which involves multiple integrated circuits only requires hand construction (later this too was largely automated) to connect up a much smaller number of components. And as long as one has a process which gets good yield, it is constant time to build a single integrated circuit, regardless of how many components are in it. That means fewer total integrated circuits that need to be connected by hand or machine. So, as Moore's paper's title references, "cramming" more components into a single integrated circuit is a really good idea.

The graph plots the logarithm base two of the number of components in an integrated circuit on the vertical axis against calendar years on the horizontal axis. Every notch upwards on the left doubles the number of components. So while 2^3 means 8 components, 2^13 means 8,192 components. That is a thousand-fold increase from 1962 to 1972.

There are two important things to note here.

The first is that he is talking about components on an integrated circuit, not just the number of transistors. Generally there are many more components than transistors, though the ratio did drop over time as different fundamental sorts of transistors were used. But in later years Moore's Law was often turned into purely a count of transistors.

The other thing is that there are only four real data points here in this graph, which he published in 1965. In 1959 the number of components is 2^0 = 1, i.e., that is not about an integrated circuit at all, just about single circuit elements; integrated circuits had not yet been invented. So this is a null data point. Then he plots four actual data points, which we assume were taken from what Fairchild could produce, for 1962, 1963, 1964, and 1965, having 8, 16, 32, and 64 components. That is a doubling every year. It is an exponential increase in the true sense of exponential.
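That doubling-every-year trend is simple enough to sanity-check numerically. The sketch below (the function name and the extrapolation to 1972 are mine, not from the article) reproduces the four data points and the thousand-fold decade:

```python
def components(year, base_year=1962, base_count=8):
    """Components on an IC under the doubling-every-year trend."""
    return base_count * 2 ** (year - base_year)

# The four real data points Moore plotted in 1965:
for year, count in {1962: 8, 1963: 16, 1964: 32, 1965: 64}.items():
    assert components(year) == count

# Extrapolating a decade out, as the graph's axis did:
assert components(1972) == 8192                      # 2^13
assert components(1972) // components(1962) == 1024  # ~thousand-fold
```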

What is the mechanism for this, how can this work? It works because it is in the digital domain, the domain of yes or no, the domain of 0 or 1.

In the last half page of the four and a half page article Moore explains the limitations of his prediction, saying that for some things, like energy storage, we will not see his predicted trend. Energy takes up a certain number of atoms and their electrons to store a given amount, so you can not just arbitrarily change the number of atoms and still store the same amount of energy. Likewise if you have a half gallon milk container you can not put a gallon of milk in it.

But the fundamental digital abstraction is yes or no. A circuit element in an integrated circuit just needs to know whether a previous element said yes or no, whether there is a voltage or current there or not. In the design phase one decides above how many volts or amps, or whatever, means yes, and below how many means no. And there needs to be a good separation between those numbers, a significant no man's land compared to the maximum and minimum possible. But the magnitudes themselves do not matter.

I like to think of it like piles of sand. Is there a pile of sand on the table or not? We might have a convention about how big a typical pile of sand is. But we can make it work if we halve the normal size of a pile of sand. We can still answer whether or not there is a pile of sand there using just half as many grains of sand in a pile.

And then we can halve the number again. And the digital abstraction of yes or no still works. And we can halve it again, and it still works. And again, and again, and again.
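The robustness of the yes or no abstraction can be sketched in a few lines of code. The threshold values and signal levels below are invented for illustration; the point is only that when the signals and the thresholds shrink together, the answers do not change.

```python
# Sketch of the digital abstraction: a signal is read as "yes" or "no" by
# thresholds, so halving every magnitude leaves the answers unchanged.
# All voltage levels and thresholds here are made up for illustration.

def read_bit(level, hi=2.0, lo=0.8):
    """Interpret an analog level as a digital yes/no.

    Anything above `hi` is a yes (1), anything below `lo` is a no (0),
    and the gap between them is the deliberate "no man's land".
    """
    if level >= hi:
        return 1
    if level <= lo:
        return 0
    raise ValueError("ambiguous level in the no man's land")

signal = [3.3, 0.1, 3.0, 0.2]                      # piles of sand ... or not
bits = [read_bit(v) for v in signal]

# Shrink the whole system: halve the signals AND the thresholds together.
half_signal = [v * 0.5 for v in signal]
half_bits = [read_bit(v, hi=1.0, lo=0.4) for v in half_signal]

print(bits)        # [1, 0, 1, 0]
assert bits == half_bits
```

Halve everything again and again and the same loop still yields the same bits, which is exactly the property that let feature sizes shrink for fifty years.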

This is what drives Moore's Law, which in its original form said that we could expect to double the number of components on an integrated circuit every year for 10 years, from 1965 to 1975. That held up!

Variations of Moore's Law followed; they were all about doubling, but sometimes doubling different things, and usually with slightly longer time constants for the doubling. The most popular versions were doubling of the number of transistors, doubling of the switching speed of those transistors (so a computer could run twice as fast), doubling of the amount of memory on a single chip, and doubling of the secondary memory of a computer -- originally on mechanically spinning disks, but for the last five years in solid state flash memory. And there were many others.

Let's get back to Moore's original law for a moment. The components on an integrated circuit are laid out on a two dimensional wafer of silicon. So to double the number of components for the same amount of silicon you need to double the number of components per unit area. That means that the size of a component, in each linear dimension of the wafer, needs to go down by a factor of √2. In turn, that means that Moore was seeing the linear dimension of each component go down to 1/√2, about 71%, of what it was in a year, year over year.

But why was it limited to just a measly factor of two per year? Given the pile of sand analogy from above, why not just go to a quarter of the size of a pile of sand each year, or one sixteenth? It gets back to the yield one gets, the number of working integrated circuits, as you reduce the component size (most commonly called feature size). As the feature size gets smaller, the alignment of the projected patterns of light for each step of the process needs to get more accurate. Since √2 is approximately 1.41, the alignment needs to get better by about 41% each time the linear feature size shrinks by that factor. And because of impurities in the materials that are printed on the circuit, the material from the gasses that are circulating and that are activated by light, the gasses need to get more pure, so that there are fewer bad atoms in each component, now half the area of before. Implicit in Moore's Law, in its original form, was the idea that we could expect the production equipment to get better by about 41% per year, for 10 years.

For the various forms of Moore's Law that came later, the time constant stretched out to 2 years, or even a little longer, for a doubling, but nevertheless the processing equipment has gotten that much better, time period over time period, again and again.

To see the magic of how this works, let's just look at 25 doublings. The equipment has to operate with things 2^12.5 times smaller, i.e., roughly 5,793 times smaller. But we can fit 2^25 more components in a single circuit, which is 33,554,432 times more. The accuracy of our equipment has improved 5,793 times, but that has gotten a further multiplication of 5,793 on top of the original 5,793 times due to the linear to area impact. That is where the payoff of Moore's Law has come from.
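The arithmetic of those 25 doublings is easy to check directly:

```python
# Checking the arithmetic of 25 doublings: component count scales with
# area, but equipment accuracy scales with linear size, so accuracy only
# needs to improve by the square root of the component gain.
doublings = 25
component_gain = 2 ** doublings        # the area effect
linear_gain = 2 ** (doublings / 2)     # the per-dimension effect

print(component_gain)                  # 33554432
print(round(linear_gain))              # 5793

# The payoff: a roughly 5,793x better machine yields a 33,554,432x denser
# circuit, because the linear improvement is squared across two dimensions.
assert abs(linear_gain ** 2 - component_gain) < 1e-3
```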

In his original paper Moore only dared project out, and only implicitly, that the equipment would get better every year for ten years. In reality, with somewhat slowing time constants, that has continued to happen for 50 years.

Now it is coming to an end. But not because the accuracy of the equipment needed to give good yields has stopped improving. No. Rather it is because those piles of sand we referred to above have gotten so small that they only contain a single metaphorical grain of sand. We can't split the minimal quantum of a pile into two any more.

Perhaps the most remarkable thing is Moore's foresight into how this would have an incredible impact upon the world. Here is the first sentence of his second paragraph:

Integrated circuits will lead to such wonders as home computers -- or at least terminals connected to a central computer -- automatic controls for automobiles, and personal portable communications equipment.

This was radical stuff in 1965. So called mini computers were still the size of a desk, and to be useful usually had a few peripherals such as tape units, card readers, or printers, that meant they would be hard to fit into a home kitchen of the day, even with the refrigerator, oven, and sink removed. Most people had never seen a computer and even fewer had interacted with one, and those who had, had mostly done it by dropping off a deck of punched cards, and a day later picking up a printout from what the computer had done when humans had fed the cards to the machine.

The electrical systems of cars were unbelievably simple by today's standards, with perhaps half a dozen on off switches, and simple electromechanical devices to drive the turn indicators, windshield wipers, and the distributor which timed the firing of the spark plugs -- every single function producing piece of mechanism in auto electronics was big enough to be seen with the naked eye. And personal communications devices were rotary dial phones, one per household, firmly plugged into the wall at all times. Or handwritten letters that needed to be dropped into the mail box.

That sentence quoted above, given when it was made, is to me the bravest and most insightful prediction of technology future that we have ever seen.

By the way, the first computer made from integrated circuits was the guidance computer for the Apollo missions, one in the Command Module, and one in the Lunar Lander. The integrated circuits were made by Fairchild, Gordon Moore's company. The first version had 4,100 integrated circuits, each implementing a single 3 input NOR gate. The more capable manned flight versions, which first flew in 1968, had only 2,800 integrated circuits, each implementing two 3 input NOR gates. Moore's Law had its impact on getting to the Moon, even in the Law's infancy.

In the original magazine article this cartoon appears:

At a fortieth anniversary celebration of Moore's Law at the Chemical Heritage Foundation in Philadelphia I asked Dr. Moore whether this cartoon had been his idea. He replied that he had nothing to do with it; it was just there in the magazine, in the middle of his article, to his surprise.

Without any evidence at all on this, my guess is that the cartoonist was reacting somewhat skeptically to the sentence quoted above. The cartoon is set in a department store, as back then US department stores often had a "Notions" department, although this is not something of which I have any personal experience as they are long gone (and I first set foot in the US in 1977). It seems that notions is another word for haberdashery, i.e., pins, cotton, ribbons, and generally things used for sewing. As still today, there is also a "Cosmetics" department. And plop in the middle of them is the "Handy Home Computers" department, with the salesman holding a computer in his hand.

I am guessing that the cartoonist was making fun of this idea, trying to point out the ridiculousness of it. It all came to pass in only 25 years, including being sold in department stores. Not too far from the cosmetics department. But the notions departments had all disappeared. The cartoonist was right in the short term, but blew it in the slightly longer term.

There were many variations on Moore's Law, not just his original about the number of components on a single chip.

Amongst the many there was a version of the law about how fast circuits could operate, as the smaller the transistors were the faster they could switch on and off. There were versions of the law for how much RAM memory, main memory for running computer programs, there would be and when. And there were versions of the law for how big and fast disk drives, for file storage, would be.

This tangle of versions of Moore's Law had a big impact on how technology developed. I will discuss three modes of that impact: competition, coordination, and herd mentality in computer design.

Competition

Memory chips are where data and programs are stored as they are run on a computer. Moore's Law applied to the number of bits of memory that a single chip could store, and a natural rhythm developed of that number of bits going up by a multiple of four on a regular but slightly slowing basis. By jumping over just a doubling, the cost of the silicon foundries could be depreciated over a long enough time to keep things profitable (today a silicon foundry is about a $7B capital cost!), and furthermore it made sense to double the number of memory cells in each dimension to keep the designs balanced, again pointing to a step factor of four.

In the very early days of desktop PCs memory chips had 2^14 = 16,384 bits. The memory chips were called RAM (Random Access Memory -- i.e., any location in memory took equally long to access; there were no slower or faster places), and a chip of this size was called a 16K chip, where K means not exactly 1,000, but instead 1,024 (which is 2^10). Many companies produced 16K RAM chips. But they all knew from Moore's Law when the market would be expecting 64K RAM chips to appear. So they knew what they had to do to not get left behind, and they knew when they had to have samples ready for engineers designing new machines so that just as the machines came out their chips would be ready to be used, having been designed in. And they could judge when it was worth getting just a little ahead of the competition at what price. Everyone knew the game (and in fact all came to a consensus agreement on when the Moore's Law clock should slow down just a little), and they all competed on operational efficiency.
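The factor-of-four rhythm is easy to lay out numerically. The generation list below (16K, 64K, 256K, 1M bits) follows the sequence described above; the point is that quadrupling the bit count doubles each side of a square cell array.

```python
import math

# The factor-of-four rhythm of memory chips: each generation quadruples
# the bit count, which keeps a square cell array balanced, since it
# doubles each linear dimension. "K" here is 1,024, i.e. 2**10.
K = 2 ** 10
generations = [16 * K * 4 ** n for n in range(4)]
print([g // K for g in generations])     # [16, 64, 256, 1024]

# Quadrupling the cells doubles the number of rows and columns:
sides = [math.isqrt(g) for g in generations]
print(sides)                             # [128, 256, 512, 1024]
```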

Coordination

Technology Review talks about this in their story on the end of Moore's Law. If you were the designer of a new computer box for a desktop machine, or any other digital machine for that matter, you could look at when you planned to hit the market and know what amount of RAM memory would take up what board space because you knew how many bits per chip would be available at that time. And you knew how much disk space would be available at what price and what physical volume (disks got smaller and smaller in diameter just as they increased the total amount of storage). And you knew how fast the latest processor chip would run. And you knew what resolution display screen would be available at what price. So a couple of years ahead you could put all these numbers together and come up with what options and configurations would make sense by the exact time when you were going to bring your new computer to market.

The company that sold the computers might make one or two of the critical chips for their products, but mostly they bought other components from other suppliers. The clockwork certainty of Moore's Law let them design a new product without having horrible surprises disrupt their flow and plans. This really let the digital revolution proceed. Everything was orderly and predictable so there were fewer blind alleys to follow. We had probably the single most sustained, continuous, and predictable improvement in any technology over the history of mankind.

Herd mentality in computer design

But with this good came some things that might be viewed negatively (though I'm sure there are some who would argue that they were all an unalloyed good). I'll take up one of these as the third thing to talk about that Moore's Law had a major impact upon.

A particular form of general purpose computer design had arisen by the time that central processors could be put on a single chip (see the Intel 4004 below), and soon those processors on a chip, microprocessors as they came to be known, supported that general architecture. That architecture is known as the von Neumann architecture.

A distinguishing feature of this architecture is that there is a large RAM memory which holds both instructions and data -- made from the RAM chips we talked about above under coordination. The memory is organized into consecutive indexable (or addressable) locations, each containing the same number of binary bits, or digits. The microprocessor itself has a few specialized memory cells, known as registers, and an arithmetic unit that can do additions, multiplications, divisions (more recently), etc. One of those specialized registers is called the program counter (PC), and it holds an address in RAM for the current instruction. The CPU looks at the pattern of bits in that current instruction location and decodes them into what actions it should perform. That might be an action to fetch another location in RAM and put it into one of the specialized registers (this is called a LOAD), or to send the contents in the other direction (STORE), or to take the contents of two of the specialized registers, feed them to the arithmetic unit, and take their sum from the output of that unit and store it in another of the specialized registers. Then the central processing unit increments its PC and looks at the next consecutive addressable instruction. Some specialized instructions can alter the PC and make the machine go to some other part of the program; this is known as branching. For instance, if one of the specialized registers is being used to count down how many elements of an array of consecutive values stored in RAM have been added together, right after the addition instruction there might be an instruction to decrement that counting register, and then branch back earlier in the program to do another LOAD and add if the counting register is still more than zero.

That's pretty much all there is to most digital computers. The rest is just hacks to make them go faster, while still looking essentially like this model. But note that the RAM is used in two ways by a von Neumann computer: to contain data for a program and to contain the program itself. We'll come back to this point later.
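As a sketch of that fetch-decode-execute cycle, here is a toy von Neumann machine. The opcode names and memory layout are invented for illustration, but the structure mirrors the description above: one memory holding both program and data, a program counter, loads, adds, a decrementing counter, and a conditional branch that implements the array-summing loop.

```python
# A toy von Neumann machine. Instructions and data share one memory list;
# the opcodes (LOADI, LOAD, ADD, INC, DEC, BNZ, HALT) are invented here
# purely to illustrate the fetch-decode-execute cycle.

def run(memory):
    reg = [0] * 4                    # a few specialized registers
    pc = 0                           # the program counter
    while True:
        op, *args = memory[pc]       # fetch and decode the current instruction
        pc += 1                      # default: fall through to the next one
        if op == "LOADI":            # reg[a] <- constant
            reg[args[0]] = args[1]
        elif op == "LOAD":           # reg[a] <- memory[reg[b]]  (indirect)
            reg[args[0]] = memory[reg[args[1]]]
        elif op == "ADD":            # reg[a] <- reg[a] + reg[b]
            reg[args[0]] += reg[args[1]]
        elif op == "INC":            # advance a pointer register
            reg[args[0]] += 1
        elif op == "DEC":            # count down a counting register
            reg[args[0]] -= 1
        elif op == "BNZ":            # branch back while register > 0
            if reg[args[0]] > 0:
                pc = args[1]
        elif op == "HALT":
            return reg

# Program (addresses 0-8) and data (addresses 9-12) live in the same RAM.
memory = [
    ("LOADI", 0, 0),    # 0: running sum = 0
    ("LOADI", 1, 4),    # 1: counter = number of array elements
    ("LOADI", 2, 9),    # 2: pointer to the start of the array
    ("LOAD",  3, 2),    # 3: fetch the element the pointer addresses
    ("ADD",   0, 3),    # 4: add it to the sum
    ("INC",   2),       # 5: advance the pointer
    ("DEC",   1),       # 6: count down
    ("BNZ",   1, 3),    # 7: loop back while elements remain
    ("HALT",),          # 8: done
    10, 20, 30, 40,     # 9-12: the data array
]

regs = run(memory)
print(regs[0])           # 100, the sum of the array
```

Note how nothing in `run` distinguishes instructions from data except where the program counter happens to point; that ambiguity is exactly what the buffer-overrun discussion later in this piece turns on.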

With all the versions of Moore's Law firmly operating in support of this basic model it became very hard to break out of it. The human brain certainly doesn't work that way, so it seems that there could be powerful other ways to organize computation. But trying to change the basic organization was a dangerous thing to do, as the inexorable march of Moore's Law was going to continue for the existing architecture anyway. Trying something new would most probably set things back a few years. So brave big scale experiments like the Lisp Machine or Connection Machine, which both grew out of the MIT Artificial Intelligence Lab (and turned into at least three different companies), and Japan's fifth generation computer project (which played with two unconventional ideas, data flow and logical inference) all failed, as before long the Moore's Law doubling of conventional computers overtook the advanced capabilities of the new machines, and software could better emulate the new ideas.

Most computer architects were locked into the conventional organizations of computers that had been around for decades. They competed on changing the coding of the instructions to make execution of programs slightly more efficient per square millimeter of silicon. They competed on strategies to cache copies of larger and larger amounts of RAM memory right on the main processor chip. They competed on how to put multiple processors on a single chip and how to share the cached information from RAM across multiple processor units running at once on a single piece of silicon. And they competed on how to make the hardware more predictive of what future decisions would be in a running program so that they could precompute the right next computations before it was clear whether they would be needed or not. But, they were all locked in to fundamentally the same way of doing computation. Thirty years ago there were dozens of different detailed processor designs, but now they fall into only a small handful of families, the X86, the ARM, and the PowerPC. The X86s are mostly desktops, laptops, and cloud servers. The ARM is what we find in phones and tablets. And you probably have a PowerPC adjusting all the parameters of your cars engine.

The one glaring exception to the lock in caused by Moore's Law is that of Graphical Processing Units, or GPUs. These are different from von Neumann machines. Driven by the desire for better performance for video and graphics, and in particular gaming, the main processor getting better and better under Moore's Law was just not enough to make real time rendering perform well as the underlying simulations got better and better. In this case a new sort of processor was developed. It was not particularly useful for general purpose computations, but it was optimized very well to do additions and multiplications on streams of data, which is what is needed to render something graphically on a screen. Here was a case where a new sort of chip got added into the Moore's Law pool much later than conventional microprocessors, RAM, and disk. The new GPUs did not replace existing processors, but instead got added as partners where graphics rendering was needed. I mention GPUs here because it turns out that they are useful for another type of computation that has become very popular over the last three years, and that is being used as an argument that Moore's Law is not over. I still think it is, and will return to GPUs in the next section.

As I pointed out earlier, we can not halve a pile of sand once we are down to piles that are only a single grain of sand. That is where we are now: we have gotten down to just about one grain piles of sand. Gordon Moore's Law in its classical sense is over. See The Economist from March of last year for a typically thorough, accessible, and thoughtful report.

I earlier talked about the feature size of an integrated circuit and how with every doubling that size is divided by √2. By 1971 Gordon Moore was at Intel, and they released their first microprocessor on a single chip, the 4004, with 2,300 transistors on 12 square millimeters of silicon, with a feature size of 10 micrometers, written 10μm. That means that the smallest distinguishable aspect of any component on the chip was 1/100th of a millimeter.

Since then the feature size has regularly been reduced by a factor of √2, or reduced to about 71% of its previous size, doubling the number of components in a given area, on a clockwork schedule. The schedule clock has however slowed down. Back in the era of Moore's original publication the clock period was a year. Now it is a little over 2 years. In the first quarter of 2017 we are expecting to see the first commercial chips in mass market products with a feature size of 10 nanometers, written 10nm. That is 1,000 times smaller than the feature size of 1971, or 20 applications of the √2 rule over 46 years. Sometimes the jump has been a little better than √2, and so we have actually seen 17 jumps from 10μm down to 10nm. You can see them listed in Wikipedia. In 2012 the feature size was 22nm, in 2014 it was 14nm, now in the first quarter of 2017 we are about to see 10nm shipped to end users, and it is expected that we will see 7nm in 2019 or so. There are still active areas of research working on problems that are yet to be solved to make 7nm a reality, but industry is confident that it will happen. There are predictions of 5nm by 2021, but a year ago there was still much uncertainty over whether the engineering problems necessary to do this could be solved and whether they would be economically viable in any case.
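The shrinkage arithmetic can be checked directly: twenty applications of the √2 rule take a 10μm feature size down almost exactly to 10nm.

```python
# Twenty applications of the sqrt(2) rule: 10 micrometers shrinks by a
# factor of 2**10 = 1,024 in linear size, landing at roughly 10nm.
size_nm = 10_000.0            # 10 micrometers, the 4004's feature size, in nm
for _ in range(20):
    size_nm /= 2 ** 0.5       # one generation: linear size divided by sqrt(2)

print(round(size_nm, 1))      # 9.8 -- i.e., about 10nm
```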

Once you get down to 5nm features they are only about 20 silicon atoms wide. If you go much below this the material starts to be dominated by quantum effects and classical physical properties really start to break down. That is what I mean by only one grain of sand left in the pile.

Today's microprocessors have a few hundred square millimeters of silicon, and 5 to 10 billion transistors. They have a lot of extra circuitry these days to cache RAM, predict branches, etc., all to improve performance. But getting bigger comes with many costs as they get faster too. There is heat to be dissipated from all the energy used in switching so many signals in such a small amount of time, and the time for a signal to travel from one side of the chip to the other, ultimately limited by the speed of light (in reality, in copper it is somewhat less), starts to be significant. The speed of light is approximately 300,000 kilometers per second, or 300,000,000,000 millimeters per second. So light, or a signal, can travel 30 millimeters (just over an inch, about the size of a very large chip today) in no less than one over 10,000,000,000 seconds, i.e., no less than one ten billionth of a second.

Today's fastest processors have a clock speed of 8.760 GHz, which means that by the time the signal is getting to the other side of the chip, the place it came from has moved on to the next thing to do. This makes synchronization across a single microprocessor something of a nightmare, and at best a designer can know ahead of time how late different signals from different parts of the processor will be, and try to design accordingly. So rather than push clock speed further (which is also hard), and rather than make a single microprocessor bigger with more transistors to do more stuff at every clock cycle, for the last few years we have seen large chips go to multicore, with two, four, or eight independent microprocessors on a single piece of silicon.
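A quick back-of-envelope calculation, using the 30mm chip and the 8.760 GHz clock from above, shows why this is a synchronization nightmare: crossing the chip eats most of a clock period.

```python
# Comparing the time a signal needs to cross a 30mm chip (at the speed
# of light, the best possible case) with one clock period at 8.76 GHz.
c_mm_per_s = 300_000_000_000.0       # speed of light in millimeters/second
chip_mm = 30.0
crossing_s = chip_mm / c_mm_per_s    # one ten-billionth of a second

clock_hz = 8.76e9
period_s = 1.0 / clock_hz            # about 1.14e-10 seconds

print(crossing_s)                    # 1e-10
# The crossing time is most of a whole clock cycle, so the signal arrives
# after the sender has already moved on to its next operation.
assert crossing_s > 0.8 * period_s
```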

Multicore has preserved the "number of operations done per second" version of Moore's Law, but at the cost of a simple program not being sped up by that amount -- one cannot simply smear a single program across multiple processing units. For a laptop or a smart phone that is trying to do many things at once that doesn't really matter, as there are usually enough different tasks that need to be done at once that farming them out to different cores on the same chip leads to pretty full utilization. But that will not hold, except for specialized computations, when the number of cores doubles a few more times. The speed up starts to disappear as silicon is left idle because there just aren't enough different things to do.

Despite the arguments that I presented a few paragraphs ago about why Moore's Law is coming to a silicon end, many people argue that it is not, because we are finding ways around those constraints of small numbers of atoms by going to multicore and GPUs. But I think that is changing the definitions too much.

Here is a recent chart that Steve Jurvetson, cofounder of the VC firm DFJ (Draper Fisher Jurvetson), posted on his Facebook page. He said it is an update of an earlier chart compiled by Ray Kurzweil.

In this case the left axis is a logarithmically scaled count of the number of calculations per second per constant dollar. So this expresses how much cheaper computation has gotten over time. In the 1940s there are specialized computers, such as the electromagnetic computers built to break codes at Bletchley Park. By the 1950s they become general purpose, von Neumann style computers, and stay that way until the last few points.

The last two points are both GPUs, the GTX 450 and the NVIDIA Titan X. Steve doesn't label the few points before that, but in every earlier version of this diagram that I can find on the Web (and there are plenty of them), the points beyond 2010 are all multicore. First dual cores, and then quad cores, such as Intel's quad core i7 (and I am typing these words on a 2.9GHz version of that chip, powering my laptop).

That GPUs are there, and that people are excited about them, is because besides graphics they happen to be very good at another very fashionable computation. Deep learning, a form of something known originally as back propagation neural networks, has had a big technological impact recently. It is what has made speech recognition so fantastically better in the last three years that Apple's Siri, Amazon's Echo, and Google Home are useful and practical programs and devices. It has also made image labeling so much better than what we had five years ago, and there is much experimentation with using networks trained on lots of road scenes as part of situational awareness for self driving cars. For deep learning there is a training phase, usually done in the cloud, on millions of examples. That produces a few million numbers which represent the network that is learned. Then when it is time to recognize a word or label an image, that input is fed into a program simulating the network by doing millions of multiplications and additions. Coincidentally, GPUs just happen to be perfect for the way these networks are structured, and so we can expect more and more of them to be built into our automobiles. Lucky break for GPU manufacturers! While GPUs can do lots of computations, they don't work well on just any problem. But they are great for deep learning networks, and those are quickly becoming the flavor of the decade.
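At its core, simulating such a network really is just streams of multiply-accumulate operations, the pattern GPUs were built for. A minimal fully connected layer, with made-up weights and inputs, looks like this:

```python
# The core of deep learning inference: each output of a layer is a
# weighted sum of every input, i.e. a stream of multiply-accumulates.
# All weights, biases, and inputs below are invented for illustration.

def dense(inputs, weights, biases):
    """One fully connected layer with a ReLU nonlinearity."""
    outputs = []
    for w_row, b in zip(weights, biases):
        acc = b
        for x, w in zip(inputs, w_row):
            acc += x * w                  # one multiply-accumulate step
        outputs.append(max(acc, 0.0))     # ReLU: negative sums become 0
    return outputs

x = [1.0, 2.0]
w = [[0.5, -1.0], [2.0, 0.25]]
b = [0.1, -0.5]
print(dense(x, w, b))    # [0.0, 2.0]
```

A real network stacks many such layers with millions of weights, but it is this inner multiply-add loop, repeated relentlessly, that GPUs happen to execute so well.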

So while the chart above rightly shows that we continue to see exponential growth, exactly what is being measured has changed along the way. That is a bit of a sleight of hand.

And I think that change will have big implications.

I think the end of Moore's Law, as I have defined the end, will bring about a golden new era of computer architecture. No longer will architects need to cower at the relentless improvements that they know others will get due to Moore's Law. They will be able to take the time to try new ideas out in silicon, now safe in the knowledge that a conventional computer architecture will not be able to do the same thing in just two or four years in software. And the new things they do may not be about speed. They might be about making computation better in other ways.

Machine learning runtime

We are seeing this with GPUs as runtime engines for deep learning networks. But we are also seeing some more specific architectures. For instance, for about a year Google has had their own chips called Tensor Processing Units (or TPUs) that save power for deep learning networks by effectively reducing the number of significant digits that are kept around, as neural networks work quite well at low precision. Google has placed many of these chips in the computers in their server farms, or cloud, and is able to use learned networks in various search queries, at higher speed for lower electrical power consumption.
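The low precision idea can be sketched in a few lines. The scale factor and 8-bit range below are illustrative, not Google's actual scheme; the point is that a dot product computed from coarsely quantized weights lands very close to the full precision answer.

```python
# Sketch of low-precision inference: quantize weights to 8-bit integers
# and check the dot product stays close to the full-precision answer.
# The scaling scheme and all numbers here are invented for illustration.

def quantize(values, scale=127.0):
    """Map floats onto signed 8-bit integers plus one shared step size."""
    limit = max(abs(v) for v in values)
    return [round(v / limit * scale) for v in values], limit / scale

weights = [0.82, -0.44, 0.07, -0.91]
activations = [1.0, 2.0, 0.5, 1.5]

q, step = quantize(weights)
approx = sum(qi * step * a for qi, a in zip(q, activations))
exact = sum(w * a for w, a in zip(weights, activations))

print(q)                             # [114, -61, 10, -127]
print(abs(approx - exact) < 0.01)    # True: 8 bits is plenty here
```

The integer multiplies are far cheaper in silicon and in energy than full floating point ones, which is where the power savings come from.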

Special purpose silicon

Typical mobile phone chips now have four ARM processor cores on a single piece of silicon, plus some highly optimized special purpose processors on that same piece of silicon. Those processors manage data flowing from cameras and optimize speech quality, and on some chips there is even a special highly optimized processor for detecting human faces. That is used in the camera application -- you've probably noticed little rectangular boxes around people's faces as you are about to take a photograph -- to decide what regions in an image should be most in focus and with the best exposure timing: the faces!

New general purpose approaches

We are already seeing the rise of special purpose architectures for very specific computations. But perhaps we will also see more general purpose architectures, with a different style of computation, making a comeback.

Conceivably the dataflow and logic models of the Japanese fifth generation computer project might now be worth exploring again. And as we digitize the world, the cost of bad computer security will threaten our very existence. So perhaps, if things work out, the unleashed computer architects can slowly start to dig us out of our current deplorable situation.

Secure computing

We all hear about cyber hackers breaking into computers, often half a world away, or sometimes now in a computer controlling the engine, and soon everything else, of a car as it drives by. How can this happen?

Cyber hackers are creative but many ways that they get into systems are fundamentally through common programming errors in programs built on top of the von Neumann architectures we talked about before.

A common case is exploiting something known as buffer overrun. A fixed size piece of memory is reserved to hold, say, the web address that one can type into a browser, or the Google query box. If all programmers wrote very careful code and someone typed in way too many characters, those past the limit would not get stored in RAM at all. But all too often a programmer has used a coding trick that is simple and quick to produce, that does not check for overrun, and the typed characters get put into memory way past the end of the buffer, perhaps overwriting some code that the program might jump to later. This relies on the feature of von Neumann architectures that data and programs are stored in the same memory. So, if the hacker chooses some characters whose binary codes correspond to instructions that do something malicious to the computer, say setting up an account for them with a particular password, then later, as if by magic, the hacker will have a remotely accessible account on the computer, just as many legitimate human and program services may. Programmers shouldn't oughta make this mistake, but history shows that it happens again and again.
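The mechanism can be simulated in a few lines. The memory layout and "code" values below are invented for illustration; the point is that in a shared program-and-data memory, an unchecked write runs straight past the buffer into the program itself.

```python
# Simulating buffer overrun in a von Neumann memory: the cells right past
# the end of a data buffer hold the program itself. The layout and the
# "code" marker strings here are invented for illustration.

memory = ["?"] * 8 + ["OLD_CODE", "OLD_CODE"]   # 8-cell buffer, then code
BUFFER_START, BUFFER_SIZE = 0, 8

def careless_write(data):
    """The quick trick: copy the input with no length check at all."""
    for i, ch in enumerate(data):
        memory[BUFFER_START + i] = ch            # may run past the buffer!

def careful_write(data):
    """The check a careful programmer adds."""
    if len(data) > BUFFER_SIZE:
        raise ValueError("input too long for buffer")
    for i, ch in enumerate(data):
        memory[BUFFER_START + i] = ch

# An attacker supplies 8 filler characters plus their own "instructions":
careless_write(list("AAAAAAAA") + ["EVIL_CODE", "EVIL_CODE"])
print(memory[8])    # EVIL_CODE -- the program has been overwritten
```

With `careful_write` the same input raises an error and the program region is never touched; the entire exploit lives in the difference between those two functions.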

Another common way in is that in modern web services sometimes the browser on a laptop, tablet, or smart phone, and the computers in the cloud, need to pass really complex things between them. Rather than the programmer having to know in advance all those complex possible things and handle messages for them, it is set up so that one or both sides can pass little bits of source code back and forth and execute them on the other computer. In this way capabilities that were never originally conceived of can start working later on in an existing system without having to update the applications. It is impossible to be sure that a piece of code won't do certain things, so if the programmer decided to give a fully general capability through this mechanism there is no way for the receiving machine to know ahead of time that the code is safe and won't do something malicious (this is a generalization of the halting problem; I could go on and on, but I won't here). So sometimes a cyber hacker can exploit this weakness and send a little bit of malicious code directly to some service that accepts code.

Beyond that, cyber hackers are always coming up with new inventive ways in -- these have just been two examples to illustrate a couple of ways of how it is currently done.

It is possible to write code that protects against many of these problems, but code writing is still a very human activity, and there are just too many human-created holes that can leak, from too many code writers. One way to combat this is to have extra silicon that hides some of the low level possibilities of a von Neumann architecture from programmers, by only giving the instructions in memory a more limited set of possible actions.

This is not a new idea. Most microprocessors have some version of protection rings, which give increasingly untrusted code access only to increasingly limited areas of memory, even if it tries to access more with normal instructions. The idea has been around a long time, but it has suffered from not having a standard way to use or implement it, so most software, in an attempt to be able to run on most machines, usually specifies only two or at most three rings of protection. That is a very coarse tool and lets too much through. Perhaps now the idea will be thought about more seriously, in an attempt to get better security when just making things faster is no longer practical.
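The ring idea reduces to a single comparison. In this minimal sketch (the page table and ring assignments are invented), each page of memory is tagged with the most untrusted ring allowed to touch it, and an access from ring `r` succeeds only when `r` is at least as privileged:

```python
# Hypothetical ring-protection check: lower ring number = more trusted.
page_ring = {
    0x1000: 0,   # kernel page: ring 0 only
    0x2000: 3,   # user page: any ring may touch it
}

def may_access(ring: int, page: int) -> bool:
    """Code running at `ring` may touch `page` only if it is trusted enough."""
    return ring <= page_ring[page]

assert may_access(0, 0x1000)       # kernel code reaches kernel memory
assert not may_access(3, 0x1000)   # user code is blocked, even with normal loads
assert may_access(3, 0x2000)
```

With only two or three rings used in practice, every page falls into one of a couple of coarse buckets, which is exactly the "lets too much through" problem.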

Another idea, one that has mostly been implemented only in software, with perhaps one or two exceptions, is called capability-based security, achieved through capability-based addressing. Programs are not given direct access to the regions of memory they need to use; instead they are given unforgeable, cryptographically sound reference handles, along with a defined subset of things they are allowed to do with the memory. Hardware architects might now have the time to push through on making this approach completely enforceable, getting it right once in hardware so that mere human programmers, pushed to get new software out on a promised release date, cannot screw things up.
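In software, the idea might look like the following sketch (the class and its methods are invented for illustration, not any real system's API): a program receives a handle, never a raw address, and the handle enforces both the permitted operations and the bounds.

```python
import secrets

MEMORY = bytearray(64)   # the machine's RAM, never handed out directly

class Capability:
    """An unforgeable handle to a region of MEMORY plus the operations
    allowed on it -- a software sketch of capability-based addressing."""
    def __init__(self, start: int, length: int, perms: frozenset):
        self._start, self._length = start, length    # hidden from the holder
        self._perms = perms
        self._token = secrets.token_bytes(16)        # makes handles unguessable

    def read(self, offset: int) -> int:
        if "read" not in self._perms or not 0 <= offset < self._length:
            raise PermissionError("read not permitted")
        return MEMORY[self._start + offset]

    def write(self, offset: int, value: int) -> None:
        if "write" not in self._perms or not 0 <= offset < self._length:
            raise PermissionError("write not permitted")
        MEMORY[self._start + offset] = value

cap = Capability(16, 8, frozenset({"read"}))   # read-only view of 8 bytes
cap.read(0)   # allowed
# cap.write(0, 65) and cap.read(8) both raise PermissionError
```

Done in hardware, these checks cost nothing per access and cannot be skipped by a hurried programmer.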

From one point of view, the Lisp Machines that I talked about earlier were built on a very specific and limited version of a capability-based architecture. Underneath it all, those machines were von Neumann machines, but the instructions they could execute were deliberately limited. Through the use of something called typed pointers, at the hardware level every reference to every piece of memory came with restrictions on what instructions could do with that memory, based on the type encoded in the pointer. And memory could only be referenced by a pointer to the start of a chunk of memory whose size was fixed at the time the memory was reserved. So in the buffer overrun case, a buffer for a string of characters would not allow data to be written to or read from beyond its end. And instructions could only be referenced through another type of pointer, a code pointer. The hardware kept the general-purpose memory partitioned at a very fine grain by the type of the pointers granted to it when reserved. And to a first approximation, the type of a pointer could never be changed, nor could the actual address in RAM be seen by any instructions that had access to a pointer.
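The typed-pointer discipline can be illustrated the same way (again a toy reconstruction, not the actual Lisp Machine instruction set): every pointer carries a type tag and a fixed extent, and each operation checks the tag before acting.

```python
class TypedPointer:
    """A pointer whose type and extent are fixed when the memory is reserved."""
    def __init__(self, tag: str, cells: list):
        self.tag = tag          # never changes after allocation
        self._cells = cells     # the raw RAM address is never visible

def write_char(ptr: TypedPointer, i: int, ch: str) -> None:
    if ptr.tag != "string":
        raise TypeError("not a string pointer")
    if not 0 <= i < len(ptr._cells):
        raise IndexError("past the end of the buffer")   # overrun impossible
    ptr._cells[i] = ch

def jump(ptr: TypedPointer):
    if ptr.tag != "code":
        raise TypeError("cannot execute data")   # data is never run as code
    return ptr._cells[0]()

buf = TypedPointer("string", [" "] * 8)
write_char(buf, 0, "a")   # fine
# write_char(buf, 8, "x") raises IndexError; jump(buf) raises TypeError
```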

There have been ideas out there for a long time on how to improve security through this use of hardware restrictions on the general-purpose von Neumann architecture, and I have talked about a few of them here. Now I think we can expect this to become a much more compelling place for hardware architects to spend their time, as the security of our computational systems becomes a major Achilles heel for the smooth running of our businesses, our lives, and our society.

Quantum computers

Quantum computers are a largely experimental and, at this time, very expensive technology. Between the need to cool them to physics-experiment-level ultra cold, with the expense that entails, and the confusion over how much speedup they might give over conventional silicon-based computers and for what class of problem, they are a large-investment, high-risk research topic at this time. I won't go into all the arguments (I haven't read them all, and frankly I do not have the expertise that would make me confident in any opinion I might form), but Scott Aaronson's blog on computational complexity and quantum computation is probably the best source for those interested. Claims on speedups, either achieved or hoped to be achieved, on practical problems range from a factor of 1 to thousands (and I might have that upper bound wrong). In the old days just waiting 10 or 20 years would let Moore's Law get you there. Instead we have seen well over a decade of sustained investment in a technology that people are still arguing over whether it can ever work. To me this is yet more evidence that the end of Moore's Law is encouraging new investment and new explorations.

Unimaginable stuff

Even with these various innovations around, triggered by the end of Moore's Law, the best things we might see may not yet be in the common consciousness. I think the freedom to innovate, without the overhang of Moore's Law, the freedom to take time to investigate curious corners, may well lead to a new garden of Eden in computational models. Five to ten years from now we may see a completely new form of computer arrangement, in traditional silicon (not quantum), that is doing things, and doing them faster, than we can today imagine. And with a further thirty years of development those chips might be doing things that would today be indistinguishable from magic, just as today's smartphone would have seemed like utter magic to the me of 50 years ago.

Many times the popular press, or people who should know better, refer to something that is increasing a lot as "exponential". Something is only truly exponential if there is a constant ratio in size between any two points in time separated by the same amount. Here the ratio is 2, for any two points a year apart. The misuse of the term exponential growth is widespread and makes me cranky.
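The constant-ratio test is easy to check directly. A minimal illustration: for annual doubling the ratio between any two points a year apart is exactly 2, while a fast-growing polynomial fails the test.

```python
def f(t: int) -> float:
    """Annual doubling: truly exponential growth."""
    return 2.0 ** t

ratios = {f(t + 1) / f(t) for t in range(10)}
assert ratios == {2.0}          # one constant ratio: exponential

# A cubic grows a lot too, but its year-over-year ratio keeps changing.
cubic_ratios = {(t + 1) ** 3 / t ** 3 for t in range(1, 10)}
assert len(cubic_ratios) > 1    # no constant ratio: not exponential
```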

Why the Chemical Heritage Foundation for this celebration? Both of Gordon Moore's degrees (BS and PhD) were in physical chemistry!

For those who read my first blog, once again see Roy Amara's Law.

I had been a post-doc at the MIT AI Lab and loved using Lisp Machines there, but when I left and joined the faculty at Stanford in 1983 I realized that the more conventional SUN workstations, being developed there and at the spin-off company Sun Microsystems, would win out in performance very quickly. So I built a software-based Lisp system (which I called TAIL (Toy AI Language) in a nod to the naming conventions of most software at the Stanford Artificial Intelligence Lab, e.g., BAIL, FAIL, SAIL, MAIL) that ran on the early Sun workstations, which themselves used completely generic microprocessors. By mid 1984 Richard Gabriel, I, and others had started a company called Lucid in Palo Alto to compete on conventional machines with the Lisp Machine companies. We used my Lisp compiler as a stopgap, but as is often the case with software, it was still the compiler used by Lucid eight years later, when it ran on 19 different makes of machines. I had moved back to MIT to join the faculty in late 1984, and eventually became the director of the Artificial Intelligence Lab there (and then CSAIL). But for eight years, while teaching computer science and developing robots by day, I also by night developed and maintained my original compiler as the workhorse of Lucid Lisp. Just as the Lisp Machine companies got swept away, so too eventually did Lucid. Whereas the Lisp Machine companies got swept away by Moore's Law, Lucid got swept away as the fashion in computer languages shifted to what was, for many years, a winner-take-all world of C.

Full disclosure. DFJ is one of the VCs who have invested in my company Rethink Robotics.

See the original post:

The End of Moore's Law – Rodney Brooks

Moore’s Law | Definition of Moore’s Law by Merriam-Webster

In 1965, Gordon E. Moore, the co-founder of Intel, published a paper predicting that the complexity of the integrated circuit could be expanded exponentially at a reasonable cost, approximately doubling every two years. At the time, the integrated circuit, a key component in the central processing unit of computers, had only been around for seven years. Indeed, the trend has continued for over fifty years with no sign of abating.

For the consumer, Moore’s law is demonstrated by a $1500 computer today being worth half that amount next year and being almost obsolete in two years.

While Moore’s law is a really just an observation of a trend, it has also become a goal of the electronics industry. The innovation of electronic design and manufacturing costs, including the objective of placing whole “systems” on a chip, miniaturization of electronic devices, and the seamless integration of electronics into social fabric of daily life are outcomes from these industry goals.


What is Moore’s Law? Webopedia Definition


By Vangie Beal

(n.) Moore’s Law is the observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented. Moore predicted that this trend would continue for the foreseeable future. In subsequent years, the pace slowed down a bit, but data density has doubled approximately every 18 months, and this is the current definition of Moore’s Law, which Moore himself has blessed. Most experts, including Moore himself, expect Moore’s Law to hold true until 2020-2025.



Moore’s law | computer science | Britannica.com

Moore's law, prediction made by American engineer Gordon Moore in 1965 that the number of transistors per silicon chip doubles every year.


For a special issue of the journal Electronics, Moore was asked to predict developments over the next decade. Observing that the total number of components in these circuits had roughly doubled each year, he blithely extrapolated this annual doubling to the next decade, estimating that microcircuits of 1975 would contain an astounding 65,000 components per chip. In 1975, as the rate of growth began to slow, Moore revised his time frame to two years. His revised law was a bit pessimistic; over roughly 50 years from 1961, the number of transistors doubled approximately every 18 months. Subsequently, magazines regularly referred to Moore's law as though it were inexorable: a technological law with the assurance of Newton's laws of motion.

What made this dramatic explosion in circuit complexity possible was the steadily shrinking size of transistors over the decades. Measured in millimetres in the late 1940s, the dimensions of a typical transistor in the early 2010s were more commonly expressed in tens of nanometres (a nanometre being one-billionth of a metre), a reduction factor of over 100,000. Transistor features measuring less than a micron (a micrometre, or one-millionth of a metre) were attained during the 1980s, when dynamic random-access memory (DRAM) chips began offering megabyte storage capacities. At the dawn of the 21st century, these features approached 0.1 micron across, which allowed the manufacture of gigabyte memory chips and microprocessors that operate at gigahertz frequencies. Moore's law continued into the second decade of the 21st century with the introduction of three-dimensional transistors that were tens of nanometres in size.


Nineteen Eighty-Four – Wikipedia

Nineteen Eighty-Four, often published as 1984, is a dystopian novel published in 1949 by English author George Orwell.[2][3] The novel is set in the year 1984 when most of the world population have become victims of perpetual war, omnipresent government surveillance and public manipulation.

In the novel, Great Britain (“Airstrip One”) has become a province of a superstate named Oceania. Oceania is ruled by the “Party”, who employ the “Thought Police” to persecute individualism and independent thinking.[4] The Party’s leader is Big Brother, who enjoys an intense cult of personality but may not even exist. The protagonist of the novel, Winston Smith, is a rank-and-file Party member. Smith is an outwardly diligent and skillful worker, but he secretly hates the Party and dreams of rebellion against Big Brother. Smith rebels by entering a forbidden relationship with fellow employee Julia.

As literary political fiction and dystopian science-fiction, Nineteen Eighty-Four is a classic novel in content, plot, and style. Many of its terms and concepts, such as Big Brother, doublethink, thoughtcrime, Newspeak, Room 101, telescreen, 2 + 2 = 5, and memory hole, have entered into common usage since its publication in 1949. Nineteen Eighty-Four popularised the adjective Orwellian, which describes official deception, secret surveillance, brazenly misleading terminology, and manipulation of recorded history by a totalitarian or authoritarian state.[5] In 2005, the novel was chosen by Time magazine as one of the 100 best English-language novels from 1923 to 2005.[6] It was awarded a place on both lists of Modern Library 100 Best Novels, reaching number 13 on the editor’s list, and 6 on the readers’ list.[7] In 2003, the novel was listed at number 8 on the BBC’s survey The Big Read.[8]

Orwell “encapsulate[d] the thesis at the heart of his unforgiving novel” in 1944, the implications of dividing the world up into zones of influence, which had been conjured by the Tehran Conference. Three years later, he wrote most of it on the Scottish island of Jura from 1947 to 1948 despite being seriously ill with tuberculosis.[9][10] On 4 December 1948, he sent the final manuscript to the publisher Secker and Warburg, and Nineteen Eighty-Four was published on 8 June 1949.[11][12] By 1989, it had been translated into 65 languages, more than any other novel in English until then.[13] The title of the novel, its themes, the Newspeak language and the author’s surname are often invoked against control and intrusion by the state, and the adjective Orwellian describes a totalitarian dystopia that is characterised by government control and subjugation of the people.

Orwell’s invented language, Newspeak, satirises hypocrisy and evasion by the state: the Ministry of Love (Miniluv) oversees torture and brainwashing, the Ministry of Plenty (Miniplenty) oversees shortage and rationing, the Ministry of Peace (Minipax) oversees war and atrocity and the Ministry of Truth (Minitrue) oversees propaganda and historical revisionism.

The Last Man in Europe was an early title for the novel, but in a letter dated 22 October 1948 to his publisher Fredric Warburg, eight months before publication, Orwell wrote about hesitating between that title and Nineteen Eighty-Four.[14] Warburg suggested choosing the main title to be the latter, a more commercial one.[15]

In the novel 1985 (1978), Anthony Burgess suggests that Orwell, disillusioned by the onset of the Cold War (1945–91), intended to call the book 1948. The introduction to the Penguin Books Modern Classics edition of Nineteen Eighty-Four reports that Orwell originally set the novel in 1980 but that he later shifted the date to 1982 and then to 1984. The introduction to the Houghton Mifflin Harcourt edition of Animal Farm and 1984 (2003) reports that the title 1984 was chosen simply as an inversion of the year 1948, the year in which it was being completed, and that the date was meant to give an immediacy and urgency to the menace of totalitarian rule.[16]

Throughout its publication history, Nineteen Eighty-Four has been either banned or legally challenged, as subversive or ideologically corrupting, like Aldous Huxley’s Brave New World (1932), We (1924) by Yevgeny Zamyatin, Darkness at Noon (1940) by Arthur Koestler, Kallocain (1940) by Karin Boye and Fahrenheit 451 (1953) by Ray Bradbury.[17] Some writers consider the Russian dystopian novel We by Zamyatin to have influenced Nineteen Eighty-Four,[18][19] and the novel bears significant similarities in its plot and characters to Darkness at Noon, written years before by Arthur Koestler, who was a personal friend of Orwell.[20]

The novel is in the public domain in Canada,[21] South Africa,[22] Argentina,[23] Australia,[24] and Oman.[25] It will be in the public domain in the United Kingdom, the EU,[26] and Brazil in 2021[27] (70 years after the author’s death), and in the United States in 2044.[28]

Nineteen Eighty-Four is set in Oceania, one of three inter-continental superstates that divided the world after a global war.

Smith’s memories and his reading of the proscribed book, The Theory and Practice of Oligarchical Collectivism by Emmanuel Goldstein, reveal that after the Second World War, the United Kingdom became involved in a war fought in Europe, western Russia, and North America during the early 1950s. Nuclear weapons were used during the war, leading to the destruction of Colchester. London would also suffer widespread aerial raids, leading Winston’s family to take refuge in a London Underground station. Britain fell to civil war, with street fighting in London, before the English Socialist Party, abbreviated as Ingsoc, emerged victorious and formed a totalitarian government in Britain. The British Commonwealth was absorbed by the United States to become Oceania. Eventually Ingsoc emerged to form a totalitarian government in the country.

Simultaneously, the Soviet Union conquered continental Europe and established the second superstate of Eurasia. The third superstate of Eastasia would emerge in the Far East after several decades of fighting. The three superstates wage perpetual war for the remaining unconquered lands of the world in “a rough quadrilateral with its corners at Tangier, Brazzaville, Darwin, and Hong Kong” through constantly shifting alliances. Although each of the three states are said to have sufficient natural resources, the war continues in order to maintain ideological control over the people.

However, due to the fact that Winston barely remembers these events and due to the Party’s manipulation of history, the continuity and accuracy of these events are unclear. Winston himself notes that the Party has claimed credit for inventing helicopters, airplanes and trains, while Julia theorizes that the perpetual bombing of London is merely a false-flag operation designed to convince the populace that a war is occurring. If the official account was accurate, Smith’s strengthening memories and the story of his family’s dissolution suggest that the atomic bombings occurred first, followed by civil war featuring “confused street fighting in London itself” and the societal postwar reorganisation, which the Party retrospectively calls “the Revolution”.

Most of the plot takes place in London, the “chief city of Airstrip One”, the Oceanic province that “had once been called England or Britain”.[29][30] Posters of the Party leader, Big Brother, bearing the caption “BIG BROTHER IS WATCHING YOU”, dominate the city (Winston states it can be found on nearly every house), while the ubiquitous telescreen (transceiving television set) monitors the private and public lives of the populace. Military parades, propaganda films, and public executions are said to be commonplace.

The class hierarchy of Oceania has three levels: the Inner Party at the top, the Outer Party below it, and the Proles, who make up the bulk of the population.

As the government, the Party controls the population with four ministries: the Ministry of Peace, the Ministry of Plenty, the Ministry of Truth, and the Ministry of Love.

The protagonist Winston Smith, a member of the Outer Party, works in the Records Department of the Ministry of Truth as an editor, revising historical records, to make the past conform to the ever-changing party line and deleting references to unpersons, people who have been “vaporised”, i.e., not only killed by the state but denied existence even in history or memory.

The story of Winston Smith begins on 4 April 1984: “It was a bright cold day in April, and the clocks were striking thirteen.” Yet he is uncertain of the true date, given the regime’s continual rewriting and manipulation of history.[31]

In the year 1984, civilization has been damaged by war, civil conflict, and revolution. Airstrip One (formerly Britain) is a province of Oceania, one of the three totalitarian super-states that rule the world. It is ruled by the "Party" under the ideology of "Ingsoc" and the mysterious leader Big Brother, who has an intense cult of personality. The Party stamps out anyone who does not fully conform to its regime using the Thought Police and constant surveillance, through devices such as telescreens (two-way televisions).

Winston Smith is a member of the middle class Outer Party. He works at the Ministry of Truth, where he rewrites historical records to conform to the state’s ever-changing version of history. Those who fall out of favour with the Party become “unpersons”, disappearing with all evidence of their existence removed. Winston revises past editions of The Times, while the original documents are destroyed by fire in a “memory hole”. He secretly opposes the Party’s rule and dreams of rebellion. He realizes that he is already a “thoughtcriminal” and likely to be caught one day.

While in a proletarian neighbourhood, he meets an antique shop owner called Mr. Charrington and buys a diary. He uses an alcove to hide it from the telescreen in his room, and writes thoughts criticising the Party and Big Brother. In the journal, he records his sexual frustration over a young woman named Julia who maintains the novel-writing machines at the ministry; Winston is attracted to her but suspects that she is an informant. He also suspects that his superior, an Inner Party official named O'Brien, is a secret agent for an enigmatic underground resistance movement known as the Brotherhood, a group formed by Big Brother's reviled political rival Emmanuel Goldstein.

The next day, Julia secretly hands Winston a note confessing her love for him. Winston and Julia begin an affair, an act of rebellion, as the Party insists that sex may only be used for reproduction. Winston realizes that she shares his loathing of the Party. They first meet in the country, and later in a rented room above Mr. Charrington's shop. During his affair with Julia, Winston remembers the disappearance of his family during the civil war of the 1950s and his terse relationship with his ex-wife Katharine. Winston also interacts with his colleague Syme, who is writing a dictionary for a revised version of the English language called Newspeak. After Syme admits that the true purpose of Newspeak is to reduce the capacity of human thought, Winston speculates that Syme will disappear. Not long after, Syme disappears and no one acknowledges his absence.

Weeks later, Winston is approached by O’Brien, who offers Winston a chance to join the Brotherhood. They arrange a meeting at O’Brien’s luxurious flat where both Winston and Julia swear allegiance to the Brotherhood. He sends Winston a copy of The Theory and Practice of Oligarchical Collectivism by Emmanuel Goldstein. Winston and Julia read parts of the book, which explains more about how the Party maintains power, the true meanings of its slogans and the concept of perpetual war. It argues that the Party can be overthrown if proles (proletarians) rise up against it.

Mr. Charrington is revealed to be an agent of the Thought Police. Winston and Julia are captured in the shop and imprisoned in the Ministry of Love. O’Brien reveals that he is loyal to the party, and part of a special sting operation to catch “thoughtcriminals”. Over many months, Winston is tortured and forced to “cure” himself of his “insanity” by changing his own perception to fit the Party line, even if it requires saying that “2 + 2 = 5”. O’Brien openly admits that the Party “is not interested in the good of others; it is interested solely in power.” He says that once Winston is brainwashed into loyalty, he will be released back into society for a period of time, before they execute him. Winston points out that the Party has not managed to make him betray Julia.

O’Brien then takes Winston to Room 101 for the final stage of re-education. The room contains each prisoner’s worst fear, in Winston’s case rats. As a wire cage holding hungry rats is fitted onto his face, Winston shouts “Do it to Julia!”, thus betraying her. After being released, Winston meets Julia in a park. She says that she was also tortured, and both reveal betraying the other. Later, Winston sits alone in a caf as Oceania celebrates a supposed victory over Eurasian armies in Africa, and realizes that “He loved Big Brother.”

Ingsoc (English Socialism) is the predominant ideology and pseudophilosophy of Oceania, and Newspeak is the official language of official documents.

In London, the capital city of Airstrip One, Oceania's four government ministries are in pyramids (300 m high), the façades of which display the Party's three slogans. The ministries' names are the opposite (doublethink) of their true functions: "The Ministry of Peace concerns itself with war, the Ministry of Truth with lies, the Ministry of Love with torture and the Ministry of Plenty with starvation." (Part II, Chapter IX The Theory and Practice of Oligarchical Collectivism)

The Ministry of Peace supports Oceania’s perpetual war against either of the two other superstates:

The primary aim of modern warfare (in accordance with the principles of doublethink, this aim is simultaneously recognized and not recognized by the directing brains of the Inner Party) is to use up the products of the machine without raising the general standard of living. Ever since the end of the nineteenth century, the problem of what to do with the surplus of consumption goods has been latent in industrial society. At present, when few human beings even have enough to eat, this problem is obviously not urgent, and it might not have become so, even if no artificial processes of destruction had been at work.

The Ministry of Plenty rations and controls food, goods, and domestic production; every fiscal quarter, it publishes false claims of having raised the standard of living, when it has, in fact, reduced rations, availability, and production. The Ministry of Truth substantiates the Ministry of Plenty's claims by revising historical records to report numbers supporting the current, "increased rations".

The Ministry of Truth controls information: news, entertainment, education, and the arts. Winston Smith works in the Minitrue RecDep (Records Department), “rectifying” historical records to concord with Big Brother’s current pronouncements so that everything the Party says is true.

The Ministry of Love identifies, monitors, arrests, and converts real and imagined dissidents. In Winston's experience, the dissident is beaten and tortured, and, when near-broken, he is sent to Room 101 to face "the worst thing in the world", until love for Big Brother and the Party replaces dissension.

The keyword here is blackwhite. Like so many Newspeak words, this word has two mutually contradictory meanings. Applied to an opponent, it means the habit of impudently claiming that black is white, in contradiction of the plain facts. Applied to a Party member, it means a loyal willingness to say that black is white when Party discipline demands this. But it means also the ability to believe that black is white, and more, to know that black is white, and to forget that one has ever believed the contrary. This demands a continuous alteration of the past, made possible by the system of thought which really embraces all the rest, and which is known in Newspeak as doublethink. Doublethink is basically the power of holding two contradictory beliefs in one’s mind simultaneously, and accepting both of them.

Three perpetually warring totalitarian super-states control the world: Oceania, Eurasia, and Eastasia.[34]

The perpetual war is fought for control of the “disputed area” lying “between the frontiers of the super-states”, which forms “a rough parallelogram with its corners at Tangier, Brazzaville, Darwin and Hong Kong”,[34] and Northern Africa, the Middle East, India and Indonesia are where the superstates capture and use slave labour. Fighting also takes place between Eurasia and Eastasia in Manchuria, Mongolia and Central Asia, and all three powers battle one another over various Atlantic and Pacific islands.

Goldstein’s book, The Theory and Practice of Oligarchical Collectivism, explains that the superstates’ ideologies are alike and that the public’s ignorance of this fact is imperative so that they might continue believing in the detestability of the opposing ideologies. The only references to the exterior world for the Oceanian citizenry (the Outer Party and the Proles) are Ministry of Truth maps and propaganda to ensure their belief in “the war”.

Winston Smith’s memory and Emmanuel Goldstein’s book communicate some of the history that precipitated the Revolution. Eurasia was formed when the Soviet Union conquered Continental Europe, creating a single state stretching from Portugal to the Bering Strait. Eurasia does not include the British Isles because the United States annexed them along with the rest of the British Empire and Latin America, thus establishing Oceania and gaining control over a quarter of the planet. Eastasia, the last superstate established, emerged only after “a decade of confused fighting”. It includes the Asian lands conquered by China and Japan. Although Eastasia is prevented from matching Eurasia’s size, its larger populace compensates for that handicap.

The annexation of Britain occurred about the same time as the atomic war that provoked civil war, but who fought whom in the war is left unclear. Nuclear weapons fell on Britain; an atomic bombing of Colchester is referenced in the text. Exactly how Ingsoc and its rival systems (Neo-Bolshevism and Death Worship) gained power in their respective countries is also unclear.

While the precise chronology cannot be traced, most of the global societal reorganization occurred between 1945 and the early 1960s. Winston and Julia once meet in the ruins of a church that was destroyed in a nuclear attack “thirty years” earlier, which suggests 1954 as the year of the atomic war that destabilised society and allowed the Party to seize power. It is stated in the novel that the “fourth quarter of 1983” was “also the sixth quarter of the Ninth Three-Year Plan”, which implies that the first quarter of the first three-year plan began in July 1958. By then, the Party was apparently in control of Oceania.

In 1984, there is a perpetual war between Oceania, Eurasia and Eastasia, the superstates that emerged from the global atomic war. The Theory and Practice of Oligarchical Collectivism, by Emmanuel Goldstein, explains that each state is so strong it cannot be defeated, even with the combined forces of two superstates, despite changing alliances. To hide such contradictions, history is rewritten to explain that the (new) alliance always was so; the populaces are accustomed to doublethink and accept it. The war is not fought in Oceanian, Eurasian or Eastasian territory but in the Arctic wastes and in a disputed zone comprising the sea and land from Tangiers (Northern Africa) to Darwin (Australia). At the start, Oceania and Eastasia are allies fighting Eurasia in northern Africa and the Malabar Coast.

That alliance ends and Oceania, allied with Eurasia, fights Eastasia, a change occurring on Hate Week, dedicated to creating patriotic fervour for the Party’s perpetual war. The public are blind to the change; in mid-sentence, an orator changes the name of the enemy from “Eurasia” to “Eastasia” without pause. When the public are enraged at noticing that the wrong flags and posters are displayed, they tear them down; the Party later claims to have captured Africa.

Goldstein’s book explains that the purpose of the unwinnable, perpetual war is to consume human labour and commodities so that the economy of a superstate cannot support economic equality, with a high standard of life for every citizen. By using up most of the produced objects like boots and rations, the proles are kept poor and uneducated and will neither realise what the government is doing nor rebel. Goldstein also details an Oceanian strategy of attacking enemy cities with atomic rockets before invasion but dismisses it as unfeasible and contrary to the war’s purpose; despite the atomic bombing of cities in the 1950s, the superstates stopped it for fear that would imbalance the powers. The military technology in the novel differs little from that of World War II, but strategic bomber aeroplanes are replaced with rocket bombs, helicopters were heavily used as weapons of war (they did not figure in World War II in any form but prototypes) and surface combat units have been all but replaced by immense and unsinkable Floating Fortresses, island-like contraptions concentrating the firepower of a whole naval task force in a single, semi-mobile platform (in the novel, one is said to have been anchored between Iceland and the Faroe Islands, suggesting a preference for sea lane interdiction and denial).

The society of Airstrip One and, according to “The Book”, almost the whole world, lives in poverty: hunger, disease and filth are the norms. Ruined cities and towns are common: the consequence of the civil war, the atomic wars and the purportedly enemy (but possibly false flag) rockets. Social decay and wrecked buildings surround Winston; aside from the ministerial pyramids, little of London was rebuilt. Members of the Outer Party consume synthetic foodstuffs and poor-quality “luxuries” such as oily gin and loosely-packed cigarettes, distributed under the “Victory” brand. (That is a parody of the low-quality Indian-made “Victory” cigarettes, widely smoked in Britain and by British soldiers during World War II. They were smoked because it was easier to import them from India than to import American cigarettes from across the Atlantic because of the Battle of the Atlantic.)

Winston describes something as simple as the repair of a broken pane of glass as requiring committee approval that can take several years and so most of those living in one of the blocks usually do the repairs themselves (Winston himself is called in by Mrs. Parsons to repair her blocked sink). All Outer Party residences include telescreens that serve both as outlets for propaganda and to monitor the Party members; they can be turned down, but they cannot be turned off.

In contrast to their subordinates, the Inner Party upper class of Oceanian society reside in clean and comfortable flats in their own quarter of the city, with pantries well-stocked with foodstuffs such as wine, coffee and sugar, all denied to the general populace.[35] Winston is astonished that the lifts in O’Brien’s building work, the telescreens can be switched off and O’Brien has an Asian manservant, Martin. All members of the Inner Party are attended to by slaves captured in the disputed zone, and “The Book” suggests that many have their own motorcars or even helicopters. Nonetheless, “The Book” makes clear that even the conditions enjoyed by the Inner Party are only “relatively” comfortable, and standards would be regarded as austere by those of the pre-revolutionary élite.[36]

The proles live in poverty and are kept sedated with alcohol, pornography and a national lottery whose winnings are never actually paid out; that is obscured by propaganda and the lack of communication within Oceania. At the same time, the proles are freer and less intimidated than the middle-class Outer Party: they are subject to certain levels of monitoring but are not expected to be particularly patriotic. They lack telescreens in their own homes and often jeer at the telescreens that they see. “The Book” indicates that is because the middle class, not the lower class, traditionally starts revolutions. The model demands tight control of the middle class, with ambitious Outer-Party members neutralised via promotion to the Inner Party or “reintegration” by the Ministry of Love, and proles can be allowed intellectual freedom because they lack intellect. Winston nonetheless believes that “the future belonged to the proles”.[37]

The standard of living of the populace is low overall. Consumer goods are scarce, and all those available through official channels are of low quality; for instance, despite the Party regularly reporting increased boot production, more than half of the Oceanian populace goes barefoot. The Party claims that poverty is a necessary sacrifice for the war effort, and “The Book” confirms that to be partially correct since the purpose of perpetual war consumes surplus industrial production. Outer Party members and proles occasionally gain access to better items in the market, which deals in goods that were pilfered from the residences of the Inner Party.[citation needed]

Nineteen Eighty-Four expands upon the subjects summarised in Orwell’s essay “Notes on Nationalism”[38] about the lack of vocabulary needed to explain the unrecognised phenomena behind certain political forces. In Nineteen Eighty-Four, the Party’s artificial, minimalist language ‘Newspeak’ addresses the matter.

O’Brien concludes: “The object of persecution is persecution. The object of torture is torture. The object of power is power.”

In the book, Inner Party member O’Brien describes the Party’s vision of the future:

There will be no curiosity, no enjoyment of the process of life. All competing pleasures will be destroyed. But always—do not forget this, Winston—always there will be the intoxication of power, constantly increasing and constantly growing subtler. Always, at every moment, there will be the thrill of victory, the sensation of trampling on an enemy who is helpless. If you want a picture of the future, imagine a boot stamping on a human face—forever.

Part III, Chapter III, Nineteen Eighty-Four

A major theme of Nineteen Eighty-Four is censorship, especially in the Ministry of Truth, where photographs are modified and public archives rewritten to rid them of “unpersons” (persons who are erased from history by the Party). On the telescreens, figures for all types of production are grossly exaggerated or simply invented to indicate an ever-growing economy, when the reality is the opposite. One small example of the endless censorship is Winston being charged with the task of eliminating a reference to an unperson in a newspaper article. He proceeds to write an article about Comrade Ogilvy, a made-up party member who displayed great heroism by leaping into the sea from a helicopter so that the dispatches he was carrying would not fall into enemy hands.

The inhabitants of Oceania, particularly the Outer Party members, have no real privacy. Many of them live in apartments equipped with two-way telescreens so that they may be watched or listened to at any time. Similar telescreens are found at workstations and in public places, along with hidden microphones. Written correspondence is routinely opened and read by the government before it is delivered. The Thought Police employ undercover agents, who pose as normal citizens and report any person with subversive tendencies. Children are encouraged to report suspicious persons to the government, and some denounce their parents. Citizens are controlled, and the smallest sign of rebellion, even something so small as a facial expression, can result in immediate arrest and imprisonment. Thus, citizens, particularly party members, are compelled to obedience.

“The Principles of Newspeak” is an academic essay appended to the novel. It describes the development of Newspeak, the Party’s minimalist artificial language meant to ideologically align thought and action with the principles of Ingsoc by making “all other modes of thought impossible”. (A linguistic theory about how language may direct thought is the Sapir–Whorf hypothesis.)

Whether or not the Newspeak appendix implies a hopeful end to Nineteen Eighty-Four remains a critical debate, as it is in Standard English and refers to Newspeak, Ingsoc, the Party etc., in the past tense: “Relative to our own, the Newspeak vocabulary was tiny, and new ways of reducing it were constantly being devised” (p. 422). Some critics (Atwood,[39] Benstead,[40] Milner,[41] Pynchon[42]) claim that for the essay’s author, both Newspeak and the totalitarian government are in the past.

Nineteen Eighty-Four uses themes from life in the Soviet Union and wartime life in Great Britain as sources for many of its motifs. At an unspecified date after the first American publication of the book, producer Sidney Sheldon wrote to Orwell, interested in adapting the novel to the Broadway stage. Orwell sold the American stage rights to Sheldon, explaining that his basic goal with Nineteen Eighty-Four was imagining the consequences of Stalinist government ruling British society:

[Nineteen Eighty-Four] was based chiefly on communism, because that is the dominant form of totalitarianism, but I was trying chiefly to imagine what communism would be like if it were firmly rooted in the English speaking countries, and was no longer a mere extension of the Russian Foreign Office.[43]

The statement “2 + 2 = 5”, used to torment Winston Smith during his interrogation, was a communist party slogan from the second five-year plan, which encouraged fulfillment of the five-year plan in four years. The slogan was seen in electric lights on Moscow house-fronts, billboards and elsewhere.[44]

The switch of Oceania’s allegiance from Eastasia to Eurasia and the subsequent rewriting of history (“Oceania was at war with Eastasia: Oceania had always been at war with Eastasia. A large part of the political literature of five years was now completely obsolete”; ch 9) is evocative of the Soviet Union’s changing relations with Nazi Germany. The two nations were open and frequently vehement critics of each other until the signing of the 1939 Treaty of Non-Aggression. Thereafter, and continuing until the Nazi invasion of the Soviet Union in 1941, no criticism of Germany was allowed in the Soviet press, and all references to prior party lines stopped, including in the majority of non-Russian communist parties, who tended to follow the Russian line. Orwell had criticised the Communist Party of Great Britain for supporting the Treaty in his essays for Betrayal of the Left (1941). “The Hitler-Stalin pact of August 1939 reversed the Soviet Union’s stated foreign policy. It was too much for many of the fellow-travellers like Gollancz [Orwell’s sometime publisher] who had put their faith in a strategy of constructing Popular Front governments and the peace bloc between Russia, Britain and France.”[45]

The description of Emmanuel Goldstein, with a “small, goatee beard”, evokes the image of Leon Trotsky. The film of Goldstein during the Two Minutes Hate is described as showing him being transformed into a bleating sheep. This image was used in a propaganda film during the Kino-eye period of Soviet film, which showed Trotsky transforming into a goat.[46] Goldstein’s book is similar to Trotsky’s highly critical analysis of the USSR, The Revolution Betrayed, published in 1936.

The omnipresent images of Big Brother, a man described as having a moustache, bear resemblance to the cult of personality built up around Joseph Stalin.

The news in Oceania emphasised production figures, just as it did in the Soviet Union, where record-setting in factories (by “Heroes of Socialist Labor”) was especially glorified. The best known of these was Alexey Stakhanov, who purportedly set a record for coal mining in 1935.

The tortures of the Ministry of Love evoke the procedures used by the NKVD in their interrogations,[47] including the use of rubber truncheons, being forbidden to put your hands in your pockets, remaining in brightly lit rooms for days, torture through the use of provoked rodents, and the victim being shown a mirror after their physical collapse.

The random bombing of Airstrip One is based on the Buzz bombs and the V-2 rocket, which struck England at random in 1944–1945.

The Thought Police is based on the NKVD, which arrested people for random “anti-soviet” remarks.[48] The Thought Crime motif is drawn from Kempeitai, the Japanese wartime secret police, who arrested people for “unpatriotic” thoughts.

The confessions of the “Thought Criminals” Rutherford, Aaronson and Jones are based on the show trials of the 1930s, which included fabricated confessions by prominent Bolsheviks Nikolai Bukharin, Grigory Zinoviev and Lev Kamenev to the effect that they were being paid by the Nazi government to undermine the Soviet regime under Leon Trotsky’s direction.

The song “Under the Spreading Chestnut Tree” (“Under the spreading chestnut tree, I sold you, and you sold me”) was based on an old English song called “Go no more a-rushing” (“Under the spreading chestnut tree, Where I knelt upon my knee, We were as happy as could be, ‘Neath the spreading chestnut tree.”). The song was published as early as 1891. The song was a popular camp song in the 1920s, sung with corresponding movements (like touching your chest when you sing “chest”, and touching your head when you sing “nut”). Glenn Miller recorded the song in 1939.[49]

The “Hates” (Two Minutes Hate and Hate Week) were inspired by the constant rallies sponsored by party organs throughout the Stalinist period. These were often short pep-talks given to workers before their shifts began (Two Minutes Hate), but could also last for days, as in the annual celebrations of the anniversary of the October revolution (Hate Week).

Orwell’s fictional “newspeak”, “doublethink”, and “Ministry of Truth” draw on practices evinced by both the Soviet press and that of Nazi Germany.[50] In particular, he adapted Soviet ideological discourse constructed to ensure that public statements could not be questioned.[51]

Winston Smith’s job, “revising history” (and the “unperson” motif) are based on the Stalinist habit of airbrushing images of ‘fallen’ people from group photographs and removing references to them in books and newspapers.[53] In one well-known example, the Soviet encyclopaedia had an article about Lavrentiy Beria. When he fell in 1953, and was subsequently executed, institutes that had the encyclopaedia were sent an article about the Bering Strait, with instructions to paste it over the article about Beria.[54]

Big Brother’s “Orders of the Day” were inspired by Stalin’s regular wartime orders, called by the same name. A small collection of the more political of these has been published (together with his wartime speeches) in English as “On the Great Patriotic War of the Soviet Union” by Joseph Stalin.[55][56] Like Big Brother’s Orders of the Day, Stalin’s orders frequently lauded heroic individuals,[57] much like Comrade Ogilvy, the fictitious hero Winston Smith invented to ‘rectify’ (fabricate) a Big Brother Order of the Day.

The Ingsoc slogan “Our new, happy life”, repeated from telescreens, evokes Stalin’s 1935 statement, which became a CPSU slogan, “Life has become better, Comrades; life has become more cheerful.”[48]

In 1940 Argentine writer Jorge Luis Borges published “Tlön, Uqbar, Orbis Tertius”, which described the invention by a “benevolent secret society” of a world that would seek to remake human language and reality along human-invented lines. The story concludes with an appendix describing the success of the project. Borges’ story addresses similar themes of epistemology, language and history to Nineteen Eighty-Four.[58]

During World War II, Orwell believed that British democracy as it existed before 1939 would not survive the war, the question being: “Would it end via Fascist coup d’état from above or via Socialist revolution from below?”[citation needed] Later, he admitted that events proved him wrong: “What really matters is that I fell into the trap of assuming that ‘the war and the revolution are inseparable’.”[59]

Nineteen Eighty-Four (1949) and Animal Farm (1945) share themes of the betrayed revolution, the person’s subordination to the collective, rigorously enforced class distinctions (Inner Party, Outer Party, Proles), the cult of personality, concentration camps, Thought Police, compulsory regimented daily exercise, and youth leagues. Oceania resulted from the US annexation of the British Empire to counter the Asian peril to Australia and New Zealand. It is a naval power whose militarism venerates the sailors of the floating fortresses, from which battle is given to recapturing India, the “Jewel in the Crown” of the British Empire. Much of Oceanic society is based upon the USSR under Joseph Stalin (Big Brother). The televised Two Minutes Hate is ritual demonisation of the enemies of the State, especially Emmanuel Goldstein (viz Leon Trotsky). Altered photographs and newspaper articles create unpersons deleted from the national historical record, including even founding members of the regime (Jones, Aaronson and Rutherford) in the 1960s purges (viz the Soviet Purges of the 1930s, in which leaders of the Bolshevik Revolution were similarly treated). A similar thing happened during the French Revolution, in which many of the original leaders were later put to death: Danton, for example, was put to death by Robespierre, and Robespierre himself later met the same fate.

In his 1946 essay “Why I Write”, Orwell explains that the serious works he wrote since the Spanish Civil War (1936–39) were “written, directly or indirectly, against totalitarianism and for democratic socialism”.[3][60] Nineteen Eighty-Four is a cautionary tale about revolution betrayed by totalitarian defenders previously proposed in Homage to Catalonia (1938) and Animal Farm (1945), while Coming Up for Air (1939) celebrates the personal and political freedoms lost in Nineteen Eighty-Four (1949). Biographer Michael Shelden notes Orwell’s Edwardian childhood at Henley-on-Thames as the golden country; being bullied at St Cyprian’s School as his empathy with victims; his life in the Indian Imperial Police in Burma and the techniques of violence and censorship in the BBC as capricious authority.[61]

Other influences include Darkness at Noon (1940) and The Yogi and the Commissar (1945) by Arthur Koestler; The Iron Heel (1908) by Jack London; 1920: Dips into the Near Future[62] by John A. Hobson; Brave New World (1932) by Aldous Huxley; We (1921) by Yevgeny Zamyatin which he reviewed in 1946;[63] and The Managerial Revolution (1940) by James Burnham predicting perpetual war among three totalitarian superstates. Orwell told Jacintha Buddicom that he would write a novel stylistically like A Modern Utopia (1905) by H. G. Wells.[citation needed]

Extrapolating from World War II, the novel’s pastiche parallels the politics and rhetoric at war’s end: the changed alliances at the beginning of the Cold War (1945–91); the Ministry of Truth derives from the BBC’s overseas service, controlled by the Ministry of Information; Room 101 derives from a conference room at BBC Broadcasting House;[64] the Senate House of the University of London, containing the Ministry of Information, is the architectural inspiration for the Minitrue; the post-war decrepitude derives from the socio-political life of the UK and the US, i.e., the impoverished Britain of 1948 losing its Empire despite newspaper-reported imperial triumph; and war ally but peace-time foe, Soviet Russia became Eurasia.

The term “English Socialism” has precedents in his wartime writings; in the essay “The Lion and the Unicorn: Socialism and the English Genius” (1941), he said that “the war and the revolution are inseparable … the fact that we are at war has turned Socialism from a textbook word into a realisable policy” because Britain’s superannuated social class system hindered the war effort and only a socialist economy would defeat Adolf Hitler. Once the middle class grasped this, they too would abide the socialist revolution, and only reactionary Britons would oppose it, thus limiting the force revolutionaries would need to take power. An English Socialism would come about which “will never lose touch with the tradition of compromise and the belief in a law that is above the State. It will shoot traitors, but it will give them a solemn trial beforehand and occasionally it will acquit them. It will crush any open revolt promptly and cruelly, but it will interfere very little with the spoken and written word.”[65]

In the world of Nineteen Eighty-Four, “English Socialism” (or “Ingsoc” in Newspeak) is a totalitarian ideology unlike the English revolution he foresaw. Comparison of the wartime essay “The Lion and the Unicorn” with Nineteen Eighty-Four shows that he perceived a Big Brother regime as a perversion of his cherished socialist ideals and English Socialism. Thus Oceania is a corruption of the British Empire he believed would evolve “into a federation of Socialist states, like a looser and freer version of the Union of Soviet Republics”.[66][verification needed]

When first published, Nineteen Eighty-Four was generally well received by reviewers. V. S. Pritchett, reviewing the novel for the New Statesman stated: “I do not think I have ever read a novel more frightening and depressing; and yet, such are the originality, the suspense, the speed of writing and withering indignation that it is impossible to put the book down.”[67] P. H. Newby, reviewing Nineteen Eighty-Four for The Listener magazine, described it as “the most arresting political novel written by an Englishman since Rex Warner’s The Aerodrome.”[68] Nineteen Eighty-Four was also praised by Bertrand Russell, E. M. Forster and Harold Nicolson.[68] On the other hand, Edward Shanks, reviewing Nineteen Eighty-Four for The Sunday Times, was dismissive; Shanks claimed Nineteen Eighty-Four “breaks all records for gloomy vaticination”.[68] C. S. Lewis was also critical of the novel, claiming that the relationship of Julia and Winston, and especially the Party’s view on sex, lacked credibility, and that the setting was “odious rather than tragic”.[69]

Nineteen Eighty-Four has been adapted for the cinema, radio, television and theatre at least twice each, as well as for other art media, such as ballet and opera.

The effect of Nineteen Eighty-Four on the English language is extensive; the concepts of Big Brother, Room 101, the Thought Police, thoughtcrime, unperson, memory hole (oblivion), doublethink (simultaneously holding and believing contradictory beliefs) and Newspeak (ideological language) have become common phrases for denoting totalitarian authority. Doublespeak and groupthink are both deliberate elaborations of doublethink, and the adjective “Orwellian” means similar to Orwell’s writings, especially Nineteen Eighty-Four. The practice of ending words with “-speak” (such as mediaspeak) is drawn from the novel.[70] Orwell is perpetually associated with 1984; in July 1984, an asteroid was discovered by Antonín Mrkos and named after Orwell.

References to the themes, concepts and plot of Nineteen Eighty-Four have appeared frequently in other works, especially in popular music and video entertainment. An example is the worldwide hit reality television show Big Brother, in which a group of people live together in a large house, isolated from the outside world but continuously watched by television cameras.

The book touches on the invasion of privacy and ubiquitous surveillance. From mid-2013 it was publicized that the NSA has been secretly monitoring and storing global internet traffic, including the bulk data collection of email and phone call data. Sales of Nineteen Eighty-Four increased by up to seven times within the first week of the 2013 mass surveillance leaks.[79][80][81] The book again topped the Amazon.com sales charts in 2017 after a controversy involving Kellyanne Conway using the phrase “alternative facts” to explain discrepancies with the media.[82][83][84][85]

The book also shows mass media as a catalyst for the intensification of destructive emotions and violence. Since the 20th century, news and other forms of media have been publicizing violence more often.[86][87]

In 2013, the Almeida Theatre and Headlong staged a successful new adaptation (by Robert Icke and Duncan Macmillan), which twice toured the UK and played an extended run in London’s West End. The play opened on Broadway in 2017.

In the decades since the publication of Nineteen Eighty-Four, there have been numerous comparisons to Aldous Huxley’s novel Brave New World, which had been published 17 years earlier, in 1932.[88][89][90][91] They are both predictions of societies dominated by a central government and are both based on extensions of the trends of their times. However, members of the ruling class of Nineteen Eighty-Four use brutal force, torture and mind control to keep individuals in line, but rulers in Brave New World keep the citizens in line by addictive drugs and pleasurable distractions.

In October 1949, after reading Nineteen Eighty-Four, Huxley sent Orwell a letter arguing that rulers would find it more efficient to stay in power not through brute force but through a softer touch, controlling citizens by allowing them to pursue pleasure under a false sense of freedom:

Within the next generation I believe that the world’s rulers will discover that infant conditioning and narco-hypnosis are more efficient, as instruments of government, than clubs and prisons, and that the lust for power can be just as completely satisfied by suggesting people into loving their servitude as by flogging and kicking them into obedience.[92]

Elements of both novels can be seen in modern-day societies, with Huxley’s vision being more dominant in the West and Orwell’s vision more prevalent with dictators in ex-communist countries, as is pointed out in essays that compare the two novels, including Huxley’s own Brave New World Revisited.[93][94][95][85]

Comparisons with other dystopian novels like The Handmaid’s Tale, Virtual Light, The Private Eye and Children of Men have also been drawn.[96][97]

Here is the original post:

Nineteen Eighty-Four – Wikipedia

Ethereum Price Forecast: G20 Regulations Would at Least Bring Certainty

Ethereum News Update
Investors tend to panic when international organizations talk about cryptocurrency regulation, but is that really the nightmare scenario?

What we have at the moment seems worse.

With each country or state striking its own path on crypto regulation, investors are left without a clear sense of direction. “Where is the industry headed?” they keep wondering. All the while, a technology that was supposed to transcend borders becomes limited by them.

Just look at the difference around the world.

In the U.S., you have the head of the Securities and Exchange Commission (SEC) saying that blockchains have “incredible promise,” whereas in China and.

The post Ethereum Price Forecast: G20 Regulations Would at Least Bring Certainty appeared first on Profit Confidential.


Ripple Price Prediction: Debate Over XRP Designation Heats Up

Ripple News Update
Although XRP prices are flashing red this morning, Ripple is actually net positive for the weekend. From its Friday lows to the time of this writing, the XRP to USD exchange rate advanced 5.55%.

But that’s not the biggest story in today’s Ripple news update.

No, once again, investors are at odds about XRP. Is it a cryptocurrency? Is it centralized? The questions that have haunted XRP prices for years are back, spread across message boards and forums that support more libertarian digital assets.

These debates may seem crazy to.


Ripple Price Prediction: xRapid Shows Success, But SEC Still Holds Power

XRP Prices Hang in the Balance
Ripple bears like to claim that XRP “serves no purpose” in its technology, but recent success with the “xRapid” software says otherwise. That—plus the continual “Is XRP a security?” debate—drove Ripple prices round and round in circles last week.

I see these two forces working in opposite directions.

Investors should be happy that xRapid is providing genuine benefits to businesses that dared to take a chance on XRP. But does it matter if the U.S. Securities & Exchange Commission (SEC) designates XRP a security?
xRapid Success
For the uninitiated, Ripple has multiple offerings. One is “xCurrent,” a.


Ripple Price Forecast: XRP vs SWIFT, SEC Updates, and More

Ripple vs SWIFT: The War Begins
While most criticisms of XRP do nothing to curb my bullish Ripple price forecast, there is one obstacle that nags at my conscience. Its name is SWIFT.

The Society for Worldwide Interbank Financial Telecommunication (SWIFT) is the king of international payments.

It coordinates wire transfers across 11,000 banks in more than 200 countries and territories, meaning that in order for XRP prices to ascend to $10.00, Ripple needs to launch a successful coup. That is, and always has been, an unwritten part of Ripple’s story.

We’ve seen a lot of progress on that score. In the last three years, Ripple wooed more than 100 financial firms onto its.


Ethereum Price Forecast: Big Corporate Moves Could Bolster ETH Prices

Crypto Rally Slows Down
As I write this report, cryptocurrency prices are in the middle of a vicious tug of war between the bulls and the bears. And the bears are winning right now.

Most, if not all, of our favorite cryptocurrencies trended down over the last seven days, erasing the progress they made in earlier weeks.

Short-term volatility is completely overtaking the market, making it tough for existing holders of crypto assets.

But…

If you’re someone who is looking to enter the market, a sell-off is exactly the right time. How many times have I heard investors say, “If I had bought Bitcoin two years ago, I would have made [insert insane profits.


The Epic Relation Between Bitcoin and the Stock Market

Bitcoin Prices Are Less Independent Than You Think
Inside the world of cryptocurrencies, some truths go unquestioned: 1) centralization is terrible, 2) fixed money supplies are great, 3) cryptocurrencies are uncorrelated from stocks.

The last “truth” is now in question.

Many analysts, myself included, have raised questions about Bitcoin following the stock market before, but none of us made the case as strongly as Forbes contributor Clem Chambers.

Chambers recently used intraday trade charts to show that Bitcoin prices often follow the same patterns as the Dow Jones Index. (Source: “.


Cryptocurrency Price Forecast: Trust Is Growing, But Prices Are Falling

Trust Is Growing…
Before we get to this week’s cryptocurrency news, analysis, and our cryptocurrency price forecast, I want to share an experience from this past week. I was at home watching the NBA playoffs, trying to ignore the commercials, when a strange advertisement caught my eye.

It followed a tomato from its birth on the vine to its end on the dinner table (where it was served as a bolognese sauce), and a diamond from its dusty beginnings to when it sparkled atop an engagement ring.

The voiceover said: “This is a shipment passed 200 times, transparently tracked from port to port. This is the IBM blockchain.”

Let that sink in—IBM.
