

Nationwide Public Voting on ETOLL 2015 – HelloWeb

R35 billion wasted. Where has it gone? Creating jobs for whom, and for what? If this was for job creation, more people would have jobs. With this type of wastage, we do not need e-tolls. Look at the potholes that are not fixed. Scrap the system; it is as corrupt as the Government. Sorry, this is the Government we are talking about.


Classic Maya collapse – Wikipedia

In archaeology, the classic Maya collapse is the decline of Classic Maya civilization and the abandonment of Maya cities in the southern Maya lowlands of Mesoamerica between the 8th and 9th centuries, at the end of the Classic Maya Period. The Preclassic Maya experienced a similar collapse in the 2nd century.[citation needed]

The Classic Period of Mesoamerican chronology is generally defined as the period from 250 to 900, the last century of which is referred to as the Terminal Classic.[1] The Classic Maya collapse is one of the greatest unsolved mysteries in archaeology. Urban centers of the southern lowlands, among them Palenque, Copán, Tikal, and Calakmul, went into decline during the 8th and 9th centuries and were abandoned shortly thereafter. Archaeologically, this decline is indicated by the cessation of monumental inscriptions[2] and the reduction of large-scale architectural construction at the primary urban centers of the Classic Period.[citation needed]

Although termed a collapse, it did not mark the end of the Maya civilization but rather a shift away from the Southern Lowlands as a power center; the Northern Yucatán in particular prospered afterwards, although with very different artistic and architectural styles, and with much less use of monumental hieroglyphic writing. In the Post-Classic Period following the collapse, the state of Chichén Itzá built an empire that briefly united much of the Maya region,[2] and centers such as Mayapán and Uxmal flourished, as did the Highland states of the K’iche’ and Kaqchikel Maya. Independent Maya civilization continued until 1697, when the Spanish conquered Nojpetén, the last independent city-state. Millions of Maya people still inhabit the Yucatán peninsula today.[3]

Because parts of Maya civilization unambiguously continued, a number of scholars strongly dislike the term collapse.[4] Regarding the proposed collapse, E. W. Andrews IV went as far as to say, “in my belief no such thing happened.”[5]

The Maya often recorded dates on the monuments they built. Few dated monuments were being built circa 500 (around ten per year in 514, for example). The number steadily increased to twenty per year by 672 and forty by around 750. After this, the number of dated monuments begins to falter relatively quickly, collapsing back to ten by 800 and to zero by 900. Likewise, recorded lists of kings complement this analysis. Altar Q at Copán shows a reign of kings from 426 to 763. One last king not recorded on Altar Q was Ukit Took, “Patron of Flint”, who was probably a usurper. The dynasty is believed to have collapsed entirely shortly thereafter. In Quiriguá, twenty miles north of Copán, the last king Jade Sky began his rule between 895 and 900, and throughout the Maya area all kingdoms similarly fell around that time.[6]

A third piece of evidence of the progression of Maya decline, gathered by Ann Corinne Freter, Nancy Gonlin, and David Webster, uses a technique called obsidian hydration. The technique allowed them to map the spread and growth of settlements in the Copán Valley and estimate their populations. Beginning between 400 and 450, the population grew, reaching an estimated peak of twenty-eight thousand between 750 and 800, larger than London at the time. The population then began to steadily decline. By 900 the population had fallen to fifteen thousand, and by 1200 it was again less than 1,000.[citation needed]
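For context, obsidian hydration dating works because a freshly exposed obsidian surface slowly absorbs water, forming a rind whose thickness grows roughly with the square root of elapsed time. Here is a minimal sketch of the age calculation, assuming the classic diffusion relation; the rate constant used is hypothetical, since in practice it must be calibrated for the local temperature and the obsidian source.

```python
def hydration_age_years(rind_microns: float, k: float) -> float:
    """Estimate an artifact's age from its hydration rind thickness.

    Uses the classic diffusion relation x**2 = k * t, so t = x**2 / k,
    where k (square microns per year) is a site-calibrated constant.
    """
    return rind_microns ** 2 / k

# Illustrative only: k = 0.0011 is a made-up calibration value.
print(round(hydration_age_years(1.2, 0.0011)))  # ~1309 years before present
```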

Over 80 different theories or variations of theories attempting to explain the Classic Maya collapse have been identified.[7] From climate change to deforestation to lack of action by Maya kings, there is no universally accepted collapse theory, although drought is gaining momentum as the leading explanation.[8]

The archaeological evidence of the Toltec intrusion into Seibal, Petén, suggests to some the theory of foreign invasion. The latest hypothesis states that the southern lowlands were invaded by a non-Maya group whose homelands were probably in the gulf coast lowlands. This invasion began in the 9th century and set off, within 100 years, a group of events that destroyed the Classic Maya. It is believed that this invasion was somehow influenced by the Toltec people of central Mexico. However, most Mayanists do not believe that foreign invasion was the main cause of the Classic Maya collapse; they postulate that no military defeat can explain or be the cause of the protracted and complex Classic collapse process. Teotihuacan influence across the Maya region may have involved some form of military invasion; however, it is generally noted that significant Teotihuacan-Maya interactions date from at least the Early Classic period, well before the episodes of Late Classic collapse.[9]

The foreign invasion theory does not answer the question of where the inhabitants went. David Webster believed that the population should have increased because of the lack of elite power. Further, it is not understood why the governmental institutions were not remade following the revolts, which happened under similar circumstances in places like China. A study by anthropologist Elliot M. Abrams came to the conclusion that buildings, specifically in Copán, did not require an extensive amount of time and workers to construct.[10] However, this theory was developed during a period when the archaeological evidence showed that there were fewer Maya people than there are now known to have been.[11] Revolutions, peasant revolts, and social turmoil change circumstances, and are often followed by foreign wars, but they run their course. There are no documented revolutions that caused wholesale abandonment of entire regions.[citation needed]

It has been hypothesized that the decline of the Maya is related to the collapse of their intricate trade systems, especially those connected to the central Mexican city of Teotihuacan. Before improved knowledge of the chronology of Mesoamerica, Teotihuacan was believed to have fallen during 700–750, forcing the “restructuring of economic relations throughout highland Mesoamerica and the Gulf Coast”.[12] This remaking of relationships between civilizations would have then given the collapse of the Classic Maya a slightly later date. However, after learning more about the events and the periods when they occurred, it is now believed that the strongest Teotihuacan influence was during the 4th and 5th centuries. In addition, the civilization of Teotihuacan started to lose its power, and may have abandoned the city, during 600–650. This differs greatly from the previous belief that Teotihuacano power decreased during 700–750.[13] But since the new decline date of 600–650 has been accepted, the Maya civilizations are now thought to have lived on and prospered for another century or more[14] than was previously believed. Rather than the decline of Teotihuacan directly preceding the collapse of the Maya, their decline is now seen as contributing to the 6th-century hiatus.[14]

The disease theory is also a contender as a factor in the Classic Maya collapse. Widespread disease could explain some rapid depopulation, both directly through the spread of infection itself and indirectly as an inhibition to recovery over the long run. According to Dunn (1968) and Shimkin (1973), infectious diseases spread by parasites are common in tropical rainforest regions, such as the Maya lowlands. Shimkin specifically suggests that the Maya may have encountered endemic infections related to American trypanosomiasis, Ascaris, and some enteropathogens that cause acute diarrheal illness. Furthermore, some experts believe that, through the development of their civilization (that is, the development of agriculture and settlements), the Maya could have created a “disturbed environment”, in which parasitic and pathogen-carrying insects often thrive.[15] Among the pathogens listed above, it is thought that those causing acute diarrheal illness would have been the most devastating to the Maya population, because such illness would strike a victim at an early age, hampering nutritional health and the natural growth and development of a child. This would have made victims more susceptible to other diseases later in life, an effect exacerbated by an increasing dependence on carbohydrate-rich crops.[16] Ideas such as these could explain the role of disease as at least a partial reason for the Classic Maya collapse.[17]

Large droughts hit the Yucatán Peninsula and Petén Basin areas with particular ferocity, as thin tropical soils decline in fertility and become unworkable when deprived of forest cover,[18] and due to regular seasonal drought drying up surface water.[19] Colonial Spanish officials accurately documented cycles of drought, famine, disease, and war, providing a reliable historical record of the basic drought pattern in the Maya region.[20]

Climatic factors were first implicated in the collapse as early as 1931 by Mayanists Thomas Gann and J. E. S. Thompson.[21] In The Great Maya Droughts, Richardson Gill gathers and analyzes an array of climatic, historical, hydrologic, tree ring, volcanic, geologic, lake bed, and archeological research, and demonstrates that a prolonged series of droughts probably caused the Classic Maya collapse.[22] The drought theory provides a comprehensive explanation, because non-environmental and cultural factors (excessive warfare, foreign invasion, peasant revolt, less trade, etc.) can all be explained by the effects of prolonged drought on Classic Maya civilization.[23]

Climatic changes are, with increasing frequency, found to be major drivers in the rise and fall of civilizations all over the world.[24] Professors Harvey Weiss of Yale University and Raymond S. Bradley of the University of Massachusetts have written, “Many lines of evidence now point to climate forcing as the primary agent in repeated social collapse.”[25] In a separate publication, Weiss illustrates an emerging understanding of scientists:

Within the past five years new tools and new data for archaeologists, climatologists, and historians have brought us to the edge of a new era in the study of global and hemispheric climate change and its cultural impacts. The climate of the Holocene, previously assumed static, now displays a surprising dynamism, which has affected the agricultural bases of pre-industrial societies. The list of Holocene climate alterations and their socio-economic effects has rapidly become too complex for brief summary.[26]

The drought theory holds that rapid climate change in the form of severe drought brought about the Classic Maya collapse. According to the particular version put forward by Gill in The Great Maya Droughts,

[Studies of] Yucatecan lake sediment cores … provide unambiguous evidence for a severe 200-year drought from AD 800 to 1000 … the most severe in the last 7,000 years … precisely at the time of the Maya Collapse.[27]

Climatic modeling, tree ring data, and historical climate data show that cold weather in the Northern Hemisphere is associated with drought in Mesoamerica.[28] Northern Europe suffered extremely low[clarification needed] temperatures around the same time as the Maya droughts. The same connection between drought in the Maya areas and extreme cold in northern Europe was found again at the beginning of the 20th century. Volcanic activity, within and outside Mesoamerica, is also correlated with colder weather and resulting drought, as the effects of the Tambora volcano eruption in 1815 indicate.[29]

Mesoamerican civilization provides a remarkable exception: civilization prospering in the tropical swampland. The Maya are often perceived as having lived in a rainforest, but technically, they lived in a seasonal desert without access to stable sources of drinking water.[30] The exceptional accomplishments of the Maya are even more remarkable because of their engineered response to the fundamental environmental difficulty of relying upon rainwater rather than permanent sources of water. The Maya succeeded in creating a civilization in a seasonal desert by creating a system of water storage and management which was totally dependent on consistent rainfall.[31] The constant need for water kept the Maya on the edge of survival. Given this precarious balance of wet and dry conditions, even a slight shift in the distribution of annual precipitation can have serious consequences.[19] Water and civilization were vitally connected in ancient Mesoamerica. Archaeologist and specialist in pre-industrial land and water usage practices Vernon Scarborough believes water management and access were critical to the development of Maya civilization.[32]

Critics of the drought theory wonder why the southern and central lowland cities were abandoned and the northern cities like Chichén Itzá, Uxmal, and Cobá continued to thrive.[33] One critic argued that Chichén Itzá revamped its political, military, religious, and economic institutions away from powerful lords or kings.[34] Inhabitants of the northern Yucatán also had access to seafood, which might have explained the survival of Chichén Itzá and Mayapán, cities away from the coast but within reach of coastal food supplies.[35] Critics of the drought theory also point to current weather patterns: much heavier rainfall in the southern lowlands compared to the lighter amount of rain in the northern Yucatán. Drought theory supporters state that the entire regional climate changed, including the amount of rainfall, so that modern rainfall patterns are not indicative of rainfall from 800 to 900. LSU archaeologist Heather McKillop found a significant[clarification needed] rise in sea level along the coast nearest the southern Maya lowlands, coinciding with the end of the Classic period, and indicating climate change.[36]

David Webster, a critic of the megadrought theory, says that much of the evidence provided by Gill comes from the northern Yucatn and not the southern part of the peninsula, where Classic Maya civilization flourished. He also states that if water sources were to have dried up, then several city-states would have moved to other water sources. That Gill suggests that all water in the region would have dried up and destroyed Maya civilization is a stretch, according to Webster,[37] although Webster does not have a precise competing theory explaining the Classic Maya Collapse.

A study published in Science in 2012 found that modest rainfall reductions, amounting to only 25 to 40 percent of annual rainfall, may have been the tipping point to the Maya collapse. Based on samples of lake and cave sediments in the areas surrounding major Maya cities, the researchers were able to determine the amount of annual rainfall in the region. The mild droughts that took place between 800 and 950 would therefore be enough to rapidly deplete seasonal water supplies in the Yucatán lowlands, where there are no rivers.[38][39][40]

A study published in Scientific Reports in 2016 showed that between 750 and 900 a cluster of four earthquakes affected the wet tropical mountains south of the Yucatán lowlands, which are not vulnerable to drought, and include such important cities as Quiriguá and Copán. These earthquakes left detectable destruction in several Maya cities and led to the abandonment of Quiriguá. The study hypothesizes that repeated destruction combined with declining trade with the Maya kingdoms of the Yucatán lowlands to propagate the collapse to the southern part of the Maya realm.[41]

LIDAR scanning of the Classic Maya heartlands bolsters the drought theory. A population as large as we now understand to have existed would not ordinarily disappear through civil war, revolution, soil degradation, disease, earthquakes, or the other suspected factors. Drought, the absence of water in an agricultural system heavily dependent upon it, is almost the only remaining explanation for the collapse across the entire heavily populated region. The Yucatán may have had underground water and more rainfall, permitting the continuance of Maya civilization there.

Some ecological theories of Maya decline focus on the worsening agricultural and resource conditions in the late Classic period. It was originally thought that the majority of Maya agriculture was dependent on a simple slash-and-burn system. Based on this method, the hypothesis of soil exhaustion was advanced by Orator F. Cook in 1921. Similar soil exhaustion assumptions are associated with erosion, intensive agriculture, and savanna grass competition.

More recent investigations have shown a complicated variety of intensive agricultural techniques utilized by the Maya, explaining the high population of the Classic Maya polities. Modern archaeologists now comprehend the sophisticated intensive and productive agricultural techniques of the ancient Maya, and several of the Maya agricultural methods have not yet been reproduced. Intensive agricultural methods were developed and utilized by all the Mesoamerican cultures to boost their food production and give them a competitive advantage over less skillful peoples.[42] These intensive agricultural methods included canals, terracing, raised fields, ridged fields, chinampas, the use of human feces as fertilizer, seasonal swamps or bajos, using muck from the bajos to create fertile fields, dikes, dams, irrigation, water reservoirs, several types of water storage systems, hydraulic systems, swamp reclamation, swidden systems, and other agricultural techniques that have not yet been fully understood.[43] Systemic ecological collapse is said to be evidenced by deforestation, siltation, and the decline of biological diversity.

In addition to mountainous terrain, Mesoamericans successfully exploited the very problematic tropical rainforest for 1,500 years.[44] The agricultural techniques utilized by the Maya were entirely dependent upon ample supplies of water, lending credence to the drought theory of collapse. The Maya thrived in territory that would be uninhabitable to most peoples. Their success over two millennia in this environment was “amazing.”[45]

Anthropologist Joseph Tainter wrote extensively about the collapse of the Southern Lowland Maya in his 1988 study The Collapse of Complex Societies. His theory about Maya collapse encompasses some of the above explanations, but focuses specifically on the development of and the declining marginal returns from the increasing social complexity of the competing Maya city-states.[46] Psychologist Julian Jaynes suggested that the collapse was due to a failure in the social control systems of religion and political authority, due to increasing socioeconomic complexity that overwhelmed the power of traditional rituals and the king’s authority to compel obedience.[47]


Cultural Collapse Theory: The 7 Steps That Lead To A …

(This article was originally published on Roosh V.)

It was Joe’s first date with Mary. He asked her what she wanted in life and she replied, “I want to establish my career. That’s the most important thing to me right now.” Undeterred that she had no need for a man in her life, Joe entertained her with enough funny stories and cocky statements that she soon allowed him to lightly pet her forearm.

At the end of the date, he locked arms with her on the walk to the subway station, when two Middle Eastern men on scooter patrol accosted them and said they were forbidden to touch. “This is a Sharia zone,” they said in heavily accented English, in front of a Halal butcher shop. Joe and Mary felt bad that they had offended the two men, because they were trained in school to respect all religions but that of their ancestors. One of the first things they learned was that their white skin gave them extra privilege in life which must be consciously restrained at all times. Even if they happened to disagree with the two men, they could not verbally object because of anti-hate laws that would put them in jail for religious discrimination. They unlocked arms and maintained a distance of three feet from each other.

Unfortunately for Joe, Mary did not want to go out with him again, but seven years later he did receive a message from her on Facebook saying hello. She became vice president of a company, but could not find a man equal to her station since women now made 25% more than men on average. Joe had long left the country and moved to Thailand, where he married a young Thai girl and had three children. He had no plans on returning to his country, America.

If cultural collapse occurs in the way I will now describe, the above scenario will be the rule within a few decades. The Western world is being colonized in reverse, not by weapons or hard power, but through a combination of progressivism and low reproductive rates. These two factors will lead to a complete cultural collapse of many Western nations within the next 200 years. This theory will show the most likely mechanism by which it will proceed in America, Canada, the UK, Scandinavia, and Western Europe.

Cultural collapse is the decline, decay, or disappearance of a native population’s rituals, habits, interpersonal communication, relationships, art, and language. It coincides with a relative decline of population compared to outside groups. National identity and group identification will be lost while revisionist history will be applied to demonize or find fault with the native population. Cultural collapse is not to be confused with economic or state collapse. A nation that suffers from a cultural collapse can still be economically productive and have a working government.

First I will share a brief summary of the cultural collapse progression before explaining each stage in more detail. Then I will discuss where I see many countries along its path.

1. Removal of religious narrative from people’s lives, replaced by a treadmill of scientific and technological progress.

2. Elimination of traditional sex roles through feminism, gender equality, political correctness, cultural Marxism, and socialism.

3. Delay of or abstention from family formation by women to pursue careerist lifestyles while men wait in confused limbo.

4. Decreasing birth rate among native population.

5. Government enactment of open immigration policies to prevent economic collapse.

6. Immigrant refusal to fully acclimate, forcing host culture to adopt external rituals and beliefs while being out-reproduced.

7. Natives becoming marginalized in their own country.

Religion has been a powerful restraint for millennia in preventing humans from pursuing their base desires and narcissistic tendencies so that they satisfy a god. Family formation is the central unit of most religions, possibly because children increase membership at zero marginal cost to the church (i.e., they don’t need to be recruited).

Religion may promote scientific ignorance, but it facilitates reproduction by giving people a narrative that places family near the center of their existence.[1] [2] [3] After the Enlightenment, the rapid advance of science and its logical but nihilistic explanations of the universe removed the religious narrative and replaced it with an empty narrative of scientific progress, knowledge, and technology, which acts as a restraint on and hindrance to family formation, allowing people to pursue individual goals of wealth accumulation or hedonistic pleasure seeking.[4] As of now, there has not been a single non-religious population that has been able to reproduce above the death rate.[5]

Even though many people today claim to believe in God, they may not step inside a church more than once or twice a year for special holidays. Religion went from being a lifestyle, a manual for living, to something that is thought about only in passing.

Once religion no longer plays a role in people’s lives, the stage is set to fracture male-female bonding. It is collectively attacked by several ideologies stemming from the beliefs of Cultural Marxist theory, which serve to accomplish one common end: destruction of the family unit so that citizens are dependent on the state. They achieve this goal through the marginalization of men and their role in society under the banner of equality.[6] With feminism pushed to the forefront of this umbrella movement, the drive for equality ends up being a power grab by women.[7] This attack is performed on a range of fronts:

The end result is that men, confused about their identity and averse to state punishment from sexual harassment, date rape, and divorce proceedings, make a rational decision to wait on the sidelines.[15] Women, still not happy with the increased power given to them, continue their assault on men by instructing them to “man up” into what has become an unfair deal: marriage. The elevation of women above men is allowed by corporations, which adopt “girl power” marketing to expand their consumer base and increase profits.[16] [17] Governments also allow it because it increases their tax revenue. Because there is money to be made with women working and becoming consumers, there is no effort by the elite to halt this development.

At the same time men are emasculated as mere sperm donors, women are encouraged to adopt the career goals, mannerisms, and competitive lifestyles of men, inevitably causing them to delay marriage, often to an age at which they can no longer find suitable husbands who have more resources than themselves.[18] [19] [20] [21] The average woman will find it exceedingly difficult to balance career and family, and since she has no fear of being fired by her family, which she may see as a hindrance to her career goals, she will devote an increasing proportion of her time to her job.

Female income, in aggregate, will soon match or exceed that of men.[22] [23] [24] A key reason that women historically got married was to be economically provided for, but this reason will no longer persist and women will feel less pressure or motivation to marry. The burgeoning spinster population will simply be a money-making opportunity for corporations to market to an increasing population of lonely women. Cat and small dog sales will rise.

Women succumb to their primal sexual and materialistic urges to live the Sex and the City lifestyle full of fine dining, casual sex, technological bliss, and general gluttony without learning traditional household skills or feminine qualities that would make them attractive wives.[25] [26] Men adapt to careerist women in a rational way by doing the following:

Careerist women who decide to marry will do so in a hurried rush around age 30 because they fear growing old alone, but since they are well past their fertility peak,[31] they may find it difficult to reproduce. In the event of successful reproduction at such a late age, fewer children can be born before biological infertility, limiting family size compared to the historical past.

The stage is now set for the death rate to outstrip the birth rate. This creates a demographic cliff where there is a growing population of non-working elderly relative to able-bodied younger workers. Two problems result:

No modern nation has figured out how to substantially raise birth rates among its native population. The most successful effort has been in France, but even that has kept the birth rate among French-born women just under the replacement rate (2.08 vs. 2.1).[34] The easiest and fastest way to solve this double-edged problem is to promote mass immigration of non-elderly individuals who will work, spend, and procreate at rates greater than natives.[35]

A replenishing supply of births is necessary to create taxpayers, workers, entrepreneurs, and consumers in order to maintain the nation’s economic development.[36] While many claim that the planet is suffering from overpopulation, an economic collapse is inevitable for those countries that do not increase their population at steady rates.

An aging population without youthful refilling will cause a scarcity of labor, increasing that labor’s price. Corporate elites will then lobby governments for immigration reform to relieve this upward pressure on wages.[37] [38] At the same time, the modern mantra of sustained GDP growth puts pressure on politicians to disseminate favorable economic growth data to aid their re-election. The simplest way to increase GDP without innovation or development of industry is to expand the population. Both corporate and political elites now have their goals in alignment, and the easiest solution becomes immigration.[39] [40]

While politicians hem and haw about designing permanent immigration policies, immigrants continue to settle within the nation.[41] The national birth rate problem is essentially solved overnight, since it’s much easier to drain third-world nations of their starry-eyed populations with enticements of living in the first world than it is to encourage native women to reproduce. (Lateral immigration from one first-world nation to another is so relatively insignificant that the niche term “expatriation” was developed to describe it.) Native women will show stubborn resistance to any suggestion that they should create families, much preferring a relatively responsibility-free lifestyle of sexual variety, casual internet dating via mobile apps, consumer excess, and comfortable high-paying jobs in air-conditioned offices.[42] [43]

Immigrants will almost always come from societies that are more religious and, in the case of Islamic immigration to Europe, far more scientifically primitive and rigid in their customs.[44]

While many adult immigrants will feel grateful for the opportunity to live in a more prosperous nation, others will soon feel resentment that they are forced to work menial jobs in a country that is far more expensive than their own.[45] [46] [47] [48] [49] The majority of them remain in lower economic classes, living in poor immigrant communities where they can speak their own language, find their homeland foods, and follow their own customs or religion.

Instead of breaking out of their foreigner communities, immigrants seek to expand them by organizing. They form local groups and civic organizations to teach natives better ways to understand and serve immigrant populations. They will be eager to publicize cases where immigrants have been insulted by insensitive natives or treated unfairly by police authorities in cases of petty crime.[50] [51] [52] [53] [54] [55] School curricula may be changed to promote diversity or multiculturalism, at great expense to the native culture.[56] Concessions will be made not to offend immigrants.[57] A continual stream of outrages will be found, and this will feed the power of the organizations and create a state within a state, where native elites become fearful of applying laws to immigrants.[58]

This step has not yet happened in any first-world nation, so I will predict it based on logically extending known events I have already described.

Local elites will give lip service to immigrant groups for votes but will be slow to give them real state or economic power. Citizenship rules may even be tightened to prevent immigrants from being elected. The elites will be mostly insulated from the cultural crises in their isolated communities, private schools, and social clubs, where they can continue to incubate their own sub-culture without outside influence. At the same time, they will make speeches and enact policies to force native citizens to accept multiculturalism and blind immigration. Anti-hate and anti-discrimination laws will be more vigorously enforced than other, more serious crimes. Police will monitor social networking to identify those who make statements against protected classes.

Cultural decline begins in earnest when the natives feel shame or guilt for who they are, their history, their way of life, and where their ancestors came from. They will let immigrant groups criticize their customs without protest, or they simply embrace immigrant customs instead with religious conversion and interethnic marriages. Nationalistic pride will be condemned as a far-right phenomenon and popular nationalistic politicians will be compared to Hitler. Natives learn the art of self-censorship, limiting the range of their speech and expressions, and soon only the elderly can speak the truths of the cultural decline while a younger multiculturalist within earshot attributes such frankness to senility or racist nostalgia.

With the already entrenched environment of political correctness (see stage 2), the local culture becomes a sort of world culture that can be declared tolerant and progressive as long as there is a lack of criticism against immigrants, multiculturalism, and their combined influence. All cultural identity will eventually be lost, and to be American or British, for example, will no longer have modern meaning from a sociological perspective. Native traditions will be eradicated and a cultural mixing will take place where citizens from one world nation will be nearly identical in behavior, thought, and consumer tastes to citizens of another. Once a collapse occurs, it cannot be reversed. The nation’s cultural heritage will be forever lost.

I want to now take a brief look at six different countries and see where they are along the cultural collapse progression.

Russia is an interesting case because, up until recently, we saw very low birth rates there due not to progressive ideals but to a rough transition to capitalism in the 1990s and high male mortality from alcoholism.[59] [60] To help sustain its population, Russia is readily accepting immigrants from Central Asian regions, treating them like second-class citizens and refusing to make any accommodations away from the ethnic Russian way of life. Even police authorities turn a blind eye when local skinhead groups attack immigrants.[61] In addition, Russia has shown no tolerance to homosexual or progressive groups,[62] stunting their negative effects upon the culture. The birth rate has risen in recent years to levels seen in Western Europe, but it’s still not above the death rate. Russia will see a population collapse before a cultural one.

Likelihood of 50-year cultural collapse: Very low

In Brazil we’re seeing rapid movement through stages 2 and 3, where progressive ideology based on the American model is becoming adopted and a large poor population ensures that progressive politicians will remain in power with promises of economic redistribution.[63] [64] [65] Within 15 years we should see a sharp drop in birth rates and a relaxation of immigration laws.

Likelihood of 50-year cultural collapse: Moderate

Some could argue that America is currently experiencing a cultural collapse. It always had a fragile culture because of its immigrant founding, but immigrants of the past (including my own parents) rapidly acclimated into the host culture to create a sense of national pride around an ethic of hard work and shared democratic values. This is being eroded as a fem-centric culture rises in its place, with its focus on trends, celebrities, homosexuality, multiculturalism, and male-bashing. Natives have become pleasure seekers with little inclination toward reproduction during their years of peak fertility.[66]

Likelihood of 50-year cultural collapse: Very high

While America has always had high amounts of immigration, and therefore a system of integration, England is newer to the game. In the past 20 years, it has massively ramped up its immigration efforts.[67] A visit to London will confirm that the native British are slowly becoming minorities, with their iconic red telephone booths left undisturbed purely for tourist photo opportunities. Approximately 5% of the English population is now Muslim.[68] Instead of acclimatizing, they are achieving early success in creating zones with Sharia law.[69] The English elite, in response, is jailing natives under stringent anti-race laws.[70] England had a highly successful immigration story with Polish immigrants, who eagerly acclimated to English culture, but it has opened its doors to other peoples who don’t want to integrate.[71]

Likelihood of 50-year cultural collapse: Very high

Sweden is experiencing a similar immigration situation to England’s, but it possesses a higher degree of self-shame and white guilt. Instead of admitting immigrants who could work in the Swedish economy, it is encouraging the migration of asylum seekers who have been made destitute by war. These immigrants enter Sweden and immediately receive social benefits. In effect, Sweden is welcoming the least economically productive people in the world.[72] The immigrants will produce little or no economic benefit, and may even worsen Sweden’s economy. Immigrants are turning some parts of Sweden, such as the Rosengård area of Malmö, into ghettos.[73]

Likelihood of 50-year cultural collapse: Very high

From my one and a half years of living in Poland, I have seen a moderate level of progressive ideological creep, careerism among women, hedonism, and idolization of Western values, particularly those of England, where a large percentage of the Polish population has emigrated for work. Younger Poles may not act much differently from their Western counterparts in their party lifestyle, but a tenuous maintenance of traditional sex roles nonetheless remains. Women of fertile age are pursuing relationships over one-night stands, but careerism is causing them to stall family formation. This puts downward pressure on birth rates, as do the significant numbers of fertile young women emigrating to countries like the UK and USA, along with continued economic uncertainties from the transition to capitalism.[74] As Europe’s least multicultural nation, Poland has long been hesitant to accept immigrants, but this has recently changed and it is now encouraging migrants.[75] To its credit, it is seeking first-world entrepreneurs instead of low-skilled laborers or asylum seekers. Its cultural fate will be an interesting development in the years to come, but the prognosis will be more negative as long as its young people are eager to leave the homeland.

Likelihood of 50-year cultural collapse: Possible

Poland and Russia show the limitations of Cultural Collapse Theory in that it best applies to first-world nations with highly developed economies. They have low birth rates, but not through the mechanism I described; if they adopt a more Western ideological track, like Brazil, I expect to see the same outcome that is befalling England and Sweden.

There can be many paths to cultural destruction, and nations with the most similarities will gravitate towards the same path, just as Eastern European nations are suffering low birth rates because of mass emigration after joining the European Union.

Maintaining native birth rates while preventing the elite from allowing immigrant labor is the most effective means of preventing cultural collapse. Since multiculturalism is an experiment with no proven efficacy, a culture can only be maintained by a relatively homogeneous group who identify with each other. When that homogeneity breaks down, and one citizen looks to the next and does not see a person with the same values as himself, the culture falls into disrepair as native citizens begin to lose a shared means of communication and identity. Once the percentage of the immigrant population crosses a certain threshold (perhaps 15%), the decline will pick up pace and cultural breakdown will be readily apparent to all observers.

Current policies that attempt to solve low birth rates through immigration are a short-term fix with dire long-term consequences. In effect, they are a Trojan-horse prescription for irreversible cultural destruction. A state must prevent itself from ever reaching the position where mass immigration is considered a solution, by blocking progressive ideologies from taking hold. One way this can be done is through the promotion of a state-sponsored religion which encourages the nuclear family instead of single motherhood and homosexuality. However, introducing religion as a mainstay of citizen life in the post-Enlightenment era may be impossible.

We must consider that the scientific era is an evolutionarily maladaptive feature of humanity that natural selection will accordingly punish (i.e., those who are anti-religious and pro-science will simply breed less). It must also be considered that, with religion in permanent decline, cultural collapse may be a certainty that eventually occurs in all developed nations. Religion, it may turn out, was evolutionarily beneficial to the human race.

Another possible solution is to foster a patriarchal society where men serve as strong providers. If you encourage the development of successful men who possess indispensable skills, and therefore resources that women lack, there will be women below their station who want to marry and procreate with them. But if strong women are produced instead, marriage and procreation are unlikely to take place at levels above the death rate.

A gap between the sexes should always exist in favor of men if procreation is to occur at high rates; otherwise you’ll have something similar to the situation in America, where urban professional women cannot find good men to begin a family with (i.e., men who are significantly more financially successful than they are). They instead remain single and barren, used only occasionally by cads for exciting casual sex.

One issue that I purposefully ignored is the effect of technology and consumerism on lowering birth rates. How much do video games, the internet, and smartphones contribute to birth rate decline? How much of an effect does Western-style consumerism have in delaying marriage? I suspect they have more of an amplifying effect than being an outright cause. If a country is proceeding through the cultural collapse model, technology will simply hurry the collapse, but giving internet access to a traditionally religious group of people may not cause them to flip overnight. Research will have to be done in these areas to say for sure.

The first iteration of any theory is sure to create as many questions as answers, but I hope that by proposing this model, it becomes clearer why some cultures seem so quick to degrade while others display a sort of immunity. Some countries may be too far down the wrong path to be saved, but I hope the information presented gives concerned readers ideas for protecting their own culture, by letting them see how progressive ideologies that seem innocent or benign on the surface can eventually lead to an outright collapse of their nation’s culture.

If you like this article and are concerned about the future of the Western world, check out Roosh’s book Free Speech Isn’t Free. It gives an inside look at how the globalist establishment is attempting to marginalize masculine men with a leftist agenda that promotes censorship, feminism, and sterility. It also shares key knowledge and tools that you can use to defend yourself against social justice attacks. Your support will help maintain our operation.


Race is the elephant in the room when it comes to …

In 1967, with the Civil Rights movement still in full swing and Jim Crow still looming in the rearview mirror, median household income was 43% higher for white, non-Hispanic households than for black households. But things changed dramatically over the next half century, as legal segregation faded into history. By 2011, median white household income was 72% higher than median black household income, according to a Census report from that year.

To say that economic inequality is still a heavily racialized phenomenon, even a generation after the end of the Civil Rights era, would be an understatement. Yet both major parties continue to discuss inequality in largely color-blind terms, only hinting at the role played by race.

The trend is even more startling when one looks at median household wealth instead of yearly income. In 1984, the white-to-black wealth ratio was 12-to-1, according to the Pew Research Center. By 1995, the chasm had narrowed until median white household wealth held only a 5-to-1 advantage over black household wealth. But over the next 14 years the wealth gap began to grow once again, skyrocketing to 19-to-1 by 2009.

Yet even a recent 204-page analysis of the federal War on Poverty, spearheaded by Rep. Paul Ryan, R-Wis., gives only passing mention to racial disparity. In the first section of the report, which purports to explain the causes of modern poverty, Ryan and his co-authors bring up race only twice: once to identify the breakdown of the family as a key cause of poverty within the black community, citing Daniel Patrick Moynihan, and again to applaud the narrowing of the achievement gap between white and black schoolchildren. Weeks later, during a radio appearance, Ryan said poverty is in part due to inner cities having a “culture of men not working.”

President Obama went a step further in December’s major address on inequality, when he noted that “the painful legacy of discrimination” means that “African Americans, Latinos, Native Americans are far more likely to suffer from a lack of opportunity: higher unemployment, higher poverty rates.” Yet that amounted to a footnote in a speech that also included the line, “The opportunity gap in America is now as much about class as it is about race.”

“I think it doesn’t make for good politics,” said Color of Change executive director Rashad Robinson of the racial wealth gap. “It’s messy and requires us to be deep and think about much bigger and more long-term solutions than Washington’s oftentimes willing to deal with.”

Yet in a serious discussion about American inequality, the subject of race is essentially unavoidable. That’s because most of the pipelines to a higher economic class, such as employment and homeownership, are oftentimes “not equally accessible to black folks,” said Robinson.

Disparities in homeownership are a major driver of the racial wealth gap, according to a recent study from Brandeis University. According to the authors of the report, “redlining [a form of discrimination in banking or insurance practices], discriminatory mortgage-lending practices, lack of access to credit, and lower incomes have blocked the homeownership path for African-Americans while creating and reinforcing communities segregated by race.”

Many of the black families that have successfully battled their way to homeownership over the past few decades saw their nest eggs get pulverized by the 2008 financial collapse. The Brandeis researchers found that half the collective wealth of African-American families was stripped away during the Great Recession, in large part due to the collapse of the housing market and the subsequent explosion in the nationwide foreclosure rate.

Similarly, employment discrimination has done its part to ensure that black unemployment remains twice as high as white unemployment, a ratio that has stayed largely consistent since the mid-1950s. National Bureau of Economic Research fellows have found that resumes are significantly less likely to get a positive response from potential employers if the applicants have names that are more common in the black community. And an arrest for even a non-violent drug offense can haunt a job applicant for the rest of his life; combined with the fact that black people are nearly four times more likely to be arrested for marijuana possession than whites, despite using the drug at roughly the same rate, criminal background checks have helped to fuel racial inequity in hiring.

Yet both parties have stressed personal responsibility to an outsized degree, said William Darity Jr., the director of Duke University’s Consortium on Social Equity.

“The underlying narrative that many people share is that whatever inequities still exist, they’re due to the misbehavior or dysfunctional behavior of black folks themselves,” said Darity. “So there’s no reason to pay attention to racial disparities, because one doesn’t believe they’re still significant, or there’s no need for public policy action by the government, because it’s just a question of black folks changing their own behaviors.”

Darity portrayed this as a bipartisan problem and criticized President Obama for “[playing] into that behavior” by emphasizing personal responsibility in the My Brother’s Keeper initiative to help young men of color. The conservative notion of a “culture of poverty” is another example of the fallacy, he said.

“I think a lot of people are really attracted to stories about personal uplift or social mobility, but these are very exceptional cases,” he said. “That’s not the norm. Most people who are born into deprived circumstances do not really have the capacity or support to come out of those deprived circumstances.”

Instead, he argued that the only way to break self-perpetuating inequality was through wealth transfers.

“People’s behaviors are largely shaped by the resources they possess, and if their resources altered, then they might change their behaviors,” he said.


The End of Moore’s Law – Rodney Brooks

I have been working on an upcoming post about megatrends and how they drive tech. I had included the end of Moore’s Law to illustrate how the end of a megatrend might also have a big influence on tech, but that section got away from me, becoming much larger than the sections on each individual current megatrend. So I decided to break it out into a separate post and publish it first. Here it is.

Moore’s Law, concerning what we put on silicon wafers, is over after a solid fifty-year run that completely reshaped our world. But that end unleashes lots of new opportunities.

Moore, Gordon E., “Cramming more components onto integrated circuits,” Electronics, Vol. 38, No. 8, April 19, 1965.

Electronics was a trade journal that published monthly, mostly, from 1930 to 1995. Gordon Moore’s four-and-a-half-page contribution in 1965 was perhaps its most influential article ever. That article not only articulated the beginnings, and it was the very beginnings, of a trend, but the existence of that articulation became a goal/law that has run the silicon-based circuit industry (which is the basis of every digital device in our world) for fifty years. Moore was a Caltech PhD, cofounder in 1957 of Fairchild Semiconductor, and head of its research and development laboratory from 1959. Fairchild had been founded to make transistors from silicon at a time when they were usually made from much slower germanium.

One can find many files on the Web that claim to be copies of the original paper, but I have noticed that some of them have the graphs redrawn and that they are sometimes slightly different from the ones that I have always taken to be the originals. Below I reproduce two figures from the original that as far as I can tell have only been copied from an original paper version of the magazine, with no manual/human cleanup.

The first one that I reproduce here is the money shot for the origin of Moore’s Law. There was, however, an equally important earlier graph in the paper, which was predictive of the future yield over time of functional circuits that could be made from silicon. It had less actual data than this one, and as we’ll see, that is really saying something.

This graph is about the number of components on an integrated circuit. An integrated circuit is made through a process that is like printing. Light is projected onto a thin wafer of silicon in a number of different patterns, while different gases fill the chamber in which it is held. The different gases cause different light-activated chemical processes to happen on the surface of the wafer, sometimes depositing some types of material, and sometimes etching material away. With precise masks to pattern the light, and precise control over temperature and duration of exposures, a physical two-dimensional electronic circuit can be printed. The circuit has transistors, resistors, and other components. Lots of them might be made on a single wafer at once, just as lots of letters are printed on a single page at once. The yield is how many of those circuits are functional; small alignment or timing errors in production can ruin some of the circuits in any given print. Then the silicon wafer is cut up into pieces, each containing one of the circuits, and each is put inside its own plastic package with little legs sticking out as the connectors. If you have looked at a circuit board made in the last forty years you have seen it populated with lots of integrated circuits.

The number of components in a single integrated circuit is important. Since the circuit is printed, it involves no manual labor, unlike earlier electronics where every single component had to be placed and attached by hand. Now a complex circuit which involves multiple integrated circuits only requires hand construction (later this too was largely automated) to connect up a much smaller number of components. And as long as one has a process which gets good yield, it takes constant time to build a single integrated circuit, regardless of how many components are in it. That means fewer total integrated circuits that need to be connected by hand or machine. So, as Moore’s paper’s title references, cramming more components into a single integrated circuit is a really good idea.

The graph plots the logarithm base two of the number of components in an integrated circuit on the vertical axis against calendar years on the horizontal axis. Every notch upwards on the left doubles the number of components. So while 3 on that axis means 2^3 = 8 components, 13 means 2^13 = 8,192 components. That is a thousand-fold increase from 1962 to 1972.
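The axis arithmetic is easy to sanity-check. A minimal sketch, with the axis values 3 and 13 read off the graph as described above:

```python
# The vertical axis is log2(number of components); each notch up is a doubling.
axis_1962 = 3    # the 1962 data point
axis_1972 = 13   # the 1972 extrapolation

components_1962 = 2 ** axis_1962   # 8 components
components_1972 = 2 ** axis_1972   # 8,192 components

print(components_1972 / components_1962)   # 1024.0 -- the thousand-fold increase
```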

There are two important things to note here.

The first is that he is talking about components on an integrated circuit, not just the number of transistors. Generally there are many more components than transistors, though the ratio did drop over time as different fundamental sorts of transistors were used. But in later years Moore’s Law was often turned into purely a count of transistors.

The other thing is that there are only four real data points here in this graph, which he published in 1965. In 1959 the number of components is 2^0 = 1, i.e., that point is not about an integrated circuit at all, just about single circuit elements; integrated circuits had not yet been invented. So this is a null data point. Then he plots four actual data points, which we assume were taken from what Fairchild could produce, for 1962, 1963, 1964, and 1965, having 8, 16, 32, and 64 components. That is a doubling every year. It is an exponential increase in the true sense of exponential.

What is the mechanism for this; how can this work? It works because it is in the digital domain, the domain of yes or no, the domain of 0 or 1.

In the last half page of the four and a half page article Moore explains the limitations of his prediction, saying that for some things, like energy storage, we will not see his predicted trend. Energy takes up a certain number of atoms and their electrons to store a given amount, so you can not just arbitrarily change the number of atoms and still store the same amount of energy. Likewise if you have a half gallon milk container you can not put a gallon of milk in it.

But the fundamental digital abstraction is yes or no. A circuit element in an integrated circuit just needs to know whether a previous element said yes or no, whether there is a voltage or current there or not. In the design phase one decides above how many volts or amps, or whatever, means yes, and below how many means no. And there needs to be a good separation between those numbers, a significant no man’s land compared to the maximum and minimum possible. But the magnitudes themselves do not matter.
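As a toy illustration of that abstraction (the threshold voltages here are made up, not taken from any real logic family):

```python
# Toy model of the digital abstraction: map an analog voltage to yes/no.
V_YES = 2.0   # at or above this, read "yes" (1) -- assumed threshold
V_NO = 0.8    # at or below this, read "no" (0)  -- assumed threshold

def read_bit(voltage):
    """Return 1 or 0, or None if the voltage falls in the no man's land."""
    if voltage >= V_YES:
        return 1
    if voltage <= V_NO:
        return 0
    return None  # the forbidden region between the thresholds

print(read_bit(3.3), read_bit(0.2), read_bit(1.4))  # 1 0 None
```

Only the comparisons against the thresholds matter: scale the thresholds and the signal down together and the circuit computes the same answers, which is exactly the pile-of-sand argument that follows.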

I like to think of it like piles of sand. Is there a pile of sand on the table or not? We might have a convention about how big a typical pile of sand is. But we can make it work if we halve the normal size of a pile of sand. We can still answer whether or not there is a pile of sand there using just half as many grains of sand in a pile.

And then we can halve the number again. And the digital abstraction of yes or no still works. And we can halve it again, and it still works. And again, and again, and again.

This is what drives Moore's Law, which in its original form said that we could expect to double the number of components on an integrated circuit every year for 10 years, from 1965 to 1975. That held up!

Variations of Moore's Law followed; they were all about doubling, but sometimes doubling different things, and usually with slightly longer time constants for the doubling. The most popular versions were doubling of the number of transistors, doubling of the switching speed of those transistors (so a computer could run twice as fast), doubling of the amount of memory on a single chip, and doubling of the secondary memory of a computer (originally on mechanically spinning disks, but for the last five years in solid state flash memory). And there were many others.

Let's get back to Moore's original law for a moment. The components on an integrated circuit are laid out on a two dimensional wafer of silicon. So to double the number of components for the same amount of silicon you need to double the number of components per unit area. That means that the size of a component, in each linear dimension of the wafer, needs to go down by a factor of √2. In turn, that means that Moore was seeing the linear dimension of each component go down to 1/√2, about 71%, of what it was in a year, year over year.

But why was it limited to just a measly factor of two per year? Given the pile of sand analogy from above, why not just go to a quarter of the size of a pile of sand each year, or one sixteenth? It gets back to the yield one gets, the number of working integrated circuits, as you reduce the component size (most commonly called feature size). As the feature size gets smaller, the alignment of the projected patterns of light for each step of the process needs to get more accurate. Since 2 ≈ 1.41 × 1.41, the alignment needs to get about 41% better with each doubling of the component count, and better by a factor of two by the time the feature size has halved. And because of impurities in the materials that are printed on the circuit, the material from the gases that are circulating and that are activated by light, the gas needs to get more pure, so that there are fewer bad atoms in each component, which now has half the area it had before. Implicit in Moore's Law, in its original form, was the idea that we could expect the production equipment to get better by about 41% per year, for 10 years.

For various forms of Moore's Law that came later, the time constant stretched out to 2 years, or even a little longer, for a doubling, but nevertheless the processing equipment has gotten that much better, time period over time period, again and again.

To see the magic of how this works, let's just look at 25 doublings. The equipment has to operate with things 2^12.5 times smaller, i.e., roughly 5,793 times smaller. But we can fit 2^25 more components in a single circuit, which is 33,554,432 times more. The accuracy of our equipment has improved 5,793 times, but the linear to area impact multiplies that by a further factor of 5,793. That is where the payoff of Moore's Law has come from.
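
A quick Python check of that arithmetic, under the same assumption of 25 doublings:

    # Each doubling of component count shrinks the linear feature size
    # by sqrt(2), so 25 doublings shrink it by 2**12.5.
    linear_shrink = 2 ** 12.5      # how much smaller features must get
    component_gain = 2 ** 25       # how many more components fit

    print(round(linear_shrink))    # 5793
    print(component_gain)          # 33554432
    # The area payoff is the linear improvement squared:
    print(round(linear_shrink ** 2))  # 33554432, matching component_gain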

In his original paper Moore only dared project out, and only implicitly, that the equipment would get better every year for ten years. In reality, with somewhat slowing time constants, that has continued to happen for 50 years.

Now it is coming to an end. But not because the accuracy of the equipment needed to give good yields has stopped improving. No. Rather it is because those piles of sand we referred to above have gotten so small that they only contain a single metaphorical grain of sand. We can't split the minimal quantum of a pile into two any more.

Perhaps the most remarkable thing is Moore's foresight into how this would have an incredible impact upon the world. Here is the first sentence of his second paragraph:

Integrated circuits will lead to such wonders as home computers - or at least terminals connected to a central computer - automatic controls for automobiles, and personal portable communications equipment.

This was radical stuff in 1965. So called mini computers were still the size of a desk, and to be useful usually had a few peripherals such as tape units, card readers, or printers, that meant they would be hard to fit into a home kitchen of the day, even with the refrigerator, oven, and sink removed. Most people had never seen a computer and even fewer had interacted with one, and those who had, had mostly done it by dropping off a deck of punched cards, and a day later picking up a printout from what the computer had done when humans had fed the cards to the machine.

The electrical systems of cars were unbelievably simple by today's standards, with perhaps half a dozen on-off switches, and simple electromechanical devices to drive the turn indicators, windshield wipers, and the distributor which timed the firing of the spark plugs; every single function producing piece of mechanism in auto electronics was big enough to be seen with the naked eye. And personal communications devices were rotary dial phones, one per household, firmly plugged into the wall at all times. Or handwritten letters that needed to be dropped into the mail box.

That sentence quoted above, given when it was made, is to me the bravest and most insightful prediction of the technological future that we have ever seen.

By the way, the first computer made from integrated circuits was the guidance computer for the Apollo missions, one in the Command Module, and one in the Lunar Lander. The integrated circuits were made by Fairchild, Gordon Moore's company. The first version had 4,100 integrated circuits, each implementing a single 3 input NOR gate. The more capable manned flight versions, which first flew in 1968, had only 2,800 integrated circuits, each implementing two 3 input NOR gates. Moore's Law had its impact on getting to the Moon, even in the Law's infancy.

In the original magazine article this cartoon appears:

At a fortieth anniversary celebration of Moore's Law at the Chemical Heritage Foundation in Philadelphia I asked Dr. Moore whether this cartoon had been his idea. He replied that he had nothing to do with it; it was just there in the magazine, in the middle of his article, to his surprise.

Without any evidence at all on this, my guess is that the cartoonist was reacting somewhat skeptically to the sentence quoted above. The cartoon is set in a department store, as back then US department stores often had a Notions department, although this is not something of which I have any personal experience as they are long gone (and I first set foot in the US in 1977). It seems that notions is another word for haberdashery, i.e., pins, cotton, ribbons, and generally things used for sewing. As still today, there is also a Cosmetics department. And plopped in the middle of them is the Handy Home Computers department, with the salesman holding a computer in his hand.

I am guessing that the cartoonist was making fun of this idea, trying to point out the ridiculousness of it. Yet it all came to pass in only 25 years, including being sold in department stores, not too far from the cosmetics department. But the notions departments had all disappeared. The cartoonist was right in the short term, but blew it in the slightly longer term.

There were many variations on Moore's Law, not just his original about the number of components on a single chip.

Amongst the many there was a version of the law about how fast circuits could operate, as the smaller the transistors were the faster they could switch on and off. There were versions of the law for how much RAM memory, main memory for running computer programs, there would be and when. And there were versions of the law for how big and fast disk drives, for file storage, would be.

This tangle of versions of Moore's Law had a big impact on how technology developed. I will discuss three modes of that impact: competition, coordination, and herd mentality in computer design.

Competition

Memory chips are where data and programs are stored as they are run on a computer. Moore's Law applied to the number of bits of memory that a single chip could store, and a natural rhythm developed of that number of bits going up by a multiple of four on a regular but slightly slowing basis. By jumping over just a doubling, the cost of the silicon foundries could be depreciated over a long enough time to keep things profitable (today a silicon foundry is about a $7B capital cost!), and furthermore it made sense to double the number of memory cells in each dimension to keep the designs balanced, again pointing to a step factor of four.

In the very early days of desktop PCs memory chips had 2^14 bits. The memory chips were called RAM (Random Access Memory, i.e., any location in memory took equally long to access; there were no slower or faster places), and a chip of this size was called a 16K chip, where K means not exactly 1,000, but instead 1,024 (which is 2^10). Many companies produced 16K RAM chips. But they all knew from Moore's Law when the market would be expecting 64K RAM chips to appear. So they knew what they had to do to not get left behind, and they knew when they had to have samples ready for engineers designing new machines, so that just as the machines came out their chips would be ready to be used, having been designed in. And they could judge when it was worth getting just a little ahead of the competition, and at what price. Everyone knew the game (and in fact all came to a consensus agreement on when the Moore's Law clock should slow down just a little), and they all competed on operational efficiency.
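
A small Python sketch of that rhythm, starting from the 16K generation; the timing of each generation varied, so only the sizes are shown:

    def label(bits):
        # Conventional K/M labels, where K is 1,024 and M is 1,048,576.
        return f"{bits // 2**20}M" if bits >= 2**20 else f"{bits // 2**10}K"

    bits = 2 ** 14                  # the 16K generation
    for _ in range(5):
        print(label(bits), "bit generation")
        bits *= 4                   # double the cells in each dimension
    # Prints: 16K, 64K, 256K, 1M, 4M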

Coordination

Technology Review talks about this in their story on the end of Moore's Law. If you were the designer of a new computer box for a desktop machine, or any other digital machine for that matter, you could look at when you planned to hit the market and know what amount of RAM memory would take up what board space, because you knew how many bits per chip would be available at that time. And you knew how much disk space would be available at what price and in what physical volume (disks got smaller and smaller diameters just as they increased the total amount of storage). And you knew how fast the latest processor chip would run. And you knew what resolution display screen would be available at what price. So a couple of years ahead you could put all these numbers together and come up with what options and configurations would make sense by the exact time when you were going to bring your new computer to market.

The company that sold the computers might make one or two of the critical chips for their products, but mostly they bought other components from other suppliers. The clockwork certainty of Moore's Law let them design a new product without having horrible surprises disrupt their flow and plans. This really let the digital revolution proceed. Everything was orderly and predictable, so there were fewer blind alleys to follow. We had probably the single most sustained, continuous, and predictable improvement in any technology over the history of mankind.

Herd mentality in computer design

But with this good came some things that might be viewed negatively (though I'm sure there are some who would argue that they were all unalloyed good). I'll take up one of these as the third thing to talk about that Moore's Law had a major impact upon.

A particular form of general purpose computer design had arisen by the time that central processors could be put on a single chip (see the Intel 4004 below), and soon those processors on a chip, microprocessors as they came to be known, supported that general architecture. That architecture is known as the von Neumann architecture.

A distinguishing feature of this architecture is that there is a large RAM memory which holds both instructions and data, made from the RAM chips we talked about above under coordination. The memory is organized into consecutive indexable (or addressable) locations, each containing the same number of binary bits, or digits. The microprocessor itself has a few specialized memory cells, known as registers, and an arithmetic unit that can do additions, multiplications, divisions (more recently), etc. One of those specialized registers is called the program counter (PC), and it holds an address in RAM for the current instruction. The CPU looks at the pattern of bits in that current instruction location and decodes them into what actions it should perform. That might be an action to fetch another location in RAM and put it into one of the specialized registers (this is called a LOAD), or to send the contents in the other direction (STORE), or to take the contents of two of the specialized registers, feed them to the arithmetic unit, and take their sum from the output of that unit and store it in another of the specialized registers. Then the central processing unit increments its PC and looks at the next consecutive addressable instruction. Some specialized instructions can alter the PC and make the machine go to some other part of the program; this is known as branching. For instance, if one of the specialized registers is being used to count down how many elements of an array of consecutive values stored in RAM have been added together, right after the addition instruction there might be an instruction to decrement that counting register, and then branch back earlier in the program to do another LOAD and add if the counting register is still more than zero.

That's pretty much all there is to most digital computers. The rest is just hacks to make them go faster, while still looking essentially like this model. But note that the RAM is used in two ways by a von Neumann computer: to contain data for a program and to contain the program itself. We'll come back to this point later.
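
For readers who want to see the model in motion, here is a toy von Neumann machine in Python. The instruction encoding is invented purely for illustration; the point is the shared RAM, the registers, the program counter, and the fetch, decode, execute loop described above:

    ram = [
        ("LOAD", 0, 10),    # reg[0] <- RAM[10]
        ("LOAD", 1, 11),    # reg[1] <- RAM[11]
        ("ADD", 0, 1),      # reg[0] <- reg[0] + reg[1]
        ("STORE", 0, 12),   # RAM[12] <- reg[0]
        ("HALT",),
        0, 0, 0, 0, 0,      # unused locations
        3, 4,               # data, at addresses 10 and 11
        0,                  # address 12: where the result will go
    ]

    reg = [0, 0]            # the specialized registers
    pc = 0                  # the program counter

    while True:
        inst = ram[pc]      # fetch the current instruction from RAM
        pc += 1             # by default, step to the next address
        op = inst[0]        # decode
        if op == "LOAD":
            reg[inst[1]] = ram[inst[2]]
        elif op == "STORE":
            ram[inst[2]] = reg[inst[1]]
        elif op == "ADD":
            reg[inst[1]] += reg[inst[2]]
        elif op == "JUMP":  # branching: just overwrite the PC
            pc = inst[1]
        elif op == "HALT":
            break

    print(ram[12])          # prints 7, stored in the same RAM as the program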

With all the versions of Moore's Law firmly operating in support of this basic model it became very hard to break out of it. The human brain certainly doesn't work that way, so it seems that there could be powerful other ways to organize computation. But trying to change the basic organization was a dangerous thing to do, as the inexorable march of Moore's Law based existing architecture was going to continue anyway. Trying something new would most probably set things back a few years. So brave big scale experiments like the Lisp Machine or Connection Machine, which both grew out of the MIT Artificial Intelligence Lab (and turned into at least three different companies), and Japan's fifth generation computer project (which played with two unconventional ideas, data flow and logical inference) all failed, as before long the Moore's Law doubling of conventional computers overtook the advanced capabilities of the new machines, and software could better emulate the new ideas.

Most computer architects were locked into the conventional organizations of computers that had been around for decades. They competed on changing the coding of the instructions to make execution of programs slightly more efficient per square millimeter of silicon. They competed on strategies to cache copies of larger and larger amounts of RAM memory right on the main processor chip. They competed on how to put multiple processors on a single chip and how to share the cached information from RAM across multiple processor units running at once on a single piece of silicon. And they competed on how to make the hardware more predictive of what future decisions would be in a running program so that they could precompute the right next computations before it was clear whether they would be needed or not. But they were all locked in to fundamentally the same way of doing computation. Thirty years ago there were dozens of different detailed processor designs, but now they fall into only a small handful of families: the X86, the ARM, and the PowerPC. The X86s are mostly desktops, laptops, and cloud servers. The ARM is what we find in phones and tablets. And you probably have a PowerPC adjusting all the parameters of your car's engine.

The one glaring exception to the lock in caused by Moore's Law is that of Graphical Processing Units, or GPUs. These are different from von Neumann machines. Driven by the desire for better performance in video and graphics, and in particular gaming, where the main processor getting better and better under Moore's Law was just not enough to make real time rendering perform well as the underlying simulations got better and better, a new sort of processor was developed. It was not particularly useful for general purpose computations, but it was optimized very well to do additions and multiplications on streams of data, which is what is needed to render something graphically on a screen. Here was a case where a new sort of chip got added into the Moore's Law pool much later than conventional microprocessors, RAM, and disk. The new GPUs did not replace existing processors, but instead got added as partners where graphics rendering was needed. I mention GPUs here because it turns out that they are useful for another type of computation that has become very popular over the last three years, and that is being used as an argument that Moore's Law is not over. I still think it is, and will return to GPUs in the next section.

As I pointed out earlier, we can not halve a pile of sand once we are down to piles that are only a single grain of sand. That is where we are now; we have gotten down to just about one grain piles of sand. Gordon Moore's Law in its classical sense is over. See The Economist from March of last year for a typically thorough, accessible, and thoughtful report.

I earlier talked about the feature size of an integrated circuit and how with every doubling that size is divided by √2. By 1971 Gordon Moore was at Intel, and they released their first microprocessor on a single chip, the 4004, with 2,300 transistors on 12 square millimeters of silicon, with a feature size of 10 micrometers, written 10µm. That means that the smallest distinguishable aspect of any component on the chip was 1/100th of a millimeter.

Since then the feature size has regularly been reduced by a factor of √2, i.e., reduced to about 71% of its previous size, doubling the number of components in a given area, on a clockwork schedule. The schedule clock has however slowed down. Back in the era of Moore's original publication the clock period was a year. Now it is a little over 2 years. In the first quarter of 2017 we are expecting to see the first commercial chips in mass market products with a feature size of 10 nanometers, written 10nm. That is 1,000 times smaller than the feature size of 1971, or 20 applications of the divide-by-√2 rule over 46 years. Sometimes the jump has been a little better than √2, and so we have actually seen 17 jumps from 10µm down to 10nm. You can see them listed in Wikipedia. In 2012 the feature size was 22nm, in 2014 it was 14nm, now in the first quarter of 2017 we are about to see 10nm shipped to end users, and it is expected that we will see 7nm in 2019 or so. There are still active areas of research working on problems that are yet to be solved to make 7nm a reality, but industry is confident that it will happen. There are predictions of 5nm by 2021, but a year ago there was still much uncertainty over whether the engineering problems necessary to do this could be solved and whether they would be economically viable in any case.
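
As a rough check on those numbers, here is the commonly listed sequence of process nodes in Python; treat the list as approximate, since sources differ slightly on the early generations:

    # Process node sizes in nanometers, from 10 micrometers down to 10nm.
    nodes = [10000, 6000, 3000, 1500, 1000, 800, 600, 350, 250,
             180, 130, 90, 65, 45, 32, 22, 14, 10]

    print(len(nodes) - 1, "jumps")             # 17 jumps
    print(nodes[0] / nodes[-1], "x overall")   # 1000x overall shrink
    for a, b in zip(nodes, nodes[1:]):
        # Most ratios are near sqrt(2) = 1.41; a few are a full 2x.
        print(f"{a}nm -> {b}nm, ratio {a / b:.2f}")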

Once you get down to 5nm features they are only about 20 silicon atoms wide. If you go much below this the material starts to be dominated by quantum effects and classical physical properties really start to break down. That is what I mean by only one grain of sand left in the pile.

Today's microprocessors have a few hundred square millimeters of silicon, and 5 to 10 billion transistors. They have a lot of extra circuitry these days to cache RAM, predict branches, etc., all to improve performance. But getting bigger comes with many costs as they get faster too. There is heat to be dissipated from all the energy used in switching so many signals in such a small amount of time, and the time for a signal to travel from one side of the chip to the other, ultimately limited by the speed of light (in reality, in copper it is somewhat less), starts to be significant. The speed of light is approximately 300,000 kilometers per second, or 300,000,000,000 millimeters per second. So light, or a signal, can travel 30 millimeters (just over an inch, about the size of a very large chip today) in no less than one over 10,000,000,000 seconds, i.e., no less than one ten billionth of a second.
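
That back-of-envelope calculation in Python:

    c = 3e8              # speed of light, meters per second (approximate)
    distance = 0.03      # 30 millimeters, in meters
    print(distance / c)  # 1e-10 seconds: one ten billionth of a second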

Today's fastest processors have a clock speed of 8.760 GigaHertz, which means that by the time the signal is getting to the other side of the chip, the place it came from has moved on to the next thing to do. This makes synchronization across a single microprocessor something of a nightmare, and at best a designer can know ahead of time how late different signals from different parts of the processor will be, and try to design accordingly. So rather than push clock speed further (which is also hard) and rather than make a single microprocessor bigger with more transistors to do more stuff at every clock cycle, for the last few years we have seen large chips go to multicore, with two, four, or eight independent microprocessors on a single piece of silicon.

Multicore has preserved the "number of operations done per second" version of Moore's Law, but at the cost of a simple program not being sped up by that amount; one cannot simply smear a single program across multiple processing units. For a laptop or a smart phone that is trying to do many things at once that doesn't really matter, as there are usually enough different tasks that need to be done at once that farming them out to different cores on the same chip leads to pretty full utilization. But that will not hold, except for specialized computations, when the number of cores doubles a few more times. The speed up starts to disappear as silicon is left idle because there just aren't enough different things to do.

Despite the arguments that I presented a few paragraphs ago about why Moore's Law is coming to a silicon end, many people argue that it is not, because we are finding ways around those constraints of small numbers of atoms by going to multicore and GPUs. But I think that is changing the definitions too much.

Here is a recent chart that Steve Jurvetson, cofounder of the VC firm DFJ (Draper Fisher Jurvetson), posted on his Facebook page. He said it is an update of an earlier chart compiled by Ray Kurzweil.

In this case the left axis is a logarithmically scaled count of the number of calculations per second per constant dollar. So this expresses how much cheaper computation has gotten over time. In the 1940s there are specialized computers, such as the electromagnetic computers built to break codes at Bletchley Park. By the 1950s they become general purpose, von Neumann style computers and stay that way until the last few points.

The last two points are both GPUs, the GTX 450 and the NVIDIA Titan X. Steve doesn't label the few points before that, but in every earlier version of the diagram that I can find on the Web (and there are plenty of them), the points beyond 2010 are all multicore. First dual cores, and then quad cores, such as Intel's quad core i7 (and I am typing these words on a 2.9GHz version of that chip, powering my laptop).

That GPUs are there, and that people are excited about them, is because besides graphics they happen to be very good at another very fashionable computation. Deep learning, a form of something known originally as back propagation neural networks, has had a big technological impact recently. It is what has made speech recognition so fantastically better in the last three years that Apple's Siri, Amazon's Echo, and Google Home are useful and practical programs and devices. It has also made image labeling so much better than what we had five years ago, and there is much experimentation with using networks trained on lots of road scenes as part of situational awareness for self driving cars. For deep learning there is a training phase, usually done in the cloud, on millions of examples. That produces a few million numbers which represent the network that is learned. Then when it is time to recognize a word or label an image, that input is fed into a program simulating the network by doing millions of multiplications and additions. Coincidentally GPUs just happen to be perfect for the way these networks are structured, and so we can expect more and more of them to be built into our automobiles. Lucky break for GPU manufacturers! While GPUs can do lots of computations they don't work well on just any problem. But they are great for deep learning networks and those are quickly becoming the flavor of the decade.
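
To see why GPUs fit, note that one layer of a learned network at runtime is essentially a matrix-vector multiply-add followed by a simple nonlinearity. A minimal sketch in Python with NumPy, using random numbers as stand-ins for the learned weights:

    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.standard_normal((4, 8))   # a learned 8-input, 4-output layer
    bias = rng.standard_normal(4)

    def layer(x):
        # Millions of these multiply-adds, streamed, is what GPUs excel at.
        return np.maximum(0.0, weights @ x + bias)

    print(layer(rng.standard_normal(8)))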

While the chart above rightly claims that we continue to see exponential growth, exactly what is being measured has changed. That is a bit of a sleight of hand.

And I think that change will have big implications.

I think the end of Moore's Law, as I have defined the end, will bring about a golden new era of computer architecture. No longer will architects need to cower at the relentless improvements that they know others will get due to Moore's Law. They will be able to take the time to try new ideas out in silicon, now safe in the knowledge that a conventional computer architecture will not be able to do the same thing in just two or four years in software. And the new things they do may not be about speed. They might be about making computation better in other ways.

Machine learning runtime

We are seeing this with GPUs as runtime engines for deep learning networks. But we are also seeing some more specific architectures. For instance, for about a year Google has had their own chips called Tensor Processing Units (or TPUs) that save power for deep learning networks by effectively reducing the number of significant digits that are kept around, as neural networks work quite well at low precision. Google has placed many of these chips in the computers in their server farms, or cloud, and is able to use learned networks in various search queries, at higher speed for lower electrical power consumption.
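
A sketch of the low precision idea in Python: store weights as 8-bit integers plus one scale factor, and accept a small loss of accuracy. This illustrates the general technique, not Google's actual scheme:

    import numpy as np

    weights = np.array([0.42, -1.37, 0.05, 0.91])   # made-up learned weights
    scale = np.abs(weights).max() / 127.0           # map the range onto int8
    q = np.round(weights / scale).astype(np.int8)   # the 8-bit version

    print(q)                              # small integers: cheap to store and move
    print(q.astype(np.float64) * scale)   # close to the originals when rescaled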

Special purpose silicon

Typical mobile phone chips now have four ARM processor cores on a single piece of silicon, plus some highly optimized special purpose processors on that same piece of silicon. The processors manage data flowing from cameras and optimize speech quality, and on some chips there is even a special highly optimized processor for detecting human faces. That is used in the camera application (you've probably noticed little rectangular boxes around people's faces as you are about to take a photograph) to decide what regions in an image should be most in focus and have the best exposure timing: the faces!

New general purpose approaches

We are already seeing the rise of special purpose architectures for very specific computations. But perhaps we will see more general purpose architectures, but with a different style of computation, making a comeback.

Conceivably the dataflow and logic models of the Japanese fifth generation computer project might now be worth exploring again. But as we digitize the world, the cost of bad computer security will threaten our very existence. So perhaps, if things work out, the unleashed computer architects can slowly start to dig us out of our current deplorable situation.

Secure computing

We all hear about cyber hackers breaking into computers, often half a world away, or sometimes now into a computer controlling the engine, and soon everything else, of a car as it drives by. How can this happen?

Cyber hackers are creative, but many of the ways that they get into systems rely fundamentally on common programming errors in programs built on top of the von Neumann architectures we talked about before.

A common case is exploiting something known as buffer overrun. A fixed size piece of memory is reserved to hold, say, the web address that one can type into a browser, or the Google query box. If all programmers wrote very careful code and someone typed in way too many characters, those past the limit would not get stored in RAM at all. But all too often a programmer has used a coding trick that is simple and quick to produce, that does not check for overrun, and the typed characters get put into memory way past the end of the buffer, perhaps overwriting some code that the program might jump to later. This relies on the feature of von Neumann architectures that data and programs are stored in the same memory. So, if the hacker chooses some characters whose binary codes correspond to instructions that do something malicious to the computer, say setting up an account for them with a particular password, then later, as if by magic, the hacker will have a remotely accessible account on the computer, just as many other human and program services may. Programmers shouldn't oughta make this mistake, but history shows that it happens again and again.
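
Real buffer overruns happen in languages like C that do not bounds-check; Python does, so the following is only a simulation of the idea, using a list as the shared RAM of a toy von Neumann machine, with "code" stored right after a four-slot buffer:

    ram = ["", "", "", "",                  # a 4-slot input buffer
           "print('normal code')"]          # program text stored right after it

    def careless_store(values):
        for i, v in enumerate(values):
            ram[i] = v                      # no check against the buffer size!

    careless_store(["a", "b", "c", "d"])    # fits: the code slot is untouched
    careless_store(["a", "b", "c", "d",
                    "print('malicious payload')"])   # one too many
    print(ram[4])   # the "code" the machine would later jump to is now the attacker's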

Another common way in is that in modern web services sometimes the browser on a laptop, tablet, or smart phone, and the computers in the cloud, need to pass really complex things between them. Rather than the programmer having to know in advance all those complex possible things and handle messages for them, it is set up so that one or both sides can pass little bits of source code of programs back and forth and execute them on the other computer. In this way capabilities that were never originally conceived of can start working later on in an existing system without having to update the applications. It is impossible to be sure that a piece of code won't do certain things, so if the programmer decided to give a fully general capability through this mechanism there is no way for the receiving machine to know ahead of time that the code is safe and won't do something malicious (this is a generalization of the halting problem; I could go on and on, but I won't here). So sometimes a cyber hacker can exploit this weakness and send a little bit of malicious code directly to some service that accepts code.

Beyond that, cyber hackers are always coming up with new inventive ways in; these have just been two examples to illustrate a couple of ways of how it is currently done.

It is possible to write code that protects against many of these problems, but code writing is still a very human activity, and there are just too many human-created holes that can leak, from too many code writers. One way to combat this is to have extra silicon that hides some of the low level possibilities of a von Neumann architecture from programmers, by only giving the instructions in memory a more limited set of possible actions.

This is not a new idea. Most microprocessors have some version of protection rings which let more and more untrusted code only have access to more and more limited areas of memory, even if they try to access it with normal instructions. This idea has been around a long time but it has suffered from not having a standard way to use or implement it, so most software, in an attempt to be able to run on most machines, usually only specifies two or at most three rings of protection. That is a very coarse tool and lets too much through. Perhaps now the idea will be thought about more seriously in an attempt to get better security when just making things faster is no longer practical.
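
A toy Python sketch of the ring idea, with invented ring numbers and memory regions: code running in a less trusted (higher numbered) ring is refused access to more privileged memory:

    # Each region is tagged with the most privileged ring required to touch it.
    region_min_ring = {"kernel_tables": 0, "driver_buffers": 1, "user_heap": 3}

    def access(region: str, caller_ring: int) -> bool:
        allowed = caller_ring <= region_min_ring[region]
        print(f"ring {caller_ring} -> {region}: {'ok' if allowed else 'DENIED'}")
        return allowed

    access("user_heap", 3)       # ok: user code touching user memory
    access("kernel_tables", 3)   # DENIED: untrusted code, privileged memory
    access("kernel_tables", 0)   # ok: the kernel itself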

Another idea, that has mostly only been implemented in software, with perhaps one or two exceptions, is called capability based security, through capability based addressing. Programs are not given direct access to regions of memory they need to use, but instead are given unforgeable cryptographically sound reference handles, along with a defined subset of things they are allowed to do with the memory. Hardware architects might now have the time to push through on making this approach completely enforceable, getting it right once in hardware so that mere human programmers pushed to get new software out on a promised release date can not screw things up.
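
Here is a minimal software sketch of the idea in Python; the structure and names are my own, not any particular hardware design. A program holds an unforgeable token instead of an address, and every operation is checked against what the token permits:

    import secrets

    _regions = {}    # handle token -> (memory, allowed operations)

    def grant(size, ops):
        token = secrets.token_hex(16)          # unforgeable, unguessable reference
        _regions[token] = ([0] * size, frozenset(ops))
        return token

    def write(token, index, value):
        memory, allowed = _regions[token]      # unknown tokens simply fail
        if "write" not in allowed:
            raise PermissionError("this capability does not permit writes")
        memory[index] = value                  # bounds are checked by the list itself

    cap = grant(4, {"read"})                   # hand out a read-only capability
    try:
        write(cap, 0, 42)
    except PermissionError as err:
        print("blocked:", err)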

From one point of view the Lisp Machines that I talked about earlier were built on a very specific and limited version of a capability based architecture. Underneath it all, those machines were von Neumann machines, but the instructions they could execute were deliberately limited. Through the use of something called typed pointers, at the hardware level, every reference to every piece of memory came with restrictions on what instructions could do with that memory, based on the type encoded in the pointer. And memory could only be referenced by a pointer to the start of a chunk of memory of a fixed size at the time the memory was reserved. So in the buffer overrun case, a buffer for a string of characters would not allow data to be written to or read from beyond the end of it. And instructions could only be referenced from another type of pointer, a code pointer. The hardware kept the general purpose memory partitioned at a very fine grain by the type of pointers granted to it when reserved. And to a first approximation the type of a pointer could never be changed, nor could the actual address in RAM be seen by any instructions that had access to a pointer.

There have been ideas out there for a long time on how to improve security through this use of hardware restrictions on the general purpose von Neumann architecture. I have talked about a few of them here. Now I think we can expect this to become a much more compelling place for hardware architects to spend their time, as security of our computational systems becomes a major Achilles heel for the smooth running of our businesses, our lives, and our society.

Quantum computers

Quantum computers are a largely experimental, and at this time very expensive, technology. With the need to cool them to physics-experiment-level ultra cold, and the expense that entails, to the confusion over how much speed up they might give over conventional silicon based computers and for what class of problem, they are a large investment, high risk research topic at this time. I won't go into all the arguments (I haven't read them all, and frankly I do not have the expertise that would make me confident in any opinion I might form), but Scott Aaronson's blog on computational complexity and quantum computation is probably the best source for those interested. Claims on speedups either achieved or hoped to be achieved on practical problems range from a factor of 1 to thousands (and I might have that upper bound wrong). In the old days just waiting 10 or 20 years would let Moore's Law get you there. Instead we have seen well over a decade of sustained investment in a technology that people are still arguing over whether it can ever work. To me this is yet more evidence that the end of Moore's Law is encouraging new investment and new explorations.

Unimaginable stuff

Even with these various innovations around, triggered by the end of Moore's Law, the best things we might see may not yet be in the common consciousness. I think the freedom to innovate, without the overhang of Moore's Law, the freedom to take time to investigate curious corners, may well lead to a new garden of Eden in computational models. Five to ten years from now we may see a completely new form of computer arrangement, in traditional silicon (not quantum), that is doing things, and doing them faster, than we can today imagine. And with a further thirty years of development those chips might be doing things that would today be indistinguishable from magic, just as today's smart phone would have seemed like utter magic to 50-years-ago me.

Many times the popular press, or people who should know better, refer to something that is increasing a lot as exponential. Something is only truly exponential if there is a constant ratio in size between any two points in time separated by the same amount. Here the ratio is 2, for any two points a year apart. The misuse of the term exponential growth is widespread and makes me cranky.

Why the Chemical Heritage Foundation for this celebration? Both of Gordon Moore's degrees (BS and PhD) were in physical chemistry!

For those who read my first blog, once again see Roy Amara's Law.

I had been a post-doc at the MIT AI Lab and loved using Lisp Machines there, but when I left and joined the faculty at Stanford in 1983 I realized that the more conventional SUN workstations, being developed there and at the spin-off company Sun Microsystems, would win out in performance very quickly. So I built a software based Lisp system (which I called TAIL (Toy AI Language) in a nod to the naming conventions of most software at the Stanford Artificial Intelligence Lab, e.g., BAIL, FAIL, SAIL, MAIL) that ran on the early Sun workstations, which themselves used completely generic microprocessors. By mid 1984 Richard Gabriel, I, and others had started a company called Lucid in Palo Alto to compete on conventional machines with the Lisp Machine companies. We used my Lisp compiler as a stop gap, but as is often the case with software, it was still the compiler used by Lucid eight years later, when it ran on 19 different makes of machines. I had moved back to MIT to join the faculty in late 1984, and eventually became the director of the Artificial Intelligence Lab there (and then CSAIL). But for eight years, while teaching computer science and developing robots by day, I also at night developed and maintained my original compiler as the workhorse of Lucid Lisp. Just as the Lisp Machine companies got swept away, so too eventually did Lucid. Whereas the Lisp Machine companies got swept away by Moore's Law, Lucid got swept away as the fashion in computer languages shifted, for many years, to a winner take all world of C.

Full disclosure. DFJ is one of the VCs who have invested in my company Rethink Robotics.

What is Moore’s Law? Webopedia Definition

By Vangie Beal

(n.) Moore’s Law is the observation made in 1965 by Gordon Moore, co-founder of Intel, that the number of transistors per square inch on integrated circuits had doubled every year since the integrated circuit was invented. Moore predicted that this trend would continue for the foreseeable future. In subsequent years, the pace slowed down a bit, but data density has doubled approximately every 18 months, and this is the current definition of Moore’s Law, which Moore himself has blessed. Most experts, including Moore himself, expect Moore’s Law to hold true until 2020-2025.

Cryptocurrency News: Bitcoin ETFs, Andreessen Horowitz, and Contradictions in Crypto

Cryptocurrency News
This was a bloody week for cryptocurrencies. Everything was covered in red, from Ethereum (ETH) on down to the Basic Attention Token (BAT).

Some investors claim it was inevitable. Others say that price manipulation is to blame.

We think the answers are more complicated than either side has to offer, because our research reveals deep contradictions between the price of cryptos and the underlying development of blockchain projects.

For instance, a leading venture capital (VC) firm launched a $300.0-million crypto investment fund, yet liquidity continues to dry up in crypto markets.

Another example is the U.S. Securities and Exchange Commission’s.

Cryptocurrency News: Looking Past the Bithumb Crypto Hack

Another Crypto Hack Derails Recovery
Since our last report, hackers broke into yet another cryptocurrency exchange. This time the target was Bithumb, a Korean exchange known for high-flying prices and ultra-active traders.

While the hackers made off with approximately $31.5 million in funds, the exchange is working with relevant authorities to return the stolen tokens to their respective owners. In the event that some funds are still missing, the exchange will cover the losses. (Source: "Bithumb Working With Other Crypto Exchanges to Recover Hacked Funds.")

Cryptocurrency News: This Week on Bitfinex, Tether, Coinbase, & More

Cryptocurrency News
On the whole, cryptocurrency prices are down from our previous report on cryptos, with the market slipping on news of an exchange being hacked and a report about Bitcoin manipulation.

However, there have been two bright spots: 1) an official from the U.S. Securities and Exchange Commission (SEC) said that Ethereum is not a security, and 2) Coinbase is expanding its selection of tokens.

Let’s start with the good news.
SEC Says ETH Is Not a Security
Investors have some reason to cheer this week. A high-ranking SEC official told attendees of the Yahoo! All Markets Summit: Crypto that Ethereum and Bitcoin are not.

Ripple Price Forecast: XRP vs SWIFT, SEC Updates, and More

Ripple vs SWIFT: The War Begins
While most criticisms of XRP do nothing to curb my bullish Ripple price forecast, there is one obstacle that nags at my conscience. Its name is SWIFT.

The Society for Worldwide Interbank Financial Telecommunication (SWIFT) is the king of international payments.

It coordinates wire transfers across 11,000 banks in more than 200 countries and territories, meaning that in order for XRP prices to ascend to $10.00, Ripple needs to launch a successful coup. That is, and always has been, an unwritten part of Ripple’s story.

We’ve seen a lot of progress on that score. In the last three years, Ripple wooed more than 100 financial firms onto its.

Cryptocurrency Price Forecast: Trust Is Growing, But Prices Are Falling

Trust Is Growing…
Before we get to this week’s cryptocurrency news, analysis, and our cryptocurrency price forecast, I want to share an experience from this past week. I was at home watching the NBA playoffs, trying to ignore the commercials, when a strange advertisement caught my eye.

It followed a tomato from its birth on the vine to its end on the dinner table (where it was served as a bolognese sauce), and a diamond from its dusty beginnings to when it sparkled atop an engagement ring.

The voiceover said: “This is a shipment passed 200 times, transparently tracked from port to port. This is the IBM blockchain.”

Let that sink in—IBM.

Cryptocurrency News: Bitcoin ETF Rejection, AMD Microchip Sales, and Hedge Funds

Cryptocurrency News
Although cryptocurrency prices were heating up last week (Bitcoin, especially), regulators poured cold water on the rally by rejecting calls for a Bitcoin exchange-traded fund (ETF). This is the second time that the proposal fell on deaf ears. (More on that below.)

Crypto mining ran into similar trouble, as you can see from Advanced Micro Devices, Inc.‘s (NASDAQ:AMD) most recent quarterly earnings. However, it wasn’t all bad news. Investors should, for instance, be cheering the fact that hedge funds are ramping up their involvement in cryptocurrency markets.

Without further ado, here are those stories in greater detail.
ETF Rejection.

Cryptocurrency News: What You Need to Know This Week

Cryptocurrency News
Cryptocurrencies traded sideways since our last report on cryptos. However, I noticed something interesting when playing around with Yahoo! Finance’s cryptocurrency screener: There are profitable pockets in this market.

Incidentally, Yahoo’s screener is far superior to the one on CoinMarketCap, so if you’re looking to compare digital assets, I highly recommend it.

But let’s get back to my epiphany.

In the last month, at one point or another, most crypto assets on our favorites list saw double-digit increases. It’s true that each upswing was followed by a hard crash, but investors who rode the trend would have made a.

Cryptocurrency News: XRP Validators, Malta, and Practical Tokens

Cryptocurrency News & Market Summary
Investors finally saw some light at the end of the tunnel last week, with cryptos soaring across the board. No one quite knows what kicked off the rally—as it could have been any of the stories we discuss below—but the net result was positive.

Of course, prices won’t stay on this rocket ride forever. I expect to see a resurgence of volatility in short order, because the market is moving as a single unit. Everything is rising in tandem.

This tells me that investors are simply “buying the dip” rather than identifying which cryptos have enough real-world value to outlive the crash.

So if you want to know when.

Cryptocurrency News: Vitalik Buterin Doesn’t Care About Bitcoin ETFs

Cryptocurrency News
While headline numbers look devastating this week, investors might take some solace in knowing that cryptocurrencies found their bottom at roughly $189.8 billion in market cap—that was the low point. Since then, investors put more than $20.0 billion back into the market.

During the rout, Ethereum broke below $300.00 and XRP fell below $0.30, marking yearly lows for both tokens. The same was true down the list of the top 100 biggest cryptos.

Altcoins took the brunt of the hit. BTC Dominance, which reveals how tightly investment is concentrated in Bitcoin, rose from 42.62% to 53.27% in just one month, showing that investors either fled altcoins at higher.

Cryptocurrency News: New Exchanges Could Boost Crypto Liquidity

Cryptocurrency News
Even though the cryptocurrency news was upbeat in recent days, the market tumbled after the U.S. Securities and Exchange Commission (SEC) rejected calls for a Bitcoin (BTC) exchange-traded fund (ETF).

That news came as a blow to investors, many of whom believe the ETF would open the cryptocurrency industry up to pension funds and other institutional investors. This would create a massive tailwind for cryptos, they say.

So it only follows that a rejection of the Bitcoin ETF should send cryptos tumbling, correct? Well, maybe you can follow that logic. To me, it seems like a dramatic overreaction.

I understand that legitimizing cryptos is important. But.

The post Cryptocurrency News: New Exchanges Could Boost Crypto Liquidity appeared first on Profit Confidential.

Go here to see the original:

Cryptocurrency News: New Exchanges Could Boost Crypto Liquidity

The End of Moores Law Rodney Brooks

I have been working on an upcoming post about megatrends and how they drive tech. I had included the end of Moores Law to illustrate how the end of a megatrend might also have a big influence on tech, but that section got away from me, becoming much larger than the sections on each individual current megatrend. So I decided to break it out into a separate post and publish it first. Here it is.

Moores Law, concerning what we put on silicon wafers, is over after a solid fifty year run that completely reshaped our world. But that end unleashes lots of new opportunities.

Moore, Gordon E.,Cramming more components onto integrated circuits,Electronics, Vol 32, No. 8, April 19, 1965.

Electronicswas a trade journal that published monthly, mostly, from 1930 to 1995. Gordon Moores four and a half page contribution in 1965 was perhaps its most influential article ever. That article not only articulated the beginnings, and it was the very beginnings, of a trend, but the existence of that articulation became a goal/law that has run the silicon based circuit industry (which is the basis of every digital device in our world) for fifty years. Moore was a Cal Tech PhD, cofounder in 1957 of Fairchild Semiconductor, and head of its research and development laboratory from1959. Fairchild had been founded to make transistors from silicon at a time when they were usually made from much slower germanium.

One can find many files on the Web that claim to be copies of the original paper, but I have noticed that some of them have the graphs redrawn and that they are sometimes slightly different from the ones that I have always taken to be the originals. Below I reproduce two figures from the original that as far as I can tell have only been copied from an original paper version of the magazine, with no manual/human cleanup.

The first one that I reproduce here is the money shot for the origin of Moores Law. There was however an equally important earlier graph in the paper which was predictive of the future yield over time of functional circuits that could be made from silicon. It had less actual data than this one, and as well see, that is really saying something.

This graph is about the number of components on an integrated circuit. An integrated circuit is made through a process that is like printing. Light is projected onto a thin wafer of silicon in a number of different patterns, while different gases fill the chamber in which it is held. The different gases cause different light activated chemical processes to happen on the surface of the wafer, sometimes depositing some types of material, and sometimes etching material away. With precise masks to pattern the light, and precise control over temperature and duration of exposures, a physical two dimensional electronic circuit can be printed. The circuit has transistors, resistors, and other components. Lots of them might be made on a single wafer at once, just as lots of letters are printed on a single page at one. The yield is how many of those circuits are functionalsmall alignment or timing errors in production can screw up some of the circuits in any given print. Then the silicon wafer is cut up into pieces, each containing one of the circuits and each is put inside its own plastic package with little legs sticking out as the connectorsif you have looked at a circuit board made in the last forty years you have seen it populated with lots of integrated circuits.

The number of components in a single integrated circuit is important. Since the circuit is printed it involves no manual labor, unlike earlier electronics where every single component had to be placed and attached by hand. Now a complex circuit which involves multiple integrated circuits only requires hand construction (later this too was largely automated), to connect up a much smaller number of components. And as long as one has a process which gets good yield, it is constant time to build a single integrated circuit, regardless of how many components are in it. That means less total integrated circuits that need to be connected by hand or machine. So, as Moores papers title references,crammingmore components into a single integrated circuit is a really good idea.

The graph plots the logarithm base two of the number ofcomponentsin an integrated circuit on the vertical axis against calendar years on the horizontal axis. Every notch upwards on the left doubles the number of components. So while means components, means components. That is a thousand fold increase from 1962 to 1972.

There are two important things to note here.

The first is that he is talking aboutcomponentson an integrated circuit, not just the number of transistors. Generally there are many more components thantransistors, though the ratio did drop over time as different fundamental sorts of transistors were used. But in later years Moores Law was often turned into purely a count of transistors.

The other thing is that there are only four real data points here in this graph which he published in 1965. In 1959 the number of components is , i.e., that is not about anintegratedcircuit at all, just about single circuit elementsintegrated circuits had not yet been invented. So this is a null data point. Then he plots four actual data points, which we assume were taken from what Fairchild could produce, for 1962, 1963, 1964, and 1965, having 8, 16, 32, and 64 components. That is a doubling every year. It is an exponential increase in the true sense of exponential.

What is the mechanism for this, how can this work? It works because it is in the digital domain, the domain ofyesorno, the domain of or .

In the last half page of the four and a half page article Moore explains the limitations of his prediction, saying that for some things, like energy storage, we will not see his predicted trend. Energy takes up a certain number of atoms and their electrons to store a given amount, so you can not just arbitrarily change the number of atoms and still store the same amount of energy. Likewise if you have a half gallon milk container you can not put a gallon of milk in it.

But the fundamental digital abstraction isyesorno. A circuit element in an integrated circuit just needs to know whether a previous element said yes or no, whether there is a voltage or current there or not. In the design phase one decides above how many volts or amps, or whatever, means yes, and below how many means no. And there needs to be a good separation between those numbers, a significant no mans land compared to the maximum and minimum possible. But, the magnitudes do not matter.

I like to think of it like piles of sand. Is there a pile of sand on the table or not? We might have a convention about how big a typical pile of sand is. But we can make it work if we halve the normal size of a pile of sand. We can still answer whether or not there is a pile of sand there using just half as many grains of sand in a pile.

And then we can halve the number again. And the digital abstraction of yes or no still works. And we can halve it again, and it still works. And again, and again, and again.

This is what drives Moores Law, which in its original form said that we could expect to double the number of components on an integrated circuit every year for 10 years, from 1965 to 1975. That held up!

Variations of Moores Law followed; they were all about doubling, but sometimes doubling different things, and usually with slightly longer time constants for the doubling. The most popular versions were doubling of the number of transistors, doubling of the switching speed of those transistors (so a computer could run twice as fast), doubling of the amount of memory on a single chip, and doubling of the secondary memory of a computeroriginally on mechanically spinning disks, but for the last five years in solid state flash memory. And there were many others.

Let's get back to Moore's original law for a moment. The components on an integrated circuit are laid out on a two dimensional wafer of silicon. So to double the number of components for the same amount of silicon you need to double the number of components per unit area. That means that the size of a component, in each linear dimension of the wafer, needs to go down by a factor of √2. In turn, that means that Moore was seeing the linear dimension of each component go down to about 71% (1/√2) of what it was, year over year.

But why was it limited to just a measly factor of two per year? Given the pile of sand analogy from above, why not just go to a quarter of the size of a pile of sand each year, or one sixteenth? It gets back to the yield one gets, the number of working integrated circuits, as you reduce the component size (most commonly called feature size). As the feature size gets smaller, the alignment of the projected patterns of light for each step of the process needs to get more accurate. Since √2 ≈ 1.41, the alignment needs to get better by about 41% for each doubling of component count. And because of impurities in the materials that are printed on the circuit, materials that come from the gasses that are circulating and that are activated by light, the gasses need to get more pure, so that there are fewer bad atoms in each component, now half the area it was before. Implicit in Moore's Law, in its original form, was the idea that we could expect the production equipment to get better by about 41% per year, for 10 years.

For various forms of Moore's Law that came later, the time constant stretched out to 2 years, or even a little longer, for a doubling, but nevertheless the processing equipment has gotten better by that amount, time period over time period, again and again.

To see the magic of how this works, let's just look at 25 doublings. The equipment has to operate with things 2^12.5 times smaller, i.e., roughly 5,793 times smaller. But we can fit 2^25 times more components in a single circuit, which is 33,554,432 times more. The accuracy of our equipment has improved 5,793 times, but that buys a further factor of 5,793 on top of the original 5,793 times, due to the linear to area impact. That is where the payoff of Moore's Law has come from.
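
The arithmetic, spelled out (the numbers are exactly those quoted above):

```python
# 25 doublings: the equipment improves linearly, the payoff comes in area.
doublings = 25
area_gain = 2 ** doublings          # components per circuit
linear_gain = 2 ** (doublings / 2)  # accuracy improvement needed

print(area_gain)           # 33554432
print(round(linear_gain))  # 5793
# 5793 * 5793 is roughly 33.6 million: the linear improvement pays off
# once in each dimension of the two-dimensional wafer.
```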

In his original paper Moore only dared project out, and only implicitly, that the equipment would get better every year for ten years. In reality, with somewhat slowing time constants, that has continued to happen for 50 years.

Now it is coming to an end. But not because the accuracy of the equipment needed to give good yields has stopped improving. No. Rather it is because those piles of sand we referred to above have gotten so small that they only contain a single metaphorical grain of sand. We can't split the minimal quantum of a pile into two any more.

Perhaps the most remarkable thing is Moore's foresight into how this would have an incredible impact upon the world. Here is the first sentence of his second paragraph:

Integrated circuits will lead to such wonders as home computers (or at least terminals connected to a central computer), automatic controls for automobiles, and personal portable communications equipment.

This was radical stuff in 1965. So-called minicomputers were still the size of a desk, and to be useful usually had a few peripherals, such as tape units, card readers, or printers, that meant they would be hard to fit into a home kitchen of the day, even with the refrigerator, oven, and sink removed. Most people had never seen a computer and even fewer had interacted with one, and those who had had mostly done it by dropping off a deck of punched cards, and a day later picking up a printout of what the computer had done when humans had fed the cards to the machine.

The electrical systems of cars were unbelievably simple by today's standards, with perhaps half a dozen on-off switches, and simple electromechanical devices to drive the turn indicators, windshield wipers, and the distributor which timed the firing of the spark plugs; every single function-producing piece of mechanism in auto electronics was big enough to be seen with the naked eye. And personal communications devices were rotary dial phones, one per household, firmly plugged into the wall at all times. Or handwritten letters that needed to be dropped into the mail box.

That sentence quoted above, given when it was made, is to me the bravest and most insightful prediction of the technological future that we have ever seen.

By the way, the first computer made from integrated circuits was the guidance computer for the Apollo missions, one in the Command Module, and one in the Lunar Lander. The integrated circuits were made by Fairchild, Gordon Moore's company. The first version had 4,100 integrated circuits, each implementing a single 3-input NOR gate. The more capable manned flight versions, which first flew in 1968, had only 2,800 integrated circuits, each implementing two 3-input NOR gates. Moore's Law had its impact on getting to the Moon, even in the Law's infancy.

In the original magazine article this cartoon appears:

At a fortieth anniversary celebration of Moore's Law at the Chemical Heritage Foundation in Philadelphia I asked Dr. Moore whether this cartoon had been his idea. He replied that he had nothing to do with it; it was just there in the magazine, in the middle of his article, to his surprise.

Without any evidence at all on this, my guess is that the cartoonist was reacting somewhat skeptically to the sentence quoted above. The cartoon is set in a department store, as back then US department stores often had a Notions department, although this is not something of which I have any personal experience, as they are long gone (and I first set foot in the US in 1977). It seems that notions is another word for haberdashery, i.e., pins, cotton, ribbons, and generally things used for sewing. As is still the case today, there is also a Cosmetics department. And plop in the middle of them is the Handy Home Computers department, with the salesman holding a computer in his hand.

I am guessing that the cartoonist was making fun of this idea, trying to point out the ridiculousness of it. It all came to pass in only 25 years, including being sold in department stores. Not too far from the cosmetics department. But the notions departments had all disappeared. The cartoonist was right in the short term, but blew it in the slightly longer term.

There were many variations on Moore's Law, not just his original about the number of components on a single chip.

Amongst the many there was a version of the law about how fast circuits could operate, as the smaller the transistors were the faster they could switch on and off. There were versions of the law for how much RAM memory, main memory for running computer programs, there would be and when. And there were versions of the law for how big and fast disk drives, for file storage, would be.

This tangle of versions of Moore's Law had a big impact on how technology developed. I will discuss three modes of that impact: competition, coordination, and herd mentality in computer design.

Competition

Memory chips are where data and programs are stored as they are run on a computer. Moore's Law applied to the number of bits of memory that a single chip could store, and a natural rhythm developed of that number of bits going up by a multiple of four on a regular but slightly slowing basis. By jumping over just a doubling, the cost of the silicon foundries could be depreciated over a long enough time to keep things profitable (today a silicon foundry is about a $7B capital cost!), and furthermore it made sense to double the number of memory cells in each dimension to keep the designs balanced, again pointing to a step factor of four.

In the very early days of desktop PCs memory chips had 2^14 = 16,384 bits. The memory chips were called RAM (Random Access Memory, i.e., any location in memory took equally long to access; there were no slower or faster places), and a chip of this size was called a 16K chip, where K means not exactly 1,000 but instead 1,024 (which is 2^10). Many companies produced 16K RAM chips. But they all knew from Moore's Law when the market would be expecting 64K RAM chips to appear. So they knew what they had to do to not get left behind, and they knew when they had to have samples ready for engineers designing new machines, so that just as the machines came out their chips would be ready to be used, having been designed in. And they could judge when it was worth getting just a little ahead of the competition, and at what price. Everyone knew the game (and in fact all came to a consensus agreement on when the Moore's Law clock should slow down just a little), and they all competed on operational efficiency.
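
A quick sketch of that jump-by-four rhythm; the point is that quadrupling the bit count doubles a square memory array in each dimension:

```python
# Each memory generation quadruples the bits, i.e., doubles the cell
# count in each dimension of a square array. Starting from a 16K chip.

bits = 2 ** 14  # a "16K" chip: 16 * 1024 bits
for _ in range(5):
    k = bits // 1024
    side = int(bits ** 0.5)  # cells per side of a square array
    print(f"{k}K chip ~ {side} x {side} cells")
    bits *= 4  # next generation
# 16K, 64K, 256K, 1024K (1M), 4096K (4M), ...
```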

Coordination

Technology Review talks about this in their story on the end of Moore's Law. If you were the designer of a new computer box for a desktop machine, or any other digital machine for that matter, you could look at when you planned to hit the market and know what amount of RAM memory would take up what board space, because you knew how many bits per chip would be available at that time. And you knew how much disk space would be available at what price and in what physical volume (disks got smaller and smaller diameters just as they increased the total amount of storage). And you knew how fast the latest processor chip would run. And you knew what resolution display screen would be available at what price. So a couple of years ahead you could put all these numbers together and come up with what options and configurations would make sense by the exact time when you were going to bring your new computer to market.

The company that sold the computers might make one or two of the critical chips for their products but mostly they bought other components from other suppliers. The clockwork certainty of Moores Law let them design a new product without having horrible surprises disrupt their flow and plans. This really let the digital revolution proceed. Everything was orderly and predictable so there were fewer blind alleys to follow. We had probably the single most sustained continuous and predictable improvement in any technology over the history of mankind.

Herd mentality in computer design

But with this good came some things that might be viewed negatively (though I'm sure there are some who would argue that they were all unalloyed good). I'll take up one of these as the third thing to talk about that Moore's Law had a major impact upon.

A particular form of general purpose computer design had arisen by the time that central processors could be put on a single chip (see the Intel 4004 below), and soon those processors on a chip, microprocessors as they came to be known, supported that general architecture. That architecture is known as the von Neumann architecture.

A distinguishing feature of this architecture is that there is a large RAM memory which holds both instructions and data, made from the RAM chips we talked about above under coordination. The memory is organized into consecutive indexable (or addressable) locations, each containing the same number of binary bits, or digits. The microprocessor itself has a few specialized memory cells, known as registers, and an arithmetic unit that can do additions, multiplications, divisions (more recently), etc. One of those specialized registers is called the program counter (PC), and it holds an address in RAM for the current instruction. The CPU looks at the pattern of bits in that current instruction location and decodes them into what actions it should perform. That might be an action to fetch another location in RAM and put it into one of the specialized registers (this is called a LOAD), or to send the contents the other direction (STORE), or to take the contents of two of the specialized registers, feed them to the arithmetic unit, and take their sum from the output of that unit and store it in another of the specialized registers. Then the central processing unit increments its PC and looks at the next consecutive addressable instruction. Some specialized instructions can alter the PC and make the machine go to some other part of the program; this is known as branching. For instance, if one of the specialized registers is being used to count down how many elements of an array of consecutive values stored in RAM have been added together, right after the addition instruction there might be an instruction to decrement that counting register, and then branch back earlier in the program to do another LOAD and add if the counting register is still more than zero.

That's pretty much all there is to most digital computers. The rest is just hacks to make them go faster, while still looking essentially like this model. But note that the RAM is used in two ways by a von Neumann computer: to contain data for a program and to contain the program itself. We'll come back to this point later.
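
For the curious, here is a minimal Python sketch of the model just described: one flat memory holding both program and data, a program counter, and a single register. The instruction names are invented for illustration.

```python
# A toy von Neumann machine: fetch, decode, execute, increment the PC.
# Program and data live side by side in the same flat memory.

ram = [
    ("LOAD", 8),    # 0: reg = ram[8]
    ("ADD", 9),     # 1: reg = reg + ram[9]
    ("STORE", 10),  # 2: ram[10] = reg
    ("HALT", 0),    # 3: stop
    0, 0, 0, 0,     # 4-7: unused
    2, 3, 0,        # 8-10: data (two operands, one result slot)
]

pc, reg = 0, 0                 # program counter and one register
while True:
    op, addr = ram[pc]         # fetch and decode the current instruction
    pc += 1                    # step to the next consecutive instruction
    if op == "LOAD":
        reg = ram[addr]
    elif op == "ADD":
        reg += ram[addr]
    elif op == "STORE":
        ram[addr] = reg
    elif op == "HALT":
        break

print(ram[10])  # 5: the sum, sitting in the same memory as the program
```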

With all the versions of Moore's Law firmly operating in support of this basic model it became very hard to break out of it. The human brain certainly doesn't work that way, so it seems that there could be powerful other ways to organize computation. But trying to change the basic organization was a dangerous thing to do, as the inexorable march of existing Moore's Law based architectures was going to continue anyway. Trying something new would most probably set things back a few years. So brave big scale experiments like the Lisp Machine or Connection Machine, which both grew out of the MIT Artificial Intelligence Lab (and turned into at least three different companies), and Japan's fifth generation computer project (which played with two unconventional ideas, data flow and logical inference) all failed, as before long the Moore's Law doubling of conventional computers overtook the advanced capabilities of the new machines, and software could better emulate the new ideas.

Most computer architects were locked into the conventional organizations of computers that had been around for decades. They competed on changing the coding of the instructions to make execution of programs slightly more efficient per square millimeter of silicon. They competed on strategies to cache copies of larger and larger amounts of RAM memory right on the main processor chip. They competed on how to put multiple processors on a single chip and how to share the cached information from RAM across multiple processor units running at once on a single piece of silicon. And they competed on how to make the hardware more predictive of what future decisions would be in a running program so that they could precompute the right next computations before it was clear whether they would be needed or not. But, they were all locked in to fundamentally the same way of doing computation. Thirty years ago there were dozens of different detailed processor designs, but now they fall into only a small handful of families, the X86, the ARM, and the PowerPC. The X86s are mostly desktops, laptops, and cloud servers. The ARM is what we find in phones and tablets. And you probably have a PowerPC adjusting all the parameters of your cars engine.

The one glaring exception to the lock-in caused by Moore's Law is that of Graphical Processing Units, or GPUs. These are different from von Neumann machines. Driven by the demand for better performance for video and graphics, and in particular gaming, the main processor getting better and better under Moore's Law was just not enough to make real time rendering perform well as the underlying simulations got better and better. In this case a new sort of processor was developed. It was not particularly useful for general purpose computations, but it was optimized very well to do additions and multiplications on streams of data, which is what is needed to render something graphically on a screen. Here was a case where a new sort of chip got added into the Moore's Law pool much later than conventional microprocessors, RAM, and disk. The new GPUs did not replace existing processors, but instead got added as partners where graphics rendering was needed. I mention GPUs here because it turns out that they are useful for another type of computation that has become very popular over the last three years, and that is being used as an argument that Moore's Law is not over. I still think it is over, and will return to GPUs in the next section.

As I pointed out earlier, we cannot halve a pile of sand once we are down to piles that are only a single grain of sand. That is where we are now; we have gotten down to just about one-grain piles of sand. Gordon Moore's Law in its classical sense is over. See The Economist from March of last year for a typically thorough, accessible, and thoughtful report.

I earlier talked about the feature size of an integrated circuit and how, with every doubling, that size is divided by √2. By 1971 Gordon Moore was at Intel, and they released their first microprocessor on a single chip, the 4004, with 2,300 transistors on 12 square millimeters of silicon, with a feature size of 10 micrometers, written 10µm. That means that the smallest distinguishable aspect of any component on the chip was 1/100th of a millimeter.

Since then the feature size has regularly been reduced by a factor of √2, or reduced to 71% of its previous size, doubling the number of components in a given area, on a clockwork schedule. The schedule clock has however slowed down. Back in the era of Moore's original publication the clock period was a year. Now it is a little over 2 years. In the first quarter of 2017 we are expecting to see the first commercial chips in mass market products with a feature size of 10 nanometers, written 10nm. That is 1,000 times smaller than the feature size of 1971, or 20 applications of the √2 rule over 46 years. Sometimes the jump has been a little better than √2, and so we have actually seen only 17 jumps from 10µm down to 10nm. You can see them listed in Wikipedia. In 2012 the feature size was 22nm, in 2014 it was 14nm, now in the first quarter of 2017 we are about to see 10nm shipped to end users, and it is expected that we will see 7nm in 2019 or so. There are still active areas of research working on problems that are yet to be solved to make 7nm a reality, but industry is confident that it will happen. There are predictions of 5nm by 2021, but a year ago there was still much uncertainty over whether the engineering problems necessary to do this could be solved and whether they would be economically viable in any case.
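
A quick check of that arithmetic:

```python
# 20 applications of the square-root-of-two rule take 10 micrometers (1971)
# down to roughly 10 nanometers (2017), a factor of about 1,000.

feature_nm = 10_000.0  # 10 micrometers, expressed in nanometers
for _ in range(20):
    feature_nm /= 2 ** 0.5
print(round(feature_nm, 1))  # ~9.8 nm, i.e., about a 1,000x reduction
```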

Once you get down to 5nm features they are only about 20 silicon atoms wide. If you go much below this the material starts to be dominated by quantum effects and classical physical properties really start to break down. That is what I mean by only one grain of sand left in the pile.

Today's microprocessors have a few hundred square millimeters of silicon, and 5 to 10 billion transistors. They have a lot of extra circuitry these days to cache RAM, predict branches, etc., all to improve performance. But getting bigger comes with many costs as they get faster too. There is heat to be dissipated from all the energy used in switching so many signals in such a small amount of time, and the time for a signal to travel from one side of the chip to the other, ultimately limited by the speed of light (in reality, in copper, it is somewhat less), starts to be significant. The speed of light is approximately 300,000 kilometers per second, or 300,000,000,000 millimeters per second. So light, or a signal, can travel 30 millimeters (just over an inch, about the size of a very large chip today) in no less than one over 10,000,000,000 seconds, i.e., no less than one ten-billionth of a second.
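
The same back-of-envelope calculation as code:

```python
# The signal-crossing-time argument above, as arithmetic.
c_mm_per_s = 300_000_000_000  # speed of light in millimeters per second
chip_mm = 30                  # a very large chip edge, as above

crossing_s = chip_mm / c_mm_per_s
print(crossing_s)             # 1e-10 seconds: one ten-billionth of a second
# In copper the signal is slower still, so the real crossing time is worse.
```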

Today's fastest processors have a clock speed of 8.760 GigaHertz, which means that by the time the signal is getting to the other side of the chip, the place it came from has moved on to the next thing to do. This makes synchronization across a single microprocessor something of a nightmare, and at best a designer can know ahead of time how late different signals from different parts of the processor will be, and try to design accordingly. So rather than push clock speed further (which is also hard), and rather than make a single microprocessor bigger with more transistors to do more stuff at every clock cycle, for the last few years we have seen large chips go to multicore, with two, four, or eight independent microprocessors on a single piece of silicon.

Multicore has preserved the operations-per-second version of Moore's Law, but at the cost of a simple program not being sped up by that amount; one cannot simply smear a single program across multiple processing units. For a laptop or a smart phone that is trying to do many things at once that doesn't really matter, as there are usually enough different tasks that need to be done at once that farming them out to different cores on the same chip leads to pretty full utilization. But that will not hold, except for specialized computations, when the number of cores doubles a few more times. The speed-up starts to disappear as silicon is left idle because there just aren't enough different things to do.

Despite the arguments that I presented a few paragraphs ago about why Moore's Law is coming to a silicon end, many people argue that it is not, because we are finding ways around those constraints of small numbers of atoms by going to multicore and GPUs. But I think that is changing the definitions too much.

Here is a recent chart that Steve Jurvetson, cofounder of the VC firm DFJ (Draper Fisher Jurvetson), posted on his Facebook page. He said it is an update of an earlier chart compiled by Ray Kurzweil.

In this case the left axis is a logarithmically scaled count of the number of calculations per second per constant dollar. So this expresses how much cheaper computation has gotten over time. In the 1940s there are specialized computers, such as the electromechanical computers built to break codes at Bletchley Park. By the 1950s they become general purpose, von Neumann style computers and stay that way until the last few points.

The last two points are both GPUs, the GTX 450 and the NVIDIA Titan X. Steve doesn't label the few points before that, but in every earlier version of this diagram that I can find on the Web (and there are plenty of them), the points beyond 2010 are all multicore. First dual cores, and then quad cores, such as Intel's quad core i7 (and I am typing these words on a 2.9GHz version of that chip, powering my laptop).

That GPUs are there, and that people are excited about them, is because besides graphics they happen to be very good at another very fashionable computation. Deep learning, a form of something known originally as back propagation neural networks, has had a big technological impact recently. It is what has made speech recognition so fantastically better in the last three years that Apple's Siri, Amazon's Echo, and Google Home are useful and practical programs and devices. It has also made image labeling so much better than what we had five years ago, and there is much experimentation with using networks trained on lots of road scenes as part of situational awareness for self driving cars. For deep learning there is a training phase, usually done in the cloud, on millions of examples. That produces a few million numbers which represent the network that is learned. Then when it is time to recognize a word or label an image, that input is fed into a program simulating the network by doing millions of multiplications and additions. Coincidentally GPUs just happen to be perfect for the way these networks are structured, and so we can expect more and more of them to be built into our automobiles. Lucky break for GPU manufacturers! While GPUs can do lots of computations they don't work well on just any problem. But they are great for deep learning networks and those are quickly becoming the flavor of the decade.
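
At their core such networks are just repeated multiply-accumulate operations over arrays of numbers, which is exactly the streaming arithmetic GPUs are built for. A minimal sketch, with made-up weights:

```python
# One dense layer of a neural network: every output is a weighted sum of
# all inputs plus a bias. The weights below are invented for illustration.

def dense_layer(inputs, weights, biases):
    """Each output row is an independent multiply-accumulate over inputs."""
    return [
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ]

x = [0.5, -1.0, 2.0]
w = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]  # 2 outputs from 3 inputs
b = [0.0, 0.1]
print(dense_layer(x, w, b))  # two weighted sums, computable in parallel
```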

While the chart above rightly claims that we continue to see exponential growth, exactly what is being measured has changed. That is a bit of a sleight of hand.

And I think that change will have big implications.

I think the end of Moore's Law, as I have defined the end, will bring about a golden new era of computer architecture. No longer will architects need to cower at the relentless improvements that they know others will get due to Moore's Law. They will be able to take the time to try new ideas out in silicon, now safe in the knowledge that a conventional computer architecture will not be able to do the same thing in just two or four years in software. And the new things they do may not be about speed. They might be about making computation better in other ways.

Machine learning runtime

We are seeing this with GPUs as runtime engines for deep learning networks. But we are also seeing some more specific architectures. For instance, for about a year Google has had their own chips called Tensor Processing Units (or TPUs) that save power for deep learning networks by effectively reducing the number of significant digits that are kept around, as neural networks work quite well at low precision. Google has placed many of these chips in the computers in their server farms, or cloud, and is able to use learned networks in various search queries, at higher speed for lower electrical power consumption.
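
A sketch of the low-precision idea, assuming a simple symmetric scaling scheme that I have invented for illustration (it is not necessarily what the TPU does):

```python
# Squeeze floating point weights into signed 8-bit integers plus one scale
# factor. Small integers are far cheaper to multiply than wide floats.

def quantize(weights, bits=8):
    """Map floats onto signed integers of the given width, plus a scale."""
    levels = 2 ** (bits - 1) - 1  # 127 for 8 bits
    scale = max(abs(w) for w in weights) / levels
    return [round(w / scale) for w in weights], scale

w = [0.42, -1.3, 0.07, 0.9]
q, scale = quantize(w)
print(q)                                 # small integers, cheap to multiply
print([round(v * scale, 2) for v in q])  # close to the original weights
```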

Special purpose silicon

Typical mobile phone chips now have four ARM processor cores on a single piece of silicon, plus some highly optimized special purpose processors on that same piece of silicon. The processors manage data flowing from cameras and optimize speech quality, and on some chips there is even a special highly optimized processor for detecting human faces. That is used in the camera application (you've probably noticed little rectangular boxes around people's faces as you are about to take a photograph) to decide what regions in an image should be most in focus and with the best exposure timing: the faces!

New general purpose approaches

We are already seeing the rise of special purpose architectures for very specific computations. But perhaps we will see more general purpose architectures, with a different style of computation, making a comeback.

Conceivably the dataflow and logic models of the Japanese fifth generation computer project might now be worth exploring again. But as we digitalize the world the cost of bad computer security will threaten our very existence. So perhaps if things work out, the unleashed computer architects can slowly start to dig us out of our current deplorable situation.

Secure computing

We all hear about cyber hackers breaking into computers, often half a world away, or sometimes now in a computer controlling the engine, and soon everything else, of a car as it drives by. How can this happen?

Cyber hackers are creative but many ways that they get into systems are fundamentally through common programming errors in programs built on top of the von Neumann architectures we talked about before.

A common case is exploiting something known as buffer overrun. A fixed size piece of memory is reserved to hold, say, the web address that one can type into a browser, or the Google query box. If all programmers wrote very careful code, then when someone typed in way too many characters, those past the limit would not get stored in RAM at all. But all too often a programmer has used a coding trick that is simple and quick to produce, but that does not check for overrun, and the typed characters get put into memory way past the end of the buffer, perhaps overwriting some code that the program might jump to later. This relies on the feature of von Neumann architectures that data and programs are stored in the same memory. So, if the hacker chooses some characters whose binary codes correspond to instructions that do something malicious to the computer, say setting up an account for them with a particular password, then later, as if by magic, the hacker will have a remotely accessible account on the computer, just as many legitimate human and program services do. Programmers shouldn't oughta make this mistake, but history shows that it happens again and again.
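
A toy illustration of the overrun, using a Python list as the shared program-and-data memory; real exploits of course happen at the machine-code level:

```python
# An 8-slot buffer sits directly in front of the code the program will
# later jump to, all in one flat memory, as in a von Neumann machine.

ram = ["_"] * 8 + ["legitimate-code"]

def careless_store(items):
    """Writes input into the buffer with no bounds check."""
    for i, item in enumerate(items):
        ram[i] = item  # no test that i stays below 8!

careless_store(["A"] * 8 + ["evil-code"])  # one item too many
print(ram[8])  # evil-code: the attacker's payload now sits where code was
```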

Another common way in is that in modern web services sometimes the browser on a laptop, tablet, or smart phone, and the computers in the cloud, need to pass really complex things between them. Rather than the programmer having to know in advance all those complex possible things and handle messages for them, it is set up so that one or both sides can pass little bits of source code of programs back and forth and execute them on the other computer. In this way capabilities that were never originally conceived of can start working later on in an existing system without having to update the applications. It is impossible to be sure that a piece of code won't do certain things, so if the programmer decided to give a fully general capability through this mechanism there is no way for the receiving machine to know ahead of time that the code is safe and won't do something malicious (this is a generalization of the halting problem; I could go on and on, but I won't here). So sometimes a cyber hacker can exploit this weakness and send a little bit of malicious code directly to some service that accepts code.

Beyond that, cyber hackers are always coming up with new inventive ways in; these have just been two examples to illustrate a couple of ways of how it is currently done.

It is possible to write code that protects against many of these problems, but code writing is still a very human activity, and there are just too many human-created holes that can leak, from too many code writers. One way to combat this is to have extra silicon that hides some of the low level possibilities of a von Neumann architecture from programmers, by only giving the instructions in memory a more limited set of possible actions.

This is not a new idea. Most microprocessors have some version of protection rings which let more and more untrusted code only have access to more and more limited areas of memory, even if they try to access it with normal instructions. This idea has been around a long time but it has suffered from not having a standard way to use or implement it, so most software, in an attempt to be able to run on most machines, usually only specifies two or at most three rings of protection. That is a very coarse tool and lets too much through. Perhaps now the idea will be thought about more seriously in an attempt to get better security when just making things faster is no longer practical.

Another idea, that has mostly only been implemented in software, with perhaps one or two exceptions, is called capability based security, through capability based addressing. Programs are not given direct access to regions of memory they need to use, but instead are given unforgeable, cryptographically sound reference handles, along with a defined subset of things they are allowed to do with the memory. Hardware architects might now have the time to push through on making this approach completely enforceable, getting it right once in hardware so that mere human programmers, pushed to get new software out on a promised release date, cannot screw things up.
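
Here is the shape of the idea as a toy Python sketch; real designs enforce this in hardware, and the handle scheme below is only illustrative:

```python
# Programs never see raw addresses: they get opaque handles, each bound
# to a memory region and a fixed set of allowed operations.

import secrets

_regions = {}  # handle -> (memory list, allowed operations)

def grant(size, allowed):
    handle = secrets.token_hex(8)  # unforgeable-in-practice token
    _regions[handle] = ([0] * size, frozenset(allowed))
    return handle

def access(handle, op, index, value=None):
    memory, allowed = _regions[handle]  # unknown handle -> KeyError
    if op not in allowed:
        raise PermissionError(op)       # capability not granted
    if op == "read":
        return memory[index]            # out-of-range index -> IndexError
    memory[index] = value

h = grant(4, {"read"})         # a read-only region of four cells
print(access(h, "read", 0))    # 0
try:
    access(h, "write", 0, 99)
except PermissionError as err:
    print("blocked:", err)     # no write capability was ever granted
```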

From one point of view the Lisp Machines that I talked about earlier were built on a very specific and limited version of a capability based architecture. Underneath it all, those machines were von Neumann machines, but the instructions they could execute were deliberately limited. Through the use of something called typed pointers, at the hardware level, every reference to every piece of memory came with restrictions on what instructions could do with that memory, based on the type encoded in the pointer. And memory could only be referenced by a pointer to the start of a chunk of memory of a fixed size, set at the time the memory was reserved. So in the buffer overrun case, a buffer for a string of characters would not allow data to be written to or read from beyond the end of it. And instructions could only be referenced from another type of pointer, a code pointer. The hardware kept the general purpose memory partitioned at a very fine grain by the type of pointers granted to it when reserved. And to a first approximation the type of a pointer could never be changed, nor could the actual address in RAM be seen by any instructions that had access to a pointer.

There have been ideas out there for a long time on how to improve security through this use of hardware restrictions on the general purpose von Neumann architecture. I have talked about a few of them here. Now I think we can expect this to become a much more compelling place for hardware architects to spend their time, as security of our computational systems becomes a major Achilles heel for the smooth running of our businesses, our lives, and our society.

Quantum computers

Quantum computers are at this time a largely experimental and very expensive technology. Between the need to cool them to physics-experiment-level ultracold temperatures, with the expense that entails, and the confusion over how much speed-up they might give over conventional silicon based computers and for what class of problem, they are a large investment, high risk research topic. I won't go into all the arguments (I haven't read them all, and frankly I do not have the expertise that would make me confident in any opinion I might form), but Scott Aaronson's blog on computational complexity and quantum computation is probably the best source for those interested. Claims on speed-ups either achieved or hoped to be achieved on practical problems range from a factor of 1 to thousands (and I might have that upper bound wrong). In the old days just waiting 10 or 20 years would let Moore's Law get you there. Instead we have seen well over a decade of sustained investment in a technology that people are still arguing over whether it can ever work. To me this is yet more evidence that the end of Moore's Law is encouraging new investment and new explorations.

Unimaginable stuff

Even with these various innovations around, triggered by the end of Moore's Law, the best things we might see may not yet be in the common consciousness. I think the freedom to innovate, without the overhang of Moore's Law, the freedom to take time to investigate curious corners, may well lead to a new garden of Eden in computational models. Five to ten years from now we may see a completely new form of computer arrangement, in traditional silicon (not quantum), that is doing things, and doing them faster, than we can today imagine. And with a further thirty years of development those chips might be doing things that would today be indistinguishable from magic, just as today's smartphone would have seemed like utter magic to the me of 50 years ago.

Many times the popular press, or people who should know better, refer to something that is increasing a lot as exponential. Something is only truly exponential if there is a constant ratio in size between any two points in time separated by the same amount. Here the ratio is 2, for any two points a year apart. The misuse of the term exponential growth is widespread and makes me cranky.

Why the Chemical Heritage Foundation for this celebration? Both of Gordon Moore's degrees (BS and PhD) were in physical chemistry!

For those who read my first blog, once again see Roy Amara's Law.

I had been a post-doc at the MIT AI Lab and loved using Lisp Machines there, but when I left and joined the faculty at Stanford in 1983 I realized that the more conventional SUN workstations being developed there, and at spin-off company Sun Microsystems, would win out in performance very quickly. So I built a software based Lisp system (which I called TAIL (Toy AI Language) in a nod to the naming conventions of most software at the Stanford Artificial Intelligence Lab, e.g., BAIL, FAIL, SAIL, MAIL) that ran on the early Sun workstations, which themselves used completely generic microprocessors. By mid 1984 Richard Gabriel, I, and others had started a company called Lucid in Palo Alto to compete on conventional machines with the Lisp Machine companies. We used my Lisp compiler as a stopgap, but as is often the case with software, that was still the compiler used by Lucid eight years later, when it ran on 19 different makes of machines. I had moved back to MIT to join the faculty in late 1984, and eventually became the director of the Artificial Intelligence Lab there (and then CSAIL). But for eight years, while teaching computer science and developing robots by day, I also at night developed and maintained my original compiler as the work horse of Lucid Lisp. Just as the Lisp Machine companies got swept away, so too eventually did Lucid. Whereas the Lisp Machine companies got swept away by Moore's Law, Lucid got swept away as the fashion in computer languages shifted to a winner-take-all world, for many years, of C.

Full disclosure. DFJ is one of the VCs who have invested in my company Rethink Robotics.

Read this article:

The End of Moore's Law – Rodney Brooks

Ripple Price Forecast: XRP vs SWIFT, SEC Updates, and More

Ripple vs SWIFT: The War Begins
While most criticisms of XRP do nothing to curb my bullish Ripple price forecast, there is one obstacle that nags at my conscience. Its name is SWIFT.

The Society for Worldwide Interbank Financial Telecommunication (SWIFT) is the king of international payments.

It coordinates wire transfers across 11,000 banks in more than 200 countries and territories, meaning that in order for XRP prices to ascend to $10.00, Ripple needs to launch a successful coup. That is, and always has been, an unwritten part of Ripple’s story.

We’ve seen a lot of progress on that score. In the last three years, Ripple wooed more than 100 financial firms onto its.

The post Ripple Price Forecast: XRP vs SWIFT, SEC Updates, and More appeared first on Profit Confidential.

Read the original:

Ripple Price Forecast: XRP vs SWIFT, SEC Updates, and More

Cryptocurrency Price Forecast: Trust Is Growing, But Prices Are Falling

Trust Is Growing…
Before we get to this week’s cryptocurrency news, analysis, and our cryptocurrency price forecast, I want to share an experience from this past week. I was at home watching the NBA playoffs, trying to ignore the commercials, when a strange advertisement caught my eye.

It followed a tomato from its birth on the vine to its end on the dinner table (where it was served as a bolognese sauce), and a diamond from its dusty beginnings to when it sparkled atop an engagement ring.

The voiceover said: “This is a shipment passed 200 times, transparently tracked from port to port. This is the IBM blockchain.”

Let that sink in—IBM.

The post Cryptocurrency Price Forecast: Trust Is Growing, But Prices Are Falling appeared first on Profit Confidential.

Continue reading here:

Cryptocurrency Price Forecast: Trust Is Growing, But Prices Are Falling

