A.I. Artificial Intelligence (2001) – IMDb

Nominated for 2 Oscars. Another 16 wins & 67 nominations.

People who liked this also liked…

Action | Adventure | Crime

In a future where a special police unit is able to arrest murderers before they commit their crimes, an officer from that unit is himself accused of a future murder.

Director: Steven Spielberg

Stars: Tom Cruise, Colin Farrell, Samantha Morton

Drama | Sci-Fi

After an accidental encounter with otherworldly vessels, an ordinary man follows a series of psychic clues to the first scheduled meeting between representatives of Earth and visitors from the cosmos.

Director: Steven Spielberg

Stars: Richard Dreyfuss, François Truffaut, Teri Garr

Adventure | Sci-Fi | Thriller

As Earth is invaded by alien tripod fighting machines, one family fights for survival.

Director: Steven Spielberg

Stars: Tom Cruise, Dakota Fanning, Tim Robbins

Comedy | Drama | Sci-Fi

An android endeavors to become human as he gradually acquires emotions.

Director: Chris Columbus

Stars: Robin Williams, Embeth Davidtz, Sam Neill

Drama | History | War

A young English boy struggles to survive under Japanese occupation during World War II.

Director: Steven Spielberg

Stars: Christian Bale, John Malkovich, Miranda Richardson

Drama

A black Southern woman struggles to find her identity after suffering abuse from her father and others over four decades.

Director: Steven Spielberg

Stars: Danny Glover, Whoopi Goldberg, Oprah Winfrey

Comedy | Drama | Romance

An Eastern European immigrant finds himself stranded in JFK Airport and must take up temporary residence there.

Director: Steven Spielberg

Stars: Tom Hanks, Catherine Zeta-Jones, Chi McBride

Drama | History

In 1839, the revolt of Mende captives aboard a Spanish-owned ship causes a major controversy in the United States when the ship is captured off the coast of Long Island. The courts must decide whether the Mende are slaves or legally free.

Director: Steven Spielberg

Stars: Djimon Hounsou, Matthew McConaughey, Anthony Hopkins

Drama | History | Thriller

Based on the true story of the Black September aftermath, about the five men chosen to eliminate the ones responsible for that fateful day.

Director: Steven Spielberg

Stars: Eric Bana, Daniel Craig, Marie-Josée Croze

Drama | History | War

Young Albert enlists to serve in World War I after his beloved horse is sold to the cavalry. Albert’s hopeful journey takes him out of England and to the front lines as the war rages on.

Director: Steven Spielberg

Stars: Jeremy Irvine, Emily Watson, David Thewlis

Fantasy | Romance

The spirit of a recently deceased expert pilot mentors a newer pilot while watching him fall in love with the girlfriend that he left behind.

Director: Steven Spielberg

Stars: Richard Dreyfuss, Holly Hunter, Brad Johnson

Action | Adventure | Crime

In 2035, a technophobic cop investigates a crime that may have been perpetrated by a robot, which leads to a larger threat to humanity.

Director: Alex Proyas

Stars: Will Smith, Bridget Moynahan, Bruce Greenwood

In the not-so-far future the polar ice caps have melted, and the resulting rise of the ocean waters has drowned all the coastal cities of the world. Withdrawn to the interior of the continents, the human race keeps advancing, reaching the point of creating realistic robots (called mechas) to serve them. One of the mecha-producing companies builds David, an artificial kid who is the first to have real feelings, especially a never-ending love for his “mother”, Monica. Monica is the woman who adopted him as a substitute for her real son, who remains in cryo-stasis, stricken by an incurable disease. David is living happily with Monica and her husband, but when their real son returns home after a cure is discovered, his life changes dramatically. Written by Chris Makrozahopoulos

Budget:$100,000,000 (estimated)

Opening Weekend USA: $29,352,630, 1 July 2001, Wide Release

Gross USA: $78,616,689, 23 September 2001

Cumulative Worldwide Gross: $235,927,000

Runtime: 146 min

Aspect Ratio: 1.85 : 1

Read more from the original source:

A.I. Artificial Intelligence (2001) – IMDb

Benefits & Risks of Artificial Intelligence – Future of …

Many AI researchers roll their eyes when seeing this headline: “Stephen Hawking warns that rise of robots may be disastrous for mankind.” And many have lost count of how many similar articles they’ve seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they’ve become conscious and/or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don’t worry about. That scenario combines as many as three separate misconceptions: concern about consciousness, evil, and robots.

If you drive down the road, you have a subjective experience of colors, sounds, etc. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although this mystery of consciousness is interesting in its own right, it’s irrelevant to AI risk. If you get struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are, so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can’t have goals. Machines can obviously have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as a goal to hit a target. If you feel threatened by a machine whose goals are misaligned with yours, then it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious and experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn’t exclaim, “I’m not worried, because machines can’t have goals!”

I sympathize with Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red shiny eyes. In fact, the main concern of the beneficial-AI movement isn’t with robots but with intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection; this may enable outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can’t control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as smartest on our planet, it’s possible that we might also cede control.

Excerpt from:

Benefits & Risks of Artificial Intelligence – Future of …

Alternative News | The Zeitgeist Movement UK

About

The Zeitgeist Movement is a global sustainability activist group working to bring the world together for the common goal of species sustainability before it is too late. It is a social movement, not a political one, with over 1100 chapters across nearly all countries. Divisive notions such as nations, governments, races, political parties, religions, creeds or class are non-operational distinctions in the view of The Movement. Rather, we recognize the world as one system and the human species as a singular unit, sharing a common habitat. Our overarching intent could be summarized as ‘the application of the scientific method for social concern’.

© 2013 The Zeitgeist Movement UK. All Rights Reserved.

Originally posted here:

Alternative News | The Zeitgeist Movement UK

About Us | The Zeitgeist Movement – Australia

The Zeitgeist Movement is a sustainability advocacy organization, which conducts community-based activism and awareness actions through a network of global/regional chapters, project teams, annual events, media and charity work.

The movement’s principal focus includes the recognition that the majority of the social problems that plague the human species at this time are not the sole result of some institutional corruption, absolute scarcity, a political policy, a flaw of human nature or other commonly held assumptions of causality. Rather, the movement recognizes that issues such as poverty, corruption, pollution, homelessness, war, starvation and the like appear to be symptoms born out of an outdated social structure.

The Natural Law/Resource-Based Economy (NLRBE) is about taking a direct technical approach to social management as opposed to a monetary or even political one. It is about updating the workings of society to the most advanced and proven methods known, leaving behind the damaging consequences and limiting inhibitions which are generated by our current system of monetary exchange, profit, business and other structural and motivational issues.

There is little reason to assume war, poverty, most crime and many other monetarily-based scarcity effects common in our current model cannot be resolved over time. The range of the movements activism and awareness campaigns extend from short to long term, with methods based explicitly on non-violent methods of communication.

The Zeitgeist Movement has no allegiance to country or traditional political platforms. It views the world as a single system and the human species as a single family, and recognizes that all countries must disarm and learn to share resources and ideas if we expect to survive in the long run. Hence, the solutions arrived at and promoted are intended to help everyone on Earth, not a select group.

See more here:

About Us | The Zeitgeist Movement – Australia

Virtual reality | computer science | Britannica.com

Virtual reality (VR), the use of computer modeling and simulation that enables a person to interact with an artificial three-dimensional (3-D) visual or other sensory environment. VR applications immerse the user in a computer-generated environment that simulates reality through the use of interactive devices, which send and receive information and are worn as goggles, headsets, gloves, or body suits. In a typical VR format, a user wearing a helmet with a stereoscopic screen views animated images of a simulated environment. The illusion of “being there” (telepresence) is effected by motion sensors that pick up the user’s movements and adjust the view on the screen accordingly, usually in real time (the instant the user’s movement takes place). Thus, a user can tour a simulated suite of rooms, experiencing changing viewpoints and perspectives that are convincingly related to his own head turnings and steps. Wearing data gloves equipped with force-feedback devices that provide the sensation of touch, the user can even pick up and manipulate objects that he sees in the virtual environment.
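The head-tracking loop described above (sensors report the user's head orientation, and the renderer redraws the scene along the corresponding gaze direction) can be sketched in a few lines of Python. This is a minimal illustration under assumed conventions, not any particular system's API; the function name and the yaw/pitch convention are invented for the sketch.

```python
import math

def head_to_view(yaw_deg, pitch_deg):
    """Convert head-tracker yaw/pitch (degrees) into a unit gaze
    direction vector, the core of the telepresence loop: each frame,
    the renderer redraws the scene along this direction.
    Convention (an assumption): yaw rotates about the vertical axis,
    pitch tilts the gaze up or down, (0, 0) looks along +z."""
    yaw = math.radians(yaw_deg)
    pitch = math.radians(pitch_deg)
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

# Looking straight ahead, then with the head turned 90 degrees right.
print(head_to_view(0, 0))
print(head_to_view(90, 0))
```

A real HMD pipeline adds position as well as orientation and renders two slightly offset views for stereo, but the per-frame structure (sample the tracker, recompute the view, redraw) is the same.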

The term virtual reality was coined in 1987 by Jaron Lanier, whose research and engineering contributed a number of products to the nascent VR industry. A common thread linking early VR research and technology development in the United States was the role of the federal government, particularly the Department of Defense, the National Science Foundation, and the National Aeronautics and Space Administration (NASA). Projects funded by these agencies and pursued at university-based research laboratories yielded an extensive pool of talented personnel in fields such as computer graphics, simulation, and networked environments and established links between academic, military, and commercial work. The history of this technological development, and the social context in which it took place, is the subject of this article.


Artists, performers, and entertainers have always been interested in techniques for creating imaginative worlds, setting narratives in fictional spaces, and deceiving the senses. Numerous precedents for the suspension of disbelief in an artificial world in artistic and entertainment media preceded virtual reality. Illusionary spaces created by paintings or views have been constructed for residences and public spaces since antiquity, culminating in the monumental panoramas of the 18th and 19th centuries. Panoramas blurred the visual boundaries between the two-dimensional images displaying the main scenes and the three-dimensional spaces from which these were viewed, creating an illusion of immersion in the events depicted. This image tradition stimulated the creation of a series of media, from futuristic theatre designs, stereopticons, and 3-D movies to IMAX movie theatres, over the course of the 20th century to achieve similar effects. For example, the Cinerama widescreen film format, originally called Vitarama when invented for the 1939 New York World’s Fair by Fred Waller and Ralph Walker, originated in Waller’s studies of vision and depth perception. Waller’s work led him to focus on the importance of peripheral vision for immersion in an artificial environment, and his goal was to devise a projection technology that could duplicate the entire human field of vision. The Vitarama process used multiple cameras and projectors and an arc-shaped screen to create the illusion of immersion in the space perceived by a viewer. Though Vitarama was not a commercial hit until the mid-1950s (as Cinerama), the Army Air Corps successfully used the system during World War II for anti-aircraft training under the name Waller Flexible Gunnery Trainer, an example of the link between entertainment technology and military simulation that would later advance the development of virtual reality.

Sensory stimulation was a promising method for creating virtual environments before the use of computers. After the release of a promotional film called This Is Cinerama (1952), the cinematographer Morton Heilig became fascinated with Cinerama and 3-D movies. Like Waller, he studied human sensory signals and illusions, hoping to realize a “cinema of the future.” By late 1960, Heilig had built an individual console with a variety of inputs (stereoscopic images, a motion chair, audio, temperature changes, odours, and blown air) that he patented in 1962 as the Sensorama Simulator, designed to stimulate the senses of an individual to simulate an actual experience realistically. During the work on Sensorama, he also designed the Telesphere Mask, a head-mounted stereoscopic 3-D TV display that he patented in 1960. Although Heilig was unsuccessful in his efforts to market Sensorama, in the mid-1960s he extended the idea to a multiviewer theatre concept patented as the Experience Theater and a similar system called Thrillerama for the Walt Disney Company.

The seeds for virtual reality were planted in several computing fields during the 1950s and 60s, especially in 3-D interactive computer graphics and vehicle/flight simulation. Beginning in the late 1940s, Project Whirlwind, funded by the U.S. Navy, and its successor project, the SAGE (Semi-Automated Ground Environment) early-warning radar system, funded by the U.S. Air Force, first utilized cathode-ray tube (CRT) displays and input devices such as light pens (originally called light guns). By the time the SAGE system became operational in 1957, air force operators were routinely using these devices to display aircraft positions and manipulate related data.

During the 1950s, the popular cultural image of the computer was that of a calculating machine, an automated electronic brain capable of manipulating data at previously unimaginable speeds. The advent of more affordable second-generation (transistor) and third-generation (integrated circuit) computers emancipated the machines from this narrow view, and in doing so it shifted attention to ways in which computing could augment human potential rather than simply substituting for it in specialized domains conducive to number crunching. In 1960 Joseph Licklider, a professor at the Massachusetts Institute of Technology (MIT) specializing in psychoacoustics, posited a man-computer symbiosis and applied psychological principles to human-computer interactions and interfaces. He argued that a partnership between computers and the human brain would surpass the capabilities of either alone. As founding director of the new Information Processing Techniques Office (IPTO) of the Defense Advanced Research Projects Agency (DARPA), Licklider was able to fund and encourage projects that aligned with his vision of human-computer interaction while also serving priorities for military systems, such as data visualization and command-and-control systems.

Another pioneer was electrical engineer and computer scientist Ivan Sutherland, who began his work in computer graphics at MIT’s Lincoln Laboratory (where Whirlwind and SAGE had been developed). In 1963 Sutherland completed Sketchpad, a system for drawing interactively on a CRT display with a light pen and control board. Sutherland paid careful attention to the structure of data representation, which made his system useful for the interactive manipulation of images. In 1964 he was put in charge of IPTO, and from 1968 to 1976 he led the computer graphics program at the University of Utah, one of DARPA’s premier research centres. In 1965 Sutherland outlined the characteristics of what he called the “ultimate display” and speculated on how computer imagery could construct plausible and richly articulated virtual worlds. His notion of such a world began with visual representation and sensory input, but it did not end there; he also called for multiple modes of sensory input. DARPA sponsored work during the 1960s on output and input devices aligned with this vision, such as the Sketchpad III system by Timothy Johnson, which presented 3-D views of objects; Larry Roberts’s Lincoln Wand, a system for drawing in three dimensions; and Douglas Engelbart’s invention of a new input device, the computer mouse.

Within a few years, Sutherland contributed the technological artifact most often identified with virtual reality, the head-mounted 3-D computer display. In 1967 Bell Helicopter (now part of Textron Inc.) carried out tests in which a helicopter pilot wore a head-mounted display (HMD) that showed video from a servo-controlled infrared camera mounted beneath the helicopter. The camera moved with the pilot’s head, both augmenting his night vision and providing a level of immersion sufficient for the pilot to equate his field of vision with the images from the camera. This kind of system would later be called augmented reality because it enhanced a human capacity (vision) in the real world. When Sutherland left DARPA for Harvard University in 1966, he began work on a tethered display for computer images. This was an apparatus shaped to fit over the head, with goggles that displayed computer-generated graphical output. Because the display was too heavy to be borne comfortably, it was held in place by a suspension system. Two small CRT displays were mounted in the device, near the wearer’s ears, and mirrors reflected the images to his eyes, creating a stereo 3-D visual environment that could be viewed comfortably at a short distance. The HMD also tracked where the wearer was looking so that correct images would be generated for his field of vision. The viewer’s immersion in the displayed virtual space was intensified by the visual isolation of the HMD, yet other senses were not isolated to the same degree and the wearer could continue to walk around.

An important area of application for VR systems has always been training for real-life activities. The appeal of simulations is that they can provide training equal or nearly equal to practice with real systems, but at reduced cost and with greater safety. This is particularly the case for military training, and the first significant application of commercial simulators was pilot training during World War II. Flight simulators rely on visual and motion feedback to augment the sensation of flying while seated in a closed mechanical system on the ground. The Link Company, founded by former piano maker Edwin Link, began to construct the first prototype Link Trainers during the late 1920s, eventually settling on the “blue box” design acquired by the Army Air Corps in 1934. The first systems used motion feedback to increase familiarity with flight controls. Pilots trained by sitting in a simulated cockpit, which could be moved hydraulically in response to their actions. Later versions added a cyclorama scene painted on a wall outside the simulator to provide limited visual feedback. Not until the Celestial Navigation Trainer, commissioned by the British government in World War II, were projected film strips used in Link Trainers; still, these systems could only project what had been filmed along a correct flight or landing path, not generate new imagery based on a trainee’s actions. By the 1960s, flight trainers were using film and closed-circuit television to enhance the visual experience of flying. The images could be distorted to generate flight paths that diverted slightly from what had been filmed; sometimes multiple cameras were used to provide different perspectives, or movable cameras were mounted over scale models to depict airports for simulated landings.

Inspired by the controls in the Link flight trainer, Sutherland suggested that such displays include multiple sensory outputs, force-feedback joysticks, muscle sensors, and eye trackers; a user would be fully immersed in the displayed environment and fly through concepts which never before had any visual representation. In 1968 he moved to the University of Utah, where he and his colleague David Evans founded Evans & Sutherland Computer Corporation. The new company initially focused on the development of graphics applications, such as scene generators for flight simulator systems. These systems could render scenes at roughly 20 frames per second in the early 1970s, about the minimum frame rate for effective flight training. General Electric Company constructed the first flight simulators with built-in, real-time computer image generation, first for the Apollo program in the 1960s, then for the U.S. Navy in 1972. By the mid-1970s, these systems were capable of generating simple 3-D models with a few hundred polygon faces; they utilized raster graphics (collections of dots) and could model solid objects with textures to enhance the sense of realism (see computer graphics). By the late 1970s, military flight simulators were also incorporating head-mounted displays, such as McDonnell Douglas Corporation’s VITAL helmet, primarily because they required much less space than a projected display. A sophisticated head tracker in the HMD followed a pilot’s eye movements to match computer-generated images (CGI) with his view and handling of the flight controls.

Advances in flight simulators, human-computer interfaces, and augmented reality systems pointed to the possibility of immersive, real-time control systems, not only for research or training but also for improved performance. Since the 1960s, electrical engineer Thomas Furness had been working on visual displays and instrumentation in cockpits for the U.S. Air Force. By the late 1970s, he had begun development of virtual interfaces for flight control, and in 1982 he demonstrated the Visually Coupled Airborne Systems Simulator, better known as the “Darth Vader helmet” after the armoured archvillain of the popular movie Star Wars. From 1986 to 1989, Furness directed the air force’s Super Cockpit program. The essential idea of this project was that the capacity of human pilots to handle spatial information depended on these data being portrayed in a way that takes advantage of the human’s natural perceptual mechanisms. Applying the HMD to this goal, Furness designed a system that projected information such as computer-generated 3-D maps, forward-looking infrared and radar imagery, and avionics data into an immersive, 3-D virtual space that the pilot could view and hear in real time. The helmet’s tracking system, voice-actuated controls, and sensors enabled the pilot to control the aircraft with gestures, utterances, and eye movements, translating immersion in a data-filled virtual space into control modalities. The more natural perceptual interface also reduced the complexity and number of controls in the cockpit. The Super Cockpit thus realized Licklider’s vision of man-machine symbiosis by creating a virtual environment in which pilots flew through data. Beginning in 1987, British Aerospace (now part of BAE Systems) also used the HMD as the basis for a similar training simulator, known as the Virtual Cockpit, that incorporated head, hand, and eye tracking, as well as speech recognition.

Sutherland and Furness brought the notion of simulator technology from real-world imagery to virtual worlds that represented abstract models and data. In these systems, visual verisimilitude was less important than immersion and feedback that engaged all the senses in a meaningful way. This approach had important implications for medical and scientific research. Project GROPE, started in 1967 at the University of North Carolina by Frederick Brooks, was particularly noteworthy for the advancements it made possible in the study of molecular biology. Brooks sought to enhance perception and comprehension of the interaction of a drug molecule with its receptor site on a protein by creating a window into the virtual world of molecular docking forces. He combined wire-frame imagery to represent molecules and physical forces with haptic (tactile) feedback mediated through special hand-grip devices to arrange the virtual molecules into a minimum binding energy configuration. Scientists using this system felt their way around the represented forces like flight trainees learning the instruments in a Link cockpit, grasping the physical situations depicted in the virtual world and hypothesizing new drugs based on their manipulations. During the 1990s, Brooks’s laboratory extended the use of virtual reality to radiology and ultrasound imaging.
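The haptic principle behind GROPE (render the force on the hand grip as the downhill pull of a binding-energy landscape) can be illustrated with a toy potential. The sketch below uses the standard Lennard-Jones pair potential as a stand-in for real docking forces; this is an assumption for illustration, since GROPE's actual force model was far richer. The force is the negative gradient of the potential, so it pushes the user toward the minimum-energy separation.

```python
def lj_potential(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair energy at separation r: strongly repulsive at
    short range, weakly attractive at long range, with a minimum at
    r = 2**(1/6) * sigma."""
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

def haptic_force(r, h=1e-6):
    """Force to feed back through the hand grip: the negative gradient
    of the potential, computed here by a central finite difference.
    Positive = pushes the molecules apart, negative = pulls together."""
    return -(lj_potential(r + h) - lj_potential(r - h)) / (2 * h)

# Too close: the grip pushes back. Too far: it pulls inward.
# At the minimum-energy separation the felt force vanishes.
print(haptic_force(1.0) > 0, haptic_force(1.5) < 0)
```

A user "docking" a molecule by feel is effectively doing gradient descent by hand: move in the direction the grip pulls until the force dies away at the energy minimum.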

Virtual reality was extended to surgery through the technology of telepresence, the use of robotic devices controlled remotely through mediated sensory feedback to perform a task. The foundation for virtual surgery was the expansion during the 1970s and ’80s of microsurgery and other less invasive forms of surgery. By the late 1980s, microcameras attached to endoscopic devices relayed images that could be shared among a group of surgeons looking at one or more monitors, often in diverse locations. In the early 1990s, a DARPA initiative funded research to develop telepresence workstations for surgical procedures. This was Sutherland’s window into a virtual world, with the added dimension of a level of sensory feedback that could match a surgeon’s fine motor control and hand-eye coordination. The first telesurgery equipment was developed at SRI International in 1993; the first robotic surgery was performed in 1998 at the Broussais Hospital in Paris.

As virtual worlds became more detailed and immersive, people began to spend time in these spaces for entertainment, aesthetic inspiration, and socializing. Research that conceived of virtual places as fantasy spaces, focusing on the activity of the subject rather than replication of some real environment, was particularly conducive to entertainment. Beginning in 1969, Myron Krueger of the University of Wisconsin created a series of projects on the nature of human creativity in virtual environments, which he later called “artificial reality.” Much of Krueger’s work, especially his VIDEOPLACE system, processed interactions between a participant’s digitized image and computer-generated graphical objects. VIDEOPLACE could analyze and process the user’s actions in the real world and translate them into interactions with the system’s virtual objects in various preprogrammed ways. Different modes of interaction with names like “finger painting” and “digital drawing” suggest the aesthetic dimension of this system. VIDEOPLACE differed in several aspects from training and research simulations. In particular, the system reversed the emphasis from the user perceiving the computer’s generated world to the computer perceiving the user’s actions and converting these actions into compositions of objects and space within the virtual world. With the emphasis shifted to responsiveness and interaction, Krueger found that fidelity of representation became less important than the interactions between participants and the rapidity of response to images or other forms of sensory input.

The ability to manipulate virtual objects and not just see them is central to the presentation of compelling virtual worlds; hence the iconic significance of the data glove in the emergence of VR in commerce and popular culture. Data gloves relay a user’s hand and finger movements to a VR system, which then translates the wearer’s gestures into manipulations of virtual objects. The first data glove, developed in 1977 at the University of Illinois for a project funded by the National Endowment for the Arts, was called the Sayre Glove after one of the team members. In 1982 Thomas Zimmerman invented the first optical glove, and in 1983 Gary Grimes at Bell Laboratories constructed the Digital Data Entry Glove, the first glove with sufficient flexibility and tactile and inertial sensors to monitor hand position for a variety of applications, such as providing an alternative to keyboard input for data entry.
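A data glove's basic processing chain (raw flex-sensor readings, calibrated to joint angles, then classified as a gesture) can be sketched simply. Everything here is hypothetical: the two-pose linear calibration and the "fist"/"open" classifier are illustrative simplifications, not the design of the Sayre Glove or the DataGlove.

```python
def calibrate(raw_open, raw_closed):
    """Build a mapping from a raw flex-sensor reading to a bend angle
    in degrees (0 = finger straight, 90 = fully curled), by linear
    interpolation between two calibration poses: hand flat and fist."""
    span = raw_closed - raw_open
    def to_angle(raw):
        t = (raw - raw_open) / span
        return max(0.0, min(90.0, 90.0 * t))  # clamp to valid range
    return to_angle

def gesture(angles, threshold=60.0):
    """Classify a hand pose: a 'fist' if every finger is bent past the
    threshold, otherwise 'open'."""
    return "fist" if all(a > threshold for a in angles) else "open"

# Hypothetical raw sensor values, one per finger, for a clenched hand.
to_angle = calibrate(raw_open=120, raw_closed=840)
readings = [800, 790, 830, 810, 795]
print(gesture([to_angle(r) for r in readings]))  # -> fist
```

A real glove adds per-finger calibration, multiple sensors per finger, and a hand tracker for position and orientation, but the reading-to-angle-to-gesture pipeline is the common core.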

Zimmerman’s glove would have the greatest impact. He had been thinking for years about constructing an interface device for musicians based on the common practice of playing “air guitar”; in particular, a glove capable of tracking hand and finger movements could be used to control instruments such as electronic synthesizers. He patented an optical flex-sensing device (which used light-conducting fibres) in 1982, one year after Grimes patented his glove-based computer interface device. By then, Zimmerman was working at the Atari Research Center in Sunnyvale, California, along with Scott Fisher, Brenda Laurel, and other VR researchers who would be active during the 1980s and beyond. Jaron Lanier, another researcher at Atari, shared Zimmerman’s interest in electronic music. Beginning in 1983, they worked together on improving the design of the data glove, and in 1985 they left Atari to start up VPL Research; its first commercial product was the VPL DataGlove.

By 1985, Fisher had also left Atari to join NASA’s Ames Research Center at Moffett Field, California, as founding director of the Virtual Environment Workstation (VIEW) project. The VIEW project put together a package of objectives that summarized previous work on artificial environments, ranging from creation of multisensory and immersive virtual environment workstations to telepresence and teleoperation applications. Influenced by a range of prior projects that included Sensorama, flight simulators, and arcade rides, and surprised by the expense of the air force’s Darth Vader helmets, Fisher’s group focused on building low-cost, personal simulation environments. While the objective of NASA was to develop telerobotics for automated space stations in future planetary exploration, the group also considered the workstation’s use for entertainment, scientific, and educational purposes. The VIEW workstation, called the Virtual Visual Environment Display when completed in 1985, established a standard suite of VR technology that included a stereoscopic head-coupled display, head tracker, speech recognition, computer-generated imagery, data glove, and 3-D audio technology.

The VPL DataGlove was brought to market in 1987, and in October of that year it appeared on the cover of Scientific American (see photograph). VPL also spawned a full-body, motion-tracking system called the DataSuit, a head-mounted display called the EyePhone, and a shared VR system for two people called RB2 (Reality Built for Two). VPL declared June 7, 1989, Virtual Reality Day. On that day, both VPL and Autodesk publicly demonstrated the first commercial VR systems. The Autodesk VR CAD (computer-aided design) system was based on VPL's RB2 technology but was scaled down for operation on personal computers. The marketing splash introduced Lanier's new term "virtual reality" as a realization of cyberspace, a concept introduced in science fiction writer William Gibson's Neuromancer in 1984. Lanier, the dreadlocked chief executive officer of VPL, became the public celebrity of the new VR industry, while announcements by Autodesk and VPL let loose a torrent of enthusiasm, speculation, and marketing hype. Soon it seemed that VR was everywhere, from the Mattel/Nintendo PowerGlove (1989) to the HMD in the movie The Lawnmower Man (1992), the Nintendo Virtual Boy game system (1995), and the television series VR5 (1995).

Numerous VR companies were founded in the early 1990s, most of them in Silicon Valley, but by mid-decade most of the energy unleashed by the VPL and Autodesk marketing campaigns had dissipated. The VR configuration that took shape over a span of projects leading from Sutherland to Lanier (HMD, data gloves, multimodal sensory input, and so forth) failed to achieve broad appeal as quickly as the enthusiasts had predicted. Instead, the most visible and successfully marketed products were location-based entertainment systems rather than personal VR systems. These VR arcades and simulators, designed by teams from the game, movie, simulation, and theme park industries, combined the attributes of video games, amusement park rides, and highly immersive storytelling. Perhaps the most important of the early projects was Disneyland's Star Tours, an immersive flight simulator ride based on the Star Wars movie series and designed in collaboration with producer George Lucas's Industrial Light & Magic. Disney had long built themed rides utilizing advanced technology, such as animatronic characters, notably in Pirates of the Caribbean, an attraction originally installed at Disneyland in 1967. Star Tours utilized simulated motion and special-effects technology, mixing techniques learned from Hollywood films and military flight simulators with strong story lines and architectural elements that shaped the viewers' experience from the moment they entered the waiting line for the attraction. After the opening of Star Tours in 1987, Walt Disney Imagineering embarked on a series of projects to apply interactive technology and immersive environments to ride systems, including 3-D motion-picture photography used in Honey, I Shrunk the Audience (1995), the DisneyQuest indoor interactive theme park (1998), and the multiplayer-gaming virtual world, Toontown Online (2001).

In 1990, Virtual World Entertainment opened the first BattleTech emporium in Chicago. Modeled loosely on the U.S. military's SIMNET system of networked training simulators, BattleTech centres put players in individual pods, essentially cockpits that served as immersive, interactive consoles for both narrative and competitive game experiences. All the vehicles represented in the game were controlled by other players, each in a separate pod and linked to a high-speed network set up for a simultaneous multiplayer experience. The players' immersion in the virtual world of the competition resulted from a combination of elements, including a carefully constructed story line, the physical architecture of the arcade space and pod, and the networked virtual environment. During the 1990s, BattleTech centres were constructed in other cities around the world, and the BattleTech franchise also expanded to home electronic games, books, toys, and television.

While the Disney and Virtual World Entertainment projects were the best-known instances of location-based VR entertainments, other important projects included Iwerks Entertainment's Turbo Tour and Turboride 3-D motion simulator theatres, first installed in San Francisco in 1992; motion-picture producer Steven Spielberg's GameWorks arcades, the first of which opened in 1997 as a joint project of Universal Studios, Sega Corporation, and DreamWorks SKG; many individual VR arcade rides, beginning with Sega's R360 gyroscope flight simulator, released in 1991; and, finally, Visions of Reality's VR arcades, the spectacular failure of which contributed to the bursting of the investment bubble for VR ventures in the mid-1990s.

Visit link:

Virtual reality | computer science | Britannica.com

Negentropy – Wikipedia

Negentropy has different meanings in information theory and theoretical biology. In a biological context, the negentropy (also negative entropy, syntropy, extropy, ectropy or entaxy[1]) of a living system is the entropy that it exports to keep its own entropy low; it lies at the intersection of entropy and life. In other words, negentropy is the reverse of entropy: it means things becoming more orderly, where 'order' means organisation, structure and function, the opposite of randomness or chaos. The concept and phrase "negative entropy" were introduced by Erwin Schrödinger in his 1944 popular-science book What is Life?[2] Later, Léon Brillouin shortened the phrase to negentropy,[3][4] to express it in a more "positive" way: a living system imports negentropy and stores it.[5] In 1974, Albert Szent-Györgyi proposed replacing the term negentropy with syntropy. That term may have originated in the 1940s with the Italian mathematician Luigi Fantappiè, who tried to construct a unified theory of biology and physics. Buckminster Fuller tried to popularize this usage, but negentropy remains common.

In a note to What is Life? Schrödinger explained his use of this phrase.

In 2009, Mahulikar & Herwig redefined the negentropy of a dynamically ordered sub-system as the specific entropy deficit of the ordered sub-system relative to its surrounding chaos.[6] Thus, negentropy has SI units of J kg⁻¹ K⁻¹ when defined based on specific entropy per unit mass, and K⁻¹ when defined based on specific entropy per unit energy. This definition enabled: (i) scale-invariant thermodynamic representation of the existence of dynamic order, (ii) formulation of physical principles exclusively for the existence and evolution of dynamic order, and (iii) mathematical interpretation of Schrödinger's negentropy debt.

In information theory and statistics, negentropy is used as a measure of distance to normality.[7][8][9] Out of all distributions with a given mean and variance, the normal or Gaussian distribution is the one with the highest entropy. Negentropy measures the difference in entropy between a given distribution and the Gaussian distribution with the same mean and variance. Thus, negentropy is always nonnegative, is invariant under any linear invertible change of coordinates, and vanishes if and only if the signal is Gaussian.

Negentropy is defined as

J(p_x) = S(φ_x) − S(p_x),

where S(φ_x) is the differential entropy of the Gaussian density φ_x with the same mean and variance as p_x, and S(p_x) is the differential entropy of p_x.
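As a check on this definition, the short Python sketch below (an illustration, not part of the article) evaluates the negentropy of a uniform distribution in closed form, using the fact that a Gaussian with variance σ² has differential entropy ½ ln(2πeσ²) and a uniform distribution on [a, b] has variance (b − a)²/12 and entropy ln(b − a):

```python
import math

def gaussian_entropy(variance):
    # Differential entropy of a Gaussian: 0.5 * ln(2 * pi * e * sigma^2)
    return 0.5 * math.log(2 * math.pi * math.e * variance)

def negentropy_uniform(a, b):
    # J(p) = S(Gaussian with same mean/variance) - S(p)
    variance = (b - a) ** 2 / 12
    return gaussian_entropy(variance) - math.log(b - a)

# Negentropy is nonnegative and unchanged by rescaling, as the text states:
j1 = negentropy_uniform(0.0, 1.0)   # ~0.1766 nats
j2 = negentropy_uniform(-5.0, 5.0)  # same value despite the wider support
```

That j1 equals j2 illustrates the invariance under invertible linear changes of coordinates mentioned above: stretching the support rescales both entropies by the same additive constant, which cancels in the difference.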

Negentropy is used in statistics and signal processing. It is related to network entropy, which is used in independent component analysis.[10][11]

There is a physical quantity closely linked to free energy (free enthalpy) that has units of entropy and is isomorphic to the negentropy known in statistics and information theory. In 1873, Willard Gibbs created a diagram illustrating the concept of free energy corresponding to free enthalpy. On the diagram one can see the quantity called the capacity for entropy: the amount by which the entropy may be increased without changing the internal energy or increasing the volume.[12] In other words, it is the difference between the maximum possible entropy, under the assumed conditions, and the actual entropy. It corresponds exactly to the definition of negentropy adopted in statistics and information theory. A similar physical quantity was introduced in 1869 by Massieu for the isothermal process[13][14][15] (the two quantities differ only in sign) and later by Planck for the isothermal-isobaric process.[16] More recently, the Massieu-Planck thermodynamic potential, also known as free entropy, has been shown to play an important role in the so-called entropic formulation of statistical mechanics,[17] applied, among other fields, in molecular biology[18] and in non-equilibrium thermodynamic processes.[19]

In 1953, Léon Brillouin derived a general equation[20] stating that changing the value of one bit of information requires at least kT ln(2) of energy. This is the same energy as the work Leó Szilárd's engine produces in the idealized case. In his book,[21] he explored this problem further, concluding that any cause of this bit-value change (measurement, decision about a yes/no question, erasure, display, etc.) will require the same amount of energy.
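For scale, this minimum can be evaluated directly; the snippet below (illustrative, not from the article) computes kT ln(2) at room temperature using the exact SI value of the Boltzmann constant:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value since 2019)

def min_bit_energy(temperature_kelvin):
    # Brillouin's bound: changing one bit costs at least k * T * ln(2) joules
    return K_B * temperature_kelvin * math.log(2)

e_room = min_bit_energy(300.0)  # roughly 2.87e-21 J at 300 K
```

At 300 K the bound is about 2.9 × 10⁻²¹ J per bit, which is why it is negligible for everyday electronics but a hard floor for thermodynamically reversible computing arguments.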

See original here:

Negentropy – Wikipedia

Superintelligence – Wikipedia

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. “Superintelligence” may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”. The program Fritz falls short of superintelligence even though it is much better than humans at chess because Fritz cannot outperform humans in other tasks. Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities. This may give them the opportunity, either as a single being or as a new species, to become much more powerful than humans and to displace them.

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to superhuman intelligence. Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials. He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI. Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.

If research into strong AI produced sufficiently intelligent software, it would be able to reprogram and improve itself, a feature called "recursive self-improvement". It would then be even better at improving itself and could continue doing so in a rapidly increasing cycle, leading to a superintelligence. This scenario is known as an intelligence explosion. Such an intelligence would not have the limitations of human intellect and may be able to invent or discover almost anything.

Computer components already greatly surpass human performance in speed. Bostrom writes, “Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz).” Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, “whereas existing electronic processing cores can communicate optically at the speed of light”. Thus, the simplest example of a superintelligence may be an emulated human mind that’s run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.
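Bostrom's "seven orders of magnitude" figure follows directly from the two clock rates he quotes; a one-line check (purely illustrative):

```python
import math

NEURON_HZ = 200.0  # peak firing rate quoted for biological neurons
CPU_HZ = 2.0e9     # clock rate quoted for a modern microprocessor

# log10 of the speed ratio gives the number of orders of magnitude
orders_of_magnitude = math.log10(CPU_HZ / NEURON_HZ)  # 7.0
```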

Another advantage of computers is modularity, that is, their size or computational capacity can be increased. A non-human (or modified human) brain could become much larger than a present-day human brain, like many supercomputers. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making. Humans appear to differ from chimpanzees in the ways we think more than we differ in brain size or speed.[9] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it likelier that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.

All of the above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[12] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this process instead is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, NSI-189, MAOIs, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1,000). If this process is iterated over many generations, the gains could be an order of magnitude greater. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process very rapidly. A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.
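Figures of this kind can be reproduced with a simple order-statistics model: the expected gain from picking the best of n embryos is roughly the additive genetic standard deviation times the expected maximum of n standard normal draws. The Monte Carlo sketch below is an illustration only; the SD of 7.5 IQ points is our assumption, chosen so the results land near the quoted numbers, not a figure from the text.

```python
import random

def expected_max_std_normal(n, trials=5000, seed=42):
    # Monte Carlo estimate of E[max of n independent N(0, 1) draws]
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        acc += max(rng.gauss(0.0, 1.0) for _ in range(n))
    return acc / trials

GENETIC_SD = 7.5  # assumed additive genetic SD in IQ points (illustrative)

gain_best_of_2 = GENETIC_SD * expected_max_std_normal(2)        # ~4 IQ points
gain_best_of_1000 = GENETIC_SD * expected_max_std_normal(1000)  # ~24 IQ points
```

Since E[max of 2 draws] is about 0.56 and E[max of 1000 draws] is about 3.24 standard deviations, the model recovers roughly a 4-point gain from 1-in-2 selection and roughly a 24-point gain from 1-in-1000 selection under the assumed SD.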

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism.

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or braincomputer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able “to simulate learning and every other aspect of human intelligence” by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[17]

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines “that can carry out most human professions at least as well as a typical human” (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said ‘never’ for 50% confidence, and the 16.5% who said ‘never’ for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals:

Responding to Bostrom, Santos-Lang raised concern that developers may attempt to start with a single kind of superintelligence.

Learning computers that rapidly become superintelligent may take unforeseen actions or robots might out-compete humanity (one potential technological singularity scenario).[21] Researchers have argued that, by way of an “intelligence explosion” sometime over the next century, a self-improving AI could become so powerful as to be unstoppable by humans.[22]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[23]

Eliezer Yudkowsky explains: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.”[24]

This presents the AI control problem: how to build a superintelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. The danger of not designing control right "the first time" is that a misprogrammed superintelligence might rationally decide to "take over the world" and refuse to permit its programmers to modify it once it has been activated. Potential design strategies include "capability control" (preventing an AI from being able to pursue harmful plans) and "motivational control" (building an AI that wants to be helpful).

Bill Hibbard advocates for public education about superintelligence and public control over the development of superintelligence.

The rest is here:

Superintelligence – Wikipedia

NATO – Homepage

NATO constantly reviews and transforms its policies, capabilities and structures to ensure that it can continue to address current and future challenges to the freedom and security of its members. Presently, Allied forces are required to carry out a wide range of missions across several continents; the Alliance needs to ensure that its armed forces remain modern, deployable, and capable of sustained operations.

Read more here:

NATO – Homepage

Member states of NATO – Wikipedia

NATO (the North Atlantic Treaty Organization) is an international alliance that consists of 29 member states from North America and Europe. It was established at the signing of the North Atlantic Treaty on 4 April 1949. Article Five of the treaty states that if an armed attack occurs against one of the member states, it should be considered an attack against all members, and other members shall assist the attacked member, with armed forces if necessary.[1]

Of the 29 member countries, two are located in North America (Canada and the United States) and 27 are European countries, while Turkey is in Eurasia. All members have militaries, except for Iceland, which does not have a typical army (but does have a coast guard and a small unit of civilian specialists for NATO operations). Three of NATO's members are nuclear weapons states: France, the United Kingdom, and the United States. NATO has 12 original founding member nation states, and from 18 February 1952 to 6 May 1955, it added three more member nations, and a fourth on 30 May 1982. After the end of the Cold War, NATO added 13 more member nations (10 former Warsaw Pact members and three former Yugoslav republics) from 12 March 1999 to 5 June 2017.

NATO has added new members seven times since its founding in 1949, and since 2017 NATO has had 29 members. Twelve countries were part of the founding of NATO: Belgium, Canada, Denmark, France, Iceland, Italy, Luxembourg, the Netherlands, Norway, Portugal, the United Kingdom, and the United States. In 1952, Greece and Turkey became members of the Alliance, joined later by West Germany (in 1955) and Spain (in 1982). In 1990, with the reunification of Germany, NATO grew to include the former country of East Germany. Between 1994 and 1997, wider forums for regional cooperation between NATO and its neighbors were set up, including the Partnership for Peace, the Mediterranean Dialogue initiative and the Euro-Atlantic Partnership Council. In 1997, three former Warsaw Pact countries, Hungary, the Czech Republic, and Poland, were invited to join NATO. After this fourth enlargement in 1999, the Vilnius Group of the Baltic states and seven other Eastern European countries formed in May 2000 to cooperate and lobby for further NATO membership. Seven of these countries joined in the fifth enlargement in 2004. The Adriatic states of Albania and Croatia joined in the sixth enlargement in 2009, and Montenegro in 2017.

Due to the 2016–17 Turkish purges and Recep Tayyip Erdoğan's authoritarian politics, some have speculated that Turkey could be expelled from NATO.[2][3][4][5][6][7][8][9] United States President Donald Trump also expressed interest in withdrawing from the organization during the 2016 election campaign, and only recently stated that the United States would protect allies in the event that Article V is invoked.[10][11][12]


The United States spends more on the organization than all other members combined.[14] Criticism of the organization by then newly elected US President Donald Trump caused various reactions from American and European political figures, ranging from ridicule to panic.[15][16][17] Pew Research Center’s 2016 survey among its member states showed that while most countries viewed NATO positively, most NATO members preferred keeping their military spending the same. The response to whether their country should militarily aid another NATO country if it were to get into a serious military conflict with Russia was also mixed. Only in the US and Canada did more than 50% of the people answer that they should.[18][19]

Population data from the CIA World Factbook; GDP data from the IMF;[21] expenditure data (except Iceland) from the SIPRI Military Expenditure Database;[22] Icelandic data (2013) from Statistics Iceland;[23] military personnel data from NATO.[24] Iceland has no armed forces.

Read the rest here:

Member states of NATO – Wikipedia

What is NATO?

NATO is committed to the principle that an attack against one or several of its members is considered an attack against all. This is the principle of collective defence, which is enshrined in Article 5 of the Washington Treaty. So far, Article 5 has been invoked once, in response to the 9/11 terrorist attacks in the United States in 2001.

Read more:

What is NATO?

NATO (@NATO) | Twitter

We have successfully completed #NATO’s move to our new HQ. It has been a complex endeavour and a collective effort, during which NATO remained fully operational. Looking forward to hosting our first fully fledged #NATOsummit in our new home. Thanks to all who made it happen! pic.twitter.com/fHnmYrvKX4

View original post here:

NATO (@NATO) | Twitter

Automation – definition of automation by The Free Dictionary

automation (ˌɔ təˈmeɪ ʃən)

n.

1. the technique, method, or system of operating or controlling a process by highly automatic means, as by electronic devices, reducing human intervention to a minimum.

2. the act or process of automating or making automatic.

3. the state of being automated.

automobilism: the use or care of automobiles. automobilist, n. automobility, n.

bionics: 1. the science or study of how man and animals perform tasks and solve certain types of problems involving use of the body. 2. the application of this study to the design of computer-driven and other automated equipment. 3. the application of this study to the design of artificial limbs, organs, and other prosthetic devices. bionic, adj.

computerese: the jargon or language typical of those involved with computers.

cybernetics: the comparative study of complex electronic devices and the nervous system in an attempt to understand better the nature of the human brain. cyberneticist, n. cybernetic, adj.

mechanization: the application of automated machinery to tasks traditionally done by hand, as in manufacturing.

robotism: the use of automated machinery or manlike mechanical devices to perform tasks. robotistic, adj.

servomechanism: a closed-circuit feedback system used in the automatic control of machines, involving an error-sensor using a small amount of energy, an amplifier, and a servomotor dispensing large amounts of power. Also called servo. servomechanical, adj.

View post:

Automation – definition of automation by The Free Dictionary

Ecosystem – Wikipedia


An ecosystem is a community made up of living organisms and nonliving components such as air, water, and mineral soil.[3] Ecosystems can be studied in two different ways. They can be thought of as interdependent collections of plants and animals, or as structured systems and communities governed by general rules.[4] The living (biotic) and non-living (abiotic) components interact through nutrient cycles and energy flows.[5] Ecosystems include interactions among organisms, and between organisms and their environment.[6] Ecosystems can be of any size but each ecosystem has a specific, limited space.[7] Some scientists view the entire planet as one ecosystem.[8]

Energy, water, nitrogen and soil minerals are essential abiotic components of an ecosystem. The energy used by ecosystems comes primarily from the sun, via photosynthesis. Photosynthesis uses energy from the sun and also captures carbon dioxide from the atmosphere. Animals also play an important role in the movement of matter and energy through ecosystems. They influence the amount of plant and microbial biomass that lives in the system. As organic matter dies, carbon is released back into the atmosphere. This process also facilitates nutrient cycling by converting nutrients stored in dead biomass back to a form that can be used again by plants and other microbes.[9]

Ecosystems are controlled by both external and internal factors. External factors such as climate, the parent material that forms the soil, topography and time each affect ecosystems. However, these external factors are not themselves influenced by the ecosystem.[10] Ecosystems are dynamic: they are subject to periodic disturbances and are often in the process of recovering from past disturbances and seeking balance.[11] Internal factors are different: They not only control ecosystem processes but are also controlled by them. Another way of saying this is that internal factors are subject to feedback loops.[10]

Humans operate within ecosystems and can influence both internal and external factors.[10] Global warming is an example of a cumulative effect of human activities. Ecosystems provide benefits, called “ecosystem services”, which people depend on for their livelihood. Ecosystem management is more efficient than trying to manage individual species.

There is no single definition of what constitutes an ecosystem.[4] German ecologist Ernst-Detlef Schulze and coauthors defined an ecosystem as an area which is "uniform regarding the biological turnover, and contains all the fluxes above and below the ground area under consideration." They explicitly reject Gene Likens' use of entire river catchments as "too wide a demarcation" to be a single ecosystem, given the level of heterogeneity within such an area.[12] Other authors have suggested that an ecosystem can encompass a much larger area, even the whole planet.[8] Schulze and coauthors also rejected the idea that a single rotting log could be studied as an ecosystem, because the flows between the log and its surroundings are too large relative to the proportion cycled within the log.[12] Philosopher of science Mark Sagoff considers the failure to define "the kind of object it studies" to be an obstacle to the development of theory in ecosystem ecology.[4]

Ecosystems can be studied in a variety of ways. Those include theoretical studies or more practical studies that monitor specific ecosystems over long periods of time or look at differences between ecosystems to better understand how they work. Some studies involve experimenting with direct manipulation of the ecosystem.[13] Studies can be carried out at a variety of scales, ranging from whole-ecosystem studies to studies of microcosms or mesocosms (simplified representations of ecosystems).[14] American ecologist Stephen R. Carpenter has argued that microcosm experiments can be "irrelevant and diversionary" if they are not carried out in conjunction with field studies done at the ecosystem scale. Microcosm experiments often fail to accurately predict ecosystem-level dynamics.[15]

The Hubbard Brook Ecosystem Study started in 1963 to study the White Mountains in New Hampshire. It was the first successful attempt to study an entire watershed as an ecosystem. The study used stream chemistry as a means of monitoring ecosystem properties, and developed a detailed biogeochemical model of the ecosystem.[16] Long-term research at the site led to the discovery of acid rain in North America in 1972. Researchers documented the depletion of soil cations (especially calcium) over the next several decades.[17]

Ecosystems may be terrestrial (found on land) or aquatic (found in water). Aquatic ecosystems are further split into marine ecosystems and freshwater ecosystems.

Ecosystems are controlled both by external and internal factors. External factors, also called state factors, control the overall structure of an ecosystem and the way things work within it, but are not themselves influenced by the ecosystem. The most important of these is climate.[10] Climate determines the biome in which the ecosystem is embedded. Rainfall patterns and seasonal temperatures influence photosynthesis and thereby determine the amount of water and energy available to the ecosystem.[10]

Parent material determines the nature of the soil in an ecosystem, and influences the supply of mineral nutrients. Topography also controls ecosystem processes by affecting things like microclimate, soil development and the movement of water through a system. For example, ecosystems can be quite different if situated in a small depression on the landscape, versus one present on an adjacent steep hillside.[10]

Other external factors that play an important role in ecosystem functioning include time and potential biota, the set of organisms that could potentially be present in an area. Ecosystems in similar environments located in different parts of the world can function very differently simply because they have different pools of species present.[10] The introduction of non-native species can cause substantial shifts in ecosystem function.

Unlike external factors, internal factors in ecosystems not only control ecosystem processes but are also controlled by them. Consequently, they are often subject to feedback loops.[10] While the resource inputs are generally controlled by external processes like climate and parent material, the availability of these resources within the ecosystem is controlled by internal factors like decomposition, root competition or shading.[10] Other factors like disturbance, succession or the types of species present are also internal factors.

Primary production is the production of organic matter from inorganic carbon sources. This mainly occurs through photosynthesis. The energy incorporated through this process supports life on earth, while the carbon makes up much of the organic matter in living and dead biomass, soil carbon and fossil fuels. It also drives the carbon cycle, which influences global climate via the greenhouse effect.

Through the process of photosynthesis, plants capture energy from light and use it to combine carbon dioxide and water to produce carbohydrates and oxygen. The photosynthesis carried out by all the plants in an ecosystem is called the gross primary production (GPP).[18] About 48–60% of the GPP is consumed in plant respiration.

The remainder, that portion of GPP that is not used up by respiration, is known as the net primary production (NPP).[19]
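The relationship between these quantities is simply NPP = GPP minus autotrophic (plant) respiration. A minimal sketch of that arithmetic, using hypothetical numbers and the 48–60% respiration range quoted above:

```python
def net_primary_production(gpp, respiration_fraction):
    """Net primary production: GPP minus the share lost to plant respiration."""
    return gpp * (1.0 - respiration_fraction)

# Hypothetical stand fixing 1000 g C/m^2/yr, with plants respiring
# 48-60% of GPP, per the range quoted in the text.
gpp = 1000.0
low = net_primary_production(gpp, 0.60)
high = net_primary_production(gpp, 0.48)
print(f"NPP between {low:.0f} and {high:.0f} g C/m^2/yr")
```

The GPP figure and units here are illustrative only; the point is that NPP is the residual after respiration, not a directly measured flux.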

Energy and carbon enter ecosystems through photosynthesis, are incorporated into living tissue, transferred to other organisms that feed on the living and dead plant matter, and eventually released through respiration.[19]

The carbon and energy incorporated into plant tissues (net primary production) is either consumed by animals while the plant is alive, or it remains uneaten when the plant tissue dies and becomes detritus. In terrestrial ecosystems, roughly 90% of the net primary production ends up being broken down by decomposers. The remainder is either consumed by animals while still alive and enters the plant-based trophic system, or it is consumed after it has died, and enters the detritus-based trophic system.

In aquatic systems, the proportion of plant biomass that gets consumed by herbivores is much higher.[20] In trophic systems, photosynthetic organisms are the primary producers. The organisms that consume their tissues are called primary consumers or secondary producers: herbivores. Organisms which feed on microbes (bacteria and fungi) are termed microbivores. Animals that feed on primary consumers (carnivores) are secondary consumers. Each of these constitutes a trophic level.[20]

The sequence of consumption, from plant to herbivore to carnivore, forms a food chain. Real systems are much more complex than this: organisms will generally feed on more than one form of food, and may feed at more than one trophic level. Carnivores may capture some prey which are part of a plant-based trophic system and others that are part of a detritus-based trophic system (a bird that feeds both on herbivorous grasshoppers and on earthworms, which consume detritus). Real systems, with all these complexities, form food webs rather than food chains.[20]
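The who-eats-whom structure above can be sketched as a directed graph, with an organism's trophic level computed as one more than the highest level of anything it eats. The species and feeding links below are hypothetical placeholders following the bird/grasshopper/earthworm example in the text:

```python
# A toy food web as a mapping from consumer to the things it eats.
# Basal resources (plants, detritus) eat nothing.
food_web = {
    "grass": [],
    "detritus": [],
    "grasshopper": ["grass"],              # plant-based trophic system
    "earthworm": ["detritus"],             # detritus-based trophic system
    "bird": ["grasshopper", "earthworm"],  # feeds across both systems
}

def trophic_level(organism, web):
    """Level 1 for basal resources; otherwise 1 + the max level of any food."""
    foods = web[organism]
    if not foods:
        return 1
    return 1 + max(trophic_level(f, web) for f in foods)

print(trophic_level("bird", food_web))  # 3: bird -> grasshopper -> grass
```

Because the bird draws on two chains at once, this tiny example is already a food web rather than a single food chain.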

Ecosystem ecology studies “the flow of energy and materials through organisms and the physical environment”. It seeks to understand the processes which govern the stocks of material and energy in ecosystems, and the flow of matter and energy through them. The study of ecosystems can cover 10 orders of magnitude, from the surface layers of rocks to the surface of the planet.[21]

The carbon and nutrients in dead organic matter are broken down by a group of processes known as decomposition. This releases nutrients that can then be re-used for plant and microbial production and returns carbon dioxide to the atmosphere (or water) where it can be used for photosynthesis. In the absence of decomposition, the dead organic matter would accumulate in an ecosystem, and nutrients and atmospheric carbon dioxide would be depleted.[22] Approximately 90% of terrestrial net primary production goes directly from plant to decomposer.[20]

Decomposition processes can be separated into three categories: leaching, fragmentation and chemical alteration of dead material.

As water moves through dead organic matter, it dissolves and carries with it the water-soluble components. These are then taken up by organisms in the soil, react with mineral soil, or are transported beyond the confines of the ecosystem (and are considered lost to it).[22] Newly shed leaves and newly dead animals have high concentrations of water-soluble components and include sugars, amino acids and mineral nutrients. Leaching is more important in wet environments and much less important in dry ones.[22]

Fragmentation processes break organic material into smaller pieces, exposing new surfaces for colonization by microbes. Freshly shed leaf litter may be inaccessible due to an outer layer of cuticle or bark, and cell contents are protected by a cell wall. Newly dead animals may be covered by an exoskeleton. Fragmentation processes, which break through these protective layers, accelerate the rate of microbial decomposition.[22] Animals fragment detritus as they hunt for food, as does passage through the gut. Freeze-thaw cycles and cycles of wetting and drying also fragment dead material.[22]

The chemical alteration of the dead organic matter is primarily achieved through bacterial and fungal action. Fungal hyphae produce enzymes which can break through the tough outer structures surrounding dead plant material. They also produce enzymes which break down lignin, which allows them access to both cell contents and to the nitrogen in the lignin. Fungi can transfer carbon and nitrogen through their hyphal networks and thus, unlike bacteria, are not dependent solely on locally available resources.[22]

Decomposition rates vary among ecosystems. The rate of decomposition is governed by three sets of factors: the physical environment (temperature, moisture, and soil properties), the quantity and quality of the dead material available to decomposers, and the nature of the microbial community itself.[23] Temperature controls the rate of microbial respiration; the higher the temperature, the faster microbial decomposition occurs. Temperature also affects soil moisture, which in turn affects decomposition. Freeze-thaw cycles also affect decomposition: freezing temperatures kill soil microorganisms, which allows leaching to play a more important role in moving nutrients around. This can be especially important as the soil thaws in the spring, creating a pulse of available nutrients.[23]
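The temperature dependence described above is commonly summarized with a Q10 coefficient, the factor by which a rate increases per 10 °C of warming; this formulation and the reference values below are standard illustrations, not figures from the source:

```python
def decomposition_rate(temp_c, ref_rate=1.0, ref_temp_c=15.0, q10=2.0):
    """Q10 scaling: the rate doubles (for q10=2) with every 10 degC of warming."""
    return ref_rate * q10 ** ((temp_c - ref_temp_c) / 10.0)

# Relative decomposition rates at a few temperatures
# (normalized so the rate is 1.0 at the 15 degC reference).
for t in (5, 15, 25):
    print(t, round(decomposition_rate(t), 2))
```

A Q10 of about 2 is a frequent assumption for microbial respiration, but real values vary with substrate and moisture, which is why the text lists three interacting sets of controls.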

Decomposition rates are low under very wet or very dry conditions, and highest in warm, moist conditions with adequate levels of oxygen. Wet soils tend to become deficient in oxygen (this is especially true in wetlands), which slows microbial growth. In dry soils, decomposition slows as well, though bacteria continue to grow (albeit at a slower rate) even after soils become too dry to support plant growth.

Ecosystems continually exchange energy and carbon with the wider environment. Mineral nutrients, on the other hand, are mostly cycled back and forth between plants, animals, microbes and the soil. Most nitrogen enters ecosystems through biological nitrogen fixation, is deposited through precipitation, dust or gases, or is applied as fertilizer.[24]

Since most terrestrial ecosystems are nitrogen-limited, nitrogen cycling is an important control on ecosystem production.[24]

Until modern times, nitrogen fixation was the major source of nitrogen for ecosystems. Nitrogen-fixing bacteria either live symbiotically with plants or live freely in the soil. The energetic cost is high for plants which support nitrogen-fixing symbionts: as much as 25% of gross primary production when measured in controlled conditions. Many members of the legume plant family support nitrogen-fixing symbionts. Some cyanobacteria are also capable of nitrogen fixation. These are phototrophs, which carry out photosynthesis. Like other nitrogen-fixing bacteria, they can either be free-living or have symbiotic relationships with plants.[24] Other sources of nitrogen include acid deposition produced through the combustion of fossil fuels, ammonia gas which evaporates from agricultural fields which have had fertilizers applied to them, and dust.[24] Anthropogenic nitrogen inputs account for about 80% of all nitrogen fluxes in ecosystems.[24]

When plant tissues are shed or are eaten, the nitrogen in those tissues becomes available to animals and microbes. Microbial decomposition releases nitrogen compounds from dead organic matter in the soil, where plants, fungi, and bacteria compete for it. Some soil bacteria use organic nitrogen-containing compounds as a source of carbon, and release ammonium ions into the soil. This process is known as nitrogen mineralization. Others convert ammonium to nitrite and nitrate ions, a process known as nitrification. Nitric oxide and nitrous oxide are also produced during nitrification.[24] Under nitrogen-rich and oxygen-poor conditions, nitrates and nitrites are converted to nitrogen gas, a process known as denitrification.[24]
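The chain of transformations above (mineralization, then nitrification, then denitrification) can be sketched as a toy pool-and-flux model; the pool sizes and rate constants below are entirely hypothetical, chosen only to show the direction of each flux:

```python
def step(pools, k_min=0.10, k_nit=0.20, k_denit=0.05):
    """One time step of a toy nitrogen model.

    Moves fixed fractions per step: organic N -> ammonium (mineralization),
    ammonium -> nitrate (nitrification), nitrate -> N2 gas leaving the
    soil pools (denitrification). Total mass across pools is conserved.
    """
    mineralized = pools["organic_n"] * k_min
    nitrified = pools["ammonium"] * k_nit
    denitrified = pools["nitrate"] * k_denit
    return {
        "organic_n": pools["organic_n"] - mineralized,
        "ammonium": pools["ammonium"] + mineralized - nitrified,
        "nitrate": pools["nitrate"] + nitrified - denitrified,
        "n2_lost": pools["n2_lost"] + denitrified,
    }

pools = {"organic_n": 100.0, "ammonium": 10.0, "nitrate": 5.0, "n2_lost": 0.0}
for _ in range(3):
    pools = step(pools)
print({k: round(v, 2) for k, v in pools.items()})
```

The sketch ignores plant uptake, microbial immobilization and the nitric/nitrous oxide by-products mentioned in the text; it only traces the main sequence of transformations.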

Other important nutrients include phosphorus, sulfur, calcium, potassium, magnesium and manganese.[25] Phosphorus enters ecosystems through weathering. As ecosystems age this supply diminishes, making phosphorus-limitation more common in older landscapes (especially in the tropics).[25] Calcium and sulfur are also produced by weathering, but acid deposition is an important source of sulfur in many ecosystems. Although magnesium and manganese are produced by weathering, exchanges between soil organic matter and living cells account for a significant portion of ecosystem fluxes. Potassium is primarily cycled between living cells and soil organic matter.[25]

Biodiversity plays an important role in ecosystem functioning.[27] The reason for this is that ecosystem processes are driven by the number of species in an ecosystem, the exact nature of each individual species, and the relative abundance of organisms within these species.[28] Ecosystem processes are broad generalizations that actually take place through the actions of individual organisms. The nature of the organisms (the species, functional groups and trophic levels to which they belong) dictates the sorts of actions these individuals are capable of carrying out and the relative efficiency with which they do so.

Ecological theory suggests that in order to coexist, species must have some level of limiting similarity: they must be different from one another in some fundamental way, otherwise one species would competitively exclude the other.[29] Despite this, the cumulative effect of additional species in an ecosystem is not linear: additional species may enhance nitrogen retention, for example, but beyond some level of species richness, additional species may have little additive effect.[28]
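The saturating effect of species richness described above can be illustrated with a simple saturating curve; the Michaelis-Menten form and all parameter values here are purely illustrative and not taken from the source:

```python
def ecosystem_function(richness, f_max=100.0, half_saturation=5.0):
    """Saturating response: function rises quickly at low richness, then levels off."""
    return f_max * richness / (half_saturation + richness)

# The marginal gain from one extra species shrinks as richness grows.
gain_low = ecosystem_function(2) - ecosystem_function(1)    # adding a 2nd species
gain_high = ecosystem_function(21) - ecosystem_function(20)  # adding a 21st species
print(round(gain_low, 1), round(gain_high, 1))
```

Under this hypothetical curve, the second species adds over ten times as much function as the twenty-first, matching the claim that additional species eventually have little additive effect.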

The addition (or loss) of species which are ecologically similar to those already present in an ecosystem tends to only have a small effect on ecosystem function. Ecologically distinct species, on the other hand, have a much larger effect. Similarly, dominant species have a large effect on ecosystem function, while rare species tend to have a small effect. Keystone species tend to have an effect on ecosystem function that is disproportionate to their abundance in an ecosystem.[28] Similarly, an ecosystem engineer is any organism that creates, significantly modifies, maintains or destroys a habitat.

Ecosystems are dynamic entities. They are subject to periodic disturbances and are in the process of recovering from some past disturbance.[11] When a perturbation occurs, an ecosystem responds by moving away from its initial state. The tendency of an ecosystem to remain close to its equilibrium state, despite that disturbance, is termed its resistance. The speed with which it returns to its initial state after disturbance is called its resilience.[11] Time plays a role in the development of soil from bare rock and the recovery of a community from disturbance.[10]
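Resilience in the sense above can be illustrated with an exponential return to equilibrium, where a single rate constant sets how quickly the disturbed state decays back; this model and its numbers are illustrative assumptions, not from the source:

```python
import math

def state_after(t, x0, x_eq, resilience):
    """Distance from equilibrium decays exponentially at rate `resilience`."""
    return x_eq + (x0 - x_eq) * math.exp(-resilience * t)

# A disturbance pushes some state variable (say, biomass) from its
# equilibrium of 100 down to 60. A more resilient system (r=0.5)
# recovers faster than a less resilient one (r=0.1).
for r in (0.5, 0.1):
    print(r, round(state_after(10.0, 60.0, 100.0, r), 1))
```

Resistance would show up in this picture as a smaller initial displacement (60 closer to 100), while resilience is the speed of the return, which is why the two properties are distinguished in the text.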

From one year to another, ecosystems experience variation in their biotic and abiotic environments. A drought, an especially cold winter and a pest outbreak all constitute short-term variability in environmental conditions. Animal populations vary from year to year, building up during resource-rich periods and crashing as they overshoot their food supply. These changes play out in changes in net primary production, decomposition rates, and other ecosystem processes.[11] Longer-term changes also shape ecosystem processes: the forests of eastern North America still show legacies of cultivation which ceased 200 years ago, while methane production in eastern Siberian lakes is controlled by organic matter which accumulated during the Pleistocene.[11]

Disturbance also plays an important role in ecological processes. F. Stuart Chapin and coauthors define disturbance as “a relatively discrete event in time and space that alters the structure of populations, communities, and ecosystems and causes changes in resource availability or the physical environment”.[30] This can range from tree falls and insect outbreaks to hurricanes and wildfires to volcanic eruptions. Such disturbances can cause large changes in plant, animal and microbe populations, as well as soil organic matter content.[11] Disturbance is followed by succession, a “directional change in ecosystem structure and functioning resulting from biotically driven changes in resource supply.”[30]

The frequency and severity of disturbance determine the way it affects ecosystem function. A major disturbance like a volcanic eruption or glacial advance and retreat leaves behind soils that lack plants, animals or organic matter. Ecosystems that experience such disturbances undergo primary succession. Less severe disturbances like forest fires, hurricanes or cultivation result in secondary succession and a faster recovery.[11] More severe and more frequent disturbances result in longer recovery times.

Classifying ecosystems into ecologically homogeneous units is an important step towards effective ecosystem management.[31] There is no single, agreed-upon way to do this. A variety of systems exist, based on vegetation cover, remote sensing, and bioclimatic classification systems.[31]

Ecological land classification is a cartographical delineation or regionalisation of distinct ecological areas, identified by their geology, topography, soils, vegetation, climate conditions, living species, habitats, water resources, and sometimes also anthropic factors.[32]

Human activities are important in almost all ecosystems. Although humans exist and operate within ecosystems, their cumulative effects are large enough to influence external factors like climate.[10]

Ecosystems provide a variety of goods and services upon which people depend.[33] Ecosystem goods include the “tangible, material products” of ecosystem processes such as food, construction materials and medicinal plants.[34] They also include less tangible items like tourism and recreation, and genes from wild plants and animals that can be used to improve domestic species.[33]

Ecosystem services, on the other hand, are generally “improvements in the condition or location of things of value”.[34] These include things like the maintenance of hydrological cycles, cleaning air and water, the maintenance of oxygen in the atmosphere, crop pollination and even things like beauty, inspiration and opportunities for research.[33] While ecosystem goods have traditionally been recognized as being the basis for things of economic value, ecosystem services tend to be taken for granted.[34]

When natural resource management is applied to whole ecosystems, rather than single species, it is termed ecosystem management.[35] Although definitions of ecosystem management abound, there is a common set of principles which underlie these definitions.[36] A fundamental principle is the long-term sustainability of the production of goods and services by the ecosystem;[36] “intergenerational sustainability [is] a precondition for management, not an afterthought”.[33]

While ecosystem management can be used as part of a plan for wilderness conservation, it can also be used in intensively managed ecosystems[33] (see, for example, agroecosystem and close-to-nature forestry).

As human populations and per capita consumption grow, so do the resource demands imposed on ecosystems and the effects of the human ecological footprint. Natural resources are vulnerable and limited. The environmental impacts of anthropogenic actions are becoming more apparent. Problems for all ecosystems include environmental pollution, climate change and biodiversity loss. For terrestrial ecosystems, further threats include air pollution, soil degradation, and deforestation. For aquatic ecosystems, threats also include unsustainable exploitation of marine resources (for example overfishing of certain species), marine pollution, microplastics pollution, water pollution, and building on coastal areas.[37]

Society is increasingly becoming aware that ecosystem services are not only limited but also that they are threatened by human activities. The need to better consider long-term ecosystem health and its role in enabling human habitation and economic activity is urgent. To help inform decision-makers, many ecosystem services are being assigned economic values, often based on the cost of replacement with anthropogenic alternatives. The ongoing challenge of prescribing economic value to nature, for example through biodiversity banking, is prompting transdisciplinary shifts in how we recognize and manage the environment, social responsibility, business opportunities, and our future as a species.[citation needed]

The term “ecosystem” was first used in 1935 in a publication by British ecologist Arthur Tansley.[fn 1][38] Tansley devised the concept to draw attention to the importance of transfers of materials between organisms and their environment.[39] He later refined the term, describing it as “The whole system, … including not only the organism-complex, but also the whole complex of physical factors forming what we call the environment”.[40] Tansley regarded ecosystems not simply as natural units, but as “mental isolates”.[40] Tansley later defined the spatial extent of ecosystems using the term ecotope.[41]

G. Evelyn Hutchinson, a limnologist who was a contemporary of Tansley’s, combined Charles Elton’s ideas about trophic ecology with those of Russian geochemist Vladimir Vernadsky. As a result, he suggested that mineral nutrient availability in a lake limited algal production. This would, in turn, limit the abundance of animals that feed on algae. Raymond Lindeman took these ideas further to suggest that the flow of energy through a lake was the primary driver of the ecosystem. Hutchinson’s students, brothers Howard T. Odum and Eugene P. Odum, further developed a “systems approach” to the study of ecosystems. This allowed them to study the flow of energy and material through ecological systems.[39]

See the rest here:

Ecosystem – Wikipedia

Fishcoin: Blockchain Based Seafood Traceability & Data …

Greg Horowitt, Author & Managing Partner at Jun Capital Partners

Greg Horowitt is a Managing Partner at Jun Capital Partners Pte Ltd. (Singapore, Shanghai, Tokyo, Bangkok, Tel Aviv, Paris, Palo Alto, San Diego, San Francisco). Greg is a serial entrepreneur, investor, author, and innovation systems architect. He is a visiting lecturer at Stanford University and serves as the Director of Innovation at the University of California, San Diego, where he also lectures. He has spent 25+ years working in start-ups and venture capital, and is one of the pioneers in the field of innovation-based economic development. Greg is the co-author of the bestselling book, The Rainforest: The Secret to Building the Next Silicon Valley, and is a trusted advisor to such notable organizations as the US State Department, the Aspen Institute, the University of California, the World Bank, and the Inter-American Development Bank, in addition to being a Senior Fellow with the Global Federation of Competitiveness Councils.


European e-Competence Framework

Welcome to the e-CF

The European e-Competence Framework (e-CF) provides a reference of 40 competences as applied at the Information and Communication Technology (ICT) workplace, using a common language for competences, skills, knowledge and proficiency levels that can be understood across Europe.

In 2016, the e-CF became a European standard and was published officially as the European Norm EN 16234-1.

As the first sector-specific implementation of the European Qualifications Framework (EQF), the e-CF is suited for application by ICT service, user and supply organisations, multinationals and SMEs, for ICT managers, HR departments and individuals, educational institutions including higher education and private certification providers, social partners, market analysts, policy makers and other organisations in public and private sectors.

The European e-Competence Framework provides a common language to describe the competences including skills and knowledge requirements of ICT professionals, professions and organisations at five proficiency levels, and is designed to meet the needs of individuals, businesses and other organisations in public and private sectors.

The e-CF version 3.0 gives clear definitions and sound orientation to support decision-making in relation to the selection and recruitment of candidates, as well as the qualification, training and assessment of ICT professionals. It enables the identification of skills and competences that may be required to successfully perform duties and fulfill responsibilities related to the ICT workplace. The widespread adoption of the e-CF by companies and organisations throughout Europe has started to increase the transparency, mobility and efficiency of ICT sector related human resources.

The e-CF was developed through a process of collaboration between experts and stakeholders from many different countries under the umbrella of the CEN ICT Skills Workshop.

Following consultation with CEN member states, the e-CF became a European standard and was published officially in 2016 as the European Norm (EN) 16234. Identical in its structure and content to the e-CF 3.0 CWA, the new EN format provides great opportunities for further dissemination and continued adoption of the framework Europe-wide.

The e-CF is a component of the European Union’s strategy for e-Skills in the 21st Century, supported by the European Commission and the Council of Ministers. The Framework supports key policy objectives of the Digital Skills and Jobs Coalition and benefits an ever-growing user community from the EU and across the world.


Astrophysics – Play it now at Coolmath-Games.com



Simon Lane | Yogscast Wiki | FANDOM powered by Wikia

Real name: Simon Charles Lane

Aliases: Honeydew, Honeybeard, Derek Smart, Alejandew

Joined: 2008 (Co-Founder)

Occupation: Creative Director at Yogscast Ltd, YouTube Content Producer at Yogscast Ltd

Catchphrase: “See ya later, Shitlord(s)!”

Simon Lane, under the username Honeydew, is a founding member of the Yogscast, and runs the main Yogscast YouTube channel with Lewis Brindley. He is known for playing a dwarf in any situation he can. He is renowned for being a strongman, entertainer, astronaut and a budding musician.

Lewis and Simon have uploaded an enormous variety of content, such as Minecraft adventure maps and mini-games, Garry’s Mod, indie games, and many collaborations. Some of Lewis and Simon’s most popular Minecraft series include YogLabs, Jaffa Factory, JaffaQuest, Hole Diggers, Deep Space Mine, Lucky Block Challenge, and of course, Shadow of Israphel. When playing Minecraft he has a fondness of pigs, Jaffa Cakes, fire and things that explode.

Simon is the creative force behind The Yogscast, known as the singer of “Diggy Diggy Hole” and as “The Man of a Thousand Voices, all of which sound oddly similar”. Simon’s charm, wit and endearing silliness are unmatched. He is the co-founder of the Yogscast.

Simon has, on rare occasions, managed to hijack the BlueXephos channel on YouTube, enabling him to post content in which he is the central character. This content tends to be superficially innocent and light, but upon closer examination reveals a twisted, diabolical malevolence and a passive-aggressive Machiavellian instinct that can only mean Simon’s ultimate goal for the Yogscast is total world domination. These videos generally fall into two basic yet far-reaching categories: Simon Sings and Simon Plays. The Simon Plays series are simple Let’s Play videos of various computer/console games such as Portal 2.[1] While playing these games Simon would occasionally give the characters unique voices, with a narrative thread roughly maintained throughout the video. The Simon Sings series of videos is a collection of brief musical interludes wherein Simon does his best vocal impersonation of a cat being used to clean a rug. These videos demonstrate Simon’s mind at work, as he eventually arrives at the perfect understanding of the two key lyrical elements that have defined success for one of his favourite musical artists, Parry Gripp. This culminates in what is bound to be one of the top music videos of 2011, Elephant Having A Wank. The actual category the video will fall into (best or worst of 2011) is still in doubt.

Simon took a hiatus from the Yogscast in March 2015, with a video explaining his sudden absence. In the video, he relates his hiatus to unspecified medical issues tied into an unexpected visit to the hospital. Although he was released from the hospital a few weeks later, it was claimed that he wanted to take some time off in order to recover before returning to actively working on video content. A further passing mention in a Yogscast vlog in June simply said that he was getting better and that his friends hoped he would be fully recovered soon. Simon returned to making YouTube videos on September 25, making his first appearance in 6 months in the first episode of Trials of Skobbels [2].

Despite returning to several series on the main channel, he had diminished involvement with Yogscast projects during his recuperation throughout 2016. He then went on another hiatus in March 2017, returning on the 9th of June on the stream. Since this last hiatus, Simon has appeared far less frequently on the main channel, where he once appeared in almost every video. He has been showing up mainly in the Chilluminati streams, in the Game Goblin series (with Tom), and in some GTA V and Trouble in Terrorist Town (TTT) videos.



Entheogen – Wikipedia

An entheogen is a psychoactive substance used to induce a spiritual experience aimed at development.[2] The term entheogen is often chosen to contrast such use with the recreational use of the same drugs. For example, entheogens are used by curanderos to heal people, but the same substances are also allegedly used by malevolent sorcerers to “steal” people’s energy.[3]

The religious, shamanic, or spiritual significance of entheogens is well established in anthropological and modern contexts; entheogens have traditionally been used to supplement many diverse practices geared towards achieving transcendence, including white and black magic, sensory deprivation, divination, meditation, yoga, prayer, trance, rituals, chanting, hymns like peyote songs, and drumming. In the 1960s the hippie movement extended their use to psychedelic art, binaural beats, sensory deprivation tanks, music, and rave parties.

Entheogens have been used by indigenous peoples for thousands of years, and some countries have legislation that allows for traditional entheogen use. In the mid-20th century, however, after the discovery of LSD and the invention of psychedelic therapy, the term entheogen, coined in 1979, became an umbrella term that includes artificial drugs, alternative medical treatment, and spiritual practices, whether or not in a formal religious or traditional structure.

Entheogens have been used in a ritualized context for thousands of years.

R. Gordon Wasson and Giorgio Samorini have proposed several examples of the cultural use of entheogens that are found in the archaeological record.[6][7] Evidence for the first use of entheogens may come from Tassili, Algeria, with a cave painting of a mushroom-man, dating to 8000 BP.[citation needed] Hemp seeds discovered by archaeologists at Pazyryk suggest early ceremonial practices by the Scythians occurred during the 5th to 2nd century BC, confirming previous historical reports by Herodotus.[citation needed][8]

With the advent of organic chemistry, there now exist many synthetic drugs with similar psychoactive properties, many derived from the aforementioned plants. Many pure active compounds with psychoactive properties have been isolated from these respective organisms and chemically synthesized, including mescaline, psilocybin, DMT, salvinorin A, ibogaine, ergine, and muscimol.

Semi-synthetic (e.g., LSD) and synthetic drugs (e.g., DPT and 2C-B used by the Sangoma) have also been developed. Alexander Shulgin developed hundreds of entheogens in PiHKAL and TiHKAL. Most of the drugs in PiHKAL are synthetic.

Entheogens used by movements include biotas like peyote (Neo-American Church), extracts like ayahuasca (Santo Daime, União do Vegetal), the semi-synthetic drug LSD (Neo-American Church), and synthetic drugs like DPT (Temple of the True Inner Light) and 2C-B (Sangoma[10]).

Both Santo Daime and União do Vegetal now have members and churches throughout the world.

Psychedelic therapy refers to therapeutic practices involving the use of psychedelic drugs, particularly serotonergic psychedelics such as LSD, psilocybin, DMT, mescaline, and 2C-I, primarily to assist psychotherapy.

MAPS has pursued a number of other research studies examining the effects of psychedelics administered to human subjects. These include, but are not limited to, studies of ayahuasca, DMT, ibogaine, ketamine, LSA, LSD, MDE, MDMA, mescaline, peyote, psilocybin, and Salvia divinorum, as well as multi-drug studies and cross-cultural and meta-analysis research.[11]

L. E. Hollister’s[who?] criteria for identifying a drug as hallucinogenic are:[12]

Drugs, including some that cause physical dependence, such as alcohol, have been used with entheogenic intention, mostly in ancient times. Common recreational drugs that cause chemical dependence have a history of entheogenic use, perhaps because their users could not access traditional entheogens: shamans, who considered non-visionary uses of their entheogens hedonistic, were very secretive with them.[citation needed]

Alcohol has sometimes been invested with religious significance.

In ancient Celtic religion, Sucellus or Sucellos was the god of agriculture, forests and alcoholic drinks of the Gauls.

Ninkasi is the ancient Sumerian tutelary goddess of beer.[13]

In the ancient Greco-Roman religion, Dionysos (or Bacchus) was the god of the grape harvest, winemaking and wine, of ritual madness and ecstasy, of merrymaking and theatre. The original rite of Dionysus is associated with a wine cult, and he may have been worshipped as early as c. 1500–1100 BC by Mycenean Greeks. The Dionysian Mysteries were a ritual of ancient Greece and Rome which used intoxicants and other trance-inducing techniques (like dance and music) to remove inhibitions and social constraints, liberating the individual to return to a natural state. In his Laws, Plato said that alcoholic drinking parties should be the basis of any educational system, because the alcohol allows relaxation of otherwise fixed views. The Symposium (literally, ‘drinking together’) was a dramatised account of a drinking party where the participants debated the nature of love.

In the Homeric Hymn to Demeter, a cup of wine is offered to Demeter which she refuses, instead insisting upon a potion of barley, water, and glechon, known as the ceremonial drink Kykeon, an essential part of the Mysteries. The potion has been hypothesized to be an ergot derivative from barley, similar to LSD.[14]

Egyptian pictographs clearly show wine as a finished product around 4000 BC. Osiris, the god who invented beer and brewing, was worshiped throughout the country. The ancient Egyptians made at least 24 types of wine and 17 types of beer. These beverages were used for pleasure, nutrition, rituals, medicine, and payments. They were also stored in the tombs of the deceased for use in the afterlife.[15] The Osirian Mysteries paralleled the Dionysian, according to contemporary Greek and Egyptian observers. Spirit possession involved liberation from civilization’s rules and constraints. It celebrated that which was outside civilized society and a return to the source of being, which would later assume mystical overtones. It also involved escape from the socialized personality and ego into an ecstatic, deified state or the primal herd (sometimes both).

Some scholars[who?] have postulated that pagan religions actively promoted alcohol and drunkenness as a means of fostering fertility. Alcohol was believed to increase sexual desire and make it easier to approach another person for sex.

Chögyam Trungpa Rinpoche introduced “Mindful Drinking” to the West when he fled Tibet.[16][17]

The present-day Arabic word for alcohol appears in the Qur’an (in verse 37:47) as al-ghawl, properly meaning “spirit” or “demon”, in the sense of “the thing that gives the wine its headiness”.[citation needed]

Many Christian denominations use wine in the Eucharist or Communion and permit alcohol consumption in moderation. Other denominations use unfermented grape juice in Communion; they either voluntarily abstain from alcohol or prohibit it outright.[citation needed]

Judaism uses wine on Shabbat and some holidays for Kiddush as well as more extensively in the Passover ceremony and other religious ceremonies. The secular consumption of alcohol is allowed. Some Jewish texts, e.g., the Talmud, encourage moderate drinking on holidays (such as Purim) in order to make the occasion more joyous.[citation needed]

Bahá’ís are forbidden to drink alcohol or to take drugs, unless prescribed by doctors. Accordingly, the sale and trafficking of such substances is also forbidden. Smoking is discouraged but not prohibited.

Kava cultures are the religious and cultural traditions of western Oceania which consume kava. There are similarities in the use of kava between the different cultures, but each one also has its own traditions.[citation needed]

Entheogens have been used by individuals to pursue spiritual goals such as divination, ego death, egolessness, faith healing, psychedelic therapy and spiritual formation.[18]

“Don Alejandro (a Mazatecan shaman) taught me that the visionary experiences are much more important than the plants and drugs that produce them. He no longer needed to take the vision-inducing plants for his journeys.”[19]

There are also instances where people have been given entheogens without their knowledge or consent (e.g., tourists given ayahuasca),[20] as well as attempts to use such drugs in other contexts, such as cursing, psychochemical weaponry, psychological torture, brainwashing and mind control; the CIA experimented with LSD in Project MKUltra, and controversial entheogens like alcohol are often mentioned in the context of bread and circuses.

In some areas, there are purported malevolent sorcerers who masquerade as real shamans and who entice tourists to drink ayahuasca in their presence. Shamans believe one of the purposes for this is to steal one’s energy and/or power, of which they believe every person has a limited stockpile.[3]

The Native American Church (NAC) is also known as Peyotism and Peyote Religion. Peyotism is a Native American religion characterized by mixed traditional as well as Protestant beliefs and by sacramental use of the entheogen peyote.

The Peyote Way Church of God believes that “Peyote is a holy sacrament, when taken according to our sacramental procedure and combined with a holistic lifestyle”.[21]

Some religions forbid, discourage, or restrict the drinking of alcoholic beverages. These include Islam, Jainism, the Bahá’í Faith, The Church of Jesus Christ of Latter-day Saints (LDS Church), the Seventh-day Adventist Church, the Church of Christ, Scientist, the United Pentecostal Church International, Theravada, most Mahayana schools of Buddhism, some Protestant denominations of Christianity, some sects of Taoism (Five Precepts and Ten Precepts), and Hinduism.

The Pali Canon, the scripture of Theravada Buddhism, depicts refraining from alcohol as essential to moral conduct because intoxication causes a loss of mindfulness. The fifth of the Five Precepts states, “Surā-meraya-majja-pamādaṭṭhānā veramaṇī sikkhāpadaṃ samādiyāmi.” In English: “I undertake to refrain from surā, meraya and majja (the fermented and distilled drinks of the place and time of writing), which are a basis for heedlessness.” Although the Fifth Precept names only specific drinks, it has traditionally been interpreted to cover all alcoholic beverages. Technically, the prohibition does not extend to light or moderate drinking, only to drinking to the point of drunkenness, nor does it mention other mind-altering drugs, but Buddhist tradition includes all intoxicants. The canon does not suggest that alcohol is evil but holds that the carelessness produced by intoxication creates bad karma; therefore, any drug (beyond tea or mild coffee) that affects one’s mindfulness may be considered by some to be covered by this prohibition.[citation needed]

Many Christian denominations disapprove of the use of most illicit drugs. The early history of the Church, however, was filled with a variety of drug use, recreational and otherwise.[22]

The primary advocate of a religious use of the cannabis plant in early Judaism was Sula Benet, also called Sara Benetowa, a Polish anthropologist, who claimed in 1967 that the plant kaneh bosm, mentioned five times in the Hebrew Bible and used in the holy anointing oil of the Book of Exodus, was in fact cannabis.[23] The Ethiopian Zion Coptic Church endorsed this as a possible valid interpretation.[24] The lexicons of Hebrew and dictionaries of Bible plants, such as those by Michael Zohary (1985), Hans Arne Jensen (2004) and James A. Duke (2010), identify the plant in question as either Acorus calamus or Cymbopogon citratus.[25] Kaneh-bosm is listed as an incense in the Old Testament.

Rabbi Zalman Schachter-Shalomi (founder of Jewish Renewal) and Richard Alpert (later known as Ram Dass) were influential early Jewish explorers of the connections between hallucinogenics and spirituality, from the early 1960s onwards.

It is generally held by academics specializing in the archaeology and paleobotany of Ancient Israel, and those specializing in the lexicography of the Hebrew Bible that cannabis is not documented or mentioned in early Judaism. Against this some popular writers have argued that there is evidence for religious use of cannabis in the Hebrew Bible,[26][27] although this hypothesis and some of the specific case studies (e.g., John Allegro in relation to Qumran, 1970) have been “widely dismissed as erroneous, others continue”.[28]

According to The Living Torah, cannabis may have been one of the ingredients of the holy anointing oil mentioned in various sacred Hebrew texts.[29] The herb of interest is most commonly known as kaneh-bosm (Hebrew: קנה-בשם). This is mentioned several times in the Old Testament as a bartering material, incense, and an ingredient in the holy anointing oil used by the high priest of the temple. Although Chris Bennett’s research in this area focuses on cannabis, he mentions evidence suggesting use of additional visionary plants such as henbane as well.[30]

The Septuagint translates kaneh-bosm as calamus, and this translation has been propagated unchanged to most later translations of the Old Testament. However, Polish anthropologist Sula Benet published etymological arguments that the Aramaic word for hemp can be read as kannabos and appears to be a cognate of the modern word ‘cannabis’,[31] with the root kan meaning reed or hemp and bosm meaning fragrant. Both cannabis and calamus are fragrant, reedlike plants containing psychotropic compounds.

In his research, Professor Dan Merkur points to significant evidence of an awareness within the Jewish mystical tradition recognizing manna as an entheogen, thereby substantiating with rabbinic texts theories advanced by the superficial biblical interpretations of Terence McKenna, R. Gordon Wasson and other ethnomycologists.

Although philologist John Marco Allegro has suggested that the self-revelation and healing abilities attributed to the figure of Jesus may have been associated with the effects of plant medicines, this evidence is dependent on pre-Septuagint interpretation of the Torah and Tenach. Allegro was the only non-Catholic appointed to the position of translating the Dead Sea Scrolls. His extrapolations are often the object of scorn due to his non-mainstream theory of Jesus as a mythological personification of the essence of a “psychoactive sacrament”. Furthermore, they conflict with the position of the Catholic Church with regard to transubstantiation and the teaching concerning valid matter and form, that of bread and wine (bread does not contain psychoactive drugs, but wine contains ethanol). Allegro’s book The Sacred Mushroom and the Cross relates the development of language to the development of myths, religions, and cultic practices in world cultures. Allegro believed he could prove, through etymology, that the roots of Christianity, as of many other religions, lay in fertility cults, and that cult practices, such as ingesting visionary plants (or “psychedelics”) to perceive the mind of God, persisted into the early Christian era, and to some unspecified extent into the 13th century, with recurrences in the 18th century and mid-20th century, as he interprets the Plaincourault chapel’s fresco to be an accurate depiction of the ritual ingestion of Amanita muscaria as the Eucharist.[citation needed]

The historical picture portrayed by the Entheos journal is of fairly widespread use of visionary plants in early Christianity and the surrounding culture, with a gradual reduction of use of entheogens in Christianity.[32] R. Gordon Wasson’s book Soma prints a letter from art historian Erwin Panofsky asserting that art scholars are aware of many “mushroom trees” in Christian art.[33]

The question of the extent of visionary plant use throughout the history of Christian practice has barely been considered by academic or independent scholars. The question of whether visionary plants were used in pre-Theodosius Christianity is distinct from evidence that indicates the extent to which visionary plants were utilized or forgotten in later Christianity, including heretical or quasi-Christian groups,[34] and the question of other groups such as elites or laity within orthodox Catholic practice.[35]

Daniel Merkur at the University of Toronto contends that a minority of Christian hermits and mystics could possibly have used entheogens, in conjunction with fasting, meditation, and prayer.[citation needed]

According to R.C. Parker, “The use of entheogens in the Vajrayana tradition has been documented by such scholars as Ronald M. Davidson, William George Stablein, Bulcsu Siklos, David B. Gray, Benoytosh Bhattacharyya, Shashibhusan Das Gupta, Francesca Fremantle, Shinichi Tsuda, David Gordon White, Rene de Nebesky-Wojkowitz, James Francis Hartzell, Edward Todd Fenner, Ian Baker, Dr. Pasang Yonten Arya and numerous others.” These scholars have established that entheogens were used in Vajrayana (in a limited context) as well as in Tantric Saivite traditions. The major entheogens in the Vajrayana Anuttarayoga Tantra tradition are cannabis and Datura, which were used in various pills, ointments, and elixirs. Several tantras within Vajrayana specifically mention these entheogens and their use, including the Laghusamvara-tantra (aka Cakrasaṃvara Tantra), Samputa-tantra, Samvarodaya-tantra, Mahakala-tantra, Guhyasamaja-tantra, Vajramahabhairava-tantra, and the Krsnayamari-tantra.[36] In the Cakrasaṃvara Tantra, the use of entheogens is coupled with meditation practices such as the use of a mandala of the Heruka meditation deity (yidam) and visualization practices which identify the yidam’s external body and mandala with one’s own body and ‘internal mandala’.[37]

It has also been proposed by Scott Hajicek-Dobberstein that the Amanita muscaria mushroom was used by the Tantric Buddhist mahasiddha tradition of the 8th to 12th century.[38]

In the West, some modern Buddhist teachers have written on the usefulness of psychedelics. The Buddhist magazine Tricycle devoted its entire fall 1996 edition to this issue.[39] Some teachers, such as Jack Kornfield, have acknowledged the possibility that psychedelics could complement Buddhist practice, bring healing, and help people understand their connection with everything, which could lead to compassion.[40] Kornfield warns, however, that addiction can still be a hindrance. Other teachers, such as Michelle McDonald-Smith, expressed views which saw entheogens as not conducive to Buddhist practice (“I don’t see them developing anything”).[41]

Entheogens have been used in various ways, e.g., as part of established religious rituals, as aids for personal spiritual development (“plant teachers”),[42][43] as recreational drugs, and for medical and therapeutic use. The use of entheogens in human cultures is nearly ubiquitous throughout recorded history.

Naturally occurring entheogens such as psilocybin and DMT (in the preparation ayahuasca), were, for the most part, discovered and used by older cultures, as part of their spiritual and religious life, as plants and agents that were respected, or in some cases revered for generations and may be a tradition that predates all modern religions as a sort of proto-religious rite.

One of the most widely used entheogens is cannabis; its entheogenic use has occurred in regions such as China, Europe, and India, in some cases for thousands of years. It has also appeared as a part of religions and cultures such as the Rastafari movement, the Sadhus of Hinduism, the Scythians, Sufi Islam, and others.

The best-known entheogen-using culture of Africa is the Bwitists, who used a preparation of the root bark of Tabernanthe iboga.[44] Although the ancient Egyptians may have been using the sacred blue lily plant in some of their religious rituals or just symbolically, it has been suggested that Egyptian religion once revolved around the ritualistic ingestion of the far more psychoactive Psilocybe cubensis mushroom, and that the Egyptian White Crown, Triple Crown, and Atef Crown were evidently designed to represent pin-stages of this mushroom.[45] There is also evidence for the use of psilocybin mushrooms in Ivory Coast.[46] Numerous other plants used in shamanic ritual in Africa, such as Silene capensis sacred to the Xhosa, are yet to be investigated by western science. A recent revitalization has occurred in the study of southern African psychoactives and entheogens (Mitchell and Hudson 2004; Sobiecki 2002, 2008, 2012).[47]

The synthetic drug 2C-B is notably used as an entheogen by the Sangoma, Nyanga, and Amagqirha people, in preference to their traditional plants; they refer to the chemical as Ubulawu Nomathotholo, which roughly translates to “Medicine of the Singing Ancestors”.[48][49][50]

Entheogens have played a pivotal role in the spiritual practices of most American cultures for millennia. The first American entheogen to be subject to scientific analysis was the peyote cactus (Lophophora williamsii). One of the founders of modern ethnobotany, the late Richard Evans Schultes of Harvard University, documented the ritual use of peyote cactus among the Kiowa, who live in what became Oklahoma. While it was used traditionally by many cultures of what is now Mexico, in the 19th century its use spread throughout North America, replacing the toxic mescal bean (Calia secundiflora), whose status as an entheogen is questioned. Other well-known entheogens used by Mexican cultures include the alcoholic Aztec sacrament pulque, ritual tobacco (known as ‘picietl’ to the Aztecs and ‘sikar’ to the Maya, from which the word ‘cigar’ derives), psilocybin mushrooms, morning glories (Ipomoea tricolor and Turbina corymbosa), and Salvia divinorum.

Indigenous peoples of South America employ a wide variety of entheogens. Better-known examples include ayahuasca (most commonly Banisteriopsis caapi and Psychotria viridis) among indigenous peoples (such as the Urarina) of the Peruvian Amazon. Other entheogens include San Pedro cactus (Echinopsis pachanoi, syn. Trichocereus pachanoi), Peruvian torch cactus (Echinopsis peruviana, syn. Trichocereus peruvianus), and various DMT snuffs, such as epená (Virola spp.), vilca and yopo (Anadenanthera colubrina and A. peregrina, respectively). The familiar tobacco plant, when used uncured in large doses in shamanic contexts, also serves as an entheogen in South America. Nicotiana rustica, a tobacco with a higher nicotine content that therefore requires smaller doses, was also commonly used.[citation needed]

Entheogens also play an important role in contemporary religious movements such as the Rastafari movement and the Church of the Universe.

Datura wrightii is sacred to some Native Americans and has been used in ceremonies and rites of passage by the Chumash, Tongva, and others. Among the Chumash, when a boy was 8 years old, his mother would give him a preparation of momoy to drink. This spiritual challenge was supposed to help the boy develop the spiritual wellbeing required to become a man; not all of the boys undergoing this ritual survived.[51] Momoy was also used to enhance spiritual wellbeing among adults. For instance, during a frightening situation, such as when seeing a coyote walk like a man, a leaf of momoy was sucked to help keep the soul in the body.

The indigenous peoples of Siberia (from whom the term shaman was borrowed) have used Amanita muscaria as an entheogen.

In Hinduism, Datura stramonium and cannabis have been used in religious ceremonies, although the religious use of datura is not very common, as its primary alkaloids are strong deliriants which cause serious intoxication with unpredictable effects.

Also, the ancient drink Soma, mentioned often in the Vedas, appears to be consistent with the effects of an entheogen. In his 1967 book, Wasson argues that Soma was Amanita muscaria. The active ingredient of Soma is presumed by some to be ephedrine, an alkaloid with stimulant properties derived from the soma plant, identified as Ephedra pachyclada. However, there are also arguments to suggest that Soma could have also been Syrian rue, cannabis, Atropa belladonna, or some combination of any of the above plants.[citation needed]

Fermented honey, known in Northern Europe as mead, was an early entheogen in Aegean civilization, predating the introduction of wine, which was the more familiar entheogen of the reborn Dionysus and the maenads. Its religious uses in the Aegean world are bound up with the mythology of the bee.

Dacians were known to use cannabis in their religious and important life ceremonies, proven by discoveries of large clay pots with burnt cannabis seeds in ancient tombs and religious shrines. Also, local oral folklore and myths tell of ancient priests that dreamed with gods and walked in the smoke. Their names, as transmitted by Herodotus, were “kap-no-batai” which in Dacian was supposed to mean “the ones that walk in the clouds”.

The growth of Roman Christianity also saw the end of the two-thousand-year-old tradition of the Eleusinian Mysteries, the initiation ceremony for the cult of Demeter and Persephone involving the use of a drug known as kykeon. The term ‘ambrosia’ is used in Greek mythology in a way that is remarkably similar to the Soma of the Hindus as well.

A theory that naturally occurring gases like ethylene, inhaled during divinatory ceremonies at Delphi in Classical Greece, may have played a role received popular press attention in the early 2000s, yet has not been conclusively proven.[52]

Mushroom consumption is part of the culture of Europeans in general, with particular importance to Slavic and Baltic peoples. Some academics consider that using psilocybin- and or muscimol-containing mushrooms was an integral part of the ancient culture of the Rus’ people.[53]

It has been suggested that the ritual use of small amounts of Syrian rue is an artifact of its ancient use in higher doses as an entheogen (possibly in conjunction with DMT containing acacia).[citation needed]

Philologist John Marco Allegro has argued in his book The Sacred Mushroom and the Cross that early Jewish and Christian cultic practice was based on the use of Amanita muscaria, which was later forgotten by its adherents. Allegro’s hypothesis, that Amanita use was sacred knowledge kept only by high figures to hide the true beginnings of the Christian cult, seems supported by his own view that the Plaincourault Chapel shows evidence of Christian Amanita use in the 13th century.[54]

In general, indigenous Australians are thought not to have used entheogens, although there is a strong barrier of secrecy surrounding Aboriginal shamanism, which has likely limited what has been told to outsiders. One plant Australian Aboriginal peoples did ingest is Pitcheri, said to have an effect similar to that of coca. Pitcheri was made from the bark of the shrub Duboisia myoporoides. This plant is now grown commercially and is processed to manufacture an eye medication. There are no known uses of entheogens by the Māori of New Zealand aside from a variant species of kava.[55] Natives of Papua New Guinea are known to use several species of entheogenic mushrooms (Psilocybe spp., Boletus manicus).[56]

Kava or kava kava (Piper methysticum) has been cultivated for at least 3000 years by a number of Pacific island-dwelling peoples. Historically, most Polynesian, many Melanesian, and some Micronesian cultures have ingested the psychoactive pulverized root, typically taking it mixed with water. Much traditional usage of kava, though somewhat suppressed by Christian missionaries in the 19th and 20th centuries, is thought to facilitate contact with the spirits of the dead, especially relatives and ancestors.[57]

Studies such as Timothy Leary’s Marsh Chapel Experiment and Roland Griffiths’ psilocybin studies at Johns Hopkins have documented reports of mystical/spiritual/religious experiences from participants who were administered psychoactive drugs in controlled trials.[58] Ongoing research is limited due to widespread drug prohibition.

Notable early testing of the entheogenic experience includes the Marsh Chapel Experiment, conducted by physician and theology doctoral candidate, Walter Pahnke, under the supervision of Timothy Leary and the Harvard Psilocybin Project. In this double-blind experiment, volunteer graduate school divinity students from the Boston area almost all claimed to have had profound religious experiences subsequent to the ingestion of pure psilocybin. In 2006, a more rigorously controlled experiment was conducted at Johns Hopkins University, and yielded similar results.[59] To date there is little peer-reviewed research on this subject, due to ongoing drug prohibition and the difficulty of getting approval from institutional review boards.[60]

Furthermore, scientific studies on entheogens present some significant challenges to investigators, including philosophical questions relating to ontology, epistemology and objectivity.[61]

Peyote is listed by the United States DEA as a Schedule I controlled substance. However, practitioners of the Peyote Way Church of God, a Native American religion, perceive the regulations regarding the use of peyote as discriminatory, raising issues of religious discrimination in U.S. drug policy. As a result of Peyote Way Church of God v. Thornburgh, the American Indian Religious Freedom Act of 1978 was passed. This federal statute allows the “Traditional Indian religious use of the peyote sacrament,” exempting only use by Native American persons. Other jurisdictions have similar statutory exemptions in reaction to the U.S. Supreme Court’s decision in Employment Division v. Smith, 494 U.S. 872 (1990), which held that laws prohibiting the use of peyote that do not specifically exempt religious use nevertheless do not violate the Free Exercise Clause of the First Amendment.

Between 2011 and 2012, the Australian Federal Government was considering changes to the Australian Criminal Code that would classify any plants containing any amount of DMT as “controlled plants”.[62] DMT itself was already controlled under existing laws. The proposed changes included similar blanket bans for other substances, such as a ban on any and all plants containing mescaline or ephedrine. The proposal was not pursued after political embarrassment upon the realisation that it would make the official Floral Emblem of Australia, Acacia pycnantha (Golden Wattle), illegal. The Therapeutic Goods Administration, the relevant federal authority, had considered a motion to ban the same, but this was withdrawn in May 2012 (as DMT may still hold potential entheogenic value to native and/or religious peoples).[63]

In 1963 in Sherbert v. Verner the Supreme Court established the Sherbert Test, which consists of four criteria that are used to determine if an individual’s right to religious free exercise has been violated by the government. The test is as follows:

For the individual, the court must determine whether the person has a claim involving a sincere religious belief, and whether the government action substantially burdens the person’s ability to act on that belief.

If these two elements are established, then the government must prove that it is acting in furtherance of a compelling state interest, and that it has pursued that interest in the manner least restrictive of religion.

This test was eventually all but eliminated in Employment Division v. Smith, 494 U.S. 872 (1990), but was resurrected by Congress in the federal Religious Freedom Restoration Act (RFRA) of 1993.

In City of Boerne v. Flores, 521 U.S. 507 (1997) and Gonzales v. O Centro Espírita Beneficente União do Vegetal, 546 U.S. 418 (2006), the RFRA was held to trespass on state sovereignty, and application of the RFRA was essentially limited to federal law enforcement.

As of 2001, Arizona, Idaho, New Mexico, Oklahoma, South Carolina, and Texas had enacted so-called “mini-RFRAs.”

Although entheogens are taboo and most of them are officially prohibited in Christian and Islamic societies, their ubiquity and prominence in the spiritual traditions of various other cultures is unquestioned. “The spirit, for example, need not be chemical, as is the case with the ivy and the olive: and yet the god was felt to be within them; nor need its possession be considered something detrimental, like drugged, hallucinatory, or delusionary: but possibly instead an invitation to knowledge or whatever good the god’s spirit had to offer.”[64]

Most of the well-known modern examples, such as peyote, psilocybin mushrooms, and morning glories are from the native cultures of the Americas. However, it has also been suggested that entheogens played an important role in ancient Indo-European culture, for example by inclusion in the ritual preparations of the Soma, the “pressed juice” that is the subject of Book 9 of the Rig Veda. Soma was ritually prepared and drunk by priests and initiates and elicited a paean in the Rig Veda that embodies the nature of an entheogen:

Splendid by Law! declaring Law, truth speaking, truthful in thy works, Enouncing faith, King Soma!… O [Soma] Pavamana (mind-clarifying), place me in that deathless, undecaying world wherein the light of heaven is set, and everlasting lustre shines…. Make me immortal in that realm where happiness and transports, where joy and felicities combine…

The kykeon that preceded initiation into the Eleusinian Mysteries is another entheogen, which was investigated (before the word was coined) by Carl Kerényi, in Eleusis: Archetypal Image of Mother and Daughter. Other entheogens in the Ancient Near East and the Aegean include the opium poppy, datura, and the unidentified “lotus” (likely the sacred blue lily) eaten by the Lotus-Eaters in the Odyssey and Narcissus.

According to Ruck, Eyan, and Staples, the familiar shamanic entheogen that the Indo-Europeans brought knowledge of was Amanita muscaria. It could not be cultivated; thus it had to be found, which suited it to a nomadic lifestyle. When they reached the world of the Caucasus and the Aegean, the Indo-Europeans encountered wine, the entheogen of Dionysus, who brought it with him from his birthplace in the mythical Nysa, when he returned to claim his Olympian birthright. The Indo-European proto-Greeks “recognized it as the entheogen of Zeus, and their own traditions of shamanism, the Amanita and the ‘pressed juice’ of Soma but better, since no longer unpredictable and wild, the way it was found among the Hyperboreans: as befit their own assimilation of agrarian modes of life, the entheogen was now cultivable.”[64] Robert Graves, in his foreword to The Greek Myths, hypothesises that the ambrosia of various pre-Hellenic tribes was Amanita muscaria (which, based on the morphological similarity of the words amanita, amrita and ambrosia, is entirely plausible) and perhaps psilocybin mushrooms of the genus Panaeolus.

Amanita was divine food, according to Ruck and Staples, not something to be indulged in or sampled lightly, not something to be profaned. It was the food of the gods, their ambrosia, and it mediated between the two realms. It is said that Tantalus’s crime was inviting commoners to share his ambrosia.

The entheogen is believed to offer godlike powers in many traditional tales, including immortality. The failure of Gilgamesh in retrieving the plant of immortality from beneath the waters teaches that the blissful state cannot be taken by force or guile: When Gilgamesh lay on the bank, exhausted from his heroic effort, the serpent came and ate the plant.

Another attempt at subverting the natural order is told in a (according to some) strangely metamorphosed myth, in which natural roles have been reversed to suit the Hellenic world-view. The Alexandrian Apollodorus relates how Gaia (spelled “Ge” in the following passage), Mother Earth herself, has supported the Titans in their battle with the Olympian intruders. The Giants have been defeated:

When Ge learned of this, she sought a drug that would prevent their destruction even by mortal hands. But Zeus barred the appearance of Eos (the Dawn), Selene (the Moon), and Helios (the Sun), and chopped up the drug himself before Ge could find it.[65]

The legends of the Assassins had much to do with the training and instruction of Nizari fida’is, famed for their public missions during which they often gave their lives to eliminate adversaries.

The tales of the fida'is' training, collected from anti-Ismaili historians and orientalist writers, were confounded and compiled in Marco Polo's account, in which he described a “secret garden of paradise”.[citation needed] After being drugged, the Ismaili devotees were said to be taken to a paradise-like garden filled with attractive young maidens and beautiful plants, in which these fida'is would awaken. Here, they were told by an old man that they were witnessing their place in Paradise and that should they wish to return to this garden permanently, they must serve the Nizari cause.[66] So went the tale of the “Old Man in the Mountain”, assembled by Marco Polo and accepted by Joseph von Hammer-Purgstall (1774–1856), a prominent orientalist writer responsible for much of the spread of this legend. Until the 1930s, von Hammer's retelling of the Assassin legends served as the standard account of the Nizaris across Europe.[citation needed]

The neologism entheogen was coined in 1979 by a group of ethnobotanists and scholars of mythology (Carl A. P. Ruck, Jeremy Bigwood, Danny Staples, Richard Evans Schultes, Jonathan Ott and R. Gordon Wasson). The term is derived from two words of Ancient Greek, ἔνθεος (éntheos) and γενέσθαι (genésthai). The adjective entheos translates to English as “full of the god, inspired, possessed”, and is the root of the English word “enthusiasm.” The Greeks used it as a term of praise for poets and other artists. Genesthai means “to come into being.” Thus, an entheogen is a drug that causes one to become inspired or to experience feelings of inspiration, often in a religious or “spiritual” manner.[67]

Entheogen was coined as a replacement for the terms hallucinogen and psychedelic. Hallucinogen was popularized by Aldous Huxley’s experiences with mescaline, which were published as The Doors of Perception in 1954. Psychedelic, in contrast, is a Greek neologism for “mind manifest”, and was coined by psychiatrist Humphry Osmond; Huxley was a volunteer in experiments Osmond was conducting on mescaline.

Ruck et al. argued that the term hallucinogen was inappropriate owing to its etymological relationship to words relating to delirium and insanity. The term psychedelic was also seen as problematic, owing to the similarity in sound to words pertaining to psychosis and also due to the fact that it had become irreversibly associated with various connotations of 1960s pop culture. In modern usage entheogen may be used synonymously with these terms, or it may be chosen to contrast with recreational use of the same drugs. The meanings of the term entheogen were formally defined by Ruck et al.:

In a strict sense, only those vision-producing drugs that can be shown to have figured in shamanic or religious rites would be designated entheogens, but in a looser sense, the term could also be applied to other drugs, both natural and artificial, that induce alterations of consciousness similar to those documented for ritual ingestion of traditional entheogens.

Many works of literature have described entheogen use; some of those are:

Source: Entheogen – Wikipedia

Ethereum Mining Guide for AMD and NVidia GPUs – Windows …

I've been mining ethereum for quite some time now. I have a collection of many RX 4xx/5xx GPUs and many Nvidia GTX 1060/1070 GPUs, I have helped people on various forums, and I've had a lot of customers, from which I've gained all of this knowledge. I have written this guide to help you set up your own GPU for mining purposes.

If my guide helped you please send a donation to:

Ethereum Address: 0xC3935595660f16A6549EFd3263673C6a2fb25327

If you need help setting up the BIOS for your GPU, send me your original BIOS through Skype (my ID is: bijac666), but try the GPU Bios Guide first: it will teach you how to BIOS-mod with just one click, and in 99% of cases it will mod your BIOS the proper way; if not, contact me.

Please follow all the steps described in this guide in order! This is a collection of my experiences with fixing various mining problems. I have helped over 500 people with their problems, and this guide should have an answer to most, probably all, of them.

IMPORTANT: You MUST have the latest motherboard BIOS installed. Check your motherboard's BIOS update history to see what was changed between your BIOS version and the latest available one; if there are no major changes, you can skip this step. Chipset, PCI-E and GPU support changes are the most important ones, and you NEED to upgrade your motherboard BIOS if they came out.

IMPORTANT: Never use WiFi to connect your mining rigs; from my experience that can cause big trouble: higher ping, random disconnects, WiFi freezing at the start of mining, rejected shares and so on. For example, my WiFi adapter would stop working if it was directly connected to the mining rig, but if I used a USB extender so I could place the WiFi adapter away from the rig (1-2 m), then it would work, as if the rig itself disrupted the WiFi signal, as strange as it sounds. If you want to use WiFi, use the 5 GHz band. The more mining rigs you have, the more trouble with WiFi you will get.

RAM

CPU

PSU

Risers

Disk

You can get Windows 10 Pro for FREE at Microsoft's official website. You can download their tool for making a bootable USB stick (if you are doing this on a PC that already has original Windows on it), or download the Windows 10 ISO from their site and make yourself a bootable USB. Install Windows on your SSD; you will never have to pay for the license if you don't want to. Still don't connect any GPUs to your mining rig! The first thing we want to do is optimize Windows for our own mining purposes. THIS IS THE MOST IMPORTANT PART THAT PEOPLE GET WRONG! If you want original activated Windows 10, buy a key at Kinguin. The only difference between the OEM version and the Retail version is that the OEM key can be activated on only 1 PC (you can't reactivate the key on another PC); this version is used by most miners.

Most people think that Linux is the better option for 24/7 mining: it is a more stable operating system, it is lightweight so it should run better, and it can support more GPUs (Windows 10 now supports 12 GPUs). It would seem logical, but it isn't:

Because Linux has various problems, such as:

If you have downloaded Windows 10 from the official Microsoft website (never use torrents for this) then your Windows 10 pro is almost up to date.

Download the latest .NET Framework 3.5 offline installer; it is required to run Polaris 1.6 and OverdriveNtool. Windows 10 comes with the 4.x Framework, but that one will not work with Polaris and OverdriveNtool; you need to install .NET 3.5 manually. Insert the Windows 10 USB stick into the PC (the one you used to install Windows) and set the USB disc drive as the source for installing .NET 3.5. Here is the guide on how to do it; it's very simple.
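If you prefer the command line, the same offline .NET 3.5 install can be done with DISM, pointing it at the Windows install media as the source (a sketch: D: is an assumption for the USB stick's drive letter, so substitute your own):

```shell
:: Run from an elevated Command Prompt; D: is assumed to be the Windows 10 USB stick
DISM /Online /Enable-Feature /FeatureName:NetFx3 /All /LimitAccess /Source:D:\sources\sxs
```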

Download LATEST Drivers for your motherboard, especially latest Chipset driver. This is very important.

Now, after Windows is set up properly, download a tool called DDU. That tool will uninstall your current driver (even your integrated GPU's) and block Windows from automatically installing GPU drivers. That's important so that Windows does not install an outdated driver! It will ask you to run in safe mode, but that is not necessary. When you run the program, just click on Clean and restart. We want to manually download and install the right drivers.

Now turn off your PC and connect only ONE GPU.

IMPORTANT: Starting with the AMD Crimson 17.10 driver (and all drivers released after it), AMD added a mining mode to the driver and enabled up to 12 AMD GPUs to run on Windows 10.

AMD has released Radeon Software Adrenaline Edition; download the latest version of it. It will improve the hashrate on some cards and in general give you the best possible hashrate on all RX 4xx and 5xx cards.

In rare cases it's possible that you will get better results with the Beta Blockchain Driver, but that driver only supports 8 AMD GPUs, so please try the Adrenaline edition first (with the Blockchain driver you will not need to change the GPUs to Compute mode; that is the default).

If the Adrenaline or Blockchain driver is not working, your last hope is the latest Crimson ReLive driver release (you need to switch each GPU to Compute mode in Radeon Settings).

At the beginning of the install process go to CUSTOM instead of Express and select ONLY the AMD Display Driver and AMD Settings. During the install, SKIP the installation of ReLive, because we won't need it.

After you have installed the driver, restart your PC. If you've already modified your GPUs before, you may no longer be able to see them. That is most likely a problem with the RX 570 series, and it's very rare with some RX 580 models. The problem comes from the BIOS mod: because it changes how the GPUs work, you will need to patch your drivers to make them work properly, or the driver will just end up disabling or hiding the GPUs (Error 43). This is only needed if you can't see your BIOS-modified GPUs in the Windows Device Manager. Download the Pixel Clock Patcher. Run the program; it should give you a message that the values were patched. After that, restart your PC and you should have properly working modified GPUs.

Once you successfully installed the driver with just one GPU, shut down your PC and plug in all of the other GPUs. After that, when you turn the PC back on it should automatically detect each of them and it will install the drivers for all of them. Just remember that it will take some time (about 5-10 minutes) for all of the GPUs to be detected properly. You can open up the Device manager, to see if all of the GPUs are listed there. Just turn the PC on and wait 5-10 minutes before doing anything, Windows will do its job.

Now after you have all of your GPUs under the right driver, there is one more important step to make.

Open Radeon Settings, go to the Gaming tab and then Global Settings, and switch each GPU's workload from Graphics to Compute (the original guide showed screenshots of Radeon Settings, the Gaming tab and Global Settings here).

Each GPU has its own bios, that tells it how it should work. There are four different memory types that you will encounter on your GPU : Hynix, Elpida, Micron, Samsung

While mining ethereum you will only be using the memory of the GPU, which means that the higher the quality of the memory, the better the hashrate you can get. While testing all of the memory types, I've found that Samsung and Hynix are a little bit better than Elpida and Micron, but the difference is very subtle.

Download a tool called GPU-Z.

This tool allows you to see what memory type your card has as you can see in this picture.

On the green selection you can see the Memory Type; in this example it's Elpida. If you bought your GPUs all at once, they are the same card type and you see that they all have the same memory type, that means they all CAN USE THE SAME BIOS. Exporting the GPU BIOS can be done by clicking on the red circle displayed in the picture above, under the BIOS Version. Now that you have your original BIOS exported, make a backup before going to the next step.

Go to my guide: GPU Bios Mod

IMPORTANT: Always work with the original BIOS of your cards; don't download random BIOSes online, because you can't be sure they were made for your card type. Even if another card is the same model, that does not mean it has the same BIOS. It's very important to work with the original card BIOS to reduce unnecessary risk to a minimum.

First you will need to download a tool for flashing the bios called ATIFlash.

With this tool you can put the custom bios over your current one. Always make a backup for your current bios and store it somewhere safe, you can never know when you are going to need it.

IMPORTANT: Be very careful about which BIOS you flash onto which GPU. I would recommend never having different card types plugged in when you flash, so you don't flash the wrong BIOS to the wrong card by accident (even if this is almost impossible: if you use AtiFlash properly, as explained in this guide, it should give you a warning that you can't flash a specific BIOS because it's a different type than your original card).

Upgraded BIOS

Copy File Path

Run CMD as Administrator

Change Directory to AtiWinFlash
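The screenshotted steps above boil down to a short command-line session. A sketch, assuming AtiWinFlash was unpacked to C:\AtiWinFlash, the modded BIOS is saved there as modded.rom, and the target card is adapter 0 (folder, file name and adapter number are all placeholders; confirm the adapter list first):

```shell
:: From an elevated Command Prompt: list adapters first to confirm the target
cd C:\AtiWinFlash
AtiWinFlash.exe -i
:: Program adapter 0 with the modded BIOS
AtiWinFlash.exe -p 0 modded.rom
```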

Now, after all your GPUs are flashed with the right upgraded BIOS, we can move on to the most important step: the mining software. There are a couple of popular mining programs, depending on the algorithm they work with; the most popular are:

This guide is focused on mining with the Ethash algorithm, so the settings and tutorials in this guide are not optimized for other mining algorithms like CryptoNight. I plan to make another guide, or expand this one, so that you will understand how to optimize your GPUs for the other algorithms.

Claymore 11.8 is currently the best miner for Ethereum, and it comes with a nice option of dual mining with some other altcoins (Decred, Sia) that can boost your profit by around 20-30% for 20% more power draw. Even if you have expensive electricity, the bonus profit is probably worth it.

Claymore software has a fixed fee of 1% when you are mining ethereum, or a 2% fee when you are mining decred. Various problems can happen because of the way the fee works: each hour you are disconnected from your mining process and, for about 1-2 minutes, you mine for the Claymore developers. After that it connects you to your pool again and restarts the mining. Through this constant disconnecting and reconnecting every hour, your GPU cools down and then heats up again, and by doing that you are risking the life of your GPUs. I've heard from many people that after some time one of the GPUs would reset to the default clock settings because of the constant disconnecting/reconnecting, or it would hang and crash the miner or cause it to recreate the DAG file, and you end up losing valuable time. Claymore is really cool software, and I think there could be a better way to support the developers rather than risking our own miner's stability. Using the official Claymore I lost about 3% of my shares compared to using Claymore without the developer fee; everyone can try it for themselves and see the difference.

Recently there has been a good source for the NoFee version that is constantly updated to the newest release, and from my testing I get exactly 1.1% higher hashrate compared to the official Claymore release (calculated by a 24-hour comparison of found shares on the mining pool I'm using, nanopool).

The latest Claymore version brings a straight 0.3-0.5% performance increase compared to previous Claymore versions. This is only for AMD GPUs; there is no effect on the hashrate for Nvidia GPUs.

The comparison tested on 12 RX 570 4GB GPU rig (1-2 MH/s more total hashrate):

You can download Claymore with the developer fee removed: Claymore Ethereum Miner 11.8 No Fee Download. Thanks to d33z0r for the upload.

The Claymore miner's source code is encrypted (if someone had the source code, it would be much easier to remove the developer fee). That's why Windows Defender goes mad when it encounters the Claymore miner: because it cannot tell what the software is doing, it will try to remove it and warn you about a dangerous file.

The best way to disable Windows Defender (it's good to disable it in general, because it can disrupt mining performance or even crash the rig, especially the real-time protection) is to follow these steps:

Opening Local Group Security Policy

Windows Defender Antivirus Disable Option

Turning On the Disable of Windows Defender
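As an alternative to the Group Policy steps above, the same thing can be done from an elevated PowerShell prompt (a sketch: C:\Claymore is a placeholder for wherever you unpack the miner, and some Windows updates re-enable Defender, so re-check it after big updates):

```shell
# Run in an elevated PowerShell window
Set-MpPreference -DisableRealtimeMonitoring $true
# Keep Defender from quarantining the (fee-removed) miner binary
Add-MpPreference -ExclusionPath "C:\Claymore"
```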

Claymore runs through its start.bat file. In the start.bat (you can open it with Notepad) you JUST NEED TO WRITE THE FOLLOWING (NO SETX COMMANDS BEFORE THAT):

EthDcrMiner64.exe -epool yourMiningPool -ewal yourEthAddress -epsw x -dcri 6

-epool is the mining pool you are mining on; it's just a personal preference. Some people like to use nanopool, some like dwarfpool or ethermine; you can use whatever pool you like. Be careful which pool you use: it should be based on your location, because it would make no sense to mine on a European pool if you are in America, due to the high ping. Always use the pool that is close to you (nanopool, dwarfpool, ethermine and others mostly have location-specific pools; you can't miss them, they mostly start with eu, us or asia). After that you write your own ethereum address, which is used to collect your ethereum shares. You can view statistics on the mining pool by searching for your address; for example, if you are using nanopool, you can see your current active statistics at: https://eth.nanopool.org/yourEthereumAddress. For example, using Nanopool:

EthDcrMiner64.exe -epool eth-eu1.nanopool.org:9999 -ewal yourEthAddress -epsw x -dcri 6

Do not add SETX commands at start, they are not needed.

I use nanopool to mine Ethereum, you can use ethermine or dwarfpool also, but ethermine gives most reliable statistics. Go to Chapter 11 to see why I use nanopool.

-ewal is your ethereum wallet address. Be careful, because you always need to enter an ETHEREUM wallet address here, not a bitcoin or any other address. The easiest way to create an ethereum wallet and keep it safe is to use an exchange site like Bitfinex or Bittrex. They offer you high security, and you can use Two-Factor Authentication, which makes them very secure. For big amounts I would recommend an offline wallet like the Trezor Bitcoin Wallet.

Ethereum is mined using only the memory of your GPU, so the GPU's core is almost unaffected by ethereum mining. This makes it possible to use the GPU core for mining some other coin at the same time as you mine ethereum, without affecting its hashrate. Of course, if you mined the dual coin at full intensity, it would affect the ethereum hashrate; that's why we need to optimize the intensity of the dual coin, lowering it to the degree that it does not affect the ethereum hashrate.

DUAL MINING CLAYMORE START.BAT CONFIG:

EthDcrMiner64.exe -epool yourMiningPool -ewal yourEthAddress -epsw x -dpool dualCoinMiningPool -dwal dualCoinWalletAddress -dpsw x -dcri 25

The part before -dwal is the same as for the solo ethereum mining described above. -dwal has the same role as -ewal, except it is for the dual coin's mining pool. I would recommend mining ONLY Decred as a dual coin, because it has the highest efficiency of all of them. As described above, the dual coin uses the GPU's core for mining, and not all dual coins give the same results. For RX 5xx cards the best way is to go with Decred. I use the Supernova decred mining pool. You need to create an account there, and the account name will serve as your decred mining pool address. This gives you one more security improvement, because you don't show people your address, just your account name. On your account you will need to create a worker and give it a name, for example worker1, and leave its password as it is (password). Now, to connect properly to the decred mining pool, you put -dwal supernovaAccountName.supernovaWorkerName
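Putting those pieces together, a dual-mining start.bat might look like the following (a sketch: the wallet, account and worker names are placeholders, and yourDecredPoolAddress stands for the stratum address from the pool's getting-started page):

```shell
:: start.bat for ETH + Decred dual mining (all addresses and names are placeholders)
EthDcrMiner64.exe -epool eth-eu1.nanopool.org:9999 -ewal 0xYourEthereumAddress -epsw x -dpool yourDecredPoolAddress -dwal supernovaAccountName.worker1 -dpsw password -dcri 25
```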

You can create a decred wallet at Bittrex. It's a very good trading site featuring a lot of altcoins, including decred. You can cash out your decred on your account page in Supernova, under My Account -> Edit Account -> Payment Address, where you need to enter your Bittrex address. Then you just need to set the Automatic Payout Threshold to your desired value; I use 0.5 as my payout cap. I convert my mined decred to ethereum on the Bittrex exchange and store my value like that. It's safe if you use 2FA (an authenticator).

IMPORTANT DUAL MINING INFORMATION

As you can see in the dual mining configuration, the last part is -dcri 25. That sets the dual-coin mining intensity, i.e. how much of the GPU core is assigned to that task. Yes, it's needed for solo mining too, and then it needs to be set to 6! This is a very important part, because it's DEPENDENT ON THE GPU SERIES. The only noticeable difference between the RX 570 and RX 580 series is their GPU core. The memory (used for ethereum mining) is almost the same on those cards, so there is basically no difference in the ethereum hashrate, but the big difference comes in the GPU core. The RX 580 series can handle around -dcri 25; don't go above that, because it can reduce your ethereum hashrate. For the RX 570 series the optimal -dcri is around 19-22, for some cards even as low as 13; this needs to be tested by yourself. The proper way is to start with -dcri 10. Then, using your keyboard, press + or - to increase or decrease -dcri by 1, as you will see in the Claymore miner. Going up, you will see the dual-coin hashrate rise; repeat that until you start to see the ethereum hashrate decrease, then reduce -dcri by 3, so you are not pushing the GPU to the limit. On the RX 570 series it's possible to get a higher hashrate on ethereum with dual mining than with solo mining. Optimal for the RX 570 is around -dcri 19; optimal for the RX 580 series is around -dcri 25. For some cards it's possible to go even further, but it's not worth stressing the GPU too much.
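Once each card is tuned with the +/- keys, the found values can be pinned per GPU in start.bat: Claymore accepts a comma-separated -dcri list, applied in the same GPU order the miner prints at startup (a sketch; the six values are illustrative for a mixed RX 570/580 rig, and the pool, wallet and account names are placeholders):

```shell
:: Per-GPU dual-coin intensity: values map to GPUs in Claymore's listed order
EthDcrMiner64.exe -epool eth-eu1.nanopool.org:9999 -ewal 0xYourEthereumAddress -epsw x -dpool yourDecredPoolAddress -dwal supernovaAccountName.worker1 -dpsw password -dcri 19,25,22,19,25,25
```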

This is the most important part of this guide; it's very important for you to learn the right way of overclocking and undervolting to optimize each GPU as much as possible.

Now, with your GPUs at their default settings, we'll be using OverdriveNtool to handle the overclocking, the GPUs' target temperature and the undervolting. There is no other tool that gives you full control of your GPU and the ability to quickly optimize the GPUs; with other tools you can't be 100% sure the overclock/undervolt settings are actually applied. This is special software that gives you FULL access to your AMD GPUs, and it's very easy to use once you know the basics.

This software may seem confusing or complicated at first, but it's very easy to understand. I will explain it through the following picture:

GREEN: this is the target temperature of your GPU. OverdriveNtool will automatically keep your GPU at the desired temperature by increasing or decreasing the fan speed as needed. The optimal value is 60°C. You can check this during mining in Claymore by watching the current fan speed percentage. If the fan speed goes over 70%, increase the target temperature to 65°C, but that should only happen if you have a high room temperature, probably because of no cooling or weak airflow.

PROFILES: these serve to save the current overclock settings for later use. For example, after you turn on your PC, you can automatically load all the overclock settings to the desired GPUs. We will have 1 profile per GPU on your mining rig. First make a new .txt file in the folder in which you have OverdriveNtool.

After that, go to Save As, change "Save as type" to "All Files" and name the script overclock.bat. That way you create a batch file of the same type as Claymore's start.bat, and it will work in a very similar way.

Now after that open the overclock.bat file with notepad and write in the following:

OverdriveNTool.exe -r1 -p1gpu1 -r2 -p2gpu2 -r3 -p3gpu3 -r4 -p4gpu4 -r5 -p5gpu5 -r6 -p6gpu6

As you can see in the following picture:

This batch script will run OverdriveNtool.exe and set each GPU (-p) to a predefined profile (the profile name).

If you have 10 or more GPUs, you need to use double digits to number them (-p01, -p02, ..., -p11, -p12 and -r01, -r02, ..., -r11, -r12), or else GPUs 10-12 won't be recognized.
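For example, on a hypothetical 12-GPU rig the overclock.bat line becomes (the profile names gpu01 to gpu12 are whatever you saved your profiles as):

```shell
:: Two-digit -r/-p indices so GPUs 10-12 are recognized
OverdriveNTool.exe -r01 -p01gpu01 -r02 -p02gpu02 -r03 -p03gpu03 -r04 -p04gpu04 -r05 -p05gpu05 -r06 -p06gpu06 -r07 -p07gpu07 -r08 -p08gpu08 -r09 -p09gpu09 -r10 -p10gpu10 -r11 -p11gpu11 -r12 -p12gpu12
```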

Careful: as you can see in the displayed image, in my case I have 7 GPUs enabled on this mining rig. The first one is an INTEGRATED GPU and its ID is -p0; all the others are mining GPUs (p1...p6). So if your integrated GPU is disabled, or for some reason you use a motherboard that does not have one, then your mining GPU IDs start from p0. You can see the GPUs' order as displayed in the picture below; the GPU order in OverdriveNtool is the same as in GPU-Z and Claymore 11.8.

Now make 6 new profiles and name them gpu1, gpu2 ... gpu6; each profile will represent the GPU it's attached to, so, for example, we apply the profile gpu1 to the -p1 GPU, and so on. You need to make as many profiles as you have mining GPUs (all GPUs except the integrated one).

RED: this part shows you the GPU core's real clock rates and voltages. In other overclocking tools you will only see the last one, in this case 1340 MHz. As you noticed, there are 8 of them (P0, P1 ... P7): these are the GPU core states. By default the GPU switches automatically between those states, depending on how much you use it. We don't want the GPU to switch between states; we want it to run stably at the fixed clock rate we set. To do that, we need to DISABLE all the GPU states except the last one (P7). You can disable every state from P0 to P6 simply by double-clicking on its name (just hover the mouse over P0 and double-click); you will know you succeeded when that state changes colour.

GPU CORE OVERCLOCK/UNDERVOLT: we need to do two things to the GPU core. First, we need to set the P7 clock rate and its voltage. Remember that the GPU core is not used much to mine ethereum; it just helps the memory do the hashrate. The GPU core generates most of the heat on the GPU and uses the most power, so our intention is to push the GPU core down as much as possible to save power and lower the temperature without losing ethereum hashrate, or to lose a little hashrate if we save more on the power cost than the small ethereum hashrate drop costs us. It is highly recommended to have a wattmeter so you can make your own calculations and see which is worth more to you.

In general the optimal clock rate for ALL GPUs is around 1150 MHz. Some RX 570s can even work at around 1100 MHz with no, or a very small, hashrate reduction on ethereum, and that reduces the power draw drastically. Some RX 580s need 1200 MHz to reach the optimum hashrate, but most of them work best at 1150 MHz. In general, never go above 1200 MHz, because the card starts to use much more power; you can verify that with your wattmeter.

For the voltage, it's best to keep it at 850 mV. You can try reducing the voltage to 825 mV or 800 mV if you are going to keep the GPUs at 1100 MHz, but a freeze or crash is possible. The best way is to test your hashrate at 1100 MHz, 1150 MHz and 1200 MHz with 850 mV in all cases, then compare the power draw with the hashrate and calculate what is most profitable for you. In most cases 1150 MHz / 850 mV is optimal.

MEMORY: this works identically to the GPU core, except it's for the memory. This is the holy grail: the most important part of GPU mining, and it's very RANDOM. There are no fixed values that are guaranteed to work 100% on your GPU. There is just one proper way of doing it without risking any problems: we need to disable P0 and P1 by double-clicking on them.

HOW TO SET OVERDRIVENTOOL PROPERLY?

We will need to repeat the process for each GPU individually. It's very important to test it that way, so if you end up getting a crash or reset, you will know exactly at which step it happened and can reverse the crashing settings.

First we will need to test the first mining GPU only, not all at once:

As you can see in the picture, you will need to have the values set exactly like that. Apply the settings first, then click on the Save button near the profile, or else the profile settings won't be applied properly. You now have your first GPU all set and ready for the FINAL STEP.

This is the most important question people want answered, and it's the trickiest one. There are no optimal or universal values: on identical GPUs, the same overclock/undervolt settings don't work the same way. Each GPU is unique and requires individual testing to optimize it properly.

Download a tool called HWinfo64.

Install it and run it in Sensors only mode as displayed in this picture:

After that, scroll all the way down till you see your GPUs; they are located at the end. Now, after you have found the GPUs, select all sensors except Memory Errors and HIDE them (right-click on the sensors and press Hide). After that, you will have something like the image below:

In my case there are 6 AMD GPUs, and I have hidden all the other sensors because they don't interest me. We only want GPU Memory Errors displayed; this will tell you if your GPU is overclocked too much. This is how we will test your GPUs' optimal settings.

Now after you found the optimal value for your GPU you can do the following:

After you have done all that for the FIRST GPU, repeat the process for every other GPU. Always keep an eye on memory errors in HWinfo64, so that you don't have an unstable rig. The rig can work with a bunch of memory errors, but that can cause:

If all of the GPUs on the rig are the same, you can try to apply the profile settings that worked for the first GPU to the next GPU, test whether it works, then adjust the small settings to reduce memory errors if you get them. It's possible that the same GPU with the same settings will crash the PC or cause a freeze; that's why you test one GPU at a time.
