Bitcoin Price Forecast: Three Reasons Why Bitcoin Price Crashed

Daily Bitcoin News Update
A 10% drop in BTC prices over the course of 24 hours is no longer as alarming as it used to be. Perhaps investors are slowly growing complacent about price volatility. But a 10% drop in a matter of minutes? That should certainly send shockwaves across the market, no matter how risk-loving crypto investors get.

This is exactly what happened on Wednesday afternoon, when Bitcoin prices took a sudden, deep plunge, leaving everyone in a tizzy. Three negative events transpired side by side in this short time frame, all of which adversely affected investor sentiment.

The post Bitcoin Price Forecast: Three Reasons Why Bitcoin Price Crashed appeared first on Profit Confidential.

Read more:
Bitcoin Price Forecast: Three Reasons Why Bitcoin Price Crashed

Litecoin Price Prediction: Exchanges Wreak Havoc on LTC Prices

Daily Litecoin News Update
At least two incidents yesterday have sent Litecoin prices tanking below $200.00. Both involved cryptocurrency exchanges, not cryptocurrencies directly. It’s fair to say that fear, uncertainty, and doubt (FUD) was once again pulling the strings on investor sentiment.

First, one of the world’s largest cryptocurrency exchanges, Binance, got infiltrated by hackers on Wednesday. While the attack didn’t do much damage, the fear was enough to force a massive sell-off.

What’s alarming, however, is that the Litecoin dumping primarily occurred on U.S. exchange GDAX and South Korean exchange OKEx.

The post Litecoin Price Prediction: Exchanges Wreak Havoc on LTC Prices appeared first on Profit Confidential.

See the original post:
Litecoin Price Prediction: Exchanges Wreak Havoc on LTC Prices

Monero Price Forecast: Beware of the XMR Price Surge Ahead of MoneroV Hard Fork

Why the Monero Coin Price Is Surging
On a day when the broader crypto market was drenched in red, the Monero price chart was showing green candlesticks. The XMR coin price is riding a tailwind that helped it withstand Wednesday's marketwide correction. In fact, Monero has turned out to be one of the few cryptocurrencies that has managed to recoup most of its losses following the February crypto market crash.

Take a look at the Monero chart below and see how its past one-month performance stacks up against industry leader Bitcoin and its biggest rival ZCash. Monero conspicuously stands out in blue.

The post Monero Price Forecast: Beware of the XMR Price Surge Ahead of MoneroV Hard Fork appeared first on Profit Confidential.

Original post:
Monero Price Forecast: Beware of the XMR Price Surge Ahead of MoneroV Hard Fork

Ethereum Price Forecast: SEC & Japan’s FSA Weigh Heavy on ETH

Ethereum News Update
Ethereum prices, and cryptocurrency values more broadly, slipped again on Thursday as investors mulled regulatory actions in the United States and Japan.

This is surprising, given that U.S. regulators have largely welcomed cryptocurrencies with open arms. They tend to harbor a deep respect for innovation. Even if they worry about the dangers of initial coin offerings (ICOs), those concerns haven’t led to a broad crackdown.

For example, a recent Blockchain Summit featured speakers from the U.S. Commerce Department and the Office of Personnel Management. Both spoke glowingly about the potential of blockchain technology, demonstrating the…

The post Ethereum Price Forecast: SEC & Japan’s FSA Weigh Heavy on ETH appeared first on Profit Confidential.

Read more here:
Ethereum Price Forecast: SEC & Japan’s FSA Weigh Heavy on ETH

High Sea | Definition of High Sea by Merriam-Webster

Ditto the fact that China, Spain, Taiwan, Japan and South Korea take 85 percent of it all on the high seas.

Lewis Bennett, 41, of Delray Beach, is charged with second-degree murder on the high seas in the May disappearance of Isabella Hellman, also 41, the FBI said in an affidavit filed in Miami federal court.

That is because of its remote location on the high seas and also the type of petroleum involved: condensate, a toxic, liquid byproduct of natural gas production.

The track's bombast came through loud and clear, and its arena-worthy guitar swells held their own on the high seas.

After a few years of glamorous cruising on the high seas, the Principality of Monaco sold the vessel to a businessman, who in turn sold it to a private owner in the Caribbean.

North Korean fishing boats have often been found adrift in the high seas, carrying fishermen seeking to meet catch quotas.

This can leave crews desperate and taking risks, which can then result in disaster, with the ships stranded on the high sea and running out of food and water.

Strong winds and high seas were some of the first signs that Hurricane Irma was nearing Key West on Sept. 9, 2017.

View original post here:

High Sea | Definition of High Sea by Merriam-Webster

Space Shuttle – Wikipedia

Space Shuttle (infobox summary)
Function: Crewed orbital launch and reentry
Manufacturer: United Space Alliance; Thiokol/Alliant Techsystems (SRBs); Lockheed Martin/Martin Marietta (ET); Boeing/Rockwell (orbiter)
Country of origin: United States
Project cost: US$210 billion (2010)[1][2][3]
Cost per launch: US$450 million (2011)[4] to US$1.5 billion (2011)[2][3][5][6]
Height: 56.1 m (184.2 ft)
Diameter: 8.7 m (28.5 ft)
Mass: 2,030 t (4,470,000 lb)
Stages: 2
Payload to LEO: 27,500 kg (60,600 lb)
Payload to ISS: 16,050 kg (35,380 lb)
Payload to GTO: 3,810 kg (8,400 lb)
Payload to polar orbit: 12,700 kg (28,000 lb)
Payload to Earth return: 14,400 kg (31,700 lb)[7]
Status: Retired
Launch sites: LC-39, Kennedy Space Center; SLC-6, Vandenberg AFB (unused)
Total launches: 135
Successes: 134 launches and 133 landings
Failures: 2 (Challenger, launch failure, 7 fatalities; Columbia, re-entry failure, 7 fatalities)
First flight: April 12, 1981
Last flight: July 21, 2011
Notable payloads: Tracking and Data Relay Satellites; Spacelab; Hubble Space Telescope; Galileo, Magellan, Ulysses; Compton Gamma Ray Observatory; Mir Docking Module; Chandra X-ray Observatory; ISS components
Boosters (Solid Rocket Boosters): 2[8]; engines: 2 solid; thrust: 12,500 kN (2,800,000 lbf) each, sea level liftoff; specific impulse: 269 seconds (2.64 km/s); burn time: 124 s; fuel: solid (ammonium perchlorate composite propellant)
First stage (orbiter plus external tank): engines: 3 SSMEs located on orbiter; thrust: 5,250 kN (1,180,000 lbf) total, sea level liftoff[9]; specific impulse: 455 seconds (4.46 km/s); burn time: 480 s; fuel: LOX/LH2

The Space Shuttle was a partially reusable low Earth orbital spacecraft system operated by the U.S. National Aeronautics and Space Administration (NASA), as part of the Space Shuttle program. Its official program name was Space Transportation System (STS), taken from a 1969 plan for a system of reusable spacecraft of which it was the only item funded for development.[10] The first of four orbital test flights occurred in 1981, leading to operational flights beginning in 1982. In addition to the prototype whose completion was cancelled, five complete Shuttle systems were built and used on a total of 135 missions from 1981 to 2011, launched from the Kennedy Space Center (KSC) in Florida. Operational missions launched numerous satellites, interplanetary probes, and the Hubble Space Telescope (HST); conducted science experiments in orbit; and participated in construction and servicing of the International Space Station. The Shuttle fleet's total mission time was 1322 days, 19 hours, 21 minutes and 23 seconds.[11]

Shuttle components included the Orbiter Vehicle (OV) with three clustered Rocketdyne RS-25 main engines, a pair of recoverable solid rocket boosters (SRBs), and the expendable external tank (ET) containing liquid hydrogen and liquid oxygen. The Space Shuttle was launched vertically, like a conventional rocket, with the two SRBs operating in parallel with the OV's three main engines, which were fueled from the ET. The SRBs were jettisoned before the vehicle reached orbit, and the ET was jettisoned just before orbit insertion, which used the orbiter's two Orbital Maneuvering System (OMS) engines. At the conclusion of the mission, the orbiter fired its OMS to de-orbit and re-enter the atmosphere. The orbiter then glided as a spaceplane to a runway landing, usually to the Shuttle Landing Facility at Kennedy Space Center, Florida or Rogers Dry Lake in Edwards Air Force Base, California. After landing at Edwards, the orbiter was flown back to the KSC on the Shuttle Carrier Aircraft, a specially modified version of the Boeing 747.

The first orbiter, Enterprise, was built in 1976, used in Approach and Landing Tests and had no orbital capability. Four fully operational orbiters were initially built: Columbia, Challenger, Discovery, and Atlantis. Of these, two were lost in mission accidents: Challenger in 1986 and Columbia in 2003, with a total of fourteen astronauts killed. A fifth operational (and sixth in total) orbiter, Endeavour, was built in 1991 to replace Challenger. The Space Shuttle was retired from service upon the conclusion of Atlantis's final flight on July 21, 2011. The U.S. has since relied on the Russian Soyuz spacecraft to transport supplies and astronauts to the International Space Station.

The Space Shuttle was a partially reusable[12] human spaceflight vehicle capable of reaching low Earth orbit, commissioned and operated by the US National Aeronautics and Space Administration (NASA) from 1981 to 2011. It resulted from shuttle design studies conducted by NASA and the US Air Force in the 1960s and was first proposed for development as part of an ambitious second-generation Space Transportation System (STS) of space vehicles to follow the Apollo program in a September 1969 report of a Space Task Group headed by Vice President Spiro Agnew to President Richard Nixon. Nixon's post-Apollo NASA budgeting withdrew support of all system components except the Shuttle, to which NASA applied the STS name.[10]

The vehicle consisted of a spaceplane for orbit and re-entry, fueled from expendable liquid hydrogen and liquid oxygen tanks, with reusable strap-on solid booster rockets. The first of four orbital test flights occurred in 1981, leading to operational flights beginning in 1982, all launched from the Kennedy Space Center, Florida. The system was retired from service in 2011 after 135 missions,[13] with Atlantis making the final launch of the three-decade Shuttle program on July 8, 2011.[14] The program ended after Atlantis landed at the Kennedy Space Center on July 21, 2011. Major missions included launching numerous satellites and interplanetary probes,[15] conducting space science experiments, and servicing and construction of space stations. The first orbiter vehicle, named Enterprise, was used in the initial Approach and Landing Tests phase but installation of engines, heat shielding, and other equipment necessary for orbital flight was cancelled.[16] A total of five operational orbiters were built, and of these, two were destroyed in accidents.

It was used for orbital space missions by NASA, the US Department of Defense, the European Space Agency, Japan, and Germany.[17][18] The United States funded Shuttle development and operations, except for the Spacelab modules used on the D1 and D2 missions, which were sponsored by Germany.[17][19][20][21][22] SL-J was partially funded by Japan.[18]

At launch, it consisted of the "stack", including the dark orange external tank (ET) (for the first two launches the tank was painted white);[23][24] two white, slender solid rocket boosters (SRBs); and the Orbiter Vehicle, which contained the crew and payload. Some payloads were launched into higher orbits with either of two different upper stages developed for the STS (single-stage Payload Assist Module or two-stage Inertial Upper Stage). The Space Shuttle was stacked in the Vehicle Assembly Building, and the stack mounted on a mobile launch platform held down by four frangible nuts[25] on each SRB, which were detonated at launch.[26]

The Shuttle stack launched vertically like a conventional rocket. It lifted off under the power of its two SRBs and three main engines, which were fueled by liquid hydrogen and liquid oxygen from the ET. The Space Shuttle had a two-stage ascent. The SRBs provided additional thrust during liftoff and first-stage flight. About two minutes after liftoff, frangible nuts were fired, releasing the SRBs, which then parachuted into the ocean, to be retrieved by NASA recovery ships for refurbishment and reuse. The orbiter and ET continued to ascend on an increasingly horizontal flight path under power from its main engines. Upon reaching 17,500 mph (7.8 km/s), the speed necessary for low Earth orbit, the main engines were shut down. The ET, attached by two frangible nuts,[27] was then jettisoned to burn up in the atmosphere.[28] After jettisoning the external tank, the orbital maneuvering system (OMS) engines were used to adjust the orbit. The orbiter carried astronauts and payloads such as satellites or space station parts into low Earth orbit, in the Earth's upper atmosphere or thermosphere.[29] Usually, five to seven crew members rode in the orbiter. Two crew members, the commander and pilot, were sufficient for a minimal flight, as in the first four "test" flights, STS-1 through STS-4. The typical payload capacity was about 50,045 pounds (22,700 kg) but could be increased depending on the choice of launch configuration. The orbiter carried its payload in a large cargo bay with doors that opened along the length of its top, a feature which made the Space Shuttle unique among spacecraft. This feature made possible the deployment of large satellites such as the Hubble Space Telescope and also the capture and return of large payloads to Earth.
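As a rough sanity check on the ~17,500 mph figure above, circular orbital speed follows from v = sqrt(mu/r). A minimal sketch using standard textbook constants; the 200 km altitude is an assumed nominal value, not from the article:

```python
import math

# Standard constants (not from the article)
MU_EARTH = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter
R_EARTH = 6.371e6           # m, mean Earth radius
altitude = 200e3            # m, an assumed nominal low-Earth-orbit altitude

# Circular orbital speed: v = sqrt(mu / r)
v = math.sqrt(MU_EARTH / (R_EARTH + altitude))

print(f"{v / 1000:.2f} km/s")        # ~7.8 km/s
print(f"{v * 2.23694:,.0f} mph")     # ~17,400 mph
```

The result lands within about half a percent of the quoted figure, which is quoted to only two significant digits anyway.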

When the orbiter's space mission was complete, it fired its OMS thrusters to drop out of orbit and re-enter the lower atmosphere.[29] During descent, the orbiter passed through different layers of the atmosphere and decelerated from hypersonic speed primarily by aerobraking. In the lower atmosphere and landing phase, it was more like a glider but with reaction control system (RCS) thrusters and fly-by-wire-controlled hydraulically actuated flight surfaces controlling its descent. It landed on a long runway as a conventional aircraft. The aerodynamic shape was a compromise between the demands of radically different speeds and air pressures during re-entry, hypersonic flight, and subsonic atmospheric flight. As a result, the orbiter had a relatively high sink rate at low altitudes, and it transitioned during re-entry from using RCS thrusters at very high altitudes to flight surfaces in the lower atmosphere.

The formal design of what became the Space Shuttle began with the "Phase A" contract design studies issued in the late 1960s. Conceptualization had begun two decades earlier, before the Apollo program of the 1960s. One of the places the concept of a spacecraft returning from space to a horizontal landing originated was within NACA, in 1954, in the form of an aeronautics research experiment later named the X-15. The NACA proposal was submitted by Walter Dornberger.

In 1958, the X-15 concept was further developed into a proposal to launch an X-15 into space, and another X-series spaceplane proposal, the X-20 Dyna-Soar, emerged, along with a variety of aerospace plane concepts and studies. Neil Armstrong was selected to pilot both the X-15 and the X-20. Though the X-20 was not built, another spaceplane similar to it was built several years later and delivered to NASA in January 1966: the HL-10 ("HL" indicated "horizontal landing").

In the mid-1960s, the US Air Force conducted classified studies on next-generation space transportation systems and concluded that semi-reusable designs were the cheapest choice. It proposed a development program with an immediate start on a "Class I" vehicle with expendable boosters, followed by slower development of a "Class II" semi-reusable design and a possible "Class III" fully reusable design later. In 1967, George Mueller held a one-day symposium at NASA headquarters to study the options. Eighty people attended and presented a wide variety of designs, including earlier US Air Force designs such as the X-20 Dyna-Soar.

In 1968, NASA officially began work on what was then known as the Integrated Launch and Re-entry Vehicle (ILRV). At the same time, NASA held a separate Space Shuttle Main Engine (SSME) competition. NASA offices in Houston and Huntsville jointly issued a Request for Proposal (RFP) for ILRV studies to design a spacecraft that could deliver a payload to orbit but also re-enter the atmosphere and fly back to Earth. One of the responses, for example, was a two-stage design featuring a large booster and a small orbiter, called the DC-3, one of several Phase A Shuttle designs. After the "Phase A" studies, Phases B, C, and D progressively evaluated more detailed designs up to 1972. In the final design, the bottom stage consisted of recoverable solid rocket boosters, and the top stage used an expendable external tank.[30]

In 1969, President Richard Nixon decided to support proceeding with Space Shuttle development. A series of development programs and analysis refined the basic design, prior to full development and testing. In August 1973, the X-24B proved that an unpowered spaceplane could re-enter Earth's atmosphere for a horizontal landing.

Across the Atlantic, European ministers met in Belgium in 1973 to authorize Western Europe's manned orbital project and its main contribution to the Space Shuttle: the Spacelab program.[31] Spacelab would provide a multidisciplinary orbital space laboratory and additional space equipment for the Shuttle.[31]

The Space Shuttle was the first operational orbital spacecraft designed for reuse. It carried different payloads to low Earth orbit, provided crew rotation and supplies for the International Space Station (ISS), and performed satellite servicing and repair. The orbiter could also recover satellites and other payloads from orbit and return them to Earth. Each Shuttle was designed for a projected lifespan of 100 launches or ten years of operational life, although this was later extended. The person in charge of designing the STS was Maxime Faget, who had also overseen the Mercury, Gemini, and Apollo spacecraft designs. The crucial factor in the size and shape of the Shuttle orbiter was the requirement that it be able to accommodate the largest planned commercial and military satellites, and have a cross-range recovery capability of over 1,000 miles to meet the requirement for classified USAF missions flying a once-around abort from a launch to a polar orbit. The militarily specified 1,085 nmi (2,009 km; 1,249 mi) cross-range requirement was one of the primary reasons for the Shuttle's large wings, compared to modern commercial designs with very minimal control surfaces and glide capability. Factors involved in opting for solid rockets and an expendable fuel tank included the desire of the Pentagon to obtain a high-capacity payload vehicle for satellite deployment, and the desire of the Nixon administration to reduce the costs of space exploration by developing a spacecraft with reusable components.

Each Space Shuttle was a reusable launch system composed of three main assemblies: the reusable OV, the expendable ET, and the two reusable SRBs.[32] Only the OV entered orbit, shortly after the tank and boosters were jettisoned. The vehicle was launched vertically like a conventional rocket, and the orbiter glided to a horizontal landing like an airplane, after which it was refurbished for reuse. The SRBs parachuted to splashdown in the ocean, where they were towed back to shore and refurbished for later Shuttle missions.

Five operational OVs were built: Columbia (OV-102), Challenger (OV-099), Discovery (OV-103), Atlantis (OV-104), and Endeavour (OV-105). A mock-up, Inspiration, currently stands at the entrance to the Astronaut Hall of Fame. An additional craft, Enterprise (OV-101), was built for atmospheric gliding and landing tests; it was originally intended to be outfitted for orbital operations after the test program, but it was found more economical to upgrade the structural test article STA-099 into the orbiter Challenger (OV-099). Challenger disintegrated 73 seconds after launch in 1986, and Endeavour was built as a replacement from structural spare components. Building Endeavour cost about US$1.7 billion. Columbia broke apart over Texas during re-entry in 2003. A Space Shuttle launch cost around $450 million.[33]

Roger A. Pielke, Jr. has estimated that the Space Shuttle program cost about US$170 billion (2008 dollars) through early 2008; the average cost per flight was about US$1.5 billion.[34] Two missions were paid for by Germany, Spacelab D1 and D2 (D for Deutschland), with a payload control center in Oberpfaffenhofen.[35][36] D1 was the first time that control of a manned STS mission payload was not in U.S. hands.[17]

At times, the orbiter itself was referred to as the Space Shuttle. This was not technically correct as the Space Shuttle was the combination of the orbiter, the external tank, and the two solid rocket boosters. These components, once assembled in the Vehicle Assembly Building originally built to assemble the Apollo Saturn V rocket, were commonly referred to as the "stack".[37]

Responsibility for the Shuttle components was spread among multiple NASA field centers. The Kennedy Space Center was responsible for launch, landing and turnaround operations for equatorial orbits (the only orbit profile actually used in the program), the US Air Force at the Vandenberg Air Force Base was responsible for launch, landing and turnaround operations for polar orbits (though this was never used), the Johnson Space Center served as the central point for all Shuttle operations, the Marshall Space Flight Center was responsible for the main engines, external tank, and solid rocket boosters, the John C. Stennis Space Center handled main engine testing, and the Goddard Space Flight Center managed the global tracking network.[38]

The orbiter resembled a conventional aircraft, with double-delta wings swept 81° at the inner leading edge and 45° at the outer leading edge. Its vertical stabilizer's leading edge was swept back at a 50° angle. The four elevons, mounted at the trailing edge of the wings, the rudder/speed brake, attached at the trailing edge of the stabilizer, and the body flap controlled the orbiter during descent and landing.

The orbiter's 60-foot (18 m) long payload bay, comprising most of the fuselage, could accommodate cylindrical payloads up to 15 feet (4.6 m) in diameter. Information declassified in 2011 showed that these measurements were chosen specifically to accommodate the KH-9 HEXAGON spy satellite operated by the National Reconnaissance Office.[39] Two mostly symmetrical lengthwise payload bay doors, hinged on either side of the bay, comprised its entire top. Payloads were generally loaded horizontally into the bay while the orbiter was standing upright on the launch pad and unloaded vertically in the near-weightless orbital environment by the orbiter's robotic remote manipulator arm (under astronaut control), by EVA astronauts, or under the payloads' own power (as for satellites attached to a rocket "upper stage" for deployment).

Three Space Shuttle Main Engines (SSMEs) were mounted on the orbiter's aft fuselage in a triangular pattern. The engine nozzles could gimbal 10.5 degrees up and down, and 8.5 degrees from side to side during ascent to change the direction of their thrust to steer the Shuttle. The orbiter structure was made primarily from aluminum alloy, although the engine structure was made primarily from titanium alloy.
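A back-of-the-envelope sketch of what the gimbal range implies for steering authority. The per-engine thrust is derived from the combined sea-level SSME figure quoted earlier in this article, and decomposing thrust into axial and lateral components is a deliberate simplification:

```python
import math

# Gimbal limit from the text; thrust derived from the quoted
# 5,250 kN combined sea-level figure for the three SSMEs.
thrust_per_engine_kN = 5250 / 3     # ~1,750 kN per engine at sea level
max_yaw_deg = 8.5                   # side-to-side gimbal limit

# At full yaw gimbal, thrust splits into an axial component (still
# pushing the vehicle forward) and a lateral steering component.
lateral_kN = thrust_per_engine_kN * math.sin(math.radians(max_yaw_deg))
axial_kN = thrust_per_engine_kN * math.cos(math.radians(max_yaw_deg))

print(f"lateral: {lateral_kN:.0f} kN, axial: {axial_kN:.0f} kN")
```

Even at full deflection, almost 99% of the thrust remains axial, which is why vectoring the engines costs little performance while providing substantial steering torque.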

The operational orbiters built were OV-102 Columbia, OV-099 Challenger, OV-103 Discovery, OV-104 Atlantis, and OV-105 Endeavour.[40]

Space Shuttle Endeavour being transported by a Shuttle Carrier Aircraft

An overhead view of Atlantis as it sits atop the Mobile Launcher Platform (MLP) before STS-79. Two Tail Service Masts (TSMs) to either side of the orbiter's tail provide umbilical connections for propellant loading and electrical power.

Water is released onto the mobile launcher platform on Launch Pad 39A at the start of a sound suppression system test in 2004. During launch, 350,000 US gallons (1,300,000 L) of water are poured onto the pad in 41 seconds.[41]

The main function of the Space Shuttle external tank was to supply the liquid oxygen and hydrogen fuel to the main engines. It was also the backbone of the launch vehicle, providing attachment points for the two solid rocket boosters and the orbiter. The external tank was the only part of the Shuttle system that was not reused. Although the external tanks were always discarded, it would have been possible to take them into orbit and re-use them (such as a wet workshop for incorporation into a space station).[28][42]

Two solid rocket boosters (SRBs) each provided 12,500 kN (2,800,000 lbf) of thrust at liftoff,[43] which was 83% of the total thrust at liftoff. The SRBs were jettisoned two minutes after launch at a height of about 46 km (150,000 ft), and then deployed parachutes and landed in the ocean to be recovered.[44] The SRB cases were made of steel about half an inch (13 mm) thick.[45] The solid rocket boosters were re-used many times; the casing used in Ares I engine testing in 2009 consisted of motor cases that had been flown, collectively, on 48 Shuttle missions, including STS-1.[46]
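The 83% figure can be cross-checked against the sea-level thrust numbers quoted elsewhere in this article:

```python
# Cross-check of the "83% of total liftoff thrust" claim, using the
# sea-level thrust figures quoted in the article.
srb_thrust_kN = 2 * 12_500      # two SRBs at 12,500 kN each
ssme_thrust_kN = 5_250          # three main engines, combined

total_kN = srb_thrust_kN + ssme_thrust_kN
fraction = srb_thrust_kN / total_kN

print(f"SRB share of liftoff thrust: {fraction:.1%}")  # ~82.6%
```

The computed 82.6% rounds to the quoted 83%, so the figures are internally consistent.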

Astronauts who have flown on multiple spacecraft report that Shuttle delivers a rougher ride than Apollo or Soyuz.[47][48] The additional vibration is caused by the solid rocket boosters, as solid fuel does not burn as evenly as liquid fuel. The vibration dampens down after the solid rocket boosters have been jettisoned.[49][50]

Two SRBs on the crawler prior to mating with the Shuttle

SRB sections filled with solid propellant being assembled

Orbiter and the external tank, flanked by the two solid rocket boosters

The orbiter could be used in conjunction with a variety of add-ons depending on the mission. These included orbital laboratories (Spacelab, Spacehab), boosters for launching payloads farther into space (Inertial Upper Stage, Payload Assist Module), and other capabilities, such as those provided by the Extended Duration Orbiter, Multi-Purpose Logistics Modules, or Canadarm (RMS). An upper stage called the Transfer Orbit Stage (Orbital Science Corp. TOS-21) was also used once with the orbiter.[51] Other types of systems and racks were part of the modular Spacelab system (pallets, igloo, IPS, etc.), which also supported special missions such as SRTM.[52]

A major component of the Space Shuttle Program was Spacelab, primarily contributed by a consortium of European countries and operated in conjunction with the United States and international partners.[52] Supported by a modular system of pressurized modules, pallets, and systems, Spacelab missions carried out multidisciplinary science, orbital logistics, and international cooperation.[52] More than 29 missions flew, on subjects ranging from astronomy and microgravity to radar and the life sciences.[52] Spacelab hardware also supported missions such as Hubble Space Telescope (HST) servicing and space station resupply.[52] STS-2 and STS-3 provided testing, and the first full mission was Spacelab-1 (STS-9), launched on November 28, 1983.[52]

Spacelab formally began in 1973, after a meeting in Brussels, Belgium, by European heads of state.[31] Within the decade, Spacelab went into orbit and provided Europe and the United States with an orbital workshop and hardware system.[31] International cooperation, science, and exploration were realized on Spacelab.[52]

The Shuttle was one of the earliest craft to use a computerized fly-by-wire digital flight control system, meaning no mechanical or hydraulic linkages connected the pilot's control stick to the control surfaces or reaction control system thrusters. The control algorithm, which used a classical proportional-integral-derivative (PID) approach, was developed and maintained by Honeywell. The fly-by-wire system was composed of four control systems, each addressing a different mission phase: ascent, descent, on-orbit, and aborts. Honeywell is also credited with the design and implementation of the nose wheel steering control algorithm that allowed the orbiter to land safely on Kennedy Space Center's Shuttle runway.
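The passage above mentions a classical PID approach. The Shuttle's actual flight control laws are not public, so the following is a generic, minimal PID loop driving a toy first-order plant; the gains, time step, and plant model are invented for illustration only:

```python
# Minimal, generic PID controller sketch. Gains, time step, and the
# toy plant below are invented for illustration -- they are NOT the
# Shuttle's actual control laws.
def make_pid(kp, ki, kd, dt):
    state = {"integral": 0.0, "prev_error": None}

    def step(setpoint, measured):
        error = setpoint - measured
        state["integral"] += error * dt
        deriv = 0.0
        if state["prev_error"] is not None:
            deriv = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        # Classical PID: weighted sum of error, its integral, its rate
        return kp * error + ki * state["integral"] + kd * deriv

    return step

# Drive a simplistic rate-command plant toward a pitch setpoint of 1.0
pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
pitch = 0.0
for _ in range(2000):
    command = pid(1.0, pitch)
    pitch += command * 0.01     # toy plant: pitch rate follows command

print(round(pitch, 3))          # settles near the 1.0 setpoint
```

The same structure (error-driven feedback, gains tuned per flight phase) underlies classical flight control regardless of the specific vehicle.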

A concern with using digital fly-by-wire systems on the Shuttle was reliability. Considerable research went into the Shuttle computer system. The Shuttle used five identical redundant IBM 32-bit general purpose computers (GPCs), model AP-101, constituting a type of embedded system. Four computers ran specialized software called the Primary Avionics Software System (PASS). A fifth backup computer ran separate software called the Backup Flight System (BFS). Collectively they were called the Data Processing System (DPS).[53][54]

The design goal of the Shuttle's DPS was fail-operational/fail-safe reliability. After a single failure, the Shuttle could still continue the mission. After two failures, it could still land safely.

The four general-purpose computers operated essentially in lockstep, checking each other. If one computer provided a different result than the other three (i.e. the one computer failed), the three functioning computers "voted" it out of the system. This isolated it from vehicle control. If a second computer of the three remaining failed, the two functioning computers voted it out. A very unlikely failure mode would have been where two of the computers produced result A, and two produced result B (a two-two split). In this unlikely case, one group of two was to be picked at random.
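The voting scheme described above can be sketched as follows. This is an illustrative model, not NASA's implementation; in particular, the real system resolved a two-two split by random selection, whereas `max` here simply keeps the first largest group:

```python
# Illustrative majority-voting model of the redundant GPC set: any
# computer whose result disagrees with the majority is voted out and
# excluded from vehicle control. Not NASA's actual implementation.
def vote(outputs):
    """outputs: dict of computer id -> computed result.
    Returns (majority_result, surviving_ids, failed_ids)."""
    tally = {}
    for cid, result in outputs.items():
        tally.setdefault(result, []).append(cid)
    # Keep the result produced by the most computers (ties: first wins;
    # the real system broke a two-two split at random)
    majority_result, survivors = max(tally.items(), key=lambda kv: len(kv[1]))
    failed = [cid for cid in outputs if cid not in survivors]
    return majority_result, survivors, failed

# One computer (GPC-3) produces a divergent result and is voted out
outputs = {"GPC-1": 42.0, "GPC-2": 42.0, "GPC-3": 41.7, "GPC-4": 42.0}
result, survivors, failed = vote(outputs)
print(result, failed)   # 42.0 ['GPC-3']
```

With three survivors the same vote can tolerate a second failure, matching the fail-operational/fail-safe goal stated above.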

The Backup Flight System (BFS) was separately developed software running on the fifth computer, used only if the entire four-computer primary system failed. The BFS was created because although the four primary computers were hardware redundant, they all ran the same software, so a generic software problem could crash all of them. Embedded system avionic software was developed under totally different conditions from public commercial software: the number of code lines was tiny compared to a public commercial software product, changes were only made infrequently and with extensive testing, and many programming and test personnel worked on the small amount of computer code. However, in theory it could have still failed, and the BFS existed for that contingency. While the BFS could run in parallel with PASS, the BFS never engaged to take over control from PASS during any Shuttle mission.

The software for the Shuttle computers was written in a high-level language called HAL/S, somewhat similar to PL/I and specifically designed for a real-time embedded system environment.

The IBM AP-101 computers originally had about 424 kilobytes of magnetic core memory each. The CPU could process about 400,000 instructions per second. They had no hard disk drive, and loaded software from magnetic tape cartridges.

In 1990, the original computers were replaced with an upgraded model, the AP-101S, which had about 2.5 times the memory capacity (about 1 megabyte) and three times the processor speed (about 1.2 million instructions per second). The memory was changed from magnetic core to semiconductor with battery backup.

Early Shuttle missions, starting in November 1983, took along the Grid Compass, arguably one of the first laptop computers. The GRiD was given the name SPOC, for Shuttle Portable Onboard Computer. Use on the Shuttle required both hardware and software modifications, which were incorporated into later versions of the commercial product. It was used to monitor and display the Shuttle's ground position and the path of the next two orbits, to show where the Shuttle had line-of-sight communications with ground stations, and to determine points for location-specific observations of the Earth. The Compass sold poorly, as it cost at least US$8,000, but it offered unmatched performance for its weight and size.[55] NASA was one of its main customers.[56]

During its service life, the Shuttle's Control System never experienced a failure. Many of the lessons learned have been used to design today's high speed control algorithms.[57]

The prototype orbiter Enterprise originally had a flag of the United States on the upper surface of the left wing and the letters "USA" in black on the right wing. The name "Enterprise" was painted in black on the payload bay doors just above the hinge and behind the crew module; on the aft end of the payload bay doors was the NASA "worm" logotype in gray. Underneath the rear of the payload bay doors on the side of the fuselage just above the wing is the text "United States" in black with a flag of the United States ahead of it.

The first operational orbiter, Columbia, originally had the same markings as Enterprise, although the letters "USA" on the right wing were slightly larger and spaced farther apart. Columbia also had black markings which Enterprise lacked on its forward RCS module, around the cockpit windows, and on its vertical stabilizer, and had distinctive black "chines" on the forward part of its upper wing surfaces, which none of the other orbiters had.

Challenger established a modified marking scheme for the shuttle fleet that was matched by Discovery, Atlantis and Endeavour. The letters "USA" in black above an American flag were displayed on the left wing, with the NASA "worm" logotype in gray centered above the name of the orbiter in black on the right wing. The name of the orbiter was inscribed not on the payload bay doors, but on the forward fuselage just below and behind the cockpit windows. This would make the name visible when the shuttle was photographed in orbit with the doors open.

In 1983, Enterprise had its wing markings changed to match Challenger, and the NASA "worm" logotype on the aft end of the payload bay doors was changed from gray to black. Some black markings were added to the nose, cockpit windows and vertical tail to more closely resemble the flight vehicles, but the name "Enterprise" remained on the payload bay doors as there was never any need to open them. Columbia had its name moved to the forward fuselage to match the other flight vehicles after STS-61-C, during the 1986–88 hiatus when the shuttle fleet was grounded following the loss of Challenger, but retained its original wing markings until its last overhaul (after STS-93), and its unique black wing "chines" for the remainder of its operational life.

Beginning in 1998, the flight vehicles' markings were modified to incorporate the NASA "meatball" insignia. The "worm" logotype, which the agency had phased out, was removed from the payload bay doors and the "meatball" insignia was added aft of the "United States" text on the lower aft fuselage. The "meatball" insignia was also displayed on the left wing, with the American flag above the orbiter's name, left-justified rather than centered, on the right wing. The three surviving flight vehicles, Discovery, Atlantis and Endeavour, still bear these markings as museum displays. Enterprise became the property of the Smithsonian Institution in 1985 and was no longer under NASA's control when these changes were made, hence the prototype orbiter still has its 1983 markings and still has its name on the payload bay doors.

The Space Shuttle was initially developed in the 1970s,[58] but received many upgrades and modifications afterward to improve performance, reliability and safety. Internally, the Shuttle remained largely similar to the original design, with the exception of the improved avionics computers. In addition to the computer upgrades, the original analog primary flight instruments were replaced with modern full-color, flat-panel display screens, called a glass cockpit, which is similar to those of contemporary airliners. To facilitate construction of ISS, the internal airlocks of each orbiter except Columbia[59] were replaced with external docking systems to allow for a greater amount of cargo to be stored on the Shuttle's mid-deck during station resupply missions.

The Space Shuttle Main Engines (SSMEs) received several improvements to enhance reliability and power. This explains phrases such as "main engines throttling up to 104 percent." It did not mean the engines were being run over a safe limit: 100 percent was the originally specified power level, and during the lengthy development program Rocketdyne determined the engine was capable of safe, reliable operation at 104 percent of the originally specified thrust. NASA could have rescaled the numbers so that 104 percent became the new 100 percent, but that would have required revising much previous documentation and software, so the 104 percent figure was retained. SSME upgrades were denoted as "block numbers", such as Block I, Block II, and Block IIA; the upgrades improved engine reliability, maintainability and performance. The 109 percent thrust level was finally reached in flight hardware with the Block II engines in 2001. The normal maximum throttle was 104 percent, with 106 or 109 percent reserved for mission aborts.

For the first two missions, STS-1 and STS-2, the external tank was painted white to protect the insulation that covers much of the tank, but improvements and testing showed that the paint was not required. The weight saved by not painting the tank resulted in an increase in payload capability to orbit.[60] Additional weight was saved by removing some of the internal "stringers" in the hydrogen tank that proved unnecessary. The resulting "lightweight external tank" was first flown on STS-6[61] and used on the majority of Shuttle missions. STS-91 saw the first flight of the "super lightweight external tank". This version of the tank was made of the 2195 aluminum-lithium alloy. It weighed 3.4 metric tons (7,500 lb) less than the last run of lightweight tanks, allowing the Shuttle to deliver heavy elements to the ISS's high-inclination orbit.[61] As the Shuttle was always operated with a crew, each of these improvements was first flown on operational mission flights.

The solid rocket boosters underwent improvements as well. Design engineers added a third O-ring seal to the joints between the segments after the 1986 Space Shuttle Challenger disaster.

Several other SRB improvements were planned to improve performance and safety, but never came to fruition. These culminated in the considerably simpler, lower-cost, probably safer and better-performing Advanced Solid Rocket Booster (ASRB). These rockets entered production in the early to mid-1990s to support the Space Station, but were later canceled to save money after the expenditure of $2.2 billion.[62] The loss of the ASRB program resulted in the development of the Super Lightweight External Tank (SLWT), which provided some of the increased payload capability, while not providing any of the safety improvements. In addition, the US Air Force developed its own much lighter single-piece SRB design using a filament-wound system, but this too was canceled.

STS-70 was delayed in 1995 when woodpeckers bored holes in the foam insulation of Discovery's external tank. Since then, NASA has installed commercial plastic owl decoys and inflatable owl balloons, which had to be removed prior to launch.[63] The delicate foam insulation had long been a cause of damage to the orbiter's Thermal Protection System, its tile heat shield and heat wrap, and was ultimately the primary cause of the Space Shuttle Columbia disaster on February 1, 2003. NASA nevertheless remained confident that this damage would not jeopardize the completion of the International Space Station (ISS) in the projected time allotted.

A cargo-only, unmanned variant of the Shuttle had been variously proposed and rejected since the 1980s. Called the Shuttle-C, it would have traded reusability for cargo capability, with large potential savings from reusing technology developed for the Space Shuttle. Another proposal was to convert the payload bay into a passenger area, with versions ranging from 30 to 74 seats, three days in orbit, and a cost of US$1.5 million per seat.[64]

On the first four Shuttle missions, astronauts wore modified US Air Force high-altitude full-pressure suits, which included a full-pressure helmet during ascent and descent. From the fifth flight, STS-5, until the loss of Challenger, one-piece light-blue Nomex flight suits and partial-pressure helmets were worn. A less-bulky, partial-pressure version of the high-altitude pressure suits with a helmet was reinstated when Shuttle flights resumed in 1988. The Launch-Entry Suit ended its service life in late 1995 and was replaced by the full-pressure Advanced Crew Escape Suit (ACES), which resembled the Gemini space suit in design but retained the orange color of the Launch-Entry Suit.

To extend the duration that orbiters could stay docked at the ISS, the Station-to-Shuttle Power Transfer System (SSPTS) was installed. The SSPTS allowed these orbiters to use power provided by the ISS to preserve their consumables. The SSPTS was first used successfully on STS-118.


All Space Shuttle missions were launched from Kennedy Space Center (KSC). The weather criteria used for launch included, but were not limited to: precipitation, temperatures, cloud cover, lightning forecast, wind, and humidity.[70] The Shuttle was not launched under conditions where it could have been struck by lightning. Aircraft are often struck by lightning with no adverse effects because the electricity of the strike is dissipated through its conductive structure and the aircraft is not electrically grounded. Like most jet airliners, the Shuttle was mainly constructed of conductive aluminum, which would normally shield and protect the internal systems. However, upon liftoff the Shuttle sent out a long exhaust plume as it ascended, and this plume could have triggered lightning by providing a current path to ground. The NASA Anvil Rule for a Shuttle launch stated that an anvil cloud could not appear within a distance of 10 nautical miles.[71] The Shuttle Launch Weather Officer monitored conditions until the final decision to scrub a launch was announced. In addition, the weather conditions had to be acceptable at one of the Transatlantic Abort Landing sites (one of several Space Shuttle abort modes) to launch as well as the solid rocket booster recovery area.[70][72] While the Shuttle might have safely endured a lightning strike, a similar strike caused problems on Apollo 12, so for safety NASA chose not to launch the Shuttle if lightning was possible (NPR8715.5).

Historically, the Shuttle was not launched if its flight would run from December into January (a year-end rollover, or YERO). Its flight software, designed in the 1970s, was not built to handle a change of year, which would have required resetting the orbiter's computers mid-mission and could have caused a glitch while in orbit. In 2007, NASA engineers devised a solution so Shuttle flights could cross the year-end boundary.[73]
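The source does not spell out the exact failure mechanism, but one plausible illustration of a year-end rollover bug is software that tracks time as a day-of-year counter (the function and figures below are hypothetical, for illustration only):

```python
# Hypothetical sketch (not flight code) of a year-end rollover bug:
# software keeping time as a day-of-year counter sees time appear to
# run backwards when day 365 rolls over to day 1.
def elapsed_days(start_doy, current_doy, days_in_year=365):
    """Elapsed mission days across at most one year boundary."""
    naive = current_doy - start_doy
    # The fix: detect the rollover and add back the year length.
    return naive if naive >= 0 else naive + days_in_year

# Launch on day 360, now day 3 of the new year: naive subtraction
# yields a negative elapsed time, the kind of glitch described above.
print(3 - 360)               # -> -357 (the glitch)
print(elapsed_days(360, 3))  # -> 8 (corrected)
```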

After the final hold in the countdown at T-minus 9 minutes, the Shuttle went through its final preparations for launch, and the countdown was automatically controlled by the Ground Launch Sequencer (GLS), software at the Launch Control Center, which stopped the count if it sensed a critical problem with any of the Shuttle's onboard systems. The GLS handed off the count to the Shuttle's on-board computers at T minus 31 seconds, in a process called auto sequence start.

At T-minus 16 seconds, the massive sound suppression system (SPS) began to drench the Mobile Launcher Platform (MLP) and SRB trenches with 300,000 US gallons (1,100 m³) of water to protect the orbiter from damage by acoustical energy and rocket exhaust reflected from the flame trench and MLP during liftoff.[74][75]

At T-minus 10 seconds, hydrogen igniters were activated under each engine bell to quell the stagnant gas inside the cones before ignition. Failure to burn these gases could trip the onboard sensors and create the possibility of an overpressure and explosion of the vehicle during the firing phase. The main engine turbopumps also began charging the combustion chambers with liquid hydrogen and liquid oxygen at this time. The computers reciprocated this action by allowing the redundant computer systems to begin the firing phase.

The three main engines (SSMEs) started at T-6.6 seconds, ignited sequentially by the Shuttle's general purpose computers (GPCs) at 120-millisecond intervals. All three SSMEs were required to reach 90% rated thrust within three seconds, otherwise the onboard computers would initiate an RSLS abort. If all three engines indicated nominal performance by T-3 seconds, they were commanded to gimbal to liftoff configuration and the command would be issued to arm the SRBs for ignition at T-0.[76] Between T-6.6 seconds and T-3 seconds, while the SSMEs were firing but the SRBs were still bolted to the pad, the offset thrust caused the entire launch stack (boosters, tank and orbiter) to pitch down 650 mm (25.5 in), measured at the tip of the external tank. The three-second delay after confirmation of SSME operation allowed the stack to return to nearly vertical. At T-0 seconds, the eight frangible nuts holding the SRBs to the pad were detonated, the SSMEs were commanded to 100% throttle, and the SRBs were ignited. By T+0.23 seconds, the SRBs had built up enough thrust for liftoff to commence, and they reached maximum chamber pressure by T+0.6 seconds.[77] The Johnson Space Center's Mission Control Center assumed control of the flight once the SRBs had cleared the launch tower.
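The pre-liftoff thrust check described above can be sketched as follows (a hypothetical simplification; the real RSLS logic monitored many more parameters than thrust alone):

```python
# Hypothetical sketch of the RSLS health check: all three SSMEs must
# report at least 90% rated thrust by T-3 seconds, otherwise the pad
# abort is commanded and the SRBs are never armed.
def rsls_check(thrust_percentages, threshold=90.0):
    """Return 'ARM SRBs' if every engine meets the thrust threshold,
    else 'RSLS ABORT'. Inputs are percent of rated thrust at T-3 s."""
    if all(t >= threshold for t in thrust_percentages):
        return "ARM SRBs"
    return "RSLS ABORT"

print(rsls_check([100.2, 99.8, 100.1]))  # -> ARM SRBs
# One engine failing to come up to thrust stops the count:
print(rsls_check([100.2, 61.0, 100.1]))  # -> RSLS ABORT
```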

Shortly after liftoff, the Shuttle's main engines were throttled up to 104.5% and the vehicle began a combined roll, pitch and yaw maneuver that placed it onto the correct heading (azimuth) for the planned orbital inclination and in a heads-down attitude with wings level; the Shuttle flew upside down during the ascent phase. This orientation allowed a trim angle of attack that was favorable for aerodynamic loads during the region of high dynamic pressure, resulting in a net positive load factor, as well as providing the flight crew with a view of the horizon as a visual reference. The vehicle climbed in a progressively flattening arc, accelerating as the mass of the SRBs and main tank decreased. Achieving low orbit requires much more horizontal than vertical acceleration. This was not visually obvious, since the vehicle rose vertically and was out of sight for most of the horizontal acceleration. The near-circular orbital velocity at the 380-kilometer (236 mi) altitude of the International Space Station is 27,650 km/h (17,180 mph), roughly equivalent to Mach 23 at sea level. As the International Space Station orbits at an inclination of 51.6 degrees, missions going there must use the same orbital inclination in order to rendezvous with the station.
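The quoted orbital velocity can be checked from first principles with the circular-orbit formula v = sqrt(mu/r), using standard values for Earth's gravitational parameter and mean radius (the small discrepancy from the quoted 27,650 km/h is rounding):

```python
# Cross-check of the quoted ISS-altitude orbital velocity using the
# circular-orbit formula v = sqrt(mu / r), with standard constants.
import math

MU_EARTH = 3.986e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m

def circular_velocity_kmh(altitude_m):
    v = math.sqrt(MU_EARTH / (R_EARTH + altitude_m))  # speed in m/s
    return v * 3.6                                    # convert to km/h

# At 380 km altitude: about 27,660 km/h, matching the quoted
# 27,650 km/h to within rounding of the input constants.
print(round(circular_velocity_kmh(380e3)))
```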

Around 30 seconds into ascent, the SSMEs were throttled down (usually to 72%, though this varied) to reduce the maximum aerodynamic forces acting on the Shuttle at a point called Max Q. Additionally, the propellant grain design of the SRBs caused their thrust to drop by about 30% by 50 seconds into ascent. Once the orbiter's guidance verified that Max Q would be within Shuttle structural limits, the main engines were throttled back up to 104.5%; this throttling down and back up was called the "thrust bucket". To maximize performance, the throttle level and timing of the thrust bucket were shaped to bring the Shuttle as close to aerodynamic limits as possible.[78]

At around T+126 seconds, pyrotechnic fasteners released the SRBs and small separation rockets pushed them laterally away from the vehicle; the SRBs parachuted back to the ocean to be reused. The Shuttle then continued accelerating to orbit on the main engines. Acceleration at this point would typically fall to 0.9 g, and the vehicle would take on a somewhat nose-up angle to the horizon, using the main engines to gain and then maintain altitude while it accelerated horizontally towards orbit. At about five and three-quarter minutes into ascent, the orbiter's direct communication links with the ground began to fade, at which point it rolled heads up to reroute its communication links to the Tracking and Data Relay Satellite system.

At about seven and a half minutes into ascent, the mass of the vehicle was low enough that the engines had to be throttled back to limit vehicle acceleration to 3 g (29.4 m/s² or 96.5 ft/s², equivalent to accelerating from zero to 105.9 km/h (65.8 mph) in one second). The Shuttle maintained this acceleration for the next minute, and main engine cut-off (MECO) occurred at about eight and a half minutes after launch.[79] The main engines were shut down before complete depletion of propellant, as running dry would have destroyed the engines. The oxygen supply was terminated before the hydrogen supply, as the SSMEs reacted unfavorably to other shutdown modes. (Liquid oxygen has a tendency to react violently, and supports combustion when it encounters hot engine metal.) A few seconds after MECO, the external tank was released by firing pyrotechnic fasteners.
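The unit conversions behind the 3 g figure check out (my own arithmetic, using standard gravity):

```python
# Sanity check on the acceleration figures above: 3 g in SI units,
# and the equivalent speed gained per second in km/h and mph.
G0 = 9.80665                      # standard gravity, m/s^2

accel = 3 * G0                    # 29.4 m/s^2
kmh_per_s = accel * 3.6           # 105.9 km/h gained each second
mph_per_s = kmh_per_s / 1.609344  # 65.8 mph gained each second
print(round(accel, 1), round(kmh_per_s, 1), round(mph_per_s, 1))
```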

At this point the Shuttle and external tank were on a slightly suborbital trajectory, coasting up towards apogee. Once at apogee, about half an hour after MECO, the Shuttle's Orbital Maneuvering System (OMS) engines were fired to raise its perigee and achieve orbit, while the external tank fell back into the atmosphere and burned up over the Indian Ocean or the Pacific Ocean depending on launch profile.[65] The sealing action of the tank plumbing and lack of pressure relief systems on the external tank helped it break up in the lower atmosphere. After the foam burned away during re-entry, the heat caused a pressure buildup in the remaining liquid oxygen and hydrogen until the tank exploded. This ensured that any pieces that fell back to Earth were small.

The Shuttle was monitored throughout its ascent by short-range tracking (10 seconds before liftoff through 57 seconds after), medium-range tracking (7 seconds before liftoff through 110 seconds after) and long-range tracking (7 seconds before liftoff through 165 seconds after). Short-range cameras included 22 16 mm cameras on the Mobile Launch Platform and 8 on the Fixed Service Structure, 4 high-speed fixed cameras located on the perimeter of the launch complex, plus an additional 42 fixed cameras with 16 mm motion picture film. Medium-range cameras included remotely operated tracking cameras at the launch complex plus 6 sites along the immediate coast north and south of the launch pad, each with an 800 mm lens and high-speed cameras running 100 frames per second. These cameras ran for only 4 to 10 seconds due to limitations in the amount of film available. Long-range cameras included those mounted on the external tank, SRBs and orbiter itself, which streamed live video back to the ground, providing valuable information about any debris falling during ascent. Long-range tracking cameras with 400-inch film and 200-inch video lenses were operated by a photographer at Playalinda Beach as well as at 9 other sites, from 38 miles north at Ponce Inlet to 23 miles south at Patrick Air Force Base (PAFB), and an additional mobile optical tracking camera was stationed on Merritt Island during launches. A total of 10 HD cameras were used both for ascent information for engineers and for broadcast feeds to networks such as NASA TV and HDNet. The number of cameras was significantly increased, and numerous existing cameras were upgraded, at the recommendation of the Columbia Accident Investigation Board to provide better information about debris during launch. Debris was also tracked using a pair of Weibel continuous-pulse Doppler X-band radars, one aboard the SRB recovery ship MV Liberty Star positioned northeast of the launch pad and another on a ship positioned south of the launch pad.
Additionally, during the first 2 flights following the loss of Columbia and her crew, a pair of NASA WB-57 reconnaissance aircraft equipped with HD video and infrared cameras flew at 60,000 feet (18,000 m) to provide additional views of the launch ascent.[80] Kennedy Space Center also invested nearly $3 million in improvements to the digital video analysis systems in support of debris tracking.[81]

Once in orbit, the Shuttle usually flew at an altitude of 320 km (170 nmi) and occasionally as high as 650 km (350 nmi).[82] In the 1980s and 1990s, many flights involved space science missions on the NASA/ESA Spacelab, or launching various types of satellites and science probes. By the 1990s and 2000s the focus shifted more to servicing the space station, with fewer satellite launches. Most missions involved staying in orbit several days to two weeks, although longer missions were possible with the Extended Duration Orbiter add-on or when attached to a space station.

Almost the entire Space Shuttle re-entry procedure, except for lowering the landing gear and deploying the air data probes, was normally performed under computer control. However, the re-entry could be flown entirely manually if an emergency arose. The approach and landing phase could be controlled by the autopilot, but was usually hand flown.

The vehicle began re-entry by firing the Orbital Maneuvering System engines while flying upside down, backside first, in the opposite direction to orbital motion, for approximately three minutes, which reduced the Shuttle's velocity by about 200 mph (322 km/h). The resultant slowing of the Shuttle lowered its orbital perigee down into the upper atmosphere. The Shuttle then flipped over by pushing its nose down (which was actually "up" relative to the Earth, because it was flying upside down). This OMS firing was done roughly halfway around the globe from the landing site.

The vehicle started encountering more significant air density in the lower thermosphere at about 400,000 ft (120 km), at around Mach 25, or 8,200 m/s (30,000 km/h; 18,000 mph). The vehicle was controlled by a combination of RCS thrusters and control surfaces to fly at a 40-degree nose-up attitude, producing high drag, not only to slow it down to landing speed but also to reduce re-entry heating. As the vehicle encountered progressively denser air, it began a gradual transition from spacecraft to aircraft. In a straight line, its 40-degree nose-up attitude would cause the descent angle to flatten out, or even rise. The vehicle therefore performed a series of four steep S-shaped banking turns, each lasting several minutes, at up to 70 degrees of bank, while still maintaining the 40-degree angle of attack. In this way it dissipated speed sideways rather than upwards. This occurred during the 'hottest' phase of re-entry, when the heat shield glowed red and the G-forces were at their highest. By the end of the last turn, the transition to aircraft was almost complete. The vehicle leveled its wings, lowered its nose into a shallow dive and began its approach to the landing site.

Simulation of the outside of the Shuttle as it heats up to over 1,500 °C during re-entry.

A Space Shuttle model undergoes a wind tunnel test in 1975, simulating the ionized gases that surround a Shuttle as it re-enters the atmosphere.

A computer simulation of high velocity air flow around the Space Shuttle during re-entry.

The orbiter's maximum glide ratio (lift-to-drag ratio) varied considerably with speed, ranging from 1:1 at hypersonic speeds and 2:1 at supersonic speeds up to 4.5:1 at subsonic speeds during approach and landing.[83]
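As a rough cross-check (my own arithmetic, not from the source), a still-air glide range of roughly altitude times L/D at the subsonic ratio of 4.5 is consistent with the orbiter beginning its final approach about 12 km out from roughly 3,000 m altitude:

```python
# Rough cross-check of the subsonic L/D figure: still-air glide range
# is approximately altitude multiplied by the lift-to-drag ratio.
def glide_range_km(altitude_m, lift_to_drag):
    return altitude_m * lift_to_drag / 1000.0

# From 3,000 m with L/D of about 4.5: roughly 13.5 km of glide range,
# consistent with starting the approach about 12 km from the runway.
print(glide_range_km(3000, 4.5))  # -> 13.5
```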

In the lower atmosphere, the orbiter flew much like a conventional glider, except with a much higher descent rate of over 50 m/s (180 km/h; 110 mph), or 9,800 ft/min. At approximately Mach 3, two air data probes, located on the left and right sides of the orbiter's forward lower fuselage, were deployed to sense air pressure related to the vehicle's movement through the atmosphere.

When the approach and landing phase began, the orbiter was at a 3,000 m (9,800 ft) altitude, 12 km (7.5 mi) from the runway. The pilots applied aerodynamic braking to help slow the vehicle. The orbiter's speed was reduced from approximately 682 to 346 km/h (424 to 215 mph) at touchdown (compared with 260 km/h or 160 mph for a jet airliner). The landing gear was deployed while the orbiter was flying at 430 km/h (270 mph). To assist the speed brakes, a 12 m (39 ft) drag chute was deployed either after main gear or nose gear touchdown (depending on the selected chute deploy mode) at about 343 km/h (213 mph). The chute was jettisoned once the orbiter slowed to 110 km/h (68.4 mph).


After landing, the vehicle stayed on the runway for several hours while the orbiter cooled. Teams at the front and rear of the orbiter tested for the presence of hydrogen, hydrazine, monomethylhydrazine, nitrogen tetroxide and ammonia (fuels and by-products of the reaction control system and the orbiter's three APUs). If hydrogen was detected, an emergency would be declared, the orbiter powered down and teams would evacuate the area. A convoy of 25 specially designed vehicles and 150 trained engineers and technicians approached the orbiter. Purge and vent lines were attached to remove toxic gases from fuel lines and the cargo bay about 45 to 60 minutes after landing. A flight surgeon boarded the orbiter for initial medical checks of the crew before disembarking. Once the crew had left the orbiter, responsibility for the vehicle was handed from the Johnson Space Center back to the Kennedy Space Center.[84]

If the mission ended at Edwards Air Force Base in California, White Sands Space Harbor in New Mexico, or any of the runways the orbiter might use in an emergency, the orbiter was loaded atop the Shuttle Carrier Aircraft, a modified 747, for transport back to the Kennedy Space Center, landing at the Shuttle Landing Facility. Once at the Shuttle Landing Facility, the orbiter was towed 2 miles (3.2 km) along a tow-way and access roads normally used by tour buses and KSC employees to the Orbiter Processing Facility, where it began a months-long preparation process for the next mission.[84]

NASA preferred Space Shuttle landings to be at Kennedy Space Center.[85] If weather conditions made landing there unfavorable, the Shuttle could delay its landing until conditions were favorable, touch down at Edwards Air Force Base, California, or use one of the multiple alternate landing sites around the world. A landing at any site other than Kennedy Space Center meant that after touchdown the Shuttle had to be mated to the Shuttle Carrier Aircraft and returned to Cape Canaveral. Space Shuttle Columbia (STS-3) once landed at the White Sands Space Harbor, New Mexico; this was viewed as a last resort, as NASA scientists believed that the sand could potentially damage the Shuttle's exterior.

There were many alternative landing sites that were never used.[86][87]

An example of technical risk analysis for an STS mission is the SPRA iteration 3.1 list of top risk contributors for STS-133.[88][89]

An internal NASA risk assessment study (conducted by the Shuttle Program Safety and Mission Assurance Office at Johnson Space Center) released in late 2010 or early 2011 concluded that the agency had seriously underestimated the level of risk involved in operating the Shuttle. The report assessed that there was a 1 in 9 chance of a catastrophic disaster during the first nine flights of the Shuttle but that safety improvements had later improved the risk ratio to 1 in 90.[90]
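To see how such per-flight odds compound over a program (illustrative arithmetic only; the study's own methodology was far more detailed), assume each flight independently carries probability p of catastrophic failure:

```python
# Illustrative only: with independent per-flight failure probability p,
# the chance of at least one loss in n flights is 1 - (1 - p)^n.
def prob_at_least_one_loss(p, n):
    return 1 - (1 - p) ** n

# Even at the improved 1-in-90 per-flight risk, a notional 100 further
# flights would still carry a roughly two-in-three chance of a loss.
print(round(prob_at_least_one_loss(1 / 90, 100), 2))  # -> 0.67
```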

Below is a list of major events in the Space Shuttle orbiter fleet.

Sources: NASA launch manifest,[94] NASA Space Shuttle archive[95]

On January 28, 1986, Challenger disintegrated 73 seconds after launch due to the failure of the right SRB, killing all seven astronauts on board. The disaster was caused by low-temperature impairment of an O-ring, a mission-critical seal used between segments of the SRB casing. Failure of the O-ring allowed hot combustion gases to escape from between the booster sections and burn through the adjacent external tank, causing it to explode.[96] Repeated warnings from design engineers voicing concerns about the lack of evidence of the O-rings' safety below 53 °F (12 °C) had been ignored by NASA managers.[97]

On February 1, 2003, Columbia disintegrated during re-entry, killing its crew of seven, because of damage to the carbon-carbon leading edge of the wing caused during launch. Ground control engineers had made three separate requests for high-resolution images taken by the Department of Defense that would have provided an understanding of the extent of the damage, while NASA's chief thermal protection system (TPS) engineer requested that astronauts on board Columbia be allowed to leave the vehicle to inspect the damage. NASA managers intervened to stop the Department of Defense's assistance and refused the request for the spacewalk,[98] and thus the feasibility of scenarios for astronaut repair or rescue by Atlantis was not considered by NASA management at the time.[99]

NASA retired the Space Shuttle in 2011, after 30 years of service. The Shuttle was originally conceived of and presented to the public as a "Space Truck", which would, among other things, be used to build a United States space station in low earth orbit in the early 1990s. When the US space station evolved into the International Space Station project, which suffered from long delays and design changes before it could be completed, the retirement of the Space Shuttle was delayed several times until 2011, serving at least 15 years longer than originally planned. Discovery was the first of NASA's three remaining operational Space Shuttles to be retired.[100]

Read this article:

Space Shuttle - Wikipedia

China’s Tiangong-1 space station will crash to Earth within …

China's first space station is expected to come crashing down to Earth within weeks, but scientists have not been able to predict where the 8.5-tonne module will hit.

The US-funded Aerospace Corporation estimates Tiangong-1 will re-enter the atmosphere during the first week of April, give or take a week. The European Space Agency says the module will come down between 24 March and 19 April.

In 2016 China admitted it had lost control of Tiangong-1 and would be unable to perform a controlled re-entry.

The statement from Aerospace said there was a chance that a small amount of debris from the module would survive re-entry and hit the Earth.

"If this should happen, any surviving debris would fall within a region that is a few hundred kilometres in size," said Aerospace, a research organisation that advises government and private enterprise on space flight.

Aerospace warned that the space station might be carrying a highly toxic and corrosive fuel called hydrazine on board.

The report includes a map showing the module is expected to re-enter somewhere between 43° north and 43° south latitude. The chances of re-entry are slightly higher in northern China, the Middle East, central Italy, northern Spain and the northern states of the US, as well as New Zealand, Tasmania, parts of South America and southern Africa.

However, Aerospace insisted the chance of debris hitting anyone living in these nations was tiny: "When considering the worst-case location, the probability that a specific person (i.e., you) will be struck by Tiangong-1 debris is about one million times smaller than the odds of winning the Powerball jackpot."

In the history of spaceflight no known person has ever been harmed by reentering space debris. Only one person has ever been recorded as being hit by a piece of space debris and, fortunately, she was not injured.

Jonathan McDowell, an astrophysicist from Harvard University and space industry enthusiast, also sounded a note of caution. He said fragments from a similar-sized rocket had re-entered the atmosphere and landed in Peru in January. "Every couple of years something like this happens, but Tiangong-1 is big and dense so we need to keep an eye on it," he told the Guardian.

McDowell said Tiangong-1's descent had been speeding up in recent months and it was now falling by about 6km a week, compared with 1.5km a week in October. It was difficult to predict when the module might land because its speed was affected by the constantly changing weather in space, he said.

"It is only in the final week or so that we are going to be able to start speaking about it with more confidence," he said.

"I would guess that a few pieces will survive re-entry. But we will only know where they are going to land after the fact."

The Tiangong-1, or "Heavenly Palace", lab was launched in 2011 and described as a "potent political symbol" of China, part of a scientific push to become a space superpower.

It was used for both manned and unmanned missions and visited by China's first female astronaut, Liu Yang, in 2012.

In 1991 the Soviet Union's 20-tonne Salyut 7 space station crashed to Earth while still docked to another 20-tonne spacecraft called Cosmos 1686. They broke up over Argentina, scattering debris over the town of Capitán Bermúdez.

Nasa's 77-tonne Skylab space station came hurtling to Earth in an almost completely uncontrolled descent in 1979, with some large pieces landing outside Perth in Western Australia.

Transhumanism, Euthanasia and Risk Acceptance The …

Before I begin, I would emphasise that this is a gross simplification: I am narrowly discussing euthanasia only as it relates to contingencies for extreme self-experimentation.

Securing legal rights is of vital significance within the H+ community: bodily autonomy, morphological freedom and cognitive liberty are recurring themes. Something often overlooked (perhaps due to the messy nature of the rhetoric) is the right to terminate oneself under specific circumstances, arguably the strongest indicator of bodily autonomy. I would posit that the right to active euthanasia (i.e. assisted suicide) is a favourable right to secure for future transhumanist liberties, and that risk acceptance is the most viable metric for assessing it. I have an inkling that Libertarian Transhumanists would be prime supporters of assisted suicide, but for Extropians and democratic transhumanists the issue is not clear-cut; I would implore that consideration be weighed sooner rather than later. Primarily, there is an outright morphological-freedom argument to be made, but contextually this correlates to termination under the individual's own calculation of utility and risk acceptance pertaining to extreme experimentation. In simpler terms: if, in the future, people choose to engage in extreme transhuman experimentation to the point of self-destruction, then those people should ensure they have the right to terminate themselves should the process or result become overwhelmingly undesirable (to the individual, barring effects on other parties). Intuitively this is antithetical to the end goals of H+ schools such as Abolitionism and Hedonism, but I suspect the conflict lies with the result, not the process itself.

When engaging with proactionary self-experimentation, risk control is a major concern; however, risk acceptability to the individual is an overlooked component.

Health is not everything to everyone. This may be indicative of why transhumanism splits into so many families. Many transhumanists consider happy > healthy. This is not to say happy and healthy are mutually exclusive, but in cases where the trade-off applies, the risk acceptance should be in the hands of the individual, similar to how we balance cosmetic surgery, in-vivo contraceptives and alcohol/cigarettes (albeit the last is more complex re: addiction, but still pertains to a similar end). Considering this, risk acceptance within transhumanist experimentation (and within euthanasia) is an overlooked and under-stressed element. Precautionary medicine accounts only for the magnitude and probability of risk, not a willingness to accept it. This risk acceptance (whilst not exclusively progressive) is our ethical responsibility regarding the conscious evolution of humans, and I would prefer it remain at the individual's discretion. I would posit this risk acceptability should be extrapolated to basic transhuman rights/technologies, applying it to modifications but ultimately to accountability over the risk of death, even by one's own hand in a worst-case scenario.

On the grounds that transhumanism has an element of conscious evolution, this risk acceptability when facing genetic culs-de-sac should be explored vehemently as a turning point of importance. Control over one's autonomy, including the right to die, should be recognised as exceedingly important, especially when pertaining to future circumstances resulting in unforeseen consequences. Of course, I wouldn't espouse this too facetiously; it is self-evidently complex, and there is a barrage of caveats not included in this post. But currently I don't believe our societal and political infrastructure supports the contingencies needed for realistic, ethical self-experimentation, hence why it is seen as somewhat underground.

The specific unforeseen consequence that sparked this post was the hypothetical testing of a neural lace. Frankly, if I were to engage in preliminary testing (which I am willing to do under certain requirements) and the result were akin to a digital lobotomy, I would want an exit plan. If religious or political beliefs can intercede in an individual's treatment, it seems fitting that philosophical beliefs be extended the same degree of importance. For me, a digital lobotomy is a fate worse than death, but I would be willing to undertake the risk in pursuit of a wider goal (dependent on the probability of failure, of course). Understandably, the medical system, from a utilitarian perspective, can't be expected simply to give resources to people who willingly hurt themselves (or at least those intelligent enough to realise what they are doing). But there is a much greater risk associated with the utilitarian approach to medical treatment on these grounds, as a zero-tolerance policy is unrealistic. If the medical industry can't or won't help, two scarily realistic alternatives emerge:

A) Underground industry, potentially leading to outright dangerous scenarios.
B) Private industry, leading to particularly absurd insurance costs to negate risk management.

The latter is of great importance re: a class divide via tech, but I digress.

Upon reaching this point, my transhumanist ethics come into direct question: for each school of transhumanist thought, ethical arguments can be made. The acceptability of options A and B depends on the school of transhumanism one subscribes to; I can see this hypothetical being useful in deciphering where one's intentions lie. For the sake of discussion, widespread medical acceptance of risk accountability in these circumstances is option C. Intuitively, I can consider A and B a wastage of public and personal utility; however, I believe this may be an entirely too narrow spectrum of thinking. To the democratic transhumanist, option C is desirable, as the overall process involves the average joe having an opportunity to keep up; I would posit this is currently the most synchronous with the existing medical industry (or at least how it is presented in Australia). To the libertarian transhumanist, option A (or B) may be the most ethical path. Setting aside the notion that libertarians to a degree reject government regulation, these options allow for growth outside regulation and legality: zero tolerance is unrealistic, hence underground culture is almost entirely independent, whilst private industry can support growth and ultimately influence law, as opposed to law influencing growth.

However, to the Extropian, I imagine it could be entirely dependent on the context of all options. Options A and B provide an environment closest to a transhuman arms race: they give the individual willing to take the biggest risk the biggest advantage. It seems amenable that Extropians would support whatever provides the human race with the best tech, regardless of political circumstance. A hardcore Extropian may not consider mandatory government upgrades a bad thing if a few years of suffering ultimately improves the human condition undeniably for subsequent generations. I expected transhumanist philosophy to give me grounds for a more decisive argument; it has only served to demonstrate how contextual the argument is. Each transhumanist theory does provide insights into the favourability and probability of potential transhuman rights, but only once enough variables are controlled that the hypothetical becomes practically useless. Only subsequent generations of transhumanist anthropologists can pass judgement on the literal context. We are in the eye of the storm.

To reiterate and conclude: I cannot wholeheartedly agree with any generalisation regarding euthanasia within transhuman rights, but I would press its contextual importance. Predictably, the development of risk acceptability will be an organic process, and it would be wise to avoid any homogenisation of opinion amongst transhumanist philosophies, as this would result in the homogenisation of evolutionary potential. I believe the most useful tool in avoiding this would be for individuals to establish their own metric for risk acceptance and to attempt to secure their own right to die under certain circumstances; realistically, this is the only way to organically grow acceptance of this choice without impacting others.


Nazi eugenics – Wikipedia

Nazi eugenics (German: Nationalsozialistische Rassenhygiene, "National Socialist racial hygiene") were Nazi Germany's racially based social policies that placed the biological improvement of the Aryan race or Germanic "Übermenschen" master race through eugenics at the center of Nazi ideology.[1] In Germany, eugenics were mostly known under the synonymous term racial hygiene. Following the Second World War, both terms effectively vanished and were replaced by Humangenetik (human genetics).

Eugenics research in Germany before and during the Nazi period was similar to that in the United States (particularly California), by which it had been partly inspired. However, its prominence rose sharply under Adolf Hitler's leadership when wealthy Nazi supporters started heavily investing in it. The programs were subsequently shaped to complement Nazi racial policies.[2]

Those humans targeted for destruction under Nazi eugenics policies were largely living in private and state-operated institutions, identified as "life unworthy of life" (German: Lebensunwertes Leben). They included prisoners, "degenerates", dissidents, and people with congenital cognitive and physical disabilities (German: Erbkranken), including people who were "feebleminded", epileptic, schizophrenic, manic-depressive, deaf or blind, or who had cerebral palsy or muscular dystrophy, as well as the homosexual, the idle, the insane, and the weak, all slated for elimination from the chain of heredity. More than 400,000 people were sterilized against their will, while more than 70,000 were killed under Action T4, a euthanasia program.[3][4][5][6] In June 1935, Hitler and his cabinet drew up a list of seven new decrees; number 5 was to speed up the investigations of sterilization.[7]

The early German eugenics movement was led by Wilhelm Schallmayer and Alfred Ploetz.[8][9] Henry Friedlander wrote that although the German and American eugenics movements were similar, the German movement was more centralized and did not contain as many diverse ideas as the American movement.[9] Unlike the American movement, one publication and one society, the German Society for Racial Hygiene, represented all eugenicists.[9]

Edwin Black wrote that after the eugenics movement was well established in the United States, it was spread to Germany. California eugenicists began producing literature promoting eugenics and sterilization and sending it overseas to German scientists and medical professionals.[10] By 1933, California had subjected more people to forceful sterilization than all other U.S. states combined. The forced sterilization program engineered by the Nazis was partly inspired by California's.[2]

In 1927, the Kaiser Wilhelm Institute for Anthropology (KWIA), an organization which concentrated on physical and social anthropology as well as human genetics, was founded in Berlin with significant financial support from the American philanthropic group, the Rockefeller Foundation.[11] German professor of medicine, anthropology and eugenics Eugen Fischer was the director of this organization, a man whose work helped provide the scientific basis for the Nazis' eugenics policies.[12][13] The Rockefeller Foundation even funded some of the research conducted by Josef Mengele before he went to Auschwitz.[10]

Upon returning from Germany in 1934, where more than 5,000 people per month were being forcibly sterilized, the California eugenics leader C. M. Goethe bragged to a colleague:

You will be interested to know that your work has played a powerful part in shaping the opinions of the group of intellectuals who are behind Hitler in this epoch-making program. Everywhere I sensed that their opinions have been tremendously stimulated by American thought... I want you, my dear friend, to carry this thought with you for the rest of your life, that you have really jolted into action a great government of 60 million people.[10]

Eugenics researcher Harry H. Laughlin often bragged that his Model Eugenic Sterilization laws had been implemented in the 1935 Nuremberg racial hygiene laws.[14] In 1936, Laughlin was invited to an award ceremony at Heidelberg University in Germany (scheduled on the anniversary of Hitler's 1934 purge of Jews from the Heidelberg faculty), to receive an honorary doctorate for his work on the "science of racial cleansing". Due to financial limitations, Laughlin was unable to attend the ceremony and had to pick it up from the Rockefeller Institute. Afterwards, he proudly shared the award with his colleagues, remarking that he felt that it symbolized the "common understanding of German and American scientists of the nature of eugenics."[15]

Adolf Hitler read about racial hygiene during his imprisonment in Landsberg Prison.[16]

Hitler believed the nation had become weak, corrupted by dysgenics, the infusion of degenerate elements into its bloodstream.[17]

The racialism and idea of competition, termed social Darwinism in 1944, were discussed by European scientists and also in the Vienna press during the 1920s. Where Hitler picked up the ideas is uncertain. The theory of evolution had been generally accepted in Germany at the time, but this sort of extremism was rare.[18]

In his Second Book, which was unpublished during the Nazi era, Hitler praised Sparta (using ideas perhaps borrowed from Ernst Haeckel),[19] adding that he considered Sparta to be the first "Völkisch State". He endorsed what he perceived to be an early eugenics treatment of deformed children:

Sparta must be regarded as the first Völkisch State. The exposure of the sick, weak, deformed children, in short, their destruction, was more decent and in truth a thousand times more humane than the wretched insanity of our day which preserves the most pathological subject, and indeed at any price, and yet takes the life of a hundred thousand healthy children in consequence of birth control or through abortions, in order subsequently to breed a race of degenerates burdened with illnesses.[20][21]

In organizing their eugenics program the Nazis were inspired by the United States' programs of forced sterilization, especially on the eugenics laws that had been enacted in California.[10]

The Law for the Prevention of Hereditarily Diseased Offspring, enacted on July 14, 1933, allowed the compulsory sterilisation of any citizen who according to the opinion of a "Genetic Health Court" suffered from a list of alleged genetic disorders and required physicians to register every case of hereditary illness known to them, except in women over 45 years of age.[22] Physicians could be fined for failing to comply.

In 1934, the first year of the Law's operation, nearly 4,000 persons appealed against the decisions of sterilization authorities. A total of 3,559 of the appeals failed. By the end of the Nazi regime, over 200 Hereditary Health Courts (Erbgesundheitsgerichte) were created, and under their rulings over 400,000 persons were sterilized against their will.[23]

The Hadamar Clinic was a mental hospital in the German town of Hadamar used by the Nazi-controlled German government as the site of Action T4. The Kaiser Wilhelm Institute of Anthropology, Human Heredity, and Eugenics was founded in 1927. Hartheim Euthanasia Centre was also part of the euthanasia programme where the Nazis killed individuals they deemed disabled. The first method used involved transporting patients by buses in which the engine exhaust gases were passed into the interior of the buses, and so killed the passengers. Gas chambers were developed later and used pure carbon monoxide gas to kill the patients. In its early years, and during the Nazi era, the Clinic was strongly associated with theories of eugenics and racial hygiene advocated by its leading theorists Fritz Lenz and Eugen Fischer, and by its director Otmar von Verschuer. Under Fischer, the sterilization of so-called Rhineland Bastards was undertaken. Grafeneck Castle was one of Nazi Germany's killing centers, and today it is a memorial place dedicated to the victims of the Action T4.[24]

The Law for Simplification of the Health System of July 1934 created Information Centers for Genetic and Racial Hygiene, as well as Health Offices. The law also described procedures for 'denunciation' and 'evaluation' of persons, who were then sent to a Genetic Health Court where sterilization was decided.[25]

Information to determine who was considered 'genetically sick' was gathered from routine information supplied by people to doctor's offices and welfare departments. Standardized questionnaires had been designed by Nazi officials with the help of Dehomag (a subsidiary of IBM in the 1930s), so that the information could be encoded easily onto Hollerith punch cards for fast sorting and counting.[26]

In Hamburg, doctors gave information into a Central Health Passport Archive (circa 1934), under something called the 'Health-Related Total Observation of Life'. This file was to contain reports from doctors, but also courts, insurance companies, sports clubs, the Hitler Youth, the military, the labor service, colleges, etc. Any institution that gave information would get information back in return. In 1940, the Reich Interior Ministry tried to impose a Hamburg-style system on the whole Reich.[27]

After the Nazis passed the Nuremberg Laws in 1935, it became compulsory for both marriage partners to be tested for hereditary diseases in order to preserve the perceived racial purity of the Aryan race. Everyone was encouraged to carefully evaluate his or her prospective marriage partner eugenically during courtship. Members of the SS were cautioned to carefully interview prospective marriage partners to make sure they had no family history of hereditary disease or insanity, but to do this carefully so as not to hurt the feelings of the prospective fiancee and, if it became necessary to reject her for eugenic reasons, to do it tactfully and not cause her any offense.[28]


Illiberal Reformers: Race, Eugenics, and American …

Winner of the 2017 Joseph J. Spengler Best Book Prize, History of Economics Society
Finalist for the 2017 Hayek Prize, The Manhattan Institute
One of Bloomberg View's Great History Books of 2016

"Illiberal Reformers is the perfect title for this slim but vital account of the perils of intellectual arrogance in dealing with explosive social issues."--David Oshinsky, New York Times Book Review

"A deft analysis. . . . [I]nsightful."--Amity Shlaes, Wall Street Journal

"Particularly timely . . . a superlative narrative about a pivotal era of American history."--American Thinker

"Compelling. . . . Leonard reveals the largely forgotten intellectual origins of many current controversies."--Virginia Postrel, Bloomberg View

"Excellent."--Tyler Cowen, Marginal Revolution

"Explosively brilliant."--Jeffrey Tucker, Foundation for Economic Education

"[A] brief, well written book."--Herbert Hovenkamp, The New Rambler

"Elegant and persuasive. . . . Read Leonard."--Deirdre Nansen McCloskey, Reason

"Those puzzled by the ease with which contemporary progressive political movements have turned against liberal values such as free speech will find much material for reflection in Leonard's lucid intellectual history of early twentieth-century progressivism. . . . [Illiberal Reformers] illuminates one phase in the centuries-long American struggle between the quest for liberal values and the impulse to build a godly commonwealth on the back of a strong state."--Walter Russell Mead, Foreign Affairs

"Leonard combines rigorous research with lucid writing, presenting a work that is intellectually sound, relevant, and original."--Joseph Larsen, josephjonlarsen.com

"Illiberal Reformers is a great achievement and an important contribution to the revisionist historical literature."--Steven Hayward, National Review

"Illiberal Reformers is a downright frightening tale of how intellectual arrogance and a belief in one's own superiority leads to callous disregard for individual rights and dignity. Budding social engineers, whether the social justice warriors of the left or the theocratic conservatives of the right, should take note of this past and seriously reckon with it as they grope for state power to implement their messianic visions of the common good. But somehow I have a feeling they'll be too thoroughly convinced of their own moral rectitude to take seriously the lessons of the Progressive Era. Cautionary tales have a way of missing those who need them most."--Matthew Harwood, American Conservative

"To reflect on the significance of the Progressive era, Illiberal Reformers is a must read."--Pierre Lemieux, Regulation

"An excellent book and a cautionary tale for our own times."--Robert Whales, Choice

"Thomas Leonard has crafted an elegant, original, and cleverly argued account of core progressive ideas. Illiberal Reformers is deeply researched, and far ranging in the deployment of primary sources. Leonard has not just recycled material from the voluminous secondary literatures on eugenics, economics, immigration, 'race theory,' labor studies, and Darwinism. Instead he has invariably read key thinkers' publications and quotes from these primary documents, often to devastating effect. The book is a major achievement."--Desmond King, Perspectives on Politics

"One hopes that Leonard's fine volume will put an end to the reflexive habit of many to defend the early liberals, who when it came to people unlike themselves were with rare exception not liberal at all."--Stephen Carter, Bloomberg View

"A very important book that deserves to be read by every economist and academic, particularly those interested in American history, and especially those interested in the history of economic thought and the economics profession."--Patrick Newman, Independent Review

"The work of patient and pathbreaking economists like Leonard has opened up so much critical territory for those studying the history of economic knowledge from other disciplinary vantages. Illiberal Reformers places the consequential alliance between economics and eugenics in the Progressive Era in clear focus and suggests exciting new lines of inquiry for scholars interested in the tangled history of race, state, and market in modern America."--Daniel Platt, Journal of Cultural Economy

"A well-researched and clearly argued work which effectively ties changes in political economy to changes in popular thought, and shows how those changes to thinking effected the very bodies of people living in that society. A very accessible book."--Wesley R. Bishop, Labour-Le Travail

"Illiberal Reformers represents scholarship of the highest order."--Braham Dabscheck, Economic and Labour Relations Review

"Illiberal Reformers is a tour de force."--Leslie Jones, Quarterly Review

"Illiberal Reformers admirably reconstructs the much-repressed 'dark side' of social science progressivism."--Guy Alchon, Labor: Studies in Working-Class History of the Americas

"Illiberal Reformers is a masterly account of the intellectual currents that came to dominate American politics in the first half of the 20th century and, in many respects, dominate it still."--Michael M. Uhlmann, Claremont Review of Books

"In this fascinating book, Thomas C. Leonard explains how many leading progressives came to advocate for race-based immigration restrictions, eugenics, Social Darwinism, unequal pay for women, and even 'protecting women from employment' altogether."--Mark Joseph Stern, SlatePicks

"Illiberal Reformers tells a story that captures the mind, breaks the heart, and turns the stomach."--Art Carden, Cato Journal

"Required reading for anyone interested in the history of economics and U.S. politics."--Eric Scorsone & David Schweikhardt, Journal of Economic Issues

"Leonard's book offers a broad, forceful treatment and will have to be taken seriously by anyone seeking to understand and evaluate progressivism."--Kevin Schmiesing, Catholic Social Science Review

"Thomas Leonard's Illiberal Reformers is a significant contribution to the historiography of the Progressive Era, by one of the finest scholars working in the field."--Marco Cavalieri, Journal of The History of Economic Thought


The Second Amendment, That’s Why. It’s The Answer On Both …

Karen Bleier/AFP/Getty Images

"The Second Amendment."

If you've lived in America, you've heard those words spoken with feeling.

The feeling may have been forceful, even vehement.

"Why? The Second Amendment, that's why."

The same words can be heard uttered in bitterness, as if in blame.

"Why? The Second Amendment, that's why."

Or then again, with reverence, an invocation of the sacred rather like "the Second Coming."

Talk of gun rights and gun control is back on full boil after 17 people were killed in the Parkland, Fla., school shooting, so the conversation turns to the Second Amendment quickly and often.

We are talking, of course, about the Second Amendment to the U.S. Constitution, in the Bill of Rights.

It reads in full:

"A well-regulated militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed."

Simple. And not simple. Assuming it means just what it says, just what does it actually say?

Scholars have parsed the words, and courts and lawyers have argued over their meaning. Historians have debated what was meant by "well-regulated militia" back in 1789.

Some say the framers only meant to protect well-organized militias in the respective states, forerunners of today's National Guard. Others say the framers also intended to shield the guns of individuals, the weapons they would use if those militias were called upon to fight.

Heller brings some clarity

To some extent, the issue was clarified, if not settled, by the Heller decision of the U.S. Supreme Court in 2008. The 5-4 decision held that the Second Amendment meant individuals had an inherent right to own guns for lawful purposes.

Heller applied that standard to overturn a ban on privately held handguns, enacted in the District of Columbia. But the same basic reasoning has also been used to defend the private ownership of AR-15-type rifles such as the one used in Parkland and other mass shootings in recent years.

Congress tried to ban "assault-style" weapons in 1994 but put a 10-year sunset provision in the law. It survived court challenges at the time, but when the 10-year term had passed, the majority control of Congress had also passed from the Democrats, who had enacted the ban, to the Republicans, who let it lapse.

Since then, all efforts to restrict the sale of such weapons have failed. Even relatively bipartisan attempts at strengthening other restrictions, such as the Manchin-Toomey background check expansion bill in 2013, have fallen short of the necessary supermajority needed for passage in the Senate.

It was not, as President Trump alleged Wednesday, because of a lack of "presidential backup." President Barack Obama supported the bill, as Sen. Pat Toomey, a Pennsylvania Republican, pointed out to Trump. Republicans filibustered the bill, which got 54 votes.

In each case, defenders of gun rights have invoked the Second Amendment, the text that casts a long shadow across all discussions of guns in the U.S. At times, it seems to all but end such discussion.

Parkland changes calculus

But now, the tide is running the other way. The Parkland shootings have created a new moment and a new movement, led by teenagers who survived the tragedy and took their protests to social media and beyond.

Suddenly, even Trump is tossing out ideas about keeping students safe, arming teachers, restraining gun sales through background checks and higher age limits, and even banning accessories such as "bump stocks" that enable nonautomatic weapons to fire rapidly and repeatedly.

And it's still unclear what Trump wants exactly. Republicans on Capitol Hill seem flummoxed by Trump's posture.

After Trump's made-for-cable bipartisan meeting at the White House with members of Congress, Texas Republican John Cornyn, a leader on gun issues in the Senate, seemed to scratch his head.

"I think everybody is trying to absorb what we just heard," Cornyn told reporters. "He's a unique president, and I think if he was focused on a specific piece of legislation rather than a grab bag of ideas, then I think he could have a lot of influence, but right now we don't have that."

He added that he didn't think simply because the president says he supports something that it would pass muster with Republicans. "I wouldn't confuse what he said with what can actually pass," Cornyn said. "I don't expect to see any great divergence in terms of people's views on the Second Amendment, for example."

Ah, and there are those two words again: Second Amendment.

If new restrictions are enacted (a prospect far from certain, as Cornyn rightly points out) they will surely be tested in the courts. There, it will be argued that they infringe on the rights of law-abiding citizens to "keep and bear" firearms.

In other words, they will run afoul of, that's right, the Second Amendment.

Anticipating that, some gun control advocates and at least one lifelong Republican want to leap to the ultimate battlement and do it now. They want to repeal, or substantially alter, the formidable amendment itself.

That would seem logical, at least to these advocates. If some 70 percent of Americans want more gun control and the Second Amendment stands in their way, why shouldn't they be able to do something about it?

Someday, it is conceivable, the people and politicians of the United States may be ready for that. But it will need to be a very different United States than we know today.

Why? Because amendments to the Constitution, once ratified, become fully part of the Constitution. Changing or removing them requires a two-stage process that has proved historically difficult.

The Founding Fathers were willing to be edited, it seems, but they did not want it to be easy. So they made the amending process a steep uphill climb, requiring a clear national consensus to succeed.

Why it takes consensus

A proposed amendment to the Constitution must first be passed by Congress with two-thirds majorities in both the House and the Senate.

The two chambers have not achieved such a margin for a newly written amendment to the Constitution in nearly half a century. The last such effort was the 26th Amendment (lowering the voting age nationwide from 21 to 18), and it cleared Capitol Hill in March 1971.

(There has been another amendment added since, in 1992, but it had been written and approved by Congress literally generations ago. More about that curious "zombie" amendment below.)

Even after surviving both chambers of Congress in 1971, the 18-year-old vote amendment still had to survive the second stage of the process, the more difficult stage.

Just like all the other amendments before it, the new voting age had to be ratified by three-fourths of the states. That is currently at least 38 states. Another way to look at it: If as few as 13 states refuse, the amendment stalls.

This arduous process has winnowed out all but a handful of the amendments proposed over the past 230 years. Every Congress produces scores of proposals, sometimes well over 100. The 101st Congress (1989 to 1991) produced 214.

Some deal with obscure concerns; many address facets of the electoral process, especially the Electoral College and the choosing of a president. Many are retreads from earlier sessions of Congress. The one thing most have in common is that they never even come to a vote.

Two that fell short

In 1995, a watershed year with big new GOP majorities in both chambers, two major constitutional amendments were brought to votes in the Capitol. One would have imposed term limits on members of Congress. It failed to get even close to two-thirds in the House, so the Senate did not bother.

The other proposed amendment would have required the federal government to balance its budget, not in theory down the road but in reality and in real time. It quickly got two-thirds in the House but failed to reach that threshold in the Senate by a single vote (one Republican in the chamber voted no).

So even relatively popular ideas with a big head of steam can hit the wall of the amendment process. How much more challenging would it be to tackle individual gun ownership in a country where so many citizens own guns and care passionately about their right to do so?

Overcoming the NRA and other elements of the gun lobby is only the beginning. The real obstacle would be tremendous support for guns in Southern, Western and rural Midwestern states, which would easily total up to more than enough states to block a gun control amendment.

There have been six amendments that got the needed margins in House and Senate but not the needed margin of support in the state legislatures. The most recent was the Equal Rights Amendment, a remarkably simple statement ("Equality of rights under the law shall not be denied or abridged by the United States or any State on account of sex") that cleared Congress with bipartisan support in 1972 and quickly won nods from most of the states.

But in the mid-1970s, a resistance campaign began and stymied the ERA in many of the remaining states. The resistance then managed to persuade several states to rescind their ratification votes. With momentum now reversed, the ERA died when its window for ratification closed.

Zombie amendments

Other amendments that met similar fates included one granting statehood to the District of Columbia. Like the ERA, the D.C. amendment had a time limit for ratification that expired. But other amendments sent out for ratification in the past did not have a limit, and so might still be ratified at least theoretically.

The granddaddy of these "zombie" amendments was the very first among the Bill of Rights, which began with 12 items rather than 10. The proposed amendment sought to regulate the number of constituents to be represented by a member of the House, and its numbers were soon outdated. So it has never been ratified and presumably will not be.

The one other amendment originally proposed in 1789 but not ratified as part of the original 10 amendments sat around for generations. Then it caught the attention of state legislatures in the late 1980s, at a time of popular reaction against pay raises for Congress. This amendment stated that a member of Congress who voted for a pay raise could not receive that raise until after the next election for the House of Representatives.

That amendment was dusted off and recirculated, and it reached the ratification threshold in 1992, more than 200 years after it had first been proposed. It is now the 27th Amendment to the Constitution, and the last, at least so far.

A new Constitutional Convention?

If all this seems daunting, as it should, there is one alternative for changing the Constitution. That is the calling of a Constitutional Convention. This, too, is found in Article V of the Constitution and allows for a new convention to bypass Congress and address issues of amendment on its own.

To exist with this authority, the new convention would need to be called for by two-thirds of the state legislatures.

So if 34 states saw fit, they could convene their delegations and start writing amendments. Some believe such a convention would have the power to rewrite the entire 1787 Constitution, if it saw fit. Others say it would and should be limited to specific issues or targets, such as term limits or balancing the budget or changing the campaign-finance system or restricting the individual rights of gun owners.

There have been calls for an "Article V convention" from prominent figures on the left as well as the right. But there are those on both sides of the partisan divide who regard the entire proposition as suspect, if not frightening.

One way or another, any changes made by such a powerful convention would need to be ratified by three-fourths of the states just like amendments that might come from Congress.

And three-fourths would presumably be as high a hurdle for convention-spawned amendments as it has been for those from Congress dating to the 1700s.


How are Cryptocurrencies Taxed?—Paying Taxes on Bitcoin and Other Cryptos

How Are Cryptocurrencies Taxed?
The past year has seen a huge surge in the popularity of cryptocurrencies. Everybody and his brother is looking to make a quick buck by buying new cryptos for pennies and selling them for dollars.

As the income comes pouring in, so do tons of tax-related questions, such as: "Do you need to pay taxes on cryptocurrencies?," "Are there any cryptocurrency taxes in the U.S.?," "Who needs to pay taxes on cryptocurrencies?," "How are cryptocurrencies taxed?," and "Who do I pay taxes to?"

We’ll answer all of these.

The post How are Cryptocurrencies Taxed?—Paying Taxes on Bitcoin and Other Cryptos appeared first on Profit Confidential.


Bitcoin Price Forecast: Russia Finalizing Regulations by July, Bitcoin Future Looks Safe

Daily Bitcoin News Update
Another country is swiftly moving toward Bitcoin regulations. This time, it’s Russia. Yesterday, Russian President Vladimir Putin signaled the country’s regulators to get the cryptocurrency regulations in place, at the very latest by July 2018. Why does it matter? Because the regulations would define the future of Bitcoin and other cryptocurrencies in the country.

Putin set the ball rolling on crypto regulations last year, and the ball is now in the regulators’ court.

On one side, the Central Bank of Russia is of the view that cryptocurrencies pose investment risks and so should not be legalized for investing. On the other side, the Ministry.

The post Bitcoin Price Forecast: Russia Finalizing Regulations by July, Bitcoin Future Looks Safe appeared first on Profit Confidential.


Societal Consequences of Human Genetic Engineering …

Section 15 of NOVA's program, Cracking the Code of Life, utilizes popular film and television scenarios to relate to its audience the potential possibilities of future genetic modification of humans. In a scene from GATTACA, the doctor explains the process of choosing simply the best of the two parents' DNA to create their child in a petri dish. According to Francis Collins, former director of the National Human Genome Research Institute (NHGRI) and current director of the National Institutes of Health (NIH), that technology is "right in front of us or almost in front of us."


The advancement of research in genetic modification raises ethical concerns about how this technology will be used in the future. Who will regulate which genes are modified and which are not? If law prohibits genetic modification except in cases of modifying mutations that cause diseases, how will the regulator, presumably the government, define a disease? What will be the standards for disease severity? Will the law provide genetic modification for mutated genes like BRCA but not for blindness or alcoholism? How will they decide which diseases are more important or more severe than others?

Society as a whole can generally agree that using genetic modification to help prevent incurable diseases like cancer, diabetes, and Tay-Sachs disease is highly favorable. Potential prevention of these diseases could spare thousands of people pain, suffering, and anxiety, and, on a more superficial level, save millions of dollars. The line begins to blur when society examines the possibility of using this genetic modification technology not only to prevent disease, but to make children genetically different to enhance their performance.

If society decides that anyone who can afford genetic modification can take advantage of its benefits, will parents begin to alter the characteristics of their future children? Program host Robert Krulwich asks, "what parent wouldn't want to introduce a child that would at least be where all the other kids could be?"

All parents want their children to have the best possible start to life and the best advantages that they can provide. I wonder how far some parents would go to secure the best genetic start for their children. If genetic modification becomes a public option, it will probably only be available to those who can afford it. Because of the inevitability of its high cost, the only people who would be able to afford to create genetically perfect children would be those in the highest percentile of wealth. Therefore, if only a certain group with a specific socio-economic status could even have access to this science, the gap between social classes would increase not only because of a disparity of wealth, but also because of a disparity in gene perfection. The definition of elite would come to encompass human perfection through genetic modification.

The First Genetically Modified Human Embryo

Defying nature to build super-humans is not a real concern until science has proven that this is possible, and currently this technology is not perfected. Science should be allowed to progress and discoveries should not be hindered or stopped. However, it is important for society to decide now how they will deal with the ultimate results of future scientific research.

By: Elizabeth S.



~ by elizabethstinson on January 31, 2010.



History of the First Amendment | JEM First Amendment Project

The First Amendment of the United States was ratified, along with nine other amendments to the Constitution of the United States making up the Bill of Rights, on December 15, 1791. The text of the First Amendment reads:

Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.

These forty-five words encompass the most basic of American rights: freedom of religion, freedom of speech, freedom of the press, the right of assembly, and the right of petition. But what do those words mean? The meaning was not clear in 1791 and still is the subject of continuing interpretation and dispute in the 21st Century.

The First Amendment was not important in American life until well into the 20th Century. Yes, the words were there, but the first word of the First Amendment restricted its sweep to the federal government: "Congress shall make no law . . ." And even in its 18th Century origins, despite democratic stirrings and impulses toward expanding freedom among some leaders, there is reason to believe that the Bill of Rights was offered as an 18th Century political compromise, a hollow gesture in comparison to the sweeping words. The Federalists, those favoring the centralized government proposed by the draft Constitution of 1787, feared that opposition by the Antifederalists would stop adoption of the second Frame of Government (to replace the Articles of Confederation).



Jordan Towers 70news

ALT-LEFT JORDAN TOWERS VOWS TO CRUSH SKULLS WITH NAILED BASEBALL BAT DURING PATRIOT PRAYER RALLY THIS WEEKEND

Posted on August 25, 2017 Updated on August 25, 2017

Unhinged violence! Jordan Towers, a self-proclaimed Alt-Left leader, has vowed to use a baseball bat embedded with nails to crush skulls during a Patriot Prayer peace rally set to take place in San Francisco this weekend.

The Twitter user, who goes by the handle @ibPrinceJordan, tweeted, "The Patriot Prayer rally is a nazi white supremacist event. I'll be their to crush some nazi skulls."

"Can't wait! Going to bring this nailed bat for some nazi pounding," read another tweet.

He repeated the threat in a series of other tweets that have now been deleted.

Jordan describes himself as: "Alt-Left resistance leader. iraq war veteran. cute communist & catholic."




Ethereum fixes serious eclipse flaw that could be exploited …

Developers of Ethereum, the world's No. 2 digital currency by market capitalization, have closed a serious security hole that allowed virtually anyone with an Internet connection to manipulate individual users' access to the publicly accessible ledger.

So-called eclipse attacks work by preventing a cryptocurrency user from connecting to honest peers. Attacker-controlled peers then feed the target a manipulated version of the blockchain the entire currency community relies on to reconcile transactions and enforce contractual obligations. Eclipse attacks can be used to trick targets into paying for a good or service more than once and to co-opt the target's computing power to manipulate algorithms that establish crucial user consensus. Because Ethereum supports "smart contracts" that automatically execute transactions when certain conditions in the blockchain are present, Ethereum eclipse attacks can also be used to interfere with those self-enforcing agreements.

Like most cryptocurrencies, Ethereum uses a peer-to-peer mechanism that compiles input from individual users into an authoritative blockchain. In 2015 and again in 2016, separate research teams devised eclipse attacks against Bitcoin that exploited P2P weaknesses. Both were relatively hard to pull off. The 2015 attack required a botnet or a small ISP that controlled thousands of devices, while the 2016 attack relied on the control of huge chunks of Internet addresses through a technique known as border gateway protocol hijacking. The demands made it likely that both attacks could be carried out only by sophisticated and well-resourced hackers.

Many researchers believed that the resources necessary for a successful eclipse attack against Ethereum would be considerably higher than for the Bitcoin attacks. After all, Ethereum's P2P network includes a robust mechanism for cryptographically authenticating messages, and by default its peers establish 13 outgoing connections, compared with eight for Bitcoin. Now, some of the same researchers who devised the 2015 Bitcoin attack are back to set the record straight. In a paper published Thursday, they wrote:

We demonstrate that the conventional wisdom is false. We present new eclipse attacks showing that, prior to the disclosure of this work in January 2018, Ethereum's peer-to-peer network was significantly less secure than that of Bitcoin. Our eclipse attackers need only control two machines, each with only a single IP address. The attacks are off-path: the attacker controls end hosts only and does not occupy a privileged position between the victim and the rest of the Ethereum network. By contrast, the best known off-path eclipse attacks on Bitcoin require the attacker to control hundreds of host machines, each with a distinct IP address. For most Internet users, it is far from trivial to obtain hundreds (or thousands) of IP addresses. This is why the Bitcoin eclipse attacker envisioned [in the 2015 research] was a full-fledged botnet or Internet Service Provider, while the BGP-hijacker Bitcoin eclipse attacker envisioned [in the 2016 paper] needed access to a BGP-speaking core Internet router. By contrast, our attacks can be run by any kid with a machine and a script.

In January, the researchers reported their findings to Ethereum developers. They responded by making changes to geth, the most popular application supporting the Ethereum protocol. Ethereum users who rely on geth should ensure they've installed version 1.8 or higher. The researchers didn't attempt the same attacks against other Ethereum clients. In an email, Ethereum developer Felix Lange wrote:

"We have done our best to mitigate the attacks within the limits of the protocol. The paper is concerned with 'low-resource' eclipse attacks. As far as we know, the bar has been raised high enough that eclipse attacks are not feasible without more substantial resources, with the patches that have been implemented in geth v1.8.0." Lange went on to say he didn't believe another popular Ethereum app called Parity is vulnerable to the same attacks.

The paper, titled Low-Resource Eclipse Attacks on Ethereum's Peer-to-Peer Network, described two separate attacks. The simplest one relied on two IP addresses, each of which generates large numbers of the cryptographic keys that the Ethereum protocol uses to designate peer-to-peer nodes. The attacker then waits for the target to reboot its computer, either in due course or after the hacker sends various malicious packets that cause a system crash. As the target is rejoining the Ethereum network, the attacker uses the pool of nodes to establish incoming connections before the target can establish any outgoing ones.

The second technique works by creating a large number of attacker-controlled nodes and sending a special packet that effectively poisons the target's database with the fraudulent nodes. When the target reboots, all of the peers it connects to will belong to the attacker. In both cases, once the target is isolated from legitimate nodes, the attacker can present a false version of the blockchain. With no peers challenging that version, the target will assume the manipulated version is the official blockchain.

The researchers presented a third technique that makes eclipse attacks easier to carry out. In a nutshell, it works by setting the target's computer clock 20 or more seconds ahead of the other nodes in the Ethereum network. To prevent so-called replay attacks, in which a hacker resends an old authenticated message in an attempt to get it executed more than once, the Ethereum protocol rejects messages that are more than 20 seconds old. By setting a target's clock ahead, attackers can cause the target to lose touch with all legitimate users. The attackers then use malicious nodes with the same clock time to connect to the target. Some of the same researchers behind the Ethereum eclipse technique described a variety of timing attacks in a separate paper published in 2015.
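The 20-second freshness rule at the heart of this clock-skew attack can be sketched in a few lines. This is illustrative Python only, not the actual devp2p wire-protocol code; the function and field names are assumptions:

```python
import time

MAX_AGE_SECONDS = 20  # messages older than this are rejected as possible replays

def is_fresh(message_timestamp, now=None):
    """Accept a message only if its timestamp is within 20 seconds of our clock.

    This is the anti-replay rule the attack abuses: if the victim's clock runs
    20+ seconds fast, every honest peer's traffic looks stale and is dropped.
    """
    if now is None:
        now = time.time()
    return (now - message_timestamp) <= MAX_AGE_SECONDS

# A victim whose clock is 25 seconds ahead rejects a message sent "now":
victim_now = time.time() + 25
print(is_fresh(time.time(), now=victim_now))  # False: honest traffic dropped
```

The sketch shows why the countermeasure has to live outside the protocol: from the node's point of view, a skewed local clock is indistinguishable from a network full of stale messages.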

Ethereum developers put a countermeasure in place against the first attack that ensures each node will always make outgoing connections to other peers. The fix for the second attack involved limiting to 10 the number of outgoing connections a target can make into the same /24 block of IP addresses. The changes are designed to make it significantly harder to completely isolate a user from other legitimate users. When even a single node presents users with a different version of the blockchain, they will be warned of an error that effectively defeats the attack.
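The per-/24 cap can be illustrated with a short sketch. This is not geth's actual Go implementation, just a toy version of the described rule; the function name and cap constant are assumptions:

```python
import ipaddress
from collections import Counter

MAX_PEERS_PER_SUBNET = 10  # cap on outgoing connections into one /24 block

def allowed_to_dial(candidate_ip, connected_ips):
    """Refuse a new outgoing connection if we already hold
    MAX_PEERS_PER_SUBNET peers in the candidate's /24 block,
    so a two-machine attacker cannot monopolize the peer table."""
    subnet_of = lambda ip: ipaddress.ip_network(f"{ip}/24", strict=False)
    counts = Counter(subnet_of(ip) for ip in connected_ips)
    return counts[subnet_of(candidate_ip)] < MAX_PEERS_PER_SUBNET

peers = [f"203.0.113.{i}" for i in range(10)]   # ten peers, all in one /24
print(allowed_to_dial("203.0.113.99", peers))   # False: that subnet is full
print(allowed_to_dial("198.51.100.7", peers))   # True: a different /24
```

The design intuition is that IP addresses inside one /24 are cheap for an attacker to obtain, while addresses spread across many subnets are not, so diversity in the peer table is a proxy for honest peers.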

Ethereum developers haven't implemented a fix for the time-based attack. Since it generally requires an attacker to manipulate traffic over the target's Internet connection or to exploit non-Ethereum vulnerabilities on the target's computer, it likely poses less of a threat than the other two attacks.

The researchers, from Boston University and the University of Pittsburgh, warned users to protect themselves against the eclipse threat.

"Given the increasing importance of Ethereum to the global blockchain ecosystem, we think it's imperative that countermeasures preventing them be adopted as soon as possible," they wrote. "Ethereum node operators should immediately upgrade to geth v1.8."
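For operators checking that advice against their own deployments, a monitoring script might compare the node's reported version string against the patched release. A minimal sketch, assuming simple "major.minor.patch" strings with an optional suffix (real geth version output can carry extra build metadata):

```python
def is_patched(version, minimum=(1, 8, 0)):
    """Return True if a geth version string (e.g. "1.8.2" or "1.7.3-stable")
    is at or above the release carrying the eclipse countermeasures."""
    core = version.split("-")[0]                  # drop "-stable" and similar
    parts = tuple(int(p) for p in core.split("."))
    parts = parts + (0,) * (3 - len(parts))       # pad so "1.8" reads as 1.8.0
    return parts >= minimum

print(is_patched("1.7.3-stable"))  # False: vulnerable, upgrade advised
print(is_patched("1.8.2"))         # True: countermeasures present
```

Comparing version tuples rather than raw strings avoids the classic pitfall where "1.10" sorts before "1.8" lexicographically.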


Orlan and Stelarc: Manifesting Posthuman Performance …

As our speculative object surrounds ideas of Posthumanism and body modification, I will explore the work of two artists, Orlan and Stelarc, and their artistic notions reflecting the augmentation of the body. An artistic framework provides a site for abstraction and consideration, allowing for the exploration of these notions.

Posthumanism is defined as an attitude on how to deal with the limitations of the human form. It is a vision of how to move beyond those limits by the radical use of technology and other means (Ust, 2001). Essentially, it involves the augmentation of the human form by combining it with a technical force. We can see this alteration in aspects of our current society: plastic surgery, tattoos, the use of prosthetic limbs. What is the impact of this technology in regards to human identity? How do we distinguish what is human and what is augmentation? And in the future, will this even be a contentious issue, or will technology and human identity mesh into one?

Stelarc explores this notion through his work PingBody, where his body was connected to the Internet via electrodes linked to modems (Nayar 2004). In this performance, virtual spaces become the location of action. Online data controls the movement of Stelarc through electrodes; the artist is reduced from a participant to an observer: he is at the disposal of others and can only observe what others are doing to him. Although the aim of PingBody is primarily to demonstrate the notion of losing control of one's self and having others manipulate one's movements, and thus one's identity, it may also be interpreted as a performance that demonstrates the decline of the body's identity grounded within physicality to one that is shifting towards the possibility of a new virtual-body-identity.

Orlan similarly explores these notions within her work, although she challenges ideas of body modification by commenting on a Posthumanist process available today: plastic surgery. In The Reincarnation of Saint Orlan, the artist undergoes plastic surgery in order to gain the appearance of famous women in fine art: "She had her mouth changed to that of Boucher's Europa, her chin like Botticelli's Venus, and her forehead like an exaggerated Leonardo's Mona Lisa" (Cook, 2003). Her work shows the ease with which we can change identity through biotechnology. One of her messages, among others, has been that achieving this level of technical advancement can be advantageous, but the process is destructive and horrifying. Ideal beauty, and by extension the ultimate human, is unattainable and becomes a commercial commodity. Her artistic intent has been both feminist and psychoanalytical and reflects her post-modern view of biotechnology in the 21st century.

These works challenge society's views of post-humanist identity, specifically the notion of a technology-based identity. These artists successfully attempt to show, literally in some instances, the possibilities of a body: how we can alter the body to aid or assist, and alter our current state for aesthetic and conceptual means. These alterations can push the limits of what humanity is capable of, and elevate us into a higher, post-human state of being.





Posthuman Review People with Meeples

Posthuman is a post-apocalyptic survival adventure game where you'll move across a variety of terrain, forage for supplies, survive the elements, and fight for your life on your journey to the last beacon of hope. You'll have to keep yourself fed, collect ammunition, find weapons, and avoid becoming the very things you are fighting. This game was published in 2015 by Mighty Box and Mr. B Games following a successful funding campaign on Kickstarter. Posthuman boasts a number of familiar mechanics while offering an interesting take on partnership/player-vs.-player play, exploration, and custom player creation.

The game supports 1-4 players, or 5-6 with the Defiant expansion. Our journey to the Fortress, from set-up to finish, took about 2 hours. The game states it should take 30 minutes per player, but I think it is safe to expect longer playtimes, at least when you first start playing. I found that this game works best with 1-2 players. The solo game is quite challenging, and neither Gareth nor I have managed to get to the Fortress before time runs out. With 4+ players there is a lot of downtime between turns as players wait for combat encounters to be resolved. The designers do suggest that in a 4+ player game players resolve actions simultaneously and then pair up to resolve combat together; however, the game does not provide enough dice to allow for simultaneous combat resolution with 4+ players. In a two-player game, resolving combat together is quick and easy. Unfortunately, as far as I can tell, there is no easy way to acquire more of the custom dice to use in 4+ player games.

Combat is a huge component of this game. Posthuman uses dice-rolling mechanics with custom D6s to determine successes or failures during rounds of combat. However, combat success or failure is not entirely random: various skills, weapons, equipment, and attributes can modify dice or add additional damage; but beware, some of your enemies may possess these powers as well. Unfortunately, there are aspects of the combat that are not very intuitive, detracting from streamlined play. Interpretation of melee combat results can bog down play and detract from immersion.

Don't fret about player elimination through combat: if you're ever brought below zero, you are simply knocked out and find yourself back at your original safe house (starting terrain tile). You may have lived to fight another day, but if you collect one too many Mutation Scar cards from skirmishing with the Evolved, you'll be transformed into one of the Evolved yourself! In this case your goal changes from reaching the Fortress to preventing the others from getting there. You'll use unique Mutant Actions drawn at random to attack and hinder the other survivors. There are some variables regarding when, where, and against whom you may use an action, but it is exciting nonetheless to take an active role in preventing the success of those around you.

Enemies, Obstacles, and Opportunities

Equipment and Skills

You'll need to be the first player to collect 10 Journey points through increasingly difficult encounters to reach the safety of the Fortress. Your path may seem straightforward on the centre board, but the real journey unfolds in front of you as you draw terrain tiles and forge your own custom path. Each tile is an adventure in and of itself. A terrain tile has several features on it: the terrain type, the number of supplies it will yield, the number of encounters needed to complete it, and the direction of future paths. You'll need to complete all the encounters on a tile to collect its supplies and forage for even more. Keep in mind that not every encounter will result in combat. At times you may stumble across an obstacle that requires a test of your speed or mind; some encounters may even prompt moral choices with resulting rewards or consequences. The exploration allows you to collect supplies and Journey points in your own way. You may cross paths with other players if you occupy the same terrain, and this opens the option of partnership between players (trading, and using abilities), but keep in mind that there is only one winner in this game. The exploration mechanics in Posthuman are unique and streamlined, and really add to the immersion of the theme. The mixed partnership/player-vs.-player mechanic is unique and adds to that survive-at-all-costs feel of the theme.


There are over three hundred and fifty cards contained within this game, spread across fourteen different decks. Set-up is simple: shuffle the decks, arrange them around the centre board, draw your equipment cards, mark your stats on the character board, and then choose one action from a set of four possible choices. The quality of the cards is great, the meeples are unique, and the art is thematic and immersive. The only negative thing I can say about the quality of the components is that the character sheet is flimsy and the stat tokens are not held well in their slots. Setting up alone will take some time, and it is possible the set-up may feel tedious or fiddly to some players.


With the custom character creation, the endless combinations of weapons, armor, skills, and equipment, the hordes of enemies, and the randomly generated individual exploration, I believe the replay value of this game is high. I recommend this game if you are interested in challenging solo play or playing with 2-3 players. This game certainly has its merits: exploration, mutated/Evolved players, theme, immersion, and so on. However, Gareth and I believe that a lot of the interesting aspects, such as playing as an Evolved or trading and partnership, are just not well supported at lower player counts, and higher player counts are not well supported with the present configuration of rules and components. There are many intricacies to this game that I have not been able to include, given the length of this review already; I hope I have been able to highlight its key aspects. In the future I hope to go more in depth on how to play and teach this game. Overall I have really enjoyed this game, but I just wish it supported the higher player counts that it claims. We may have to create some house rules or variants to support more players to keep this one around.

Thanks for reading!


Link:

Posthuman Review People with Meeples

America’s War on Drugs – HISTORY

America's War on Drugs is an immersive trip through the last five decades, uncovering how the CIA, obsessed with keeping America safe in the fight against communism, allied itself with the mafia and foreign drug traffickers. In exchange for support against foreign enemies, the groups were allowed to grow their drug trade in the United States. The series explores the unintended consequences when gangsters, warlords, spies, outlaw entrepreneurs, street gangs, and politicians vie for power and control of the global black market for narcotics, all told through the firsthand accounts of former CIA and DEA officers, major drug traffickers, gang members, noted experts, and insiders.

Night one of America's War on Drugs divulges covert Cold War operations that empowered a generation of drug traffickers and reveals the peculiar details of secret CIA LSD experiments, which helped fuel the counter-culture movement, leading to President Nixon's crackdown and declaration of a war on drugs. The documentary series then delves into the rise of the cocaine cowboys, a secret island cocaine base, the CIA's connection to the crack epidemic, the history of the cartels and their murderous tactics, the era of "Just Say No," the negative effect of NAFTA, and the unlikely career of an almost-famous Midwest meth queen.

The final chapter of the series examines how the attacks of September 11th intertwined the War on Drugs and the War on Terror, transforming Afghanistan into a narco-state teeming with corruption. It also explores how American intervention in Mexico helped give rise to El Chapo and the super cartels, bringing unprecedented levels of violence and sending even more drugs across America's borders. Five decades into the War on Drugs, a movement to legalize marijuana gains momentum, mega-corporations have become richer and more powerful than any nation's drug cartel, and demand for heroin and other illegal drugs continues to rise.

See more here:

America's War on Drugs - HISTORY