

Tesla Model Prices, Photos, News, Reviews and Videos …

It’s almost impossible to separate the audacious reality of Tesla, which elevated the EV from an unsexy commuter appliance to a powerful and luxurious statement of success, from its indomitable founder, Elon Musk. The company, like the founder, thrives on publicity that raises the profile of the whole enterprise. The Model S was the tipping point (the earlier Roadster was a niche product), providing serious real-world range and a huge network of ultra-fast chargers, accomplishments no full-line automaker has fully rivaled to date. The Model X SUV with its novel falcon doors came next, and more recently, the Model 3. As of this writing, the Model 3 is in limited production and is experiencing some quality-control hiccups. If Model 3 production can ramp up as Musk expects, Tesla is poised to secure its future. Whatever Tesla’s fate, its meteoric rise as a purveyor of fast, green American sedans and SUVs has been incredible.


Comet – Wikipedia

A comet is an icy small Solar System body that, when passing close to the Sun, warms and begins to release gases, a process called outgassing. This produces a visible atmosphere or coma, and sometimes also a tail. These phenomena are due to the effects of solar radiation and the solar wind acting upon the nucleus of the comet. Comet nuclei range from a few hundred metres to tens of kilometres across and are composed of loose collections of ice, dust, and small rocky particles. The coma may be up to 15 times the Earth’s diameter, while the tail may stretch one astronomical unit. If sufficiently bright, a comet may be seen from the Earth without the aid of a telescope and may subtend an arc of 30° (60 Moons) across the sky. Comets have been observed and recorded since ancient times by many cultures.

Comets usually have highly eccentric elliptical orbits, and they have a wide range of orbital periods, ranging from several years to potentially several millions of years. Short-period comets originate in the Kuiper belt or its associated scattered disc, which lie beyond the orbit of Neptune. Long-period comets are thought to originate in the Oort cloud, a spherical cloud of icy bodies extending from outside the Kuiper belt to halfway to the nearest star.[1] Long-period comets are set in motion towards the Sun from the Oort cloud by gravitational perturbations caused by passing stars and the galactic tide. Hyperbolic comets may pass once through the inner Solar System before being flung to interstellar space. The appearance of a comet is called an apparition.

Comets are distinguished from asteroids by the presence of an extended, gravitationally unbound atmosphere surrounding their central nucleus. This atmosphere has parts termed the coma (the central part immediately surrounding the nucleus) and the tail (a typically linear section consisting of dust or gas blown out from the coma by the Sun’s light pressure or outstreaming solar wind plasma). However, extinct comets that have passed close to the Sun many times have lost nearly all of their volatile ices and dust and may come to resemble small asteroids.[2] Asteroids are thought to have a different origin from comets, having formed inside the orbit of Jupiter rather than in the outer Solar System.[3][4] The discovery of main-belt comets and active centaur minor planets has blurred the distinction between asteroids and comets.

As of November 2014, there are 5,253 known comets,[5] a number that is steadily increasing as they are discovered. However, this represents only a tiny fraction of the total potential comet population, as the reservoir of comet-like bodies in the outer Solar System (in the Oort cloud) is estimated to be one trillion.[6][7] Roughly one comet per year is visible to the naked eye, though many of those are faint and unspectacular.[8] Particularly bright examples are called “great comets”. Comets have been visited by unmanned probes such as the European Space Agency’s Rosetta, which became the first ever to land a robotic spacecraft on a comet,[9] and NASA’s Deep Impact, which blasted a crater on Comet Tempel 1 to study its interior.

The word comet derives from the Old English cometa, from the Latin comēta or comētēs. That, in turn, is a latinisation of the Greek κομήτης (“wearing long hair”), and the Oxford English Dictionary notes that the term (ἀστὴρ) κομήτης already meant “long-haired star, comet” in Greek. Κομήτης was derived from κομᾶν (“to wear the hair long”), which was itself derived from κόμη (“the hair of the head”) and was used to mean “the tail of a comet”.[10][11]

The astronomical symbol for comets is ☄ (in Unicode U+2604), consisting of a small disc with three hairlike extensions.[12]

The solid, core structure of a comet is known as the nucleus. Cometary nuclei are composed of an amalgamation of rock, dust, water ice, and frozen carbon dioxide, carbon monoxide, methane, and ammonia.[13] As such, they are popularly described as “dirty snowballs” after Fred Whipple’s model.[14] However, some comets may have a higher dust content, leading them to be called “icy dirtballs”.[15] Research conducted in 2014 suggests that comets are like “deep fried ice cream”, in that their surfaces are formed of dense crystalline ice mixed with organic compounds, while the interior ice is colder and less dense.[16]

The surface of the nucleus is generally dry, dusty or rocky, suggesting that the ices are hidden beneath a surface crust several metres thick. In addition to the gases already mentioned, the nuclei contain a variety of organic compounds, which may include methanol, hydrogen cyanide, formaldehyde, ethanol, and ethane and perhaps more complex molecules such as long-chain hydrocarbons and amino acids.[17][18] In 2009, it was confirmed that the amino acid glycine had been found in the comet dust recovered by NASA’s Stardust mission.[19] In August 2011, a report, based on NASA studies of meteorites found on Earth, was published suggesting DNA and RNA components (adenine, guanine, and related organic molecules) may have been formed on asteroids and comets.[20][21]

The outer surfaces of cometary nuclei have a very low albedo, making them among the least reflective objects found in the Solar System. The Giotto space probe found that the nucleus of Halley’s Comet reflects about four percent of the light that falls on it,[22] and Deep Space 1 discovered that Comet Borrelly’s surface reflects less than 3.0%;[22] by comparison, asphalt reflects seven percent. The dark surface material of the nucleus may consist of complex organic compounds. Solar heating drives off lighter volatile compounds, leaving behind larger organic compounds that tend to be very dark, like tar or crude oil. The low reflectivity of cometary surfaces causes them to absorb the heat that drives their outgassing processes.[23]

Comet nuclei with radii of up to 30 kilometres (19 mi) have been observed,[24] but ascertaining their exact size is difficult.[25] The nucleus of 322P/SOHO is probably only 100–200 metres (330–660 ft) in diameter.[26] A lack of smaller comets being detected despite the increased sensitivity of instruments has led some to suggest that there is a real lack of comets smaller than 100 metres (330 ft) across.[27] Known comets have been estimated to have an average density of 0.6 g/cm³ (0.35 oz/cu in).[28] Because of their low mass, comet nuclei do not become spherical under their own gravity and therefore have irregular shapes.[29]

Roughly six percent of the near-Earth asteroids are thought to be extinct nuclei of comets that no longer experience outgassing,[30] including 14827 Hypnos and 3552 Don Quixote.

Results from the Rosetta and Philae spacecraft show that the nucleus of 67P/Churyumov–Gerasimenko has no magnetic field, which suggests that magnetism may not have played a role in the early formation of planetesimals.[31][32] Further, the ALICE spectrograph on Rosetta determined that electrons (within 1 km (0.62 mi) above the comet nucleus) produced from photoionization of water molecules by solar radiation, and not photons from the Sun as thought earlier, are responsible for the degradation of water and carbon dioxide molecules released from the comet nucleus into its coma.[33][34] Instruments on the Philae lander found at least sixteen organic compounds at the comet’s surface, four of which (acetamide, acetone, methyl isocyanate and propionaldehyde) have been detected for the first time on a comet.[35][36][37]

The streams of dust and gas thus released form a huge and extremely thin atmosphere around the comet called the “coma”. The force exerted on the coma by the Sun’s radiation pressure and solar wind cause an enormous “tail” to form pointing away from the Sun.[46]

The coma is generally made of H2O and dust, with water making up to 90% of the volatiles that outflow from the nucleus when the comet is within 3 to 4 astronomical units (450,000,000 to 600,000,000 km; 280,000,000 to 370,000,000 mi) of the Sun.[47] The H2O parent molecule is destroyed primarily through photodissociation and to a much smaller extent photoionization, with the solar wind playing a minor role in the destruction of water compared to photochemistry.[47] Larger dust particles are left along the comet’s orbital path whereas smaller particles are pushed away from the Sun into the comet’s tail by light pressure.[48]

Although the solid nucleus of comets is generally less than 60 kilometres (37 mi) across, the coma may be thousands or millions of kilometres across, sometimes becoming larger than the Sun.[49] For example, about a month after an outburst in October 2007, comet 17P/Holmes briefly had a tenuous dust atmosphere larger than the Sun.[50] The Great Comet of 1811 also had a coma roughly the diameter of the Sun.[51] Even though the coma can become quite large, its size can decrease about the time it crosses the orbit of Mars, around 1.5 astronomical units (220,000,000 km; 140,000,000 mi) from the Sun.[51] At this distance the solar wind becomes strong enough to blow the gas and dust away from the coma, enlarging the tail.[51] Ion tails have been observed to extend one astronomical unit (150 million km) or more.[50]

Both the coma and tail are illuminated by the Sun and may become visible when a comet passes through the inner Solar System: the dust reflects sunlight directly, while the gases glow from ionisation.[52] Most comets are too faint to be visible without the aid of a telescope, but a few each decade become bright enough to be visible to the naked eye.[53] Occasionally a comet may experience a huge and sudden outburst of gas and dust, during which the size of the coma greatly increases for a period of time. This happened in 2007 to Comet Holmes.[54]

In 1996, comets were found to emit X-rays.[55] This greatly surprised astronomers because X-ray emission is usually associated with very high-temperature bodies. The X-rays are generated by the interaction between comets and the solar wind: when highly charged solar wind ions fly through a cometary atmosphere, they collide with cometary atoms and molecules, “stealing” one or more electrons from the atom in a process called “charge exchange”. This exchange or transfer of an electron to the solar wind ion is followed by its de-excitation into the ground state of the ion by the emission of X-rays and far ultraviolet photons.[56]

In the outer Solar System, comets remain frozen and inactive and are extremely difficult or impossible to detect from Earth due to their small size. Statistical detections of inactive comet nuclei in the Kuiper belt have been reported from observations by the Hubble Space Telescope[57][58] but these detections have been questioned.[59][60] As a comet approaches the inner Solar System, solar radiation causes the volatile materials within the comet to vaporize and stream out of the nucleus, carrying dust away with them.

The streams of dust and gas each form their own distinct tail, pointing in slightly different directions. The tail of dust is left behind in the comet’s orbit in such a manner that it often forms a curved tail called the type II or dust tail.[52] At the same time, the ion or type I tail, made of gases, always points directly away from the Sun because this gas is more strongly affected by the solar wind than is dust, following magnetic field lines rather than an orbital trajectory.[61] On occasion, such as when the Earth passes through a comet’s orbital plane, a tail pointing in the opposite direction to the ion and dust tails, called the antitail, may be seen.[62]

The observation of antitails contributed significantly to the discovery of solar wind.[63] The ion tail is formed as a result of the ionisation by solar ultra-violet radiation of particles in the coma. Once the particles have been ionized, they attain a net positive electrical charge, which in turn gives rise to an “induced magnetosphere” around the comet. The comet and its induced magnetic field form an obstacle to outward flowing solar wind particles. Because the relative orbital speed of the comet and the solar wind is supersonic, a bow shock is formed upstream of the comet in the flow direction of the solar wind. In this bow shock, large concentrations of cometary ions (called “pick-up ions”) congregate and act to “load” the solar magnetic field with plasma, such that the field lines “drape” around the comet forming the ion tail.[64]

If the ion tail loading is sufficient, the magnetic field lines are squeezed together to the point where, at some distance along the ion tail, magnetic reconnection occurs. This leads to a “tail disconnection event”.[64] This has been observed on a number of occasions, one notable event being recorded on 20 April 2007, when the ion tail of Encke’s Comet was completely severed while the comet passed through a coronal mass ejection. This event was observed by the STEREO space probe.[65]

In 2013, ESA scientists reported that the ionosphere of the planet Venus streams outwards in a manner similar to the ion tail seen streaming from a comet under similar conditions.[66][67]

Uneven heating can cause newly generated gases to break out of a weak spot on the surface of a comet’s nucleus, like a geyser.[68] These streams of gas and dust can cause the nucleus to spin, and even split apart.[68] In 2010 it was revealed that dry ice (frozen carbon dioxide) can power jets of material flowing out of a comet nucleus.[69] Infrared imaging of Hartley 2 shows such jets exiting and carrying dust grains with them into the coma.[70]

Most comets are small Solar System bodies with elongated elliptical orbits that take them close to the Sun for a part of their orbit and then out into the further reaches of the Solar System for the remainder.[71] Comets are often classified according to the length of their orbital periods: The longer the period the more elongated the ellipse.

Periodic comets or short-period comets are generally defined as those having orbital periods of less than 200 years.[72] They usually orbit more-or-less in the ecliptic plane in the same direction as the planets.[73] Their orbits typically take them out to the region of the outer planets (Jupiter and beyond) at aphelion; for example, the aphelion of Halley’s Comet is a little beyond the orbit of Neptune. Comets whose aphelia are near a major planet’s orbit are called its “family”.[74] Such families are thought to arise from the planet capturing formerly long-period comets into shorter orbits.[75]

At the shorter orbital period extreme, Encke’s Comet has an orbit that does not reach the orbit of Jupiter, and is known as an Encke-type comet. Short-period comets with orbital periods less than 20 years and low inclinations (up to 30 degrees) to the ecliptic are called traditional Jupiter-family comets (JFCs).[76][77] Those like Halley, with orbital periods of between 20 and 200 years and inclinations extending from zero to more than 90 degrees, are called Halley-type comets (HTCs).[78][79] As of 2018, only 82 HTCs have been observed,[80] compared with 659 identified JFCs.[81]
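The period-and-inclination scheme above can be sketched as a small classifier. The thresholds follow the text; the function name and the fallback for short-period, high-inclination comets are illustrative assumptions, not an official taxonomy:

```python
# Rough sketch of the comet classification scheme described in the text.
# Thresholds come from the article; the fallback branch is an assumption.

def classify_comet(period_years: float, inclination_deg: float,
                   aphelion_au: float) -> str:
    """Classify a comet from its orbital period, inclination, and aphelion."""
    JUPITER_ORBIT_AU = 5.2  # semi-major axis of Jupiter's orbit

    if period_years >= 200:
        return "long-period comet"
    if aphelion_au < JUPITER_ORBIT_AU:
        return "Encke-type comet"        # orbit never reaches Jupiter's
    if period_years < 20 and inclination_deg <= 30:
        return "Jupiter-family comet"    # JFC
    return "Halley-type comet"           # HTC: 20-200 yr, any inclination

# Halley's Comet: ~76-year period, retrograde (i ~ 162 deg), aphelion ~ 35 AU
print(classify_comet(76, 162, 35))   # Halley-type comet
# 2P/Encke: ~3.3-year period, aphelion ~ 4.1 AU
print(classify_comet(3.3, 12, 4.1))  # Encke-type comet
```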

Recently discovered main-belt comets form a distinct class, orbiting in more circular orbits within the asteroid belt.[82]

Because their elliptical orbits frequently take them close to the giant planets, comets are subject to further gravitational perturbations.[83] Short-period comets have a tendency for their aphelia to coincide with a giant planet’s semi-major axis, with the JFCs being the largest group.[77] It is clear that comets coming in from the Oort cloud often have their orbits strongly influenced by the gravity of giant planets as a result of a close encounter. Jupiter is the source of the greatest perturbations, being more than twice as massive as all the other planets combined. These perturbations can deflect long-period comets into shorter orbital periods.[84][85]

Based on their orbital characteristics, short-period comets are thought to originate from the centaurs and the Kuiper belt/scattered disc,[86] a disk of objects in the trans-Neptunian region, whereas the source of long-period comets is thought to be the far more distant spherical Oort cloud (after the Dutch astronomer Jan Hendrik Oort who hypothesised its existence).[87] Vast swarms of comet-like bodies are thought to orbit the Sun in these distant regions in roughly circular orbits. Occasionally the gravitational influence of the outer planets (in the case of Kuiper belt objects) or nearby stars (in the case of Oort cloud objects) may throw one of these bodies into an elliptical orbit that takes it inwards toward the Sun to form a visible comet. Unlike the return of periodic comets, whose orbits have been established by previous observations, the appearance of new comets by this mechanism is unpredictable.[88]

Long-period comets have highly eccentric orbits and periods ranging from 200 years to thousands of years.[89] An eccentricity greater than 1 when near perihelion does not necessarily mean that a comet will leave the Solar System.[90] For example, Comet McNaught had a heliocentric osculating eccentricity of 1.000019 near its perihelion passage epoch in January 2007 but is bound to the Sun with roughly a 92,600-year orbit because the eccentricity drops below 1 as it moves farther from the Sun. The future orbit of a long-period comet is properly obtained when the osculating orbit is computed at an epoch after leaving the planetary region and is calculated with respect to the center of mass of the Solar System. By definition long-period comets remain gravitationally bound to the Sun; those comets that are ejected from the Solar System due to close passes by major planets are no longer properly considered as having “periods”. The orbits of long-period comets take them far beyond the outer planets at aphelia, and the plane of their orbits need not lie near the ecliptic. Long-period comets such as Comet West and C/1999 F1 can have aphelion distances of nearly 70,000 AU with orbital periods estimated around 6 million years.
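The 6-million-year figure can be sanity-checked with Kepler’s third law: for a body orbiting the Sun, the period in years equals the semi-major axis in AU raised to the power 3/2. The perihelion distance used below is an illustrative assumption (the text gives only the ~70,000 AU aphelion):

```python
# Back-of-the-envelope check of the long-period figures via Kepler's
# third law: P[years] = a[AU] ** 1.5 for solar orbits (comet mass negligible).

def orbital_period_years(aphelion_au: float, perihelion_au: float) -> float:
    """Orbital period from the semi-major axis, a = (Q + q) / 2."""
    semi_major_axis = (aphelion_au + perihelion_au) / 2
    return semi_major_axis ** 1.5

# A long-period comet with aphelion near 70,000 AU and a perihelion close
# to the Sun (taken here as ~1 AU, an illustrative assumption):
period = orbital_period_years(70_000, 1)
print(f"{period / 1e6:.1f} million years")  # ~6.5 million years
```

The result lands in the same ballpark as the “around 6 million years” quoted for Comet West and C/1999 F1.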

Single-apparition or non-periodic comets are similar to long-period comets because they also have parabolic or slightly hyperbolic trajectories[89] when near perihelion in the inner Solar System. However, gravitational perturbations from giant planets cause their orbits to change. Single-apparition comets have a hyperbolic or parabolic osculating orbit which allows them to permanently exit the Solar System after a single pass of the Sun.[91] The Sun’s Hill sphere has an unstable maximum boundary of 230,000 AU (1.1 parsecs; 3.6 light-years).[92] Only a few hundred comets have been seen to reach a hyperbolic orbit (e > 1) when near perihelion,[93] which, using a heliocentric unperturbed two-body best fit, suggests they may escape the Solar System.

As of 2018, 1I/ʻOumuamua is the only object with an eccentricity significantly greater than one that has been detected, indicating an origin outside the Solar System. While ʻOumuamua showed no optical signs of cometary activity during its passage through the inner Solar System in October 2017, changes to its trajectory, which suggests outgassing, indicate that it is indeed a comet.[94] Comet C/1980 E1 had an orbital period of roughly 7.1 million years before the 1982 perihelion passage, but a 1980 encounter with Jupiter accelerated the comet, giving it the largest eccentricity (1.057) of any known hyperbolic comet.[95] Comets not expected to return to the inner Solar System include C/1980 E1, C/2000 U5, C/2001 Q4 (NEAT), C/2009 R1, C/1956 R1, and C/2007 F1 (LONEOS).
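The role of eccentricity in these distinctions can be summarised in a short sketch. The function name and labels are illustrative, and, as noted above, an osculating e slightly above 1 near perihelion does not by itself prove a comet will escape:

```python
# Sketch: the conic section (and boundedness) implied by an osculating
# eccentricity e. Labels are illustrative; an instantaneous e > 1 near
# perihelion can drop below 1 farther from the Sun (e.g. Comet McNaught).

def conic_type(eccentricity: float) -> str:
    if eccentricity < 1.0:
        return "elliptical (bound)"
    if eccentricity == 1.0:
        return "parabolic (marginal)"
    return "hyperbolic (unbound, if unperturbed)"

print(conic_type(0.967))  # Halley's Comet -> elliptical (bound)
print(conic_type(1.057))  # C/1980 E1 -> hyperbolic (unbound, if unperturbed)
```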

Some authorities use the term “periodic comet” to refer to any comet with a periodic orbit (that is, all short-period comets plus all long-period comets),[96] whereas others use it to mean exclusively short-period comets.[89] Similarly, although the literal meaning of “non-periodic comet” is the same as “single-apparition comet”, some use it to mean all comets that are not “periodic” in the second sense (that is, to also include all comets with a period greater than 200 years).

Early observations have revealed a few genuinely hyperbolic (i.e. non-periodic) trajectories, but no more than could be accounted for by perturbations from Jupiter. If comets pervaded interstellar space, they would be moving with velocities of the same order as the relative velocities of stars near the Sun (a few tens of km per second). If such objects entered the Solar System, they would have positive specific orbital energy and would be observed to have genuinely hyperbolic trajectories. A rough calculation shows that there might be four hyperbolic comets per century within Jupiter’s orbit, give or take one and perhaps two orders of magnitude.[97]

The Oort cloud is thought to occupy a vast space starting from between 2,000 and 5,000 AU (0.03 and 0.08 ly)[99] to as far as 50,000 AU (0.79 ly)[78] from the Sun. Some estimates place the outer edge at between 100,000 and 200,000 AU (1.58 and 3.16 ly).[99] The region can be subdivided into a spherical outer Oort cloud of 20,000–50,000 AU (0.32–0.79 ly) and a doughnut-shaped inner cloud of 2,000–20,000 AU (0.03–0.32 ly), known as the Hills cloud after J. G. Hills, who proposed its existence in 1981.[100][101] The outer cloud is only weakly bound to the Sun and supplies the long-period (and possibly Halley-type) comets that fall to inside the orbit of Neptune.[78] Models predict that the inner cloud should have tens or hundreds of times as many cometary nuclei as the outer halo;[101][102][103] it is seen as a possible source of new comets that resupply the relatively tenuous outer cloud as the latter’s numbers are gradually depleted, which explains the continued existence of the Oort cloud after billions of years.[104]

Exocomets beyond the Solar System have also been detected and may be common in the Milky Way.[105] The first exocomet system detected was around Beta Pictoris, a very young A-type main-sequence star, in 1987.[106][107] A total of 10 such exocomet systems have been identified as of 2013, using the absorption spectrum caused by the large clouds of gas emitted by comets when passing close to their star.[105][106]

As a result of outgassing, comets leave in their wake a trail of solid debris too large to be swept away by radiation pressure and the solar wind.[108] If the Earth’s orbit carries it through that debris, meteor showers are likely. The Perseid meteor shower, for example, occurs every year between 9 and 13 August, when Earth passes through the orbit of Comet Swift–Tuttle.[109] Halley’s Comet is the source of the Orionid shower in October.[109]

Many comets and asteroids collided with Earth in its early stages. Many scientists think that comets bombarding the young Earth about 4 billion years ago brought the vast quantities of water that now fill the Earth’s oceans, or at least a significant portion of them. Others have cast doubt on this idea.[110] The detection of organic molecules, including polycyclic aromatic hydrocarbons,[16] in significant quantities in comets has led to speculation that comets or meteorites may have brought the precursors of life, or even life itself, to Earth.[111] In 2013 it was suggested that impacts between rocky and icy surfaces, such as comets, had the potential to create the amino acids that make up proteins through shock synthesis.[112] In 2015, scientists found significant amounts of molecular oxygen in the outgassings of comet 67P, suggesting that the molecule may occur more often than had been thought, and thus may be a less reliable indicator of life than has been supposed.[113]

It is suspected that comet impacts have, over long timescales, also delivered significant quantities of water to the Earth’s Moon, some of which may have survived as lunar ice.[114] Comet and meteoroid impacts are also thought to be responsible for the existence of tektites and australites.[115]

Fear of comets as acts of God and signs of impending doom was highest in Europe from AD 1200 to 1650. The year after the Great Comet of 1618, for example, Gotthard Arthusius published a pamphlet stating that it was a sign that the Day of Judgment was near.[116] He listed ten pages of comet-related disasters, including “earthquakes, floods, changes in river courses, hail storms, hot and dry weather, poor harvests, epidemics, war and treason and high prices”. By 1700 most scholars concluded that such events occurred whether a comet was seen or not. Using Edmond Halley’s records of comet sightings, however, William Whiston in 1711 wrote that the Great Comet of 1680 had a periodicity of 574 years and was responsible for the worldwide flood in the Book of Genesis, by pouring water on the Earth. His announcement revived for another century fear of comets, now as direct threats to the world instead of signs of disasters.[117] Spectroscopic analysis in 1910 found the toxic gas cyanogen in the tail of Halley’s Comet,[118] causing panicked buying of gas masks and quack “anti-comet pills” and “anti-comet umbrellas” by the public.[119]

If a comet is traveling fast enough, it may leave the Solar System. Such comets follow the open path of a hyperbola, and as such they are called hyperbolic comets. To date, comets are only known to be ejected by interacting with another object in the Solar System, such as Jupiter.[120] An example of this is thought to be Comet C/1980 E1, which was shifted from a predicted orbit of 7.1 million years around the Sun, to a hyperbolic trajectory, after a 1980 close pass by the planet Jupiter.[121]

Jupiter-family comets and long-period comets appear to follow very different fading laws. The JFCs are active over a lifetime of about 10,000 years or ~1,000 orbits whereas long-period comets fade much faster. Only 10% of the long-period comets survive more than 50 passages to small perihelion and only 1% of them survive more than 2,000 passages.[30] Eventually most of the volatile material contained in a comet nucleus evaporates, and the comet becomes a small, dark, inert lump of rock or rubble that can resemble an asteroid.[122] Some asteroids in elliptical orbits are now identified as extinct comets.[123][124][125][126] Roughly six percent of the near-Earth asteroids are thought to be extinct comet nuclei.[30]

The nucleus of some comets may be fragile, a conclusion supported by the observation of comets splitting apart.[127] A significant cometary disruption was that of Comet Shoemaker–Levy 9, which was discovered in 1993. A close encounter in July 1992 had broken it into pieces, and over a period of six days in July 1994, these pieces fell into Jupiter’s atmosphere, the first time astronomers had observed a collision between two objects in the Solar System.[128][129] Other splitting comets include 3D/Biela in 1846 and 73P/Schwassmann–Wachmann from 1995 to 2006.[130] Greek historian Ephorus reported that a comet split apart as far back as the winter of 372–373 BC.[131] Comets are suspected of splitting due to thermal stress, internal gas pressure, or impact.[132]

Comets 42P/Neujmin and 53P/Van Biesbroeck appear to be fragments of a parent comet. Numerical integrations have shown that both comets had a rather close approach to Jupiter in January 1850, and that, before 1850, the two orbits were nearly identical.[133]

Some comets have been observed to break up during their perihelion passage, including great comets West and Ikeya–Seki. Biela’s Comet was one significant example: it broke into two pieces during its passage through perihelion in 1846. These two comets were seen separately in 1852, but never again afterward. Instead, spectacular meteor showers were seen in 1872 and 1885 when the comet should have been visible. A minor meteor shower, the Andromedids, occurs annually in November, and it is caused when the Earth crosses the orbit of Biela’s Comet.[134]

Some comets meet a more spectacular end, either falling into the Sun[135] or smashing into a planet or other body. Collisions between comets and planets or moons were common in the early Solar System: some of the many craters on the Moon, for example, may have been caused by comets. A recent collision of a comet with a planet occurred in July 1994 when Comet Shoemaker–Levy 9 broke up into pieces and collided with Jupiter.[136]

Ghost tail of C/2015 D1 (SOHO) after its passage close to the Sun

The names given to comets have followed several different conventions over the past two centuries. Prior to the early 20th century, most comets were simply referred to by the year when they appeared, sometimes with additional adjectives for particularly bright comets; thus, the “Great Comet of 1680”, the “Great Comet of 1882”, and the “Great January Comet of 1910”.

After Edmond Halley demonstrated that the comets of 1531, 1607, and 1682 were the same body and successfully predicted its return in 1759 by calculating its orbit, that comet became known as Halley’s Comet.[138] Similarly, the second and third known periodic comets, Encke’s Comet[139] and Biela’s Comet,[140] were named after the astronomers who calculated their orbits rather than their original discoverers. Later, periodic comets were usually named after their discoverers, but comets that had appeared only once continued to be referred to by the year of their appearance.[141]

In the early 20th century, the convention of naming comets after their discoverers became common, and this remains so today. A comet can be named after its discoverers, or an instrument or program that helped to find it.[141]

From ancient sources, such as Chinese oracle bones, it is known that comets have been noticed by humans for millennia.[142] Until the sixteenth century, comets were usually considered bad omens of deaths of kings or noble men, or coming catastrophes, or even interpreted as attacks by heavenly beings against terrestrial inhabitants.[143][144]

Aristotle believed that comets were atmospheric phenomena, because they could appear outside of the Zodiac and vary in brightness over the course of a few days.[145] Pliny the Elder believed that comets were connected with political unrest and death.[146]

In India, by the 6th century astronomers believed that comets were celestial bodies that re-appeared periodically. This was the view expressed in the 6th century by the astronomers Varāhamihira and Bhadrabāhu, and the 10th-century astronomer Bhaṭṭotpala listed the names and estimated periods of certain comets, but it is not known how these figures were calculated or how accurate they were.[147]

In the 16th century Tycho Brahe demonstrated that comets must exist outside the Earth’s atmosphere by measuring the parallax of the Great Comet of 1577 from observations collected by geographically separated observers. Within the precision of the measurements, this implied the comet must be at least four times farther away than the distance from the Earth to the Moon.[148][149]
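Tycho’s argument is a simple small-angle bound: if observers a known baseline apart measure no parallax larger than some angle, the object must be at least baseline/angle away. The numbers below are illustrative assumptions, not Tycho’s actual data:

```python
import math

# Illustrative sketch of the parallax lower bound: an object showing at
# most `max_parallax_deg` of apparent shift between two observers a
# `baseline_km` apart is at least baseline / angle(radians) away.

def min_distance_km(baseline_km: float, max_parallax_deg: float) -> float:
    return baseline_km / math.radians(max_parallax_deg)

LUNAR_DISTANCE_KM = 384_400
# Observers ~1,000 km apart, measured parallax below ~0.03 degrees
# (both numbers are illustrative assumptions):
d = min_distance_km(1_000, 0.03)
print(d > 4 * LUNAR_DISTANCE_KM)  # True: beyond four lunar distances
```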

Isaac Newton, in his Principia Mathematica of 1687, proved that an object moving under the influence of gravity must trace out an orbit shaped like one of the conic sections, and he demonstrated how to fit a comet’s path through the sky to a parabolic orbit, using the comet of 1680 as an example.[150]
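Newton’s result means the conic type of an orbit is fixed by the sign of the body’s specific orbital energy, ε = v²/2 − μ/r: negative for an ellipse, zero for a parabola, positive for a hyperbola. A minimal sketch (the Earth-like test values are illustrative):

```python
MU_SUN = 1.32712440018e20  # Sun's gravitational parameter GM, in m^3/s^2

def classify_orbit(r_m: float, v_ms: float, mu: float = MU_SUN) -> str:
    """Classify a heliocentric orbit from distance r (m) and speed v (m/s)."""
    specific_energy = v_ms**2 / 2 - mu / r_m
    if specific_energy < 0:
        return "elliptical"   # bound, periodic (most comets)
    if specific_energy > 0:
        return "hyperbolic"   # unbound: one pass through the Solar System
    return "parabolic"        # the boundary case

# Roughly Earth's heliocentric distance and speed: a bound orbit.
print(classify_orbit(1.496e11, 2.978e4))  # elliptical
```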

In 1705, Edmond Halley (16561742) applied Newton’s method to twenty-three cometary apparitions that had occurred between 1337 and 1698. He noted that three of these, the comets of 1531, 1607, and 1682, had very similar orbital elements, and he was further able to account for the slight differences in their orbits in terms of gravitational perturbation caused by Jupiter and Saturn. Confident that these three apparitions had been three appearances of the same comet, he predicted that it would appear again in 17589.[151] Halley’s predicted return date was later refined by a team of three French mathematicians: Alexis Clairaut, Joseph Lalande, and Nicole-Reine Lepaute, who predicted the date of the comet’s 1759 perihelion to within one month’s accuracy.[152][153] When the comet returned as predicted, it became known as Halley’s Comet (with the latter-day designation of 1P/Halley). It will next appear in 2061.[154]
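The roughly 76-year spacing of the apparitions Halley matched follows directly from Kepler’s third law, P² = a³ (P in years, a in astronomical units). The semi-major axis below is the modern figure for 1P/Halley, used here only for illustration; Clairaut’s month-level refinement then came from perturbation corrections beyond this two-body estimate.

```python
def orbital_period_years(semi_major_axis_au: float) -> float:
    """Kepler's third law for a body orbiting the Sun: P = a**1.5."""
    return semi_major_axis_au ** 1.5

# 1P/Halley's semi-major axis is about 17.8 AU.
print(round(orbital_period_years(17.8), 1))  # ~75 years
```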

From his huge vapouring train perhaps to shake
Reviving moisture on the numerous orbs,
Thro’ which his long ellipsis winds; perhaps
To lend new fuel to declining suns,
To light up worlds, and feed th’ ethereal fire.

James Thomson, The Seasons (1730; 1748)[155]

Isaac Newton described comets as compact and durable solid bodies moving in oblique orbits, and their tails as thin streams of vapor emitted by their nuclei, ignited or heated by the Sun. Newton suspected that comets were the origin of the life-supporting component of air.[156]

As early as the 18th century, some scientists had made correct hypotheses as to comets’ physical composition. In 1755, Immanuel Kant hypothesized that comets are composed of some volatile substance, whose vaporization gives rise to their brilliant displays near perihelion.[157] In 1836, the German mathematician Friedrich Wilhelm Bessel, after observing streams of vapor during the appearance of Halley’s Comet in 1835, proposed that the jet forces of evaporating material could be great enough to significantly alter a comet’s orbit, and he argued that the non-gravitational movements of Encke’s Comet resulted from this phenomenon.[158]

In 1950, Fred Lawrence Whipple proposed that rather than being rocky objects containing some ice, comets were icy objects containing some dust and rock.[159] This “dirty snowball” model soon became accepted and appeared to be supported by the observations of an armada of spacecraft (including the European Space Agency’s Giotto probe and the Soviet Union’s Vega 1 and Vega 2) that flew through the coma of Halley’s Comet in 1986, photographed the nucleus, and observed jets of evaporating material.[160]

On 22 January 2014, ESA scientists reported the detection, for the first definitive time, of water vapor on the dwarf planet Ceres, the largest object in the asteroid belt.[161] The detection was made by using the far-infrared abilities of the Herschel Space Observatory.[162] The finding is unexpected because comets, not asteroids, are typically considered to “sprout jets and plumes”. According to one of the scientists, “The lines are becoming more and more blurred between comets and asteroids.”[162] On 11 August 2014, astronomers released studies, using the Atacama Large Millimeter/Submillimeter Array (ALMA) for the first time, that detailed the distribution of HCN, HNC, H2CO, and dust inside the comae of comets C/2012 F6 (Lemmon) and C/2012 S1 (ISON).[163][164]

Approximately once a decade, a comet becomes bright enough to be noticed by a casual observer, leading such comets to be designated as great comets.[131] Predicting whether a comet will become a great comet is notoriously difficult, as many factors may cause a comet’s brightness to depart drastically from predictions.[173] Broadly speaking, if a comet has a large and active nucleus, will pass close to the Sun, and is not obscured by the Sun as seen from the Earth when at its brightest, it has a chance of becoming a great comet. However, Comet Kohoutek in 1973 fulfilled all the criteria and was expected to become spectacular but failed to do so.[174] Comet West, which appeared three years later, had much lower expectations but became an extremely impressive comet.[175]

The late 20th century saw a lengthy gap without the appearance of any great comets, followed by the arrival of two in quick succession: Comet Hyakutake in 1996, followed by Hale–Bopp, which reached maximum brightness in 1997, having been discovered two years earlier. The first great comet of the 21st century was C/2006 P1 (McNaught), which became visible to naked eye observers in January 2007. It was the brightest in over 40 years.[176]

A sungrazing comet is a comet that passes extremely close to the Sun at perihelion, generally within a few million kilometres.[177] Although small sungrazers can be completely evaporated during such a close approach to the Sun, larger sungrazers can survive many perihelion passages. However, the strong tidal forces they experience often lead to their fragmentation.[178]

About 90% of the sungrazers observed with SOHO are members of the Kreutz group, which all originate from one giant comet that broke up into many smaller comets during its first passage through the inner Solar System.[179] The remainder contains some sporadic sungrazers, but four other related groups of comets have been identified among them: the Kracht, Kracht 2a, Marsden, and Meyer groups. The Marsden and Kracht groups both appear to be related to Comet 96P/Machholz, which is also the parent of two meteor streams, the Quadrantids and the Arietids.[180]

Of the thousands of known comets, some exhibit unusual properties. Comet Encke (2P/Encke) orbits from outside the asteroid belt to just inside the orbit of Mercury, whereas Comet 29P/Schwassmann–Wachmann currently travels in a nearly circular orbit entirely between the orbits of Jupiter and Saturn.[181] 2060 Chiron, whose unstable orbit is between Saturn and Uranus, was originally classified as an asteroid until a faint coma was noticed.[182] Similarly, Comet Shoemaker–Levy 2 was originally designated asteroid 1990 UL3.[183] (See also Fate of comets, above)

Centaurs typically exhibit characteristics of both asteroids and comets.[184] Some centaurs, such as 60558 Echeclus and 166P/NEAT, are classified as comets. 166P/NEAT was discovered while it exhibited a coma, and so is classified as a comet despite its orbit; 60558 Echeclus was discovered without a coma but later became active,[185] and was then classified as both a comet and an asteroid (174P/Echeclus). One plan for Cassini involved sending it to a centaur, but NASA decided to destroy it instead.[186]

A comet may be discovered photographically using a wide-field telescope or visually with binoculars. However, even without access to optical equipment, it is still possible for the amateur astronomer to discover a sungrazing comet online by downloading images accumulated by some satellite observatories such as SOHO.[187] SOHO’s 2000th comet was discovered by Polish amateur astronomer Michał Kusiak on 26 December 2010,[188] and both discoverers of Hale–Bopp used amateur equipment (although Hale was not an amateur).

A number of periodic comets discovered in earlier decades or previous centuries are now lost comets. Their orbits were never known well enough to predict future appearances, or the comets have disintegrated. However, occasionally a “new” comet is discovered, and calculation of its orbit shows it to be an old “lost” comet. An example is Comet 11P/Tempel–Swift–LINEAR, discovered in 1869 but unobservable after 1908 because of perturbations by Jupiter. It was not found again until accidentally rediscovered by LINEAR in 2001.[189] There are at least 18 comets that fit this category.[190]

The depiction of comets in popular culture is firmly rooted in the long Western tradition of seeing comets as harbingers of doom and as omens of world-altering change.[191] Halley’s Comet alone has caused a slew of sensationalist publications of all sorts at each of its reappearances. It was especially noted that the birth and death of some notable persons coincided with separate appearances of the comet, such as with writers Mark Twain (who correctly speculated that he’d “go out with the comet” in 1910)[191] and Eudora Welty, to whose life Mary Chapin Carpenter dedicated the song “Halley Came to Jackson”.[191]

In times past, bright comets often inspired panic and hysteria in the general population, being thought of as bad omens. More recently, during the passage of Halley’s Comet in 1910, the Earth passed through the comet’s tail, and erroneous newspaper reports inspired a fear that cyanogen in the tail might poison millions,[192] whereas the appearance of Comet HaleBopp in 1997 triggered the mass suicide of the Heaven’s Gate cult.[193]

In science fiction, the impact of comets has been depicted as a threat overcome by technology and heroism (as in the 1998 films Deep Impact and Armageddon), or as a trigger of global apocalypse (Lucifer’s Hammer, 1979) or zombies (Night of the Comet, 1984).[191] In Jules Verne’s Off on a Comet a group of people are stranded on a comet orbiting the Sun, while a large manned space expedition visits Halley’s Comet in Sir Arthur C. Clarke’s novel 2061: Odyssey Three.[194]

NASA is developing a comet harpoon for returning samples to Earth

Link:

Comet – Wikipedia

Overview | Comets – Solar System Exploration: NASA Science

Comets are cosmic snowballs of frozen gases, rock and dust that orbit the Sun. When frozen, they are the size of a small town. When a comet’s orbit brings it close to the Sun, it heats up and spews dust and gases into a giant glowing head larger than most planets. The dust and gases form a tail that stretches away from the Sun for millions of miles. There are likely billions of comets orbiting our Sun in the Kuiper Belt and even more distant Oort Cloud. The current number of known comets is:

More

See the article here:

Overview | Comets – Solar System Exploration: NASA Science

Comet | Definition of Comet by Merriam-Webster

Recent Examples on the Web. Near-Earth objects are comets (cosmic snowballs of frozen gases, rock and dust the size of a small town) and asteroids (basically, space rocks smaller than planets) that pass within 28 miles of Earth’s orbit. Ashley May, USA TODAY, “NASA: Here’s the big plan to protect the planet from ‘near-Earth objects’,” 21 June 2018 Organic molecules are being found in …

More:

Comet | Definition of Comet by Merriam-Webster

Island Maps: Caribbean Islands, Greek Islands, Pacific …

Arctic Ocean
Atlantic Ocean (North): north of the equator
Atlantic Ocean (South): south of the equator
Assorted (A–Z): found in a variety of bays, channels, lakes, rivers, seas, straits, etc.
Caribbean Sea: found in a variety of bays, channels, lakes, rivers, seas, straits, etc.
Greek Isles
Indian Ocean
Mediterranean Sea
Pacific Ocean (North): north of the equator
Pacific Ocean (South): south of the equator
Oceania and the South Pacific Islands

This page was last updated on August 26, 2015.

View original post here:

Island Maps: Caribbean Islands, Greek Islands, Pacific …

The Hawaiian Islands | Hawaii.com


Pick an island: Oahu, Maui, Kauai, Big Island, Lanai, Molokai

The Hawaiian Islands are one of the most geographically isolated places on earth, over 2,400 miles (nearly 4,000 km) from the closest landmass, California, USA. Born of a volcanic hotspot rising from the sea floor of the Pacific Ocean, the Hawaiian archipelago formed nearly 75 million years ago, with the oldest islands of the chain long since eroded and submerged beneath the sea’s surface to the northwest, and the youngest of the islands still forming beneath the sea’s surface to the southeast.

This unique history of formation and isolation has given rise to breathtaking and extraordinary wonders. Perfect white sand beaches, abundant reefs, towering waterfalls, lush valleys, snow-capped mountains and fiery hot volcanic cauldrons captivate the hearts of those who visit as well as those who call this beautiful place home. A special culture has evolved from the unique natural environment of these islands. Native Hawaiians are the host culture here, and the values of Aloha have laid the foundation for the Hawaii we have today. Since the 1700s, peoples of various cultures have been arriving on these shores, bringing their foods, their music and their ways of life.

Today Hawaii is a bold showcase for farm-to-table fusion cuisine, culturally conscious fashion and innovation. Visitors will find themselves spoiled for options between romantic boutique getaways and family-friendly five-star resorts. High-end retailers have put Hawaii on the map of world-class shopping destinations, and Hawaii’s passionate chefs have created a foodie frenzy here. As far forward as Hawaii has evolved, those looking for a walk back in time can still find Old Hawaii tucked away off the beaten paths. And the ancient stories still exist in the lovely hula hands of dancers who have given themselves as keepers of the culture.

Current Specials


Read more:

The Hawaiian Islands | Hawaii.com

New Atheism – Wikipedia

New Atheism is a term coined in 2006 by the agnostic journalist Gary Wolf to describe the positions promoted by some atheists of the twenty-first century.[1][2] This modern-day atheism is advanced by a group of thinkers and writers who advocate the view that superstition, religion and irrationalism should not simply be tolerated but should be countered, criticized, and exposed by rational argument wherever their influence arises in government, education, and politics.[3][4] According to Richard Ostling, Bertrand Russell, in his 1927 book Why I Am Not a Christian, put forward positions similar to those espoused by the New Atheists, suggesting that there are no substantive differences between traditional atheism and New Atheism.[5]

New Atheism lends itself to and often overlaps with secular humanism and antitheism, particularly in its criticism of what many New Atheists regard as the indoctrination of children and the perpetuation of ideologies founded on belief in the supernatural. Some critics of the movement characterise it pejoratively as “militant atheism” or “fundamentalist atheism”.[a][6][7][8][9]

The Harvard botanist Asa Gray, a believing Christian and one of the first supporters of Charles Darwin’s theory of evolution, commented in 1868 that the more worldly Darwinists in England had “the English-materialistic-positivistic line of thought”.[10] Darwin’s supporter Thomas Huxley was openly skeptical, as the biographer Janet Browne describes:

Huxley was rampaging on miracles and the existence of the soul. A few months later, he was to coin the word “agnostic” to describe his own position as neither a believer nor a disbeliever, but one who considered himself free to inquire rationally into the basis of knowledge, a philosopher of pure reason […] The term fitted him well […] and it caught the attention of the other free thinking, rational doubters in Huxley’s ambit, and came to signify a particularly active form of scientific rationalism during the final decades of the 19th century. […] In his hands, agnosticism became as doctrinaire as anything else: a religion of skepticism. Huxley used it as a creed that would place him on a higher moral plane than even bishops and archbishops. All the evidence would nevertheless suggest that Huxley was sincere in his rejection of the charge of outright atheism against himself. He refused to be “a liar”. To inquire rigorously into the spiritual domain, he asserted, was a more elevated undertaking than slavishly to believe or disbelieve. “A deep sense of religion is compatible with the entire absence of theology,” he had told [Anglican clergyman] Charles Kingsley back in 1860. “Pope Huxley”, the [magazine] Spectator dubbed him. The label stuck. (Janet Browne[11])

The 2004 publication of The End of Faith: Religion, Terror, and the Future of Reason by Sam Harris, a bestseller in the United States, was joined over the next couple of years by a series of popular best-sellers by atheist authors.[12] Harris was motivated by the events of 11 September 2001, which he laid directly at the feet of Islam, while also directly criticizing Christianity and Judaism.[13] Two years later Harris followed up with Letter to a Christian Nation, which was also a severe criticism of Christianity.[14] Also in 2006, following his television documentary The Root of All Evil?, Richard Dawkins published The God Delusion, which was on the New York Times best-seller list for 51 weeks.[15]

In a 2010 column entitled “Why I Don’t Believe in the New Atheism”, Tom Flynn contends that what has been called “New Atheism” is neither a movement nor new, and that what was new was the publication of atheist material by big-name publishers, read by millions, and appearing on bestseller lists.[16]

On 6 November 2015, the New Republic published an article entitled “Is the New Atheism dead?”[17] The atheist and evolutionary biologist David Sloan Wilson wrote, “The world appears to be tiring of the New Atheism movement.”[18] In 2017, PZ Myers, who had formerly considered himself a New Atheist, publicly renounced the New Atheism movement.[19]

On 30 September 2007, four prominent atheists (Richard Dawkins, Sam Harris, Christopher Hitchens and Daniel Dennett) met at Hitchens’ residence in Washington, D.C., for a private two-hour unmoderated discussion. The event was videotaped and titled “The Four Horsemen”.[21] During “The God Debate” in 2010 featuring Christopher Hitchens versus Dinesh D’Souza, the men were collectively referred to as the “Four Horsemen of the Non-Apocalypse”,[22] an allusion to the biblical Four Horsemen of the Apocalypse from the Book of Revelation.[23] The four have been described disparagingly as “evangelical atheists”.[24]

Sam Harris is the author of the bestselling non-fiction books The End of Faith, Letter to a Christian Nation, The Moral Landscape, and Waking Up: A Guide to Spirituality Without Religion, as well as two shorter works, initially published as e-books, Free Will[25] and Lying.[26] Harris is a co-founder of the Reason Project.

Richard Dawkins is the author of The God Delusion,[27] which was preceded by a Channel 4 television documentary titled The Root of All Evil?. He is the founder of the Richard Dawkins Foundation for Reason and Science. He wrote: “I don’t object to the horseman label, by the way. I’m less keen on ‘new atheist’: it isn’t clear to me how we differ from old atheists.”[28]

Christopher Hitchens was the author of God Is Not Great[29] and was named among the “Top 100 Public Intellectuals” by Foreign Policy and Prospect magazines. In addition, Hitchens served on the advisory board of the Secular Coalition for America. In 2010 Hitchens published his memoir Hitch-22 (a nickname provided by close personal friend Salman Rushdie, whom Hitchens always supported during and following The Satanic Verses controversy).[30] Shortly after its publication, Hitchens was diagnosed with esophageal cancer, which led to his death in December 2011.[31] Before his death, Hitchens published a collection of essays and articles in his book Arguably;[32] a short edition, Mortality,[33] was published posthumously in 2012. These publications and numerous public appearances provided Hitchens with a platform to remain an astute atheist during his illness, even speaking specifically on the culture of deathbed conversions and condemning attempts to convert the terminally ill, which he dismissed as “bad taste”.[34][35]

Daniel Dennett, author of Darwin’s Dangerous Idea,[36] Breaking the Spell[37] and many others, has also been a vocal supporter of The Clergy Project,[38] an organization that provides support for clergy in the US who no longer believe in God and cannot fully participate in their communities any longer.[39]

After the death of Hitchens, Ayaan Hirsi Ali (who attended the 2012 Global Atheist Convention, which Hitchens was scheduled to attend) was referred to as the “plus one horse-woman”, since she was originally invited to the 2007 meeting of the “Horsemen” atheists but had to cancel at the last minute.[40] Hirsi Ali was born in Mogadishu, Somalia, fleeing in 1992 to the Netherlands in order to escape an arranged marriage.[41] She became involved in Dutch politics, rejected faith, and became vocal in opposing Islamic ideology, especially concerning women, as exemplified by her books Infidel and The Caged Virgin.[42] Hirsi Ali was later involved in the production of the film Submission, for which her friend Theo Van Gogh was murdered with a death threat to Hirsi Ali pinned to his chest.[43] This event resulted in Hirsi Ali’s hiding and later immigration to the United States, where she now resides and remains a prolific critic of Islam.[44] She regularly speaks out against the treatment of women in Islamic doctrine and society[45] and is a proponent of free speech and the freedom to offend.[46][47]

Many contemporary atheists write from a scientific perspective. Unlike previous writers, many of whom thought that science was indifferent or even incapable of dealing with the “God” concept, Dawkins argues to the contrary, claiming the “God Hypothesis” is a valid scientific hypothesis,[70] having effects in the physical universe, and like any other hypothesis can be tested and falsified. The late Victor Stenger proposed that the personal Abrahamic God is a scientific hypothesis that can be tested by standard methods of science. Both Dawkins and Stenger conclude that the hypothesis fails any such tests,[71] and argue that naturalism is sufficient to explain everything we observe in the universe, from the most distant galaxies to the origin of life, the existence of different species, and the inner workings of the brain and consciousness. Nowhere, they argue, is it necessary to introduce God or the supernatural to understand reality. New Atheists reject Jesus’ divinity.[72]

Non-believers assert that many religious or supernatural claims (such as the virgin birth of Jesus and the afterlife) are scientific claims in nature. They argue, as do deists and Progressive Christians, for instance, that the issue of Jesus’ supposed parentage is not a question of “values” or “morals” but a question of scientific inquiry.[73] Rational thinkers believe science is capable of investigating at least some, if not all, supernatural claims.[74] Institutions such as the Mayo Clinic and Duke University are attempting to find empirical support for the healing power of intercessory prayer.[75] According to Stenger, these experiments have thus far found no evidence that intercessory prayer works.[76]

Stenger also argues in his book, God: The Failed Hypothesis, that a God having omniscient, omnibenevolent and omnipotent attributes, which he termed a 3O God, cannot logically exist.[77] A similar series of logical disproofs of the existence of a God with various attributes can be found in Michael Martin and Ricki Monnier’s The Impossibility of God,[78] or Theodore M. Drange’s article, “Incompatible-Properties Arguments”.[79]

Richard Dawkins has been particularly critical of the conciliatory view that science and religion are not in conflict, noting, for example, that the Abrahamic religions constantly deal in scientific matters. In a 1998 article published in Free Inquiry magazine[73] and later in his 2006 book The God Delusion, Dawkins expresses disagreement with the view advocated by Stephen Jay Gould that science and religion are two non-overlapping magisteria (NOMA), each existing in a “domain where one form of teaching holds the appropriate tools for meaningful discourse and resolution”. In Gould’s proposal, science and religion should be confined to distinct non-overlapping domains: science would be limited to the empirical realm, including theories developed to describe observations, while religion would deal with questions of ultimate meaning and moral value. Dawkins contends that NOMA does not describe empirical facts about the intersection of science and religion: “It is completely unrealistic to claim, as Gould and many others do, that religion keeps itself away from science’s turf, restricting itself to morals and values. A universe with a supernatural presence would be a fundamentally and qualitatively different kind of universe from one without. The difference is, inescapably, a scientific difference. Religions make existence claims, and this means scientific claims.”

Sam Harris popularized the view that science, and the currently unknown objective facts it may uncover, can instruct human morality in a globally comparable way. Harris’ book The Moral Landscape[80] and accompanying TED Talk How Science can Determine Moral Values[81] propose that human well-being, and conversely suffering, may be thought of as a landscape with peaks and valleys representing numerous ways to achieve extremes in human experience, and that there are objective states of well-being.

New Atheism is politically engaged in a variety of ways. These include campaigns to draw attention to the biased privileged position religion has and to reduce the influence of religion in the public sphere, attempts to promote cultural change (centering, in the United States, on the mainstream acceptance of atheism), and efforts to promote the idea of an “atheist identity”. Internal strategic divisions over these issues have also been notable, as are questions about the diversity of the movement in terms of its gender and racial balance.[82]

The theologians Jeffrey Robbins and Christopher Rodkey take issue with what they regard as “the evangelical nature of the New Atheism, which assumes that it has a Good News to share, at all cost, for the ultimate future of humanity by the conversion of as many people as possible.” They believe they have found similarities between New Atheism and evangelical Christianity and conclude that the all-consuming nature of both “encourages endless conflict without progress” between both extremities.[83]

Sociologist William Stahl said, “What is striking about the current debate is the frequency with which the New Atheists are portrayed as mirror images of religious fundamentalists.”[84]

The atheist philosopher of science Michael Ruse has made the claim that Richard Dawkins would fail “introductory” courses on the study of “philosophy or religion” (such as courses on the philosophy of religion), courses which are offered at many educational institutions such as colleges and universities around the world.[85][86] Ruse also claims that the New Atheism movement, which he perceives to be a “bloody disaster”, makes him ashamed, as a professional philosopher of science, to be among those holding to an atheist position, particularly as New Atheism does science a “grave disservice” and does a “disservice to scholarship” at a more general level.[85][86]

Paul Kurtz, editor-in-chief of Free Inquiry and founder of Prometheus Books, was critical of many of the new atheists.[8] He said, “I consider them atheist fundamentalists… They’re anti-religious, and they’re mean-spirited, unfortunately. Now, they’re very good atheists and very dedicated people who do not believe in God. But you have this aggressive and militant phase of atheism, and that does more damage than good”.[9]

Jonathan Sacks, author of The Great Partnership: Science, Religion, and the Search for Meaning, feels the new atheists miss the target by believing the “cure for bad religion is no religion, as opposed to good religion”. He wrote:

Atheism deserves better than the new atheists whose methodology consists of criticizing religion without understanding it, quoting texts without contexts, taking exceptions as the rule, confusing folk belief with reflective theology, abusing, mocking, ridiculing, caricaturing, and demonizing religious faith and holding it responsible for the great crimes against humanity. Religion has done harm; I acknowledge that. But the cure for bad religion is good religion, not no religion, just as the cure for bad science is good science, not the abandonment of science.[87]

The philosopher Massimo Pigliucci contends that the new atheist movement overlaps with scientism, which he finds to be philosophically unsound. He writes: “What I do object to is the tendency, found among many New Atheists, to expand the definition of science to pretty much encompassing anything that deals with ‘facts’, loosely conceived…, it seems clear to me that most of the New Atheists (except for the professional philosophers among them) pontificate about philosophy very likely without having read a single professional paper in that field…. I would actually go so far as to charge many of the leaders of the New Atheism movement (and, by implication, a good number of their followers) with anti-intellectualism, one mark of which is a lack of respect for the proper significance, value, and methods of another field of intellectual endeavor.”[88]

Atheist professor Jacques Berlinerblau has criticised the New Atheists’ mocking of religion as being inimical to their goals and claims that they haven’t achieved anything politically.[89]

See the original post:

New Atheism – Wikipedia

Litecoin – Wikipedia

Litecoin

Official Litecoin logo

Litecoin (LTC or Ł[4]) is a peer-to-peer cryptocurrency and open source software project released under the MIT/X11 license.[5] Creation and transfer of coins is based on an open source cryptographic protocol and is not managed by any central authority.[5][6] Litecoin was an early bitcoin spinoff or altcoin, starting in October 2011.[7] In its technical details, Litecoin is nearly identical to Bitcoin.

Litecoin was released via an open-source client on GitHub on October 7, 2011 by Charlie Lee, a Google employee and former Engineering Director at Coinbase.[7][8][9] The Litecoin network went live on October 13, 2011.[10] It was a fork of the Bitcoin Core client, differing primarily by having a decreased block generation time (2.5 minutes), increased maximum number of coins, different hashing algorithm (scrypt, instead of SHA-256), and a slightly modified GUI.[11]

During the month of November 2013, the aggregate value of Litecoin experienced massive growth which included a 100% leap within 24 hours.[12]

Litecoin reached a $1 billion market capitalization in November 2013.[13]

In May 2017, Litecoin became the first of the top 5 (by market cap) cryptocurrencies to adopt Segregated Witness.[14] Later in May of the same year, the first Lightning Network transaction was completed through Litecoin, transferring 0.00000001 LTC from Zürich to San Francisco in under one second.[15]

Litecoin differs from Bitcoin in several respects.

Due to Litecoin’s use of the scrypt algorithm, FPGA and ASIC devices made for mining Litecoin are more complicated to create and more expensive to produce than they are for Bitcoin, which uses SHA-256.[18]
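Both proof-of-work hash functions are available in Python’s standard hashlib module, which makes the contrast easy to sketch. The header bytes below are a made-up placeholder, not a real block header; with Litecoin’s actual scrypt parameters (N=1024, r=1, p=1), each evaluation needs roughly 128·N·r bytes (~128 KiB) of working memory, whereas SHA-256 needs almost none, and that memory cost is what made early scrypt ASICs harder to build.

```python
import hashlib

header = b"placeholder-block-header"  # hypothetical data, not a real header

# Bitcoin-style proof of work: double SHA-256 (cheap in memory, ASIC-friendly).
btc_style = hashlib.sha256(hashlib.sha256(header).digest()).digest()

# Litecoin-style proof of work: scrypt with N=1024, r=1, p=1, keyed with the
# header as both password and salt; needs ~128 KiB of memory per hash.
ltc_style = hashlib.scrypt(header, salt=header, n=1024, r=1, p=1, dklen=32)

print(len(btc_style), len(ltc_style))  # both are 32-byte digests
```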

View post:

Litecoin – Wikipedia

LTC USD – Litecoin Price Chart TradingView

Litecoin slides through a sloping channel, retracing the 88.6% Fibonacci level with no trace of recovery. Investment attributes and recommendations: The minor trend of LTCUSD (at BITFINEX) slides through a sloping channel, while a bearish engulfing pattern popped up yesterday to nudge the price below its DMAs; substantiating this bearish stance, both momentum oscillators …

Go here to see the original:

LTC USD – Litecoin Price Chart TradingView

Survivalism in fiction – Wikipedia

Portrayals of survivalism, and survivalist themes and elements such as survival retreats have been fictionalised in print, film, and electronic media. This genre was especially influenced by the advent of nuclear weapons, and the potential for societal collapse in light of a Cold War nuclear conflagration.

Bear Grylls’ Beck Granger Series: This series of books follows a 13-year-old survivalist who, against all odds, survives anything. However, a dangerous organisation called Lumos is trying to destroy beautiful parts of the world. Taking over from his deceased parents, Beck will do anything to take down Lumos.

More:

Survivalism in fiction – Wikipedia

Medicine | Define Medicine at Dictionary.com

n.

c.1200, “medical treatment, cure, remedy,” also used figuratively, of spiritual remedies, from Old French medecine (Modern French médecine) “medicine, art of healing, cure, treatment, potion,” from Latin medicina “the healing art, medicine; a remedy,” also used figuratively, perhaps originally ars medicina “the medical art,” from fem. of medicinus (adj.) “of a doctor,” from medicus “a physician” (see medical); though the OED finds evidence for this wanting. Meaning “a medicinal potion or plaster” in English is mid-14c.

To take (one’s) medicine “submit to something disagreeable” is first recorded 1865. North American Indian medicine-man “shaman” is first attested 1801, from American Indian adoption of the word medicine in sense of “magical influence.” The U.S.-Canadian boundary they called Medicine Line (first attested 1910), because it conferred a kind of magic protection: punishment for crimes committed on one side of it could be avoided by crossing over to the other. Medicine show “traveling show meant to attract a crowd so patent medicine can be sold to them” is American English, 1938. Medicine ball “stuffed leather ball used for exercise” is from 1889.

View post:

Medicine | Define Medicine at Dictionary.com

My Medicine – WebMD

WebMD My Medicine Help

Q: What is an interaction?

A: Mixing certain medicines together may cause a bad reaction. This is called an interaction. For example, one medicine may cause side effects that create problems with other medicines. Or one medicine may make another medicine stronger or weaker.

Q: How do you classify the seriousness of an interaction?

A: The following classification is used:

Contraindicated: Never use this combination of drugs because of high risk for dangerous interaction

Serious: Potential for serious interaction; regular monitoring by your doctor required or alternate medication may be needed

Significant: Potential for significant interaction (monitoring by your doctor is likely required)

Mild: Interaction is unlikely, minor, or nonsignificant
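The four severity levels above form a simple ordered scale. As an illustrative sketch (the names and ordering here are my own encoding, not a WebMD API), they can be ranked so the most severe finding for a medication list can be picked out programmatically:

```python
# Ordinal encoding of WebMD's four interaction-severity levels
# (illustrative only; not an actual WebMD data structure).
SEVERITY = {"contraindicated": 3, "serious": 2, "significant": 1, "mild": 0}

def worst_interaction(levels):
    """Return the most severe level among a list of interaction findings."""
    return max(levels, key=SEVERITY.__getitem__)

print(worst_interaction(["mild", "serious", "significant"]))  # serious
```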

Q: What should I do if my medications show interactions?

A: Call your doctor or pharmacist if you are concerned about an interaction. Do not stop taking any prescribed medication without your doctor’s approval. Sometimes the risk of not taking the medication outweighs the risk of the interaction.

Q: Why can’t I enter my medication?

A: There may be medications, especially OTC medicines or supplements, that have not been adequately studied for interactions. If we do not have interaction information for a certain medication, it can’t be saved in My Medicine.

Q: Do you cover all FDA warnings?

A: WebMD will alert users to the most important FDA warnings and alerts affecting consumers such as recalls, label changes and investigations. Not all FDA actions are included. Go to the FDA for a comprehensive list of warnings.

Q: Can I be alerted by email if there is an FDA warning or alert?

A: Yes. If you are signed in to WebMD.com and using My Medicine you can sign up to receive email alerts when you add a medicine. To unsubscribe click here.

Q: Can I add medicines for family members?

A: Yes. Click the arrow next to your picture to add drug profiles for family or loved ones.

Q: Can I access My Medicine from my mobile phone?

A: Yes. Sign in to the WebMD Mobile App. Your saved medicine can be found under “Saved.”

Q: Why are there already medicines saved when this is my first time using this tool?

A: If you have previously saved a medication on WebMD, for example, in the WebMD Mobile App, these may display in My Medicine.

Original post:

My Medicine – WebMD

Medicine | Definition of Medicine by Merriam-Webster

1a : a substance or preparation used in treating disease: “cough medicine”

b : something that affects well-being: “he’s bad medicine” (Zane Grey)

2a : the science and art dealing with the maintenance of health and the prevention, alleviation, or cure of disease: “She’s interested in a career in medicine.”

b : the branch of medicine concerned with the nonsurgical treatment of disease

3 : a substance (such as a drug or potion) used to treat something other than disease

4 : an object held in traditional American Indian belief to give control over natural or magical forces; also : magical power or a magical rite

Excerpt from:

Medicine | Definition of Medicine by Merriam-Webster

Gene therapy – Wikipedia

In medicine, gene therapy (also called human gene transfer) is the therapeutic delivery of nucleic acid into a patient’s cells as a drug to treat disease.[1][2] The first attempt at modifying human DNA was performed in 1980 by Martin Cline, but the first successful nuclear gene transfer in humans, approved by the National Institutes of Health, was performed in May 1989.[3] The first therapeutic use of gene transfer, as well as the first direct insertion of human DNA into the nuclear genome, was performed by French Anderson in a trial starting in September 1990.

Between 1989 and February 2016, over 2,300 clinical trials had been conducted, more than half of them in phase I.[4]

Not all medical procedures that introduce alterations to a patient’s genetic makeup can be considered gene therapy. Bone marrow transplantation and organ transplants in general have been found to introduce foreign DNA into patients.[5] Gene therapy is defined by the precision of the procedure and the intention of direct therapeutic effect.

Gene therapy was conceptualized in 1972, by authors who urged caution before commencing human gene therapy studies.

The first attempt, an unsuccessful one, at gene therapy (as well as the first case of medical transfer of foreign genes into humans not counting organ transplantation) was performed by Martin Cline on 10 July 1980.[6][7] Cline claimed that one of the genes in his patients was active six months later, though he never published this data or had it verified[8] and even if he is correct, it’s unlikely it produced any significant beneficial effects treating beta-thalassemia.

After extensive research on animals throughout the 1980s and a 1989 bacterial gene tagging trial on humans, the first gene therapy widely accepted as a success was demonstrated in a trial that started on 14 September 1990, when Ashi DeSilva was treated for ADA-SCID.[9]

The first somatic treatment that produced a permanent genetic change was performed in 1993.[citation needed]

Gene therapy is a way to fix a genetic problem at its source. The delivered nucleic acid polymers are either translated into proteins, interfere with target gene expression, or possibly correct genetic mutations.

The most common form uses DNA that encodes a functional, therapeutic gene to replace a mutated gene. The polymer molecule is packaged within a “vector”, which carries the molecule inside cells.

Early clinical failures led to dismissals of gene therapy. Clinical successes since 2006 regained researchers’ attention, although as of 2014[update], it was still largely an experimental technique.[10] These successes include treatment of the retinal diseases Leber’s congenital amaurosis[11][12][13][14] and choroideremia,[15] X-linked SCID,[16] ADA-SCID,[17][18] adrenoleukodystrophy,[19] chronic lymphocytic leukemia (CLL),[20] acute lymphocytic leukemia (ALL),[21] multiple myeloma,[22] haemophilia,[18] and Parkinson’s disease.[23] Between 2013 and April 2014, US companies invested over $600 million in the field.[24]

The first commercial gene therapy, Gendicine, was approved in China in 2003 for the treatment of certain cancers.[25] In 2011 Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease, including critical limb ischemia.[26] In 2012 Glybera, a treatment for a rare inherited disorder, became the first treatment to be approved for clinical use in either Europe or the United States after its endorsement by the European Commission.[10][27]

Following early advances in genetic engineering of bacteria, cells, and small animals, scientists started considering how to apply it to medicine. Two main approaches were considered: replacing or disrupting defective genes.[28] Scientists focused on diseases caused by single-gene defects, such as cystic fibrosis, haemophilia, muscular dystrophy, thalassemia, and sickle cell anemia. Glybera treats one such disease, caused by a defect in lipoprotein lipase.[27]

DNA must be administered, reach the damaged cells, enter the cell and either express or disrupt a protein.[29] Multiple delivery techniques have been explored. The initial approach incorporated DNA into an engineered virus to deliver the DNA into a chromosome.[30][31] Naked DNA approaches have also been explored, especially in the context of vaccine development.[32]

Generally, efforts focused on administering a gene that causes a needed protein to be expressed. More recently, increased understanding of nuclease function has led to more direct DNA editing, using techniques such as zinc finger nucleases and CRISPR. The vector incorporates genes into chromosomes. The expressed nucleases then knock out and replace genes in the chromosome. As of 2014[update] these approaches involve removing cells from patients, editing a chromosome and returning the transformed cells to patients.[33]

Gene editing is a potential approach to alter the human genome to treat genetic diseases,[34] viral diseases,[35] and cancer.[36] As of 2016[update] these approaches were still years from being medicine.[37][38]

Gene therapy may be classified into two types:

In somatic cell gene therapy (SCGT), the therapeutic genes are transferred into any cell other than a gamete, germ cell, gametocyte, or undifferentiated stem cell. Any such modifications affect the individual patient only, and are not inherited by offspring. Somatic gene therapy represents mainstream basic and clinical research, in which therapeutic DNA (either integrated in the genome or as an external episome or plasmid) is used to treat disease.

Over 600 clinical trials utilizing SCGT are underway[when?] in the US. Most focus on severe genetic disorders, including immunodeficiencies, haemophilia, thalassaemia, and cystic fibrosis. Such single gene disorders are good candidates for somatic cell therapy. The complete correction of a genetic disorder or the replacement of multiple genes is not yet possible. Only a few of the trials are in the advanced stages.[39]

In germline gene therapy (GGT), germ cells (sperm or egg cells) are modified by the introduction of functional genes into their genomes. Modifying a germ cell causes all the organism’s cells to contain the modified gene. The change is therefore heritable and passed on to later generations. Australia, Canada, Germany, Israel, Switzerland, and the Netherlands[40] prohibit GGT for application in human beings, for technical and ethical reasons, including insufficient knowledge about possible risks to future generations[40] and higher risks versus SCGT.[41] The US has no federal controls specifically addressing human genetic modification (beyond FDA regulations for therapies in general).[40][42][43][44]

The delivery of DNA into cells can be accomplished by multiple methods. The two major classes are recombinant viruses (sometimes called biological nanoparticles or viral vectors) and naked DNA or DNA complexes (non-viral methods).

In order to replicate, viruses introduce their genetic material into the host cell, tricking the host’s cellular machinery into using it as blueprints for viral proteins. Retroviruses go a stage further by having their genetic material copied into the genome of the host cell. Scientists exploit this by substituting a virus’s genetic material with therapeutic DNA. (The term ‘DNA’ may be an oversimplification, as some viruses contain RNA, and gene therapy could take this form as well.) A number of viruses have been used for human gene therapy, including retroviruses, adenoviruses, herpes simplex, vaccinia, and adeno-associated virus.[4] Like the genetic material (DNA or RNA) in viruses, therapeutic DNA can be designed to simply serve as a temporary blueprint that is degraded naturally or (at least theoretically) to enter the host’s genome, becoming a permanent part of the host’s DNA in infected cells.

Non-viral methods present certain advantages over viral methods, such as large scale production and low host immunogenicity. However, non-viral methods initially produced lower levels of transfection and gene expression, and thus lower therapeutic efficacy. Later technology remedied this deficiency.[citation needed]

Methods for non-viral gene therapy include the injection of naked DNA, electroporation, the gene gun, sonoporation, magnetofection, the use of oligonucleotides, lipoplexes, dendrimers, and inorganic nanoparticles.

A number of problems in gene therapy remain unsolved, most prominently safety.

Three patients’ deaths have been reported in gene therapy trials, putting the field under close scrutiny. The first was that of Jesse Gelsinger, who died in 1999 because of immune rejection response.[51] One X-SCID patient died of leukemia in 2003.[9] In 2007, a rheumatoid arthritis patient died from an infection; the subsequent investigation concluded that the death was not related to gene therapy.[52]

In 1972 Friedmann and Roblin authored a paper in Science titled “Gene therapy for human genetic disease?”[53] Rogers (1970) was cited for proposing that exogenous good DNA be used to replace the defective DNA in those who suffer from genetic defects.[54]

In 1984 a retrovirus vector system was designed that could efficiently insert foreign genes into mammalian chromosomes.[55]

The first approved gene therapy clinical research in the US took place on 14 September 1990, at the National Institutes of Health (NIH), under the direction of William French Anderson.[56] Four-year-old Ashanti DeSilva received treatment for a genetic defect that left her with ADA-SCID, a severe immune system deficiency. The defective gene in the patient’s blood cells was replaced by the functional variant. Ashanti’s immune system was partially restored by the therapy. Production of the missing enzyme was temporarily stimulated, but new cells with functional genes were not generated. She led a normal life only with regular injections every two months. The effects were successful, but temporary.[57]

Cancer gene therapy was introduced in 1992/93 (Trojan et al. 1993).[58] The treatment of glioblastoma multiforme, a malignant brain tumor whose outcome is invariably fatal, used a vector expressing antisense IGF-I RNA (clinical trial approved by NIH protocol no. 1602 on November 24, 1993,[59] and by the FDA in 1994). This therapy also represents the beginning of cancer immunogene therapy, a treatment which has proved effective due to the anti-tumor mechanism of IGF-I antisense, which is related to strong immune and apoptotic phenomena.

In 1992 Claudio Bordignon, working at the Vita-Salute San Raffaele University, performed the first gene therapy procedure using hematopoietic stem cells as vectors to deliver genes intended to correct hereditary diseases.[60] In 2002 this work led to the publication of the first successful gene therapy treatment for adenosine deaminase deficiency (ADA-SCID). The success of a multi-center trial for treating children with SCID (severe combined immune deficiency or “bubble boy” disease) between 2000 and 2002 was questioned when two of the ten children treated at the trial’s Paris center developed a leukemia-like condition. Clinical trials were halted temporarily in 2002, but resumed after regulatory review of the protocol in the US, the United Kingdom, France, Italy, and Germany.[61]

In 1993 Andrew Gobea was born with SCID following prenatal genetic screening. Blood was removed from his mother’s placenta and umbilical cord immediately after birth, to acquire stem cells. The allele that codes for adenosine deaminase (ADA) was obtained and inserted into a retrovirus. Retroviruses and stem cells were mixed, after which the viruses inserted the gene into the stem cell chromosomes. Stem cells containing the working ADA gene were injected into Andrew’s blood. Injections of the ADA enzyme were also given weekly. For four years T cells (white blood cells), produced by stem cells, made ADA enzymes using the ADA gene. After four years more treatment was needed.[62]

Jesse Gelsinger’s death in 1999 impeded gene therapy research in the US.[63][64] As a result, the FDA suspended several clinical trials pending the reevaluation of ethical and procedural practices.[65]

The modified cancer gene therapy strategy of antisense IGF-I RNA (NIH no. 1602)[59] using an antisense/triple-helix anti-IGF-I approach was registered in 2002 in the Wiley gene therapy clinical trial registry (nos. 635 and 636). The approach has shown promising results in the treatment of six different malignant tumors: glioblastoma and cancers of the liver, colon, prostate, uterus, and ovary (Collaborative NATO Science Programme on Gene Therapy, USA, France, Poland, no. LST 980517, conducted by J. Trojan) (Trojan et al., 2012). This anti-gene antisense/triple-helix therapy has proven efficient due to a mechanism that simultaneously stops IGF-I expression at the translation and transcription levels, strengthening anti-tumor immune and apoptotic phenomena.

Sickle-cell disease can be treated in mice.[66] In mice that have essentially the same defect that causes human cases, researchers used a viral vector to induce production of fetal hemoglobin (HbF), which normally ceases to be produced shortly after birth. In humans, the use of hydroxyurea to stimulate the production of HbF temporarily alleviates sickle cell symptoms. The researchers demonstrated this treatment to be a more permanent means to increase therapeutic HbF production.[67]

A new gene therapy approach repaired errors in messenger RNA derived from defective genes. This technique has the potential to treat thalassaemia, cystic fibrosis and some cancers.[68]

Researchers created liposomes 25 nanometers across that can carry therapeutic DNA through pores in the nuclear membrane.[69]

In 2003 a research team inserted genes into the brain for the first time. They used liposomes coated in a polymer called polyethylene glycol, which, unlike viral vectors, are small enough to cross the blood-brain barrier.[70]

Short pieces of double-stranded RNA (short, interfering RNAs or siRNAs) are used by cells to degrade RNA of a particular sequence. If a siRNA is designed to match the RNA copied from a faulty gene, then the abnormal protein product of that gene will not be produced.[71]

Gendicine is a cancer gene therapy that delivers the tumor suppressor gene p53 using an engineered adenovirus. In 2003, it was approved in China for the treatment of head and neck squamous cell carcinoma.[25]

In March researchers announced the successful use of gene therapy to treat two adult patients for X-linked chronic granulomatous disease, a disease which affects myeloid cells and damages the immune system. The study is the first to show that gene therapy can treat the myeloid system.[72]

In May a team reported a way to prevent the immune system from rejecting a newly delivered gene.[73] Similar to organ transplantation, gene therapy has been plagued by this problem. The immune system normally recognizes the new gene as foreign and rejects the cells carrying it. The research utilized a newly uncovered network of genes regulated by molecules known as microRNAs. This natural function selectively obscured their therapeutic gene in immune system cells and protected it from discovery. Mice infected with the gene containing an immune-cell microRNA target sequence did not reject the gene.

In August scientists successfully treated metastatic melanoma in two patients using killer T cells genetically retargeted to attack the cancer cells.[74]

In November researchers reported on the use of VRX496, a gene-based immunotherapy for the treatment of HIV that uses a lentiviral vector to deliver an antisense gene against the HIV envelope. In a phase I clinical trial, five subjects with chronic HIV infection who had failed to respond to at least two antiretroviral regimens were treated. A single intravenous infusion of autologous CD4 T cells genetically modified with VRX496 was well tolerated. All patients had stable or decreased viral load; four of the five patients had stable or increased CD4 T cell counts. All five patients had stable or increased immune response to HIV antigens and other pathogens. This was the first evaluation of a lentiviral vector administered in a US human clinical trial.[75][76]

In May researchers announced the first gene therapy trial for inherited retinal disease. The first operation was carried out on a 23-year-old British male, Robert Johnson, in early 2007.[77]

Leber’s congenital amaurosis is an inherited blinding disease caused by mutations in the RPE65 gene. The results of a small clinical trial in children were published in April.[11] Delivery of recombinant adeno-associated virus (AAV) carrying RPE65 yielded positive results. In May two more groups reported positive results in independent clinical trials using gene therapy to treat the condition. In all three clinical trials, patients recovered functional vision without apparent side-effects.[11][12][13][14]

In September researchers were able to give trichromatic vision to squirrel monkeys.[78] In November 2009, researchers halted a fatal genetic disorder called adrenoleukodystrophy in two children using a lentivirus vector to deliver a functioning version of ABCD1, the gene that is mutated in the disorder.[79]

An April paper reported that gene therapy addressed achromatopsia (color blindness) in dogs by targeting cone photoreceptors. Cone function and day vision were restored for at least 33 months in two young specimens. The therapy was less efficient for older dogs.[80]

In September it was announced that an 18-year-old male patient in France with beta-thalassemia major had been successfully treated.[81] Beta-thalassemia major is an inherited blood disease in which beta haemoglobin is missing and patients are dependent on regular lifelong blood transfusions.[82] The technique used a lentiviral vector to transduce the human β-globin gene into purified blood and marrow cells obtained from the patient in June 2007.[83] The patient’s haemoglobin levels were stable at 9 to 10 g/dL. About a third of the hemoglobin contained the form introduced by the viral vector and blood transfusions were not needed.[83][84] Further clinical trials were planned.[85] Bone marrow transplants are the only cure for thalassemia, but 75% of patients do not find a matching donor.[84]

Cancer immunogene therapy using a modified antigene, antisense/triple-helix approach was introduced in South America in 2010/11 at La Sabana University, Bogota (Ethical Committee approval 14 December 2010, no. P-004-10). Considering the ethical aspects of gene diagnostics and gene therapy targeting IGF-I, IGF-I-expressing tumors, i.e. lung and epidermis cancers, were treated (Trojan et al. 2016).[86][87]

In 2007 and 2008, a man (Timothy Ray Brown) was cured of HIV by repeated hematopoietic stem cell transplantation (see also allogeneic stem cell transplantation, allogeneic bone marrow transplantation, allotransplantation) from a donor with a homozygous CCR5 Δ32 mutation, which disables the CCR5 receptor. This cure was accepted by the medical community in 2011.[88] It required complete ablation of his existing bone marrow, which is very debilitating.

In August two of three subjects of a pilot study were confirmed to have been cured from chronic lymphocytic leukemia (CLL). The therapy used genetically modified T cells to attack cells that expressed the CD19 protein to fight the disease.[20] In 2013, the researchers announced that 26 of 59 patients had achieved complete remission and the original patient had remained tumor-free.[89]

Human HGF plasmid DNA therapy of cardiomyocytes is being examined as a potential treatment for coronary artery disease as well as treatment for the damage that occurs to the heart after myocardial infarction.[90][91]

In 2011 Neovasculgen was registered in Russia as the first-in-class gene-therapy drug for treatment of peripheral artery disease, including critical limb ischemia; it delivers the gene encoding VEGF.[92][26] Neovasculgen is a plasmid encoding the CMV promoter and the 165-amino-acid form of VEGF.[93][94]

The FDA approved Phase 1 clinical trials on thalassemia major patients in the US for 10 participants in July.[95] The study was expected to continue until 2015.[85]

In July 2012, the European Medicines Agency recommended approval of a gene therapy treatment for the first time in either Europe or the United States. The treatment used Alipogene tiparvovec (Glybera) to compensate for lipoprotein lipase deficiency, which can cause severe pancreatitis.[96] The recommendation was endorsed by the European Commission in November 2012[10][27][97][98] and commercial rollout began in late 2014.[99] Alipogene tiparvovec was expected to cost around $1.6 million per treatment in 2012,[100] revised to $1 million in 2015,[101] making it the most expensive medicine in the world at the time.[102] As of 2016[update], only one person had been treated with the drug.[103]

In December 2012, it was reported that 10 of 13 patients with multiple myeloma were in remission “or very close to it” three months after being injected with a treatment involving genetically engineered T cells to target proteins NY-ESO-1 and LAGE-1, which exist only on cancerous myeloma cells.[22]

In March researchers reported that three of five adult subjects who had acute lymphocytic leukemia (ALL) had been in remission for five months to two years after being treated with genetically modified T cells which attacked cells with the CD19 protein on their surface, i.e. all B-cells, cancerous or not. The researchers believed that the patients’ immune systems would make normal T-cells and B-cells after a couple of months. They were also given bone marrow. One patient relapsed and died and one died of a blood clot unrelated to the disease.[21]

Following encouraging Phase 1 trials, in April, researchers announced they were starting Phase 2 clinical trials (called CUPID2 and SERCA-LVAD) on 250 patients[104] at several hospitals to combat heart disease. The therapy was designed to increase the levels of SERCA2, a protein in heart muscles, improving muscle function.[105] The FDA granted this a Breakthrough Therapy Designation to accelerate the trial and approval process.[106] In 2016 it was reported that no improvement was found from the CUPID 2 trial.[107]

In July researchers reported promising results for six children with two severe hereditary diseases who had been treated with a partially deactivated lentivirus to replace a faulty gene, with follow-up after 7 to 32 months. Three of the children had metachromatic leukodystrophy, which causes children to lose cognitive and motor skills.[108] The other children had Wiskott-Aldrich syndrome, which leaves them open to infection, autoimmune diseases, and cancer.[109] Follow-up trials with gene therapy on another six children with Wiskott-Aldrich syndrome were also reported as promising.[110][111]

In October researchers reported that two children born with adenosine deaminase severe combined immunodeficiency disease (ADA-SCID) had been treated with genetically engineered stem cells 18 months previously and that their immune systems were showing signs of full recovery. Another three children were making progress.[18] In 2014 a further 18 children with ADA-SCID were cured by gene therapy.[112] ADA-SCID children have no functioning immune system and are sometimes known as “bubble children.”[18]

Also in October researchers reported that they had treated six hemophilia sufferers in early 2011 using an adeno-associated virus. Over two years later all six were producing clotting factor.[18][113]

In January researchers reported that six choroideremia patients had been treated with adeno-associated virus with a copy of REP1. Over a six-month to two-year period all had improved their sight.[114][115] By 2016, 32 patients had been treated with positive results and researchers were hopeful the treatment would be long-lasting.[15] Choroideremia is an inherited genetic eye disease with no approved treatment, leading to loss of sight.

In March researchers reported that 12 HIV patients had been treated since 2009 in a trial with a genetically engineered virus with a rare mutation (CCR5 deficiency) known to protect against HIV with promising results.[116][117]

Clinical trials of gene therapy for sickle cell disease were started in 2014.[118][119] There is a need for high quality randomised controlled trials assessing the risks and benefits involved with gene therapy for people with sickle cell disease.[120]

In February LentiGlobin BB305, a gene therapy treatment undergoing clinical trials for treatment of beta thalassemia gained FDA “breakthrough” status after several patients were able to forgo the frequent blood transfusions usually required to treat the disease.[121]

In March researchers delivered a recombinant gene encoding a broadly neutralizing antibody into monkeys infected with simian HIV; the monkeys’ cells produced the antibody, which cleared them of HIV. The technique is named immunoprophylaxis by gene transfer (IGT). Animal tests for antibodies to Ebola, malaria, influenza, and hepatitis were underway.[122][123]

In March, scientists, including an inventor of CRISPR, Jennifer Doudna, urged a worldwide moratorium on germline gene therapy, writing “scientists should avoid even attempting, in lax jurisdictions, germline genome modification for clinical application in humans” until the full implications “are discussed among scientific and governmental organizations”.[124][125][126][127]

In October, researchers announced that they had treated a baby girl, Layla Richards, with an experimental treatment using donor T-cells genetically engineered using TALEN to attack cancer cells. One year after the treatment she was still free of her cancer (a highly aggressive form of acute lymphoblastic leukaemia [ALL]).[128] Children with highly aggressive ALL normally have a very poor prognosis and Layla’s disease had been regarded as terminal before the treatment.[129]

In December, scientists of major world academies called for a moratorium on inheritable human genome edits, including those using CRISPR-Cas9 technologies,[130] while stating that basic research, including embryo gene editing, should continue.[131]

In April the Committee for Medicinal Products for Human Use of the European Medicines Agency endorsed a gene therapy treatment called Strimvelis[132][133] and the European Commission approved it in June.[134] This treats children born with adenosine deaminase deficiency and who have no functioning immune system. This was the second gene therapy treatment to be approved in Europe.[135]

In October, Chinese scientists reported they had started a trial to genetically modify T-cells from 10 adult patients with lung cancer and reinject the modified T-cells back into their bodies to attack the cancer cells. The T-cells had the PD-1 protein (which stops or slows the immune response) removed using CRISPR-Cas9.[136][137]

A 2016 Cochrane systematic review looking at data from four trials on topical cystic fibrosis transmembrane conductance regulator (CFTR) gene therapy does not support its clinical use as a mist inhaled into the lungs to treat cystic fibrosis patients with lung infections. One of the four trials did find weak evidence that liposome-based CFTR gene transfer therapy may lead to a small respiratory improvement for people with CF. This weak evidence is not enough to make a clinical recommendation for routine CFTR gene therapy.[138]

In February Kite Pharma announced results from a clinical trial of CAR-T cells in around a hundred people with advanced Non-Hodgkin lymphoma.[139]

In March, French scientists reported on clinical research of gene therapy to treat sickle-cell disease.[140]

In August, the FDA approved tisagenlecleucel for acute lymphoblastic leukemia.[141] Tisagenlecleucel is an adoptive cell transfer therapy for B-cell acute lymphoblastic leukemia; T cells from a person with cancer are removed, genetically engineered to make a specific T-cell receptor (a chimeric T cell receptor, or “CAR-T”) that reacts to the cancer, and are administered back to the person. The T cells are engineered to target a protein called CD19 that is common on B cells. This is the first form of gene therapy to be approved in the United States. In October, a similar therapy called axicabtagene ciloleucel was approved for non-Hodgkin lymphoma.[142]

In December the results of using an adeno-associated virus carrying the gene for blood clotting factor VIII to treat nine haemophilia A patients were published. Six of the seven patients on the high-dose regime saw their factor VIII rise to normal levels. The low- and medium-dose regimes had no effect on the patients’ blood clotting levels.[143][144]

In December, the FDA approved Luxturna, the first in vivo gene therapy, for the treatment of blindness due to Leber’s congenital amaurosis.[145] The price of this treatment was 850,000 US dollars for both eyes.[146][147]

Speculated uses for gene therapy include:

Athletes might adopt gene therapy technologies to improve their performance.[148] Gene doping is not known to occur, but multiple gene therapies may have such effects. Kayser et al. argue that gene doping could level the playing field if all athletes receive equal access. Critics claim that any therapeutic intervention for non-therapeutic/enhancement purposes compromises the ethical foundations of medicine and sports.[149]

Genetic engineering could be used to cure diseases, but also to change physical appearance, metabolism, and even improve physical capabilities and mental faculties such as memory and intelligence. Ethical claims about germline engineering include beliefs that every fetus has a right to remain genetically unmodified, that parents hold the right to genetically modify their offspring, and that every child has the right to be born free of preventable diseases.[150][151][152] For parents, genetic engineering could be seen as another child enhancement technique to add to diet, exercise, education, training, cosmetics, and plastic surgery.[153][154] Another theorist claims that moral concerns limit but do not prohibit germline engineering.[155]

Possible regulatory schemes include a complete ban, provision to everyone, or professional self-regulation. The American Medical Association’s Council on Ethical and Judicial Affairs stated that “genetic interventions to enhance traits should be considered permissible only in severely restricted situations: (1) clear and meaningful benefits to the fetus or child; (2) no trade-off with other characteristics or traits; and (3) equal access to the genetic technology, irrespective of income or other socioeconomic characteristics.”[156]

As early as 1990, scientists have opposed attempts to modify the human germline using these new tools,[157] and such concerns have continued as the technology progressed.[158][159] With the advent of new techniques like CRISPR, in March 2015 a group of scientists urged a worldwide moratorium on clinical use of gene editing technologies to edit the human genome in a way that can be inherited.[124][125][126][127] In April 2015, researchers sparked controversy when they reported results of basic research to edit the DNA of non-viable human embryos using CRISPR.[160][161] A committee of the American National Academy of Sciences and National Academy of Medicine gave qualified support to human genome editing in 2017[162][163] once answers have been found to safety and efficiency problems, “but only for serious conditions under stringent oversight.”[164]

Regulations covering genetic modification are part of general guidelines about human-involved biomedical research. There are no international treaties which are legally binding in this area, but there are recommendations for national laws from various bodies.

The Helsinki Declaration (Ethical Principles for Medical Research Involving Human Subjects) was amended by the World Medical Association’s General Assembly in 2008. This document provides principles physicians and researchers must consider when involving humans as research subjects. The Statement on Gene Therapy Research initiated by the Human Genome Organization (HUGO) in 2001 provides a legal baseline for all countries. HUGO’s document emphasizes human freedom and adherence to human rights, and offers recommendations for somatic gene therapy, including the importance of recognizing public concerns about such research.[165]

No federal legislation lays out protocols or restrictions about human genetic engineering. This subject is governed by overlapping regulations from local and federal agencies, including the Department of Health and Human Services, the FDA and NIH’s Recombinant DNA Advisory Committee. Researchers seeking federal funds for an investigational new drug application (commonly the case for somatic human genetic engineering) must obey international and federal guidelines for the protection of human subjects.[166]

NIH serves as the main gene therapy regulator for federally funded research. Privately funded research is advised to follow these regulations. NIH provides funding for research that develops or enhances genetic engineering techniques and to evaluate the ethics and quality in current research. The NIH maintains a mandatory registry of human genetic engineering research protocols that includes all federally funded projects.

An NIH advisory committee published a set of guidelines on gene manipulation.[167] The guidelines discuss lab safety as well as human test subjects and various experimental types that involve genetic changes. Several sections specifically pertain to human genetic engineering, including Section III-C-1. This section describes required review processes and other aspects when seeking approval to begin clinical research involving genetic transfer into a human patient.[168] The protocol for a gene therapy clinical trial must be approved by the NIH’s Recombinant DNA Advisory Committee prior to any clinical trial beginning; this is different from any other kind of clinical trial.[167]

As with other kinds of drugs, the FDA regulates the quality and safety of gene therapy products and supervises how these products are used clinically. Therapeutic alteration of the human genome falls under the same regulatory requirements as any other medical treatment. Research involving human subjects, such as clinical trials, must be reviewed and approved by the FDA and an Institutional Review Board.[169][170]

Gene therapy is the basis for the plotline of the film I Am Legend[171] and the TV show Will Gene Therapy Change the Human Race?.[172] In 1994, gene therapy was a plot element in The Erlenmeyer Flask, The X-Files’ first-season finale. It is also used in Stargate as a means of allowing humans to use Ancient technology.[173]

Read the rest here:

Gene therapy – Wikipedia

Hedonism – Wikipedia

Hedonism is a school of thought that argues that the pursuit of pleasure and intrinsic goods are the primary or most important goals of human life.[1] A hedonist strives to maximize net pleasure (pleasure minus pain). However, upon finally attaining that pleasure, happiness may remain stationary.

Ethical hedonism is the idea that all people have the right to do everything in their power to achieve the greatest amount of pleasure possible to them. It is also the idea that every person’s pleasure should far surpass their amount of pain. Ethical hedonism is said to have been started by Aristippus of Cyrene, a student of Socrates. He held the idea that pleasure is the highest good.[2]

The name derives from the Greek word for “delight” (hēdonismos, from hēdonē “pleasure”, cognate via Proto-Indo-European *swéh₂dus with English sweet, plus the suffix -ismos “ism”). An extremely strong aversion to hedonism is hedonophobia.

In the original Old Babylonian version of the Epic of Gilgamesh, which was written soon after the invention of writing, Siduri gave the following advice: “Fill your belly. Day and night make merry. Let days be full of joy. Dance and make music day and night […] These things alone are the concern of men.” This may represent the first recorded advocacy of a hedonistic philosophy.[3]

Scenes of a harper entertaining guests at a feast were common in ancient Egyptian tombs (see Harper’s Songs), and sometimes contained hedonistic elements, calling guests to submit to pleasure because they cannot be sure that they will be rewarded for good with a blissful afterlife. The following is a song attributed to the reign of one of the pharaohs around the time of the 12th dynasty, and the text was used in the eighteenth and nineteenth dynasties.[4][5]

Let thy desire flourish,
In order to let thy heart forget the beatifications for thee.
Follow thy desire, as long as thou shalt live.
Put myrrh upon thy head and clothing of fine linen upon thee,
Being anointed with genuine marvels of the gods’ property.
Set an increase to thy good things;
Let not thy heart flag.
Follow thy desire and thy good.
Fulfill thy needs upon earth, after the command of thy heart,
Until there come for thee that day of mourning.

Democritus seems to be the earliest philosopher on record to have categorically embraced a hedonistic philosophy; he called the supreme goal of life “contentment” or “cheerfulness”, claiming that “joy and sorrow are the distinguishing mark of things beneficial and harmful” (DK 68 B 188).[6]

The Cyrenaics were an ultra-hedonist Greek school of philosophy founded in the 4th century BC, supposedly by Aristippus of Cyrene, although many of the principles of the school are believed to have been formalized by his grandson of the same name, Aristippus the Younger. The school was so called after Cyrene, the birthplace of Aristippus. It was one of the earliest Socratic schools. The Cyrenaics taught that the only intrinsic good is pleasure, which meant not just the absence of pain, but positively enjoyable momentary sensations. Of these, physical ones are stronger than those of anticipation or memory. They did, however, recognize the value of social obligation, and that pleasure could be gained from altruism[citation needed]. Theodorus the Atheist was a later exponent of hedonism who was a disciple of the younger Aristippus,[7] while becoming well known for expounding atheism. The school died out within a century, and was replaced by Epicureanism.

The Cyrenaics were known for their skeptical theory of knowledge. They reduced logic to a basic doctrine concerning the criterion of truth.[8] They thought that we can know with certainty our immediate sense-experiences (for instance, that I am having a sweet sensation now) but can know nothing about the nature of the objects that cause these sensations (for instance, that the honey is sweet).[9] They also denied that we can have knowledge of what the experiences of other people are like.[10] All knowledge is immediate sensation. These sensations are motions which are purely subjective, and are painful, indifferent or pleasant, according as they are violent, tranquil or gentle.[9][11] Further, they are entirely individual and can in no way be described as constituting absolute objective knowledge. Feeling, therefore, is the only possible criterion of knowledge and of conduct.[9] Our ways of being affected are alone knowable. Thus the sole aim for everyone should be pleasure.

Cyrenaicism deduces a single, universal aim for all people which is pleasure. Furthermore, all feeling is momentary and homogeneous. It follows that past and future pleasure have no real existence for us, and that among present pleasures there is no distinction of kind.[11] Socrates had spoken of the higher pleasures of the intellect; the Cyrenaics denied the validity of this distinction and said that bodily pleasures, being more simple and more intense, were preferable.[12] Momentary pleasure, preferably of a physical kind, is the only good for humans. However some actions which give immediate pleasure can create more than their equivalent of pain. The wise person should be in control of pleasures rather than be enslaved to them, otherwise pain will result, and this requires judgement to evaluate the different pleasures of life.[13] Regard should be paid to law and custom, because even though these things have no intrinsic value on their own, violating them will lead to unpleasant penalties being imposed by others.[12] Likewise, friendship and justice are useful because of the pleasure they provide.[12] Thus the Cyrenaics believed in the hedonistic value of social obligation and altruistic behaviour.

Epicureanism is a system of philosophy based upon the teachings of Epicurus (c. 341–c. 270 BC), founded around 307 BC. Epicurus was an atomic materialist, following in the steps of Democritus and Leucippus. His materialism led him to a general stance against superstition or the idea of divine intervention. Following Aristippus, about whom very little is known, Epicurus believed that the greatest good was to seek modest, sustainable “pleasure” in the form of a state of tranquility and freedom from fear (ataraxia) and absence of bodily pain (aponia) through knowledge of the workings of the world and the limits of our desires. The combination of these two states is supposed to constitute happiness in its highest form. Although Epicureanism is a form of hedonism, insofar as it declares pleasure as the sole intrinsic good, its conception of absence of pain as the greatest pleasure and its advocacy of a simple life make it different from “hedonism” as it is commonly understood.

In the Epicurean view, the highest pleasure (tranquility and freedom from fear) was obtained by knowledge, friendship and living a virtuous and temperate life. He lauded the enjoyment of simple pleasures, by which he meant abstaining from bodily desires, such as sex and appetites, verging on asceticism. He argued that when eating, one should not eat too richly, for it could lead to dissatisfaction later, such as the grim realization that one could not afford such delicacies in the future. Likewise, sex could lead to increased lust and dissatisfaction with the sexual partner. Epicurus did not articulate a broad system of social ethics that has survived but had a unique version of the Golden Rule.

It is impossible to live a pleasant life without living wisely and well and justly (agreeing “neither to harm nor be harmed”),[14] and it is impossible to live wisely and well and justly without living a pleasant life.[15]

Epicureanism was originally a challenge to Platonism, though later it became the main opponent of Stoicism. Epicurus and his followers shunned politics. After the death of Epicurus, his school was headed by Hermarchus; later many Epicurean societies flourished in the Late Hellenistic era and during the Roman era (such as those in Antiochia, Alexandria, Rhodes and Ercolano). The poet Lucretius is its best-known Roman proponent. By the end of the Roman Empire, having undergone Christian attack and repression, Epicureanism had all but died out, and would be resurrected in the 17th century by the atomist Pierre Gassendi, who adapted it to the Christian doctrine.

Some writings by Epicurus have survived. Some scholars consider the epic poem On the Nature of Things by Lucretius to present in one unified work the core arguments and theories of Epicureanism. Many of the papyrus scrolls unearthed at the Villa of the Papyri at Herculaneum are Epicurean texts. At least some are thought to have belonged to the Epicurean Philodemus.

Yangism has been described as a form of psychological and ethical egoism. The Yangist philosophers believed in the importance of maintaining self-interest through “keeping one’s nature intact, protecting one’s uniqueness, and not letting the body be tied by other things.” Disagreeing with the Confucian virtues of li (propriety), ren (humaneness), and yi (righteousness) and the Legalist virtue of fa (law), the Yangists saw wei wo, or “everything for myself,” as the only virtue necessary for self-cultivation. Individual pleasure is considered desirable, like in hedonism, but not at the expense of the health of the individual. The Yangists saw individual well-being as the prime purpose of life, and considered anything that hindered that well-being immoral and unnecessary.

The main focus of the Yangists was on the concept of xing, or human nature, a term later incorporated by Mencius into Confucianism. The xing, according to sinologist A. C. Graham, is a person’s “proper course of development” in life. Individuals can only rationally care for their own xing, and should not naively have to support the xing of other people, even if it means opposing the emperor. In this sense, Yangism is a “direct attack” on Confucianism, by implying that the power of the emperor, defended in Confucianism, is baseless and destructive, and that state intervention is morally flawed.

The Confucian philosopher Mencius depicts Yangism as the direct opposite of Mohism: while Mohism promotes the idea of universal love and impartial caring, the Yangists acted only “for themselves,” rejecting the altruism of Mohism. He criticized the Yangists as selfish, ignoring the duty of serving the public and caring only for personal concerns. Mencius saw Confucianism as the “Middle Way” between Mohism and Yangism.

Judaism believes that mankind was created for pleasure, as God placed Adam and Eve in the Garden of Eden, Eden being the Hebrew word for “pleasure.” In recent years, Rabbi Noah Weinberg articulated five different levels of pleasure; connecting with God is the highest possible pleasure.

Ethical hedonism as part of Christian theology has also been a concept in some evangelical circles, particularly in those of the Reformed tradition.[16] The term Christian Hedonism was first coined by Reformed Baptist theologian John Piper in his 1986 book Desiring God: “My shortest summary of it is: God is most glorified in us when we are most satisfied in him. Or: The chief end of man is to glorify God by enjoying him forever. Does Christian Hedonism make a god out of pleasure? No. It says that we all make a god out of what we take most pleasure in.”[16] Piper states his term may describe the theology of Jonathan Edwards, who in 1812 referred to “a future enjoyment of him [God] in heaven.”[17] Already in the 17th century, the atomist Pierre Gassendi had adapted Epicureanism to the Christian doctrine.

The concept of hedonism is also found in the Hindu scriptures.[18][19]

Utilitarianism addresses problems with moral motivation neglected by Kantianism by giving a central role to happiness. It is an ethical theory holding that the proper course of action is the one that maximizes the overall good of the society.[20] It is thus one form of consequentialism, meaning that the moral worth of an action is determined by its resulting outcome. The most influential contributors to this theory are considered to be the 18th and 19th-century British philosophers Jeremy Bentham and John Stuart Mill. Conjoining hedonism, as a view of what is good for people, to utilitarianism has the result that all action should be directed toward achieving the greatest total amount of happiness (see Hedonic calculus). Though consistent in their pursuit of happiness, Bentham’s and Mill’s versions of hedonism differ. There are two somewhat basic schools of thought on hedonism:[1]

An extreme form of hedonism views moral and sexual restraint as either unnecessary or harmful. Famous proponents are the Marquis de Sade[21][22] and John Wilmot.[23]

Contemporary proponents of hedonism include the Swedish philosopher Torbjörn Tännsjö,[24] Fred Feldman,[25] and the Spanish ethical philosopher Esperanza Guisán, who published a “Hedonist Manifesto” in 1990.[26]

A dedicated contemporary hedonist philosopher and writer on the history of hedonistic thought is the French Michel Onfray. He has written two books directly on the subject (L’invention du plaisir: fragments cyrénaïques[27] and La puissance d’exister: Manifeste hédoniste).[28] He defines hedonism “as an introspective attitude to life based on taking pleasure yourself and pleasuring others, without harming yourself or anyone else.”[29] Onfray’s philosophical project is to define an ethical hedonism, a joyous utilitarianism, and “a generalized aesthetic of sensual materialism that explores how to use the brain’s and the body’s capacities to their fullest extent, while restoring philosophy to a useful role in art, politics, and everyday life and decisions.”[30]

Onfray’s works “have explored the philosophical resonances and components of (and challenges to) science, painting, gastronomy, sex and sensuality, bioethics, wine, and writing. His most ambitious project is his projected six-volume Counter-history of Philosophy,”[30] of which three have been published. For him, “In opposition to the ascetic ideal advocated by the dominant school of thought, hedonism suggests identifying the highest good with your own pleasure and that of others; the one must never be indulged at the expense of sacrificing the other. Obtaining this balance, my pleasure at the same time as the pleasure of others, presumes that we approach the subject from different angles: political, ethical, aesthetic, erotic, bioethical, pedagogical, historiographical.”

For this he has “written books on each of these facets of the same world view.”[31] His philosophy aims for “micro-revolutions”, or “revolutions of the individual and small groups of like-minded people who live by his hedonistic, libertarian values.”[32]

The Abolitionist Society is a transhumanist group calling for the abolition of suffering in all sentient life through the use of advanced biotechnology. Their core philosophy is negative utilitarianism. David Pearce is a theorist of this perspective and he believes and promotes the idea that there exists a strong ethical imperative for humans to work towards the abolition of suffering in all sentient life. His book-length internet manifesto The Hedonistic Imperative[33] outlines how technologies such as genetic engineering, nanotechnology, pharmacology, and neurosurgery could potentially converge to eliminate all forms of unpleasant experience among human and non-human animals, replacing suffering with gradients of well-being, a project he refers to as “paradise engineering”.[34] A transhumanist and a vegan,[35] Pearce believes that we (or our future posthuman descendants) have a responsibility not only to avoid cruelty to animals within human society but also to alleviate the suffering of animals in the wild.

In a talk David Pearce gave at the Future of Humanity Institute and at the Charity International ‘Happiness Conference’ he said “Sadly, what won’t abolish suffering, or at least not on its own, is socio-economic reform, or exponential economic growth, or technological progress in the usual sense, or any of the traditional panaceas for solving the world’s ills. Improving the external environment is admirable and important; but such improvement can’t recalibrate our hedonic treadmill above a genetically constrained ceiling. Twin studies confirm there is a [partially] heritable set-point of well-being – or ill-being – around which we all tend to fluctuate over the course of a lifetime. This set-point varies between individuals. [It’s possible to lower an individual’s hedonic set-point by inflicting prolonged uncontrolled stress; but even this re-set is not as easy as it sounds: suicide-rates typically go down in wartime; and six months after a quadriplegia-inducing accident, studies[citation needed] suggest that we are typically neither more nor less unhappy than we were before the catastrophic event.] Unfortunately, attempts to build an ideal society can’t overcome this biological ceiling, whether utopias of the left or right, free-market or socialist, religious or secular, futuristic high-tech or simply cultivating one’s garden. Even if everything that traditional futurists have asked for is delivered – eternal youth, unlimited material wealth, morphological freedom, superintelligence, immersive VR, molecular nanotechnology, etc – there is no evidence that our subjective quality of life would on average significantly surpass the quality of life of our hunter-gatherer ancestors – or a New Guinea tribesman today – in the absence of reward pathway enrichment. This claim is difficult to prove in the absence of sophisticated neuroscanning; but objective indices of psychological distress e.g. suicide rates, bear it out. 
Unenhanced humans will still be prey to the spectrum of Darwinian emotions, ranging from terrible suffering to petty disappointments and frustrations – sadness, anxiety, jealousy, existential angst. Their biology is part of “what it means to be human”. Subjectively unpleasant states of consciousness exist because they were genetically adaptive. Each of our core emotions had a distinct signalling role in our evolutionary past: they tended to promote behaviours that enhanced the inclusive fitness of our genes in the ancestral environment.”[36]

Russian physicist and philosopher Victor Argonov argues that hedonism is not only a philosophical but also a verifiable scientific hypothesis. In 2014 he suggested “postulates of the pleasure principle”, confirmation of which would lead to a new scientific discipline, hedodynamics. Hedodynamics would be able to forecast the distant future development of human civilization and even the probable structure and psychology of other rational beings within the universe.[37] In order to build such a theory, science must discover the neural correlate of pleasure: a neurophysiological parameter unambiguously corresponding to the feeling of pleasure (hedonic tone).

According to Argonov, posthumans will be able to reprogram their motivations in an arbitrary manner (to get pleasure from any programmed activity).[38] If the pleasure-principle postulates are true, then the general direction of civilization’s development is obvious: maximization of integral happiness in posthuman life (the product of life span and average happiness). Posthumans will avoid constant pleasure stimulation, because it is incompatible with the rational behavior required to prolong life. However, on average, they can become much happier than modern humans.
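On one natural reading (the notation here is an assumption for illustration, not Argonov's own), the "integral happiness" described above can be written as:

```latex
H \;=\; \int_0^T h(t)\,\mathrm{d}t \;=\; T \cdot \bar{h},
\qquad \bar{h} \;=\; \frac{1}{T}\int_0^T h(t)\,\mathrm{d}t,
```

where $T$ is life span, $h(t)$ is the hedonic tone at time $t$, and $\bar{h}$ is average happiness; the identity $H = T \cdot \bar{h}$ is just the definition of an average, matching the "product of life span and average happiness" formulation in the text.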

Many other aspects of posthuman society could be predicted by hedodynamics if the neural correlate of pleasure were discovered: for example, the optimal number of individuals, their optimal body size (whether body size matters for happiness at all), and the degree of aggression.

Critics of hedonism have objected to its exclusive concentration on pleasure as valuable.

In particular, G. E. Moore offered a thought experiment in criticism of pleasure as the sole bearer of value: he imagined two worlds, one of exceeding beauty and the other a heap of filth. Neither of these worlds will be experienced by anyone. The question, then, is if it is better for the beautiful world to exist than the heap of filth. In this Moore implied that states of affairs have value beyond conscious pleasure, which he said spoke against the validity of hedonism.[39]

In Islam, God admonished mankind not to love worldly pleasures, since they are associated with greed and are sources of sinful habits. He also threatened those who prefer worldly life over the hereafter with Hell.

Those who choose the worldly life and its pleasures will be given proper recompense for their deeds in this life and will not suffer any loss. Such people will receive nothing in the next life except Hell fire. Their deeds will be made devoid of all virtue and their efforts will be in vain.

Here is the original post:

Hedonism – Wikipedia

Unit 731 – Wikipedia

Unit 731 (Japanese: 731部隊, Hepburn: Nana-san-ichi Butai) was a covert biological and chemical warfare research and development unit of the Imperial Japanese Army that undertook lethal human experimentation during the Second Sino-Japanese War (1937–1945) of World War II. It was responsible for some of the most notorious war crimes carried out by Imperial Japan. Unit 731 was based at the Pingfang district of Harbin, the largest city in the Japanese puppet state of Manchukuo (now Northeast China).

It was officially known as the Epidemic Prevention and Water Purification Department of the Kwantung Army (Kantōgun Bōeki Kyūsuibu Honbu). Originally set up under the Kempeitai military police of the Empire of Japan, Unit 731 was taken over and commanded until the end of the war by General Shiro Ishii, a combat medic officer in the Kwantung Army. The facility itself was built between 1934 and 1939 and officially adopted the name “Unit 731” in 1941.

At least 3,000 men, women, and children,[1][2] of whom at least 600 every year were provided by the Kempeitai,[3] were subjected as “logs” to experimentation conducted by Unit 731 at the camp based in Pingfang alone; this does not include victims from other medical experimentation sites, such as Unit 100.[4]

Japanese participants in Unit 731 attest that most of the victims they experimented on were Chinese, while a lesser percentage were Soviet, Mongolian, Korean, and other Allied POWs. The unit received generous support from the Japanese government up to the end of the war in 1945.

Instead of being tried for war crimes after the war, the researchers involved in Unit 731 were secretly given immunity by the U.S. in exchange for the data they gathered through human experimentation.[5] Other researchers that the Soviet forces managed to arrest first were tried at the Khabarovsk War Crime Trials in 1949. The Americans did not try the researchers so that the information and experience gained in bio-weapons could be co-opted into the U.S. biological warfare program, as had happened with Nazi researchers in Operation Paperclip.[6] On 6 May 1947, Douglas MacArthur, as Supreme Commander of the Allied Forces, wrote to Washington that “additional data, possibly some statements from Ishii probably can be obtained by informing Japanese involved that information will be retained in intelligence channels and will not be employed as ‘War Crimes’ evidence”.[5] Victim accounts were then largely ignored or dismissed in the West as communist propaganda.[7]

In 1932, Surgeon General Shirō Ishii (Ishii Shirō), chief medical officer of the Japanese Army and protégé of Army Minister Sadao Araki, was placed in command of the Army Epidemic Prevention Research Laboratory (AEPRL). Ishii organized a secret research group, the “Tōgō Unit”, for various chemical and biological experimentation in Manchuria. Ishii had proposed the creation of a Japanese biological and chemical research unit in 1930, after a two-year study trip abroad, on the grounds that Western powers were developing their own programs. One of Ishii’s main supporters inside the army was Colonel Chikahiko Koizumi, who later became Japan’s Health Minister from 1941 to 1945. Koizumi had joined a secret poison gas research committee in 1915, during World War I, when he and other Imperial Japanese Army officers became impressed by the successful German use of chlorine gas at the second battle of Ypres, where the Allies suffered 15,000 casualties as a result of the chemical attack.[8]

Unit Tōgō was implemented in the Zhongma Fortress, a prison and experimentation camp in Beiyinhe, a village 100 km (62 mi) south of Harbin on the South Manchuria Railway. A jailbreak in autumn 1934 and a later explosion in 1935 (believed to be an attack) led Ishii to shut down Zhongma Fortress. He received authorization to move to Pingfang, approximately 24 km (15 mi) south of Harbin, to set up a new and much larger facility.[9]

In 1936, Emperor Hirohito authorized by decree the expansion of this unit and its integration into the Kwantung Army as the Epidemic Prevention Department.[10] It was divided at the same time into the “Ishii Unit” and “Wakamatsu Unit” with a base in Hsinking. From August 1940, the units were known collectively as the “Epidemic Prevention and Water Purification Department of the Kwantung Army”[11] or “Unit 731” for short. In addition to the establishment of Unit 731, the decree also called for the establishment of an additional biological warfare development unit called the Kwantung Army Military Horse Epidemic Prevention Workshop (later referred to as Manchuria Unit 100) and a chemical warfare development unit called the Kwantung Army Technical Testing Department (later referred to as Manchuria Unit 516). After Japanese expansion throughout China in 1937, sister chemical and biological warfare units were founded in major Chinese cities, and were referred to as Epidemic Prevention and Water Supply Units. Detachments included Unit 1855 in Beijing, Unit Ei 1644 in Nanjing, Unit 8604 in Guangzhou and, later, Unit 9420 in Singapore. Together these units comprised Ishii’s network, which at its height in 1939 was composed of more than 10,000 personnel.[12]

Medical doctors and professors from Japan were attracted to join Unit 731 by the rare opportunity to conduct human experimentation and strong financial support from the Army.[13]

A special project code-named Maruta used human beings for experiments. Test subjects were gathered from the surrounding population and were sometimes referred to euphemistically as “logs” (maruta), used in such contexts as “How many logs fell?”. This term originated as a joke on the part of the staff, because the official cover story for the facility given to the local authorities was that it was a lumber mill. However, in an account by a man who worked as a junior uniformed civilian employee of the Imperial Japanese Army in Unit 731, the project was internally called “Holzklotz”, the German word for log.[14] In a further parallel, the corpses of “sacrificed” subjects were disposed of by incineration.[15] Researchers in Unit 731 also published some of their results in peer-reviewed journals, writing as though the research had been conducted on non-human primates called “Manchurian monkeys” or “long-tailed monkeys”.[16]

The test subjects were selected to give a wide cross-section of the population and included common criminals, captured bandits and anti-Japanese partisans, political prisoners and also people rounded up by the Kempeitai military police for alleged “suspicious activities”. They included infants, the elderly, and pregnant women. The members of the unit, approximately three hundred researchers, included doctors and bacteriologists; most were Japanese, although some were Chinese and Korean collaborators.[17] Many had been desensitized to performing unpleasant experiments from experience in animal research.[18]

Thousands of men, women, children, and infants interned at prisoner of war camps were subjected to vivisection, often without anesthesia and usually ending with the death of the victim.[19][20] Vivisections were performed on prisoners after infecting them with various diseases. Researchers performed invasive surgery on prisoners, removing organs to study the effects of disease on the human body. These were conducted while the patients were alive because it was thought that the death of the subject would affect the results.[21]

Prisoners had limbs amputated in order to study blood loss. Those limbs that were removed were sometimes re-attached to the opposite sides of the body. Some prisoners had their stomachs surgically removed and the esophagus reattached to the intestines. Parts of organs, such as the brain, lungs, and liver, were removed from some prisoners.[20] Imperial Japanese Army surgeon Ken Yuasa suggests that the practice of vivisection on human subjects (mostly Chinese communists) was widespread even outside Unit 731,[22] estimating that at least 1,000 Japanese personnel were involved in the practice in mainland China.[23]

Prisoners were injected with diseases, disguised as vaccinations,[24] to study their effects. To study the effects of untreated venereal diseases, male and female prisoners were deliberately infected with syphilis and gonorrhoea, then studied. Prisoners were also repeatedly subject to rape by guards.[25]

Plague fleas, infected clothing, and infected supplies encased in bombs were dropped on various targets. The resulting cholera, anthrax, and plague were estimated to have killed around 400,000 Chinese civilians, and possibly more.[26] Tularemia was tested on Chinese civilians.[27]

Unit 731 and its affiliated units (Unit 1644 and Unit 100 among others) were involved in research, development and experimental deployment of epidemic-creating biowarfare weapons in assaults against the Chinese populace (both civilian and military) throughout World War II. Plague-infected fleas, bred in the laboratories of Unit 731 and Unit 1644, were spread by low-flying airplanes upon Chinese cities, including coastal Ningbo in 1940, and Changde, Hunan Province, in 1941. This military aerial spraying killed thousands of people with bubonic plague epidemics.[28]

It is possible that Unit 731’s methods and objectives were also followed in Indonesia, in a case of a failed experiment designed to validate a synthesized tetanus toxoid vaccine.[29]

Physiologist Yoshimura Hisato conducted experiments by taking captives outside, dipping various appendages into water, and allowing the limb to freeze. Once frozen, which testimony from a Japanese officer said “was determined after the ‘frozen arms, when struck with a short stick, emitted a sound resembling that which a board gives when it is struck'”,[30] ice was chipped away and the area doused in water. The effects of different water temperatures were tested by bludgeoning the victim to determine if any areas were still frozen. Variations of these tests in more gruesome forms were performed.

Doctors orchestrated forced sex acts between infected and non-infected prisoners to transmit disease, as the testimony of a prison guard on devising a method for transmitting syphilis between patients shows:

“Infection of venereal disease by injection was abandoned, and the researchers started forcing the prisoners into sexual acts with each other. Four or five unit members, dressed in white laboratory clothing completely covering the body with only eyes and mouth visible, handled the tests. A male and female, one infected with syphilis, would be brought together in a cell and forced into sex with each other. It was made clear that anyone resisting would be shot.”[31]

After victims were infected, they were vivisected at different stages of infection, so that internal and external organs could be observed as the disease progressed. Testimony from multiple guards blames the female victims as being hosts of the diseases, even as they were forcibly infected. Genitals of female prisoners that were infected with syphilis were called “jam filled buns” by guards.[32]

Some children grew up inside the walls of Unit 731, infected with syphilis. A Youth Corps member deployed to train at Unit 731 recalled viewing a batch of subjects that would undergo syphilis testing: “one was a Chinese woman holding an infant, one was a White Russian woman with a daughter of four or five years of age, and the last was a White Russian woman with a boy of about six or seven.”[32] The children of these women were tested in ways similar to their parents, with specific emphasis on determining how longer infection periods affected the effectiveness of treatments.

Female prisoners were forced to become pregnant for use in experiments. The hypothetical possibility of vertical transmission (from mother to child) of diseases, particularly syphilis, was the stated reason for the torture. Fetal survival and damage to the mother’s reproductive organs were objects of interest. Though “a large number of babies were born in captivity”, there have been no accounts of any survivors of Unit 731, children included. It is suspected that the children of female prisoners were killed or the pregnancies terminated.[32]

While male prisoners were often used in single studies, so that the results of the experimentation on them would not be clouded by other variables, women were sometimes used in bacteriological or physiological experiments, sex experiments, and as the victims of sex crimes. The testimony of a unit member who served as a guard graphically demonstrated this reality:

“One of the former researchers I located told me that one day he had a human experiment scheduled, but there was still time to kill. So he and another unit member took the keys to the cells and opened one that housed a Chinese woman. One of the unit members raped her; the other member took the keys and opened another cell. There was a Chinese woman in there who had been used in a frostbite experiment. She had several fingers missing and her bones were black, with gangrene set in. He was about to rape her anyway, then he saw that her sex organ was festering, with pus oozing to the surface. He gave up the idea, left and locked the door, then later went on to his experimental work.”[32]

Human targets were used to test grenades positioned at various distances and in different positions. Flamethrowers were tested on humans. Humans were also tied to stakes and used as targets to test germ-releasing bombs, chemical weapons, and explosive bombs.[33][34]

In other tests, subjects were deprived of food and water to determine the length of time until death; placed into high-pressure chambers until death; experimented upon to determine the relationship between temperature, burns, and human survival; placed into centrifuges and spun until death; injected with animal blood; exposed to lethal doses of x-rays; subjected to various chemical weapons inside gas chambers; injected with sea water; and burned or buried alive.[35]

Japanese researchers performed tests on prisoners with bubonic plague, cholera, smallpox, botulism, and other diseases.[36] This research led to the development of the defoliation bacilli bomb and the flea bomb used to spread bubonic plague.[37] Some of these bombs were designed with porcelain shells, an idea proposed by Ishii in 1938.

These bombs enabled Japanese soldiers to launch biological attacks, infecting agriculture, reservoirs, wells, and other areas with anthrax, plague-carrier fleas, typhoid, dysentery, cholera, and other deadly pathogens. During biological bomb experiments, researchers dressed in protective suits would examine the dying victims. Infected food supplies and clothing were dropped by airplane into areas of China not occupied by Japanese forces. In addition, poisoned food and candies were given to unsuspecting victims, and the results examined.

In 2002, Changde, China, site of the flea spraying attack, held an “International Symposium on the Crimes of Bacteriological Warfare” which estimated that at least 580,000 people died as a result of the attack.[38] The historian Sheldon Harris claims that 200,000 died.[39] In addition to Chinese casualties, 1,700 Japanese in Chekiang were killed by their own biological weapons while attempting to unleash the biological agent, indicating serious issues with distribution.[1]

During the final months of World War II, Japan planned to use plague as a biological weapon against San Diego, California. The plan was scheduled to launch on September 22, 1945, but Japan surrendered five weeks earlier.[40][41][42][43]

According to A.S. Wells, the majority of victims were Chinese (including accused “bandits” and “Communists”), Korean, and Soviet, although they may also have included European, American, Indian, and Australian prisoners of war.[44]

Unit 731 participants of Japan attest that most of the victims they experimented on were Chinese[22] while a small percentage were Soviet, Mongolian, Korean, and other Allied POWs.[45] Almost 70% of the victims who died in the Pingfang camp were Chinese, including both civilian and military.[46] Close to 30% of the victims were Soviet.[47] Some others were Southeast Asians and Pacific Islanders, at the time colonies of the Empire of Japan, and a small number of Allied prisoners of war.[48]

Robert Peaty (1903–1989), a British major in the Royal Army Ordnance Corps, was the senior-ranking Allied officer. During this time, he kept a secret diary. A copy of his entire diary exists in the NARA archives.[49] An extract of the diary is available at the UK National Archives at Kew.[50] He was interviewed by the Imperial War Museum in 1981, and the audio recording tape reels are in the IWM’s archives.[51]

In April 2018, the National Archives of Japan for the first time disclosed a nearly complete list of 3,607 people who worked for Unit 731 to Dr. Katsuo Nishiyama of the Shiga University of Medical Science, who says that he intends to publish the list online.[52]

Unit 731 was divided into eight divisions:

The Unit 731 complex covered six square kilometres (2.3 square miles) and consisted of more than 150 buildings. The design of the facilities made them hard to destroy by bombing. The complex contained various factories. It had around 4,500 containers to be used to raise fleas, six cauldrons to produce various chemicals, and around 1,800 containers to produce biological agents. Approximately 30 kilograms (66 pounds) of bubonic plague bacteria could be produced in a few days.

Some of Unit 731’s satellite facilities are in use by various Chinese industrial concerns. A portion has been preserved and is open to visitors as a War Crimes Museum.

A medical school and research facility belonging to Unit 731 operated in the Shinjuku District of Tokyo during World War II. In 2006, Toyo Ishii, a nurse who worked at the school during the war, revealed that she had helped bury bodies and pieces of bodies on the school’s grounds shortly after Japan’s surrender in 1945. In response, in February 2011 the Ministry of Health began to excavate the site.[54]

China requested DNA samples from any human remains discovered at the site. The Japanese government, which has never officially acknowledged the atrocities committed by Unit 731, rejected the request.[55]

The related Unit 8604 was operated by the Japanese Southern China Area Army and stationed at Guangzhou (Canton). This installation conducted human experimentation in food and water deprivation as well as water-borne typhus. According to postwar testimony, this facility served as the main rat breeding farm for the medical units to provide them with bubonic plague vectors for experiments.[56]

Unit 731 was part of the Epidemic Prevention and Water Purification Department which dealt with contagious disease and water supply generally.

Operations and experiments continued until the end of the war. Ishii had wanted to use biological weapons in the Pacific War since May 1944, but his attempts were repeatedly snubbed.

With the coming of the Red Army in August 1945, the unit had to abandon its work in haste. Its members and their families fled to Japan.

Ishii ordered every member of the group “to take the secret to the grave”, threatening to find them if they failed, and prohibiting any of them from going into public work back in Japan. Potassium cyanide vials were issued for use in the event that the remaining personnel were captured.

Skeleton crews of Ishii’s Japanese troops blew up the compound in the final days of the war to destroy evidence of their activities, but many of the buildings were so well constructed that they survived somewhat intact.

Among the individuals in Japan after its 1945 surrender was Lieutenant Colonel Murray Sanders, who arrived in Yokohama via the American ship Sturgess in September 1945. Sanders was a highly regarded microbiologist and a member of America’s military center for biological weapons. Sanders’ duty was to investigate Japanese biological warfare activity. At the time of his arrival in Japan, he had no knowledge of what Unit 731 was.[32] Until Sanders finally threatened the Japanese with bringing the Soviets into the picture, little information about biological warfare was being shared with the Americans. The Japanese wanted to avoid prosecution under the Soviet legal system, so the morning after he made his threat, Sanders received a manuscript describing Japan’s involvement in biological warfare.[57] Sanders took this information to General Douglas MacArthur, who was the Supreme Commander of the Allied Powers responsible for rebuilding Japan during the Allied occupation. MacArthur struck a deal with Japanese informants:[58] he secretly granted immunity to the physicians of Unit 731, including their leader, in exchange for providing America, but not the other wartime allies, with their research on biological warfare and data from human experimentation.[5] American occupation authorities monitored the activities of former unit members, including reading and censoring their mail.[59] The U.S. believed that the research data was valuable and did not want other nations, particularly the Soviet Union, to acquire data on biological weapons.[60]

The Tokyo War Crimes Tribunal heard only one reference to Japanese experiments with “poisonous serums” on Chinese civilians. This took place in August 1946 and was instigated by David Sutton, assistant to the Chinese prosecutor. The Japanese defense counsel argued that the claim was vague and uncorroborated and it was dismissed by the tribunal president, Sir William Webb, for lack of evidence. The subject was not pursued further by Sutton, who was probably unaware of Unit 731’s activities. His reference to it at the trial is believed to have been accidental.

Although publicly silent on the issue at the Tokyo Trials, the Soviet Union pursued the case and prosecuted twelve top military leaders and scientists from Unit 731 and its affiliated biological warfare units, Unit 1644 in Nanjing and Unit 100 in Changchun, in the Khabarovsk War Crime Trials. Included among those prosecuted for war crimes, including germ warfare, was General Otozō Yamada, the commander-in-chief of the million-man Kwantung Army occupying Manchuria.

The trial of those captured Japanese perpetrators was held in Khabarovsk in December 1949. A lengthy partial transcript of the trial proceedings was published in different languages the following year by a Moscow foreign languages press, including an English-language edition.[61] The lead prosecuting attorney at the Khabarovsk trial was Lev Smirnov, who had been one of the top Soviet prosecutors at the Nuremberg Trials. The Japanese doctors and army commanders who had perpetrated the Unit 731 experiments received sentences from the Khabarovsk court ranging from two to 25 years in a Siberian labor camp. The U.S. refused to acknowledge the trials, branding them communist propaganda.[62] The sentences doled out to the Japanese perpetrators were unusually lenient by Soviet standards, and all but one of the defendants returned to Japan by the 1950s (the remaining prisoner committed suicide inside his cell). In addition to the accusations of propaganda, the US also asserted that the trials served only as a distraction from the Soviet treatment of several hundred thousand Japanese prisoners of war; meanwhile, the USSR asserted that the US had given the Japanese diplomatic leniency in exchange for information regarding their human experimentation. The accusations of both the US and the USSR were true, and it is believed that the Japanese had also given the Soviets information on their biological experimentation in exchange for judicial leniency.[63] This was evidenced by the Soviet Union building a biological weapons facility in Sverdlovsk using documentation captured from Unit 731 in Manchuria.[64]

As above, under the American occupation the members of Unit 731 and other experimental units were allowed to go free. One graduate of Unit 1644, Masami Kitaoka, continued to do experiments on unwilling Japanese subjects from 1947 to 1956 while working for Japan’s National Institute of Health Sciences. He infected prisoners with rickettsia and mental health patients with typhus.[65]

Shirō Ishii, as the chief of the unit, was granted war crime immunity by the US occupation authorities because he provided his human experimentation research materials to the US. From 1948 to 1958, less than 5% of the documents were transferred to microfilm and stored in the National Archives of the United States before being shipped back to Japan.[66]

Japanese discussions of Unit 731’s activity began in the 1950s, after the end of the American occupation of Japan. In 1952, human experiments carried out at Nagoya City Pediatric Hospital, which resulted in one death, were publicly tied to former members of Unit 731.[67] Later in that decade, journalists suspected that the murders attributed by the government to Sadamichi Hirasawa were actually carried out by members of Unit 731. In 1958, the Japanese author Shūsaku Endō published the book The Sea and Poison about human experimentation, which is thought to have been based on a real incident.

The author Seiichi Morimura published The Devil’s Gluttony in 1981, followed by The Devil’s Gluttony: A Sequel in 1983. These books purported to reveal the “true” operations of Unit 731, but confused them with those of Unit 100, and attributed unrelated photos to Unit 731, raising questions about their accuracy.[68][69]

Also in 1981 appeared the first direct testimony of human vivisection in China, by Ken Yuasa. Since then many more in-depth testimonies have appeared in Japanese. The 2001 documentary Japanese Devils was composed largely of interviews with 14 members of Unit 731 who had been taken as prisoners by China and later released.[70]

Since the end of the Allied occupation, the Japanese government has repeatedly apologized for its pre-war behavior in general, but specific apologies and indemnities are determined on the basis of bilateral determination that crimes occurred, which requires a high standard of evidence. Unit 731 presents a special problem, since unlike Nazi human experimentation which the U.S. publicly condemned, the activities of Unit 731 are known to the general public only from the testimonies of willing former unit members, and testimony cannot be employed to determine indemnity in this way.

Japanese history textbooks usually contain references to Unit 731, but do not go into detail about allegations, in accordance with this principle.[71][72] Saburō Ienaga’s New History of Japan included a detailed description, based on officers’ testimony. The Ministry for Education attempted to remove this passage from his textbook before it was taught in public schools, on the basis that the testimony was insufficient. The Supreme Court of Japan ruled in 1997 that the testimony was indeed sufficient and that requiring it to be removed was an illegal violation of freedom of speech.[73]

In 1997, the international lawyer Kōnen Tsuchiya filed a class action suit against the Japanese government, demanding reparations for the actions of Unit 731, using evidence filed by Professor Makoto Ueda of Rikkyo University. All Japanese court levels found that the suit was baseless. No findings of fact were made about the existence of human experimentation, but the court’s decision was that reparations are determined by international treaties, not by national court cases.[citation needed]

In August 2002, the Tokyo District Court ruled for the first time that Japan had engaged in biological warfare. Presiding judge Koji Iwata ruled that Unit 731, on the orders of the Imperial Japanese Army headquarters, used bacteriological weapons on Chinese civilians between 1940 and 1942, spreading diseases including plague and typhoid in the cities of Quzhou, Ningbo, and Changde. However, he rejected the victims’ claims for compensation on the grounds that they had already been settled by international peace treaties.[74]

In October 2003, a member of the House of Representatives of Japan filed an inquiry. Japanese Prime Minister Junichiro Koizumi responded that the Japanese government did not then possess any records related to Unit 731, but the government recognized the gravity of the matter and would publicize any records that were located in the future.[75] In April 2018, the National Archives of Japan released the names of 3,607 members of Unit 731, in response to a request by Professor Katsuo Nishiyama of the Shiga University of Medical Science.[76]

After WWII, the Office of Special Investigations created a watchlist of suspected Axis collaborators and persecutors who are banned from entering the U.S. While over 60,000 names have been added to the watchlist, fewer than 100 Japanese participants have been identified. In 1998 correspondence between the DOJ and Rabbi Abraham Cooper, Eli Rosenbaum, director of the OSI, stated that this was due to two factors. First, while most documents captured by the U.S. in Europe were microfilmed before being returned to their respective governments, the Department of Defense decided not to microfilm its vast collection of documents before returning them to the Japanese government. Second, the Japanese government has failed to grant the OSI meaningful access to these and related records after the war, while European countries, on the other hand, have been largely cooperative.[77] The cumulative effect is that information pertaining to identifying these individuals is, in effect, impossible to recover.

There have been several films about the atrocities of Unit 731.

See the rest here:

Unit 731 – Wikipedia

Elon Musk

Musk was accepted to a graduate program at Stanford, but deferred attendance to launch his first business, software company Zip2.

In 2015 Musk gifted 1.2 million Tesla shares to his foundation, which spent $480,000 in late 2018 to provide clean drinking water to schools in Flint, Michigan.

Continue reading here:

Elon Musk

Elon Musk | SpaceX

CEO AND LEAD DESIGNER

Elon Musk is the founder, CEO and lead designer at Space Exploration Technologies (SpaceX), where he oversees the development and manufacturing of advanced rockets and spacecraft for missions to and beyond Earth orbit.

Founded in 2002, SpaceX’s mission is to enable humans to become a spacefaring civilization and a multi-planet species by building a self-sustaining city on Mars. In 2008, SpaceX’s Falcon 1 became the first privately developed liquid-fuel launch vehicle to orbit the Earth. Following that milestone, NASA awarded SpaceX contracts to carry cargo and crew to the International Space Station (ISS). A global leader in commercial launch services, SpaceX is the first commercial provider to launch and recover a spacecraft from orbit, attach a commercial spacecraft to the ISS, and successfully land an orbital-class rocket booster. By pioneering the development of fully and rapidly reusable rockets and spacecraft, SpaceX is dramatically reducing the cost of access to space, the first step in making life on Mars a reality in our lifetime.

Elon is also the co-founder, CEO and product architect of Tesla, which makes electric cars, giant batteries and solar products. He is the co-founder and chairman of OpenAI, a nonprofit research company working to build safe artificial intelligence and ensure that AI’s benefits are as widely and evenly distributed as possible.

Previously, Elon co-founded and sold PayPal, the world’s leading Internet payment system, and Zip2, one of the first internet maps and directions services, which helped bring major publishers, including the New York Times and Hearst, online.

Follow this link:

Elon Musk | SpaceX
