Oregon Breweries Now Sell Beer In Reusable Bottles

CRACK A BOTTLE

Recycling a glass bottle means it'll probably be taken to a facility where it will be sorted, crushed, melted and ultimately molded into new glassware. That's better than just throwing it into a landfill, but the process is also inefficient and does little to mitigate the impact of the resources used to make the bottle in the first place.

Now, NPR reports that the Oregon Beverage Recycling Cooperative, a “member-owned, cooperative corporation in charge of picking up and processing nearly 100 percent of all containers redeemed in Oregon,” is debuting a thick, durable beer bottle that it says can be cleaned, refilled, and resold without being broken down. It’s the first such system in the country, according to the report, and could vastly decrease the environmental impact of beverage sales.

“Every time that bottle gets reused, you’re cutting the carbon footprint of that bottle in half,” Joel Schoening, a spokesperson for the Oregon Beverage Recycling Cooperative, told NPR. “It’s the most sustainable choice in the beer aisle.”
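That "cut in half" framing is just amortization of the bottle's one-time manufacturing footprint across its fills. Here is a minimal sketch of the arithmetic, using a purely illustrative emissions figure rather than any number from the cooperative:

```python
# Illustrative amortization of a bottle's manufacturing emissions over reuse.
# The 0.5 kg CO2e figure is a placeholder, not a measured value.
MANUFACTURING_CO2E_KG = 0.5

def footprint_per_fill(total_fills: int) -> float:
    """Spread the one-time manufacturing emissions across every fill of the bottle."""
    return MANUFACTURING_CO2E_KG / total_fills

for fills in (1, 2, 4, 8):
    print(f"{fills} fill(s): {footprint_per_fill(fills):.3f} kg CO2e per fill")
# Each doubling of total fills halves the per-fill footprint.
```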

MILK DUDS

Long before the modern era of recycling, milkmen who delivered fresh dairy to homes in reusable glass bottles were a staple of local food systems in the United States and Europe. But they faded away after the 1960s, as longer shelf lives made it more practical to sell milk in grocery stores.

In Oregon, state officials say that reviving reusable beverage containers is an easy way to decrease waste without impacting consumers, who can collect a 10 cent deposit for the new bottles, just like old ones — plus an extra 12 cents each if they bring back 20.

EVERYTHING OLD IS NEW AGAIN

To kickstart the program, the Oregon Beverage Recycling Cooperative is partnering with seven breweries in the state. If all goes to plan, cidermakers and winemakers will come next, Schoening told NPR.

When the effort was first announced in May, Oregon Public Broadcasting noted that the bottles would be taken to a facility in Montana for washing, a step that still emits less carbon than manufacturing a new bottle or recycling an old one.

The project represents an optimistic vision of the environmental future: one that turns to old-fashioned approaches, in addition to high-tech solutions, to help solve some of our biggest sustainability challenges.

READ MORE: Oregon Launches First Statewide Refillable Bottle System In U.S. [NPR]

More on sustainability: Norway Plans a Sustainable “City of the Future”


CBD Oil is a Multibillion-Dollar Industry. But is the Supplement Right For You?

Just three months ago, and for the first time ever, the FDA approved the sale of a cannabidiol (CBD)-based medicine. Researchers and recreational users alike considered this approval to be a big step. CBD is one of the main compounds found in cannabis – unlike THC, however, it doesn't cause psychoactive effects. Even so, CBD is still technically classified by the DEA as a Schedule I substance (i.e., one with no currently accepted medical use), even if marijuana is legal in your state.

However, based on the FDA's new approval, it's possible this classification could change, which could mean opportunities for even more research. Already, some studies suggest CBD could lessen anxiety, help with pain, reduce the proliferation of breast cancer cells, or even – as was seen in a recent study from King's College London – reduce abnormal brain function in people with psychosis.

A quick Google search yields nearly 97 million results touting the advantages of CBD. YouTube has hundreds of thousands of videos of people sharing stories about how this versatile compound has supposedly altered their lives for the better. Despite what might seem like compelling anecdotal and research-based evidence, scientists are still divided when it comes to CBD, its legal classification, and its potential benefits.

Many of these studies on the benefits of CBD are short-term human studies or are derived from animal research (which doesn't always mean the same results will be seen in humans), according to NPR. Dr. Esther Blessing, for example, a psychiatrist and researcher at New York University, tells NPR that more clinical trials with CBD are needed to draw firm conclusions about its effectiveness. This is especially true since supplements aren't regulated in the same way as pharmaceuticals, which means that quality and consistency vary across CBD products.

To address this concern, CBD provider Mellowment has become a standard in the industry by using natural, ethically-sourced ingredients that serve its broad customer base. Taken daily, “Mellowment packs a multitude of benefits (in addition to anxiety and pain relief) including anti-seizure properties, and relief from inflammation, pain, anxiety, migraines, and irregular sleep” according to the Mellowment site.

Mellowment offers top-quality supplements that range from low to high impact, which allows consumers to manage their stress or pain accordingly. If, for example, a consumer is having a busy or stressful work week, (s)he might opt to take the "low-impact" nootropic: a non-drowsy formula that may positively affect cognitive function and anxiety. If a consumer is recovering from an injury, on the other hand, (s)he could be better served by the "high-impact" pill that could help relieve severe pain and inflammation.

With the rise of personal, online testimony, cannabis is quickly gaining traction. Thirty states have already legalized medical marijuana, and CBD oil is estimated to be a $1-2 billion industry already, and growing fast. Perhaps CBD's impassioned adoption will lay the groundwork for future research.


A non-editorial team at Futurism has partnered with Mellowment to create this article, and we may receive a percentage of sales from this post. Mellowment is owned in part by an employee of Futurism. This supplement has not been evaluated by the FDA, and is not intended to cure or treat any ailments. Do not take CBD products if you are allergic to any of the ingredients in the product you are consuming. Tell your doctor about all medicines you may be on before consuming CBD to avoid negative reactions. Tell your doctor about all medical conditions. Tell your doctor about all the medicines you take, including prescription and nonprescription medicines, vitamins and herbal products. Other side effects of CBD include: dry mouth, cloudy thoughts, and wakefulness. You are encouraged to report negative side effects of any drugs to the FDA. Visit http://www.fda.gov/medwatch, or call 1-800-FDA-1088. 


Lawmakers: Deepfakes Could “Undermine Public Trust” in “Objective Depictions of Reality”

FAKE NEWS

In the early, optimistic days of the internet, we thought it would be a repository of high-quality information. Instead, it’s starting to feel like a bottomless ocean of lies that rewards attention-grabbing disinformation and pollutes the political process.

That's the note of alarm that three members of Congress sounded in a letter this week to Daniel Coats, the U.S. director of national intelligence. In it, the lawmakers warned specifically about the technology known as deepfakes, which lets computer users with little technical savvy create convincing footage of people doing and saying things that they never actually did.

“Hyper-realistic digital forgeries — popularly referred to as ‘deep fakes’ [sic] — use sophisticated machine learning techniques to produce convincing depictions of individuals doing or saying things they never did, without their consent or knowledge,” read the letter. “By blurring the line between fact and fiction, deep fake technology could undermine public trust in recorded images and videos as objective depictions of reality.”

MISINFO

Representatives Adam Schiff (D-CA), Stephanie Murphy (D-FL) and Carlos Curbelo (R-FL) signed the letter to Coats. In it, they requested that the heads of the intelligence community prepare a report that would tell Congress what steps it has planned to fight the dissemination of faked clips.

“Forged videos, images or audio could be used to target individuals for blackmail or for other nefarious purposes,” they wrote. “Of greater concern for national security, they could also be used by foreign or domestic actors to spread misinformation.”

INFOCALYPSE

Deepfakes rose to prominence early this year on Reddit, where posters started using the technique to splice the likenesses of celebrities into pornographic films and the visage of Nicolas Cage into movies he never appeared in. Soon afterward, experts at a DARPA media forensics meeting became concerned about the technology. One expert told the Outline that doctored footage of a world leader declaring war could spark a "full-blown nuclear holocaust."

A deepfake hoax of a world leader hasn't gone viral — yet. If it does, it will be a test of our collective skepticism — in an age when even genuine information is swiftly politicized online.

READ MORE: Deep Fakes Letter [Adam Schiff]

More on information warfare: If DARPA Wants To Stop Deepfakes, They Should Talk To Facebook And Google


A Conversation About First Amendment Rights with David L …

July is a special month for U.S. history because on the 4th of July, 241 years ago, our country declared its independence. Many Americans celebrate our nation's birthday by gathering with friends and family to attend summer cookouts and watch fireworks. Others travel to the Capitol for even grander Independence Day festivities, filled with elected officials, at the White House or at the Lincoln Memorial.

July is also a great month to reflect on the founding ideals of our country, the progress it has made, and how far it still has to go. In the midst of that reflection, some of us might even consider how our citizenship guarantees protections, which may not be accessible or practiced in other countries.

David L. Hudson Jr.

Advocacy and grassroots movements are key ingredients in effecting systemic change, but they are only possible because of the protections granted by the U.S. Constitution. That's why we sat down with David L. Hudson Jr., First Amendment expert and law professor, to discuss some misconceptions about our First Amendment rights. Hudson serves as First Amendment ombudsman for the Newseum Institute's First Amendment Center. He is an author, co-author or co-editor of more than 40 books, including Let The Students Speak: A History of the Fight for Free Expression in American Schools (Beacon Press, 2011) and The Encyclopedia of the First Amendment (CQ Press, 2008). He has served as a senior law clerk at the Tennessee Supreme Court, and teaches First Amendment and Professional Responsibility classes at Vanderbilt University School of Law and various classes at the Nashville School of Law.

David L. Hudson Jr.: My initial interest began in high school. I got in trouble for engaging in certain speech and felt the punishment was unfair. Later in life, my interest deepened after I joined the First Amendment Center. I got to speak at different schools and really enjoyed discussing student rights. Eventually, I took it up a notch by becoming personal friends with free speech activists like John and Mary Beth Tinker, and wrote books on the subject.

David L. Hudson Jr.: One misconception is that the First Amendment limits both public and private actors. Under the state action doctrine, the First Amendment limits only public actors. Another is that many people don't realize the First Amendment protects a great deal of obnoxious, offensive, or repugnant speech. Justice Brennan once referred to this as a bedrock principle of the First Amendment.

AGF NOTE: The protections you receive at a public park are much different from what you may be entitled to during working hours if you work at a private corporation. However, there is a grey area in the law to ensure workers are not being exploited. Hate speech is, within reason, protected by the First Amendment. People are entitled to condemn religions, political parties, economic systems, etc. The premise is that government should not control speech, whether it agrees or disagrees with what is being said.

David L. Hudson Jr.: Student organizers have to be careful, even under the speech-protective standard articulated in the Tinker case, because some courts have held that student walkouts are disruptive to the educational process. However, there is a healthy degree of protection for student political clubs and the like. Advocacy should be protected, but if it becomes substantially disruptive, then it becomes a problem.

AGF NOTE: The Tinker case refers to Tinker v. Des Moines Independent Community School District (1969). In the case, students in an Iowa public school organized a protest against the Vietnam War, wearing black armbands as a symbol of their opposition to the war. Administrators found out, and the principal threatened to suspend all students who participated. After the protest, students were suspended and parents sued the school for violating their freedom of speech. The U.S. District Court sided with the school, ruling that the protest disrupted learning. The United States Supreme Court ruled 7-2 in favor of the students in 1969, agreeing that students don't shed their constitutional rights at the schoolhouse gate. This has become known as the Tinker standard.

David L. Hudson Jr.: Students played a very significant role in the Civil Rights Movement. One of my favorite cases is Edwards v. South Carolina (1963). In that case, 187 African-American youths (and one white youth) were arrested for protesting and marching against segregation in Columbia, South Carolina.

David L. Hudson Jr.: It encompasses the right to petition the government for a redress of grievances. In a sense, filing a lawsuit is a petition. But when I think of a petition in this context, I think of a list of student signatures peacefully expressing their opposition to a school policy (like an overbroad or onerous dress code).

David L. Hudson Jr.: A key unanswered question concerns student rights online. Or, asked another way: how far does the arm of school authority extend to off-campus, online speech? We still don't know the answer.

As July comes to a close, we want to encourage all of you to think about your advocacy and activism. In what ways are you an advocate, and what causes do you champion with your everyday decisions? Whether you are currently a student or working professionally, consider the different protections and rights you're entitled to, depending on the context. Think of your ability to advocate for yourself and for others as a sacred component of moving our country forward.

Kevin Hurtado is the Communications and Development Associate at the Andrew Goodman Foundation. He graduated from Ramapo College of New Jersey with a Bachelor's in International Studies and a minor in Human Rights and Genocide. Previously, Kevin worked as an Executive Assistant and Office Manager at the Newark Charter School Fund, a nonprofit dedicated to promoting educational equity in the city of Newark.


Ranking the best nootropics of 2018 – BodyNutrition

Many deep thinkers, CEOs and people of mastery are taking nootropics, commonly known as smart drugs, for increased brain function.

(This is your ability to learn, focus, remember things and solve complex problems)

From Joe Rogan to Tim Ferriss, smart drugs were being talked about for years before they recently hit the mainstream.

Nootropics work, but you have to be careful.

Since nootropics are such a hot topic right now, it can be hard to separate what's legit and what's whack.

Our research team looked at the scientific research in-depth to find the best nootropics on the market right now.


OptiMind provides the best balance among proven nootropic ingredients. It balances a moderate dose of vitamin D with a heavy dose of vitamin B12 and several key nootropic supplements, including L-tyrosine, bacopa extract, caffeine, ALA, and vinpocetine, to name just a few. The dosages are on-point and there aren't any extraneous ingredients, making OptiMind come out number one.


Alpha Brain, the widely known nootropic supplement from Onnit, made famous in part thanks to Joe Rogan's ringing endorsement, fares well thanks to the strong results from a clinical trial conducted at the Boston Center for Memory and published in a peer-reviewed scientific journal in 2016 (1).

Alpha Brain's blend of supplements seems particularly effective at boosting verbal skills, so it's great if you need to read or write a lot.


NeuroIgnite contains high doses of several powerful nootropic supplements, and moreover, the label actually tells you how much of each extract is in the supplement, unlike other companies that hide the specifics under the veil of a "proprietary blend."

The heavy hitters in this blend are bacopa monnieri, DMAE, and ginkgo biloba, all heavily-researched supplements with evidence for nootropic effects.


Zhou Nutrition's Neuro-Peak is an immensely popular nootropic supplement that includes a massive dose of vitamin B12 and a slew of herbal extracts. Rhodiola rosea extract is one of the distinguishing factors in Neuro-Peak; this herbal extract appears to be effective at staving off mental fatigue.

Because of this, Neuro-Peak is a good choice if you're faced with a long stretch of continuous, mentally challenging work.


We Are Fit's Neuro Spark goes heavy on the St. John's wort and ginkgo biloba, but includes some of the newer nootropic supplements like vinpocetine, bacopa monnieri, and huperzine-A as well.

It's a well-balanced, stimulant-free nootropic that's well-suited for boosting your performance at any cognitively challenging task.


Ciltep by Natural Stacks is a little unusual: it doesn't follow the lead of the more popular nootropic supplements, choosing instead to include unconventional nootropic compounds like artichoke extract and forskolin extract (better known for its use as a weight loss supplement).

As you might guess, there's far less research on the cognitive enhancement properties of these supplements, but if normal nootropics aren't doing it for you, Ciltep might be worth a shot.


TruBrain focuses on delivering amino acids, plus the tried and true caffeine (only 100 mg per shot) and magnesium to achieve its nootropic effects. This approach is more about providing your brain with the natural building blocks it needs to function properly, versus trying to boost its function above its natural level. If you want more of a tune up than an upgrade, TruBrain is a good choice.


Neovicta's Clarity supplement is firmly in the "kitchen sink" camp, which is to say that it includes just about everything that might affect cognitive function. It's got vitamins, minerals, supplements, extracts, and synthetic compounds.

This betrays a lack of an overarching strategy when it comes to boosting cognitive function, but if you just want to cover your bases, its not a bad choice.


Brain Boost is essentially a multivitamin along with several herbal extracts that affect biological processes related to cognitive function. Its purpose is to make sure your body is fueled up with the right micronutrients and biological precursors to function at its best.

Unfortunately, because its nootropic ingredients are part of a proprietary blend, it's impossible to see how much of each ingredient is present in the supplement.


NeuroFit is another nootropic that goes heavy on the vitamins and minerals, but is secretive about the amounts of the actual nootropic ingredients in its formulation. With so many ingredients, it seems unlikely that the active ingredients are present in high concentrations.

Nootropics are supplements that are designed to boost cognitive function: that is, to enhance your brain's ability to learn, remember, and solve problems.

Though they are extremely popular among students, nootropics have a much broader appeal. Just about anybody in a complex job wants the ability to work faster and more effectively, and people who are getting older like the appeal of staving off brain fog and some of the cognitive decline that comes along with aging.

When a supplement claims to impact cognitive function, it's a fairly easy claim to test.

Unlike claims about boosting well-being or promoting a healthy immune system, claims about cognitive function are straightforward to test: you get a group of subjects, give them the supplement in question or a placebo, then subject them to a battery of psychometric tests to assess their cognitive function.

There are a variety of types of these kinds of tests, and different supplements seem to affect different aspects of cognitive function. Some seem to boost memory, while others influence verbal abilities or help reduce mental fatigue (the diminishing of cognitive performance after long, challenging efforts).

A good case study in how nootropic testing works is the 2016 study that validated the effects of Alpha Brain (2).

Researchers split a group of volunteers into two groups, one of which was given Alpha Brain, and the other of which was given a placebo supplement. Both groups were tested for their cognitive abilities at the outset of the study, then took their assigned supplement for six weeks before being tested again.

The psychometric battery of tests used in the study included visual, spatial, logical, and verbal reasoning and memory. The results showed no improvement in most metrics, but a statistically significant increase in verbal memory.
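As a concrete illustration of that kind of protocol, here is a minimal sketch of a before/after, two-group comparison. The data are synthetic and the analysis is deliberately simplified (a plain t-test on improvement scores), so it stands in for the idea rather than the study's actual statistics.

```python
# Minimal sketch of a placebo-controlled pre/post comparison with synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30  # participants per group

# Verbal-memory scores at baseline and after six weeks (arbitrary units).
placebo_pre = rng.normal(50, 10, n)
placebo_post = placebo_pre + rng.normal(0, 5, n)          # no simulated effect
supplement_pre = rng.normal(50, 10, n)
supplement_post = supplement_pre + rng.normal(3, 5, n)    # small simulated benefit

# Compare improvement (post minus pre) between the two groups.
t_stat, p_value = stats.ttest_ind(supplement_post - supplement_pre,
                                  placebo_post - placebo_pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates the groups' improvements differ; the real study
# reported a significant difference only on verbal memory measures.
```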

Other supplements have been studied on an individual level to identify potential nootropic benefits. As you might guess, the most popular ingredients among the top-ranked nootropic supplements are also among the best-studied and most effective.

Bacopa monnieri, for example, is an herbal extract that's been demonstrated to have specific cognitive enhancing effects. A 2001 study in the scientific journal Psychopharmacology used a protocol similar to the Alpha Brain study's: a group of healthy adult subjects were given a 300 mg dose of bacopa monnieri extract or a placebo for twelve weeks, and were subjected to a battery of cognitive tests before and after the supplementation period (3).

In this case, the researchers found that the bacopa supplement increased the speed of visual information processing, learning rate, and memory consolidation.

Ginkgo biloba and vinpocetine, taken together, look to be an effective combination when it comes to speeding up your short-term working memory.

A study published in the journal Human Psychopharmacology used a similar placebo-controlled experiment to study the effects of a combination of ginkgo biloba and vinpocetine on cognitive function, and found that the supplement combination increased the speed at which your working memory functions after being taken for two weeks (4).

When looking at nootropic supplements, you'll have to think specifically about what kind of cognitive enhancement you are looking for.

When working on a major writing project, or attempting to work through a lot of reading material, taking something like Alpha Brain that increases verbal memory could be very helpful.

On the other hand, something that improves visual information processing and learning rate, like bacopa monnieri extract, would likely speed your ability to learn math flash cards or process on-screen visual information.

If you were doing data entry, you might want something to improve short-term working memory, like ginkgo biloba and vinpocetine.

It's clear that taking the time to analyze the specific cognitive demands of the task in question will help immensely when you are deciding what the ideal nootropic supplement is for you.

Other nootropic supplements were studied initially to help with cognitive decline and dementia in the elderly, but have been hypothesized to be effective as well in healthy people.

One example of this is the herbal supplement huperzine-A. Early research found that it had a strong anti-dementia effect. A 1999 experiment described using a huperzine-A supplement to reverse natural dementia in elderly monkeys, as well as reversing chemically-induced cognitive decline in young monkeys (5).

Research into whether it can be used to actually boost cognitive performance in healthy humans is still lacking, but this hasn't stopped people from betting that it will.

An entirely different strategy in nootropics is simply providing your brain with extra building blocks to use in the process of synthesizing neurotransmitters, which are chemicals your brain uses to think. Many nootropics simply provide high doses of the amino acids that are associated with cognitive function.

These are less well-studied, perhaps because simply keeping your brain's amino acid reserves topped off isn't as exciting as artificially enhancing its performance, but it's nevertheless a strategically sound approach.

Since nootropic supplements are so new, their side effect profile is not well-studied. So far, there have been no major adverse side effects reported that are associated with the ingredients used in the best and most popular nootropic supplements.

In this regard they appear to have a better safety profile than other categories of multi-ingredient supplements, like weight loss supplements.

The one caveat to this applies to nootropic supplements that contain caffeine. While caffeine is one of the best-studied and most effective cognitive enhancement supplements (as every coffee addict knows), it can cause side effects like jitters and nausea in people who are sensitive to it.

Further, taking it at night is a bad idea, thanks to caffeine's ability to act as a stimulant (unless, of course, you are trying to stay up all night). There's nothing wrong with caffeine in a nootropic supplement, but make sure you know how many milligrams each serving contains.

Only a small number of nootropics have established effective doses, and these are mostly derived from the dosages chosen in scientific studies that examined the supplement in question.

Bacopa monnieri, for example, appears to be effective at doses of 300 mg, and ginkgo biloba extract can be effective at doses as low as 40 mg.

Vinpocetine seems to require doses of 30 to 60 mg, but this comes from scientific literature using it to study cognitive decline and dementia in the elderly, so it's not clear if boosting brain function in healthy people can be accomplished with a lower dose.

DMAE's studied dosage range is typically about 100 mg, but this seems to come from studies looking at its use to induce lucid dreaming!

Clearly, more work is needed to establish optimal doses for nootropic supplements, but looking for dosages close to these guidelines is at least a good place to start.
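As a practical way to use those guideline figures, here is a small sketch that checks a label's per-serving dose against the doses named above. The dose table comes straight from the numbers in this article; the label values fed to it are hypothetical.

```python
# Minimum per-serving doses (mg) at which each ingredient showed effects in the
# studies discussed above; the label values below are hypothetical examples.
MIN_EFFECTIVE_MG = {
    "bacopa monnieri": 300,
    "ginkgo biloba": 40,
    "vinpocetine": 30,
    "dmae": 100,
}

def meets_studied_dose(ingredient: str, label_mg: float) -> bool:
    """True if a label's per-serving dose is at least the studied effective dose."""
    return label_mg >= MIN_EFFECTIVE_MG[ingredient.lower()]

for name, mg in [("bacopa monnieri", 150), ("vinpocetine", 30)]:
    verdict = "meets" if meets_studied_dose(name, mg) else "falls short of"
    print(f"{name} at {mg} mg {verdict} the studied dose of {MIN_EFFECTIVE_MG[name]} mg")
```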

Nootropics are a new field of supplementation, but there is fairly strong evidence emerging that a number of different supplements can positively affect your cognitive functioning.

Each seems to serve a slightly different purpose, so think about the cognitive demands in your life before you choose a nootropic supplement. Once you know the kinds of problems you want to solve, make sure your supplement of choice has dosages of the major nootropic compounds you want that are at least close to the range studied in the scientific literature.

When done with care, nootropics appear to be a safe and effective way to increase your brain power and help you work faster, smarter, and more effectively.


Search for a route – sealand.com

ACMH NORTHBOUND: MANZANILLO (PA) - WILLEMSTAD (CW) - ORANJESTAD (AW) - KINGSTON (JM)
ACMH SOUTHBOUND: WILLEMSTAD (CW) - ORANJESTAD (AW) - KINGSTON (JM) - MANZANILLO (PA)
ACX NORTHBOUND: ITAJAI (BR) - PARANAGUA (BR) - SANTOS (BR) - RIO DE JANEIRO (BR) - CARTAGENA (CO) - KINGSTON (JM) - NEW ORLEANS (US) - HOUSTON (US) - ALTAMIRA (MX) - VERACRUZ (MX)
ACX SOUTHBOUND: NEW ORLEANS (US) - HOUSTON (US) - ALTAMIRA (MX) - VERACRUZ (MX) - KINGSTON (JM) - CARTAGENA (CO) - MANZANILLO (PA) - MANAUS (BR) - VITORIA (BR) - SANTOS (BR) - ITAJAI (BR) - PARANAGUA (BR) - SANTOS (BR) - RIO DE JANEIRO (BR)
CALYPSO EASTBOUND: MANZANILLO (PA) - BARRANQUILLA (CO) - CARTAGENA (CO) - SANTA MARTA (CO) - POINT LISAS (TT) - GEORGETOWN (GY) - PARAMARIBO (SR)
CALYPSO WESTBOUND: PARAMARIBO (SR) - GEORGETOWN (GY) - POINT LISAS (TT) - CARTAGENA (CO) - BARRANQUILLA (CO) - SANTA MARTA (CO) - MANZANILLO (PA)
CARIBBEAN WESTBOUND: RIO HAINA (DO) - SAN JUAN (PR) - CAUCEDO (DO) - MANZANILLO (PA) - CARTAGENA (CO)
CARIBBEAN EASTBOUND: MANZANILLO (PA) - CARTAGENA (CO) - CAUCEDO (DO) - RIO HAINA (DO) - SAN JUAN (PR)
ECUBEX NORTHBOUND: GUAYAQUIL (EC) - BALBOA (PA) - MANZANILLO (PA)
ECUBEX SOUTHBOUND: CARTAGENA (CO) - MANZANILLO (PA) - BALBOA (PA) - GUAYAQUIL (EC)
ECUMED NORTHBOUND: BUENAVENTURA (CO) - GUAYAQUIL (EC) - BALBOA (PA) - MANZANILLO (PA)
ECUMED SOUTHBOUND: NEWARK (US) - MANZANILLO (PA) - BUENAVENTURA (CO) - GUAYAQUIL (EC) - BALBOA (PA) - MANZANILLO (PA)
GUANTA EASTBOUND: KINGSTON (JM) - MANZANILLO (PA) - GUANTA (VE)
GUANTA WESTBOUND: GUANTA (VE) - KINGSTON (JM) - MANZANILLO (PA)
GULFEX NORTHBOUND: RIO GRANDE (BR) - NAVEGANTES (BR) - SANTOS (BR) - RIO DE JANEIRO (BR) - CARTAGENA (CO) - VERACRUZ (MX) - ALTAMIRA (MX) - HOUSTON (US) - NEW ORLEANS (US)
GULFEX SOUTHBOUND: VERACRUZ (MX) - ALTAMIRA (MX) - HOUSTON (US) - NEW ORLEANS (US) - CARTAGENA (CO) - SUAPE (BR) - SANTOS (BR) - RIO GRANDE (BR) - NAVEGANTES (BR) - RIO DE JANEIRO (BR)
JAMAICA-HAITI NORTHBOUND: MANZANILLO (PA) - KINGSTON (JM) - PORT-AU-PRINCE (HT)
JAMAICA-HAITI SOUTHBOUND: KINGSTON (JM) - PORT-AU-PRINCE (HT) - MANZANILLO (PA)
MARACAIBO EASTBOUND: MANZANILLO (PA) - KINGSTON (JM) - GUARANAO (VE) - MARACAIBO (VE)
MARACAIBO WESTBOUND: GUARANAO (VE) - MARACAIBO (VE) - MANZANILLO (PA) - KINGSTON (JM)
NAE NORTHBOUND: CARTAGENA (CO) - TURBO (CO) - MANZANILLO (PA) - PUERTO MOIN (CR) - PHILADELPHIA (US) - NEW YORK (US) - SAVANNAH (US) - PORT EVERGLADES (US)
NAE SOUTHBOUND: PHILADELPHIA (US) - NEW YORK (US) - SAVANNAH (US) - PORT EVERGLADES (US) - CARTAGENA (CO) - TURBO (CO) - MANZANILLO (PA) - PUERTO MOIN (CR)
OCEANIA NORTHBOUND: MANZANILLO (PA) - CRISTOBAL (PA) - CARTAGENA (CO) - PHILADELPHIA (US) - CHARLESTON (US)
OCEANIA SOUTHBOUND: PHILADELPHIA (US) - CHARLESTON (US) - CARTAGENA (CO) - BALBOA (PA)
SLING 2 NORTHBOUND: BUENOS AIRES (AR) - MONTEVIDEO (UY) - RIO GRANDE (BR) - NAVEGANTES (BR)
SLING 2 SOUTHBOUND: NAVEGANTES (BR) - BUENOS AIRES (AR) - MONTEVIDEO (UY) - RIO GRANDE (BR)
SAE NORTHBOUND: MANZANILLO (PA) - PUERTO CORTES (HN) - SANTO TOMAS (GT) - PORT EVERGLADES (US) - WILMINGTON (US) - NORFOLK (US) - PHILADELPHIA (US) - SAVANNAH (US)
SAE SOUTHBOUND: NORFOLK (US) - PHILADELPHIA (US) - WILMINGTON (US) - SAVANNAH (US) - PORT EVERGLADES (US) - SANTO TOMAS (GT) - PUERTO CORTES (HN) - MANZANILLO (PA)
VENEZUELA EASTBOUND (LAG): MANZANILLO (PA) - LA GUAIRA (VE) - CARTAGENA (CO) - CRISTOBAL (PA) - TURBO (CO)
VENEZUELA WESTBOUND (LAG): CARTAGENA (CO) - CRISTOBAL (PA) - TURBO (CO) - MANZANILLO (PA) - LA GUAIRA (VE)
VENEZUELA EASTBOUND (PBL): MANZANILLO (PA) - CARTAGENA (CO) - PUERTO CABELLO (VE)
VENEZUELA WESTBOUND (PBL): PUERTO CABELLO (VE) - MANZANILLO (PA) - CARTAGENA (CO)
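For anyone scripting against a rotation list like the one above, a minimal sketch of how it could be represented and searched follows. The two services shown are copied from the list; the lookup simply checks that the origin port is called before the destination port.

```python
# Minimal sketch: representing service rotations and finding ones that connect two ports.
# Only two rotations from the list above are included, purely for illustration.
ROTATIONS = {
    "ACMH NORTHBOUND": ["MANZANILLO (PA)", "WILLEMSTAD (CW)", "ORANJESTAD (AW)", "KINGSTON (JM)"],
    "CARIBBEAN EASTBOUND": ["MANZANILLO (PA)", "CARTAGENA (CO)", "CAUCEDO (DO)",
                            "RIO HAINA (DO)", "SAN JUAN (PR)"],
}

def services_between(origin: str, destination: str) -> list[str]:
    """Return services whose rotation calls at origin before destination."""
    matches = []
    for name, ports in ROTATIONS.items():
        if origin in ports and destination in ports and ports.index(origin) < ports.index(destination):
            matches.append(name)
    return matches

print(services_between("MANZANILLO (PA)", "KINGSTON (JM)"))   # ['ACMH NORTHBOUND']
print(services_between("CARTAGENA (CO)", "SAN JUAN (PR)"))    # ['CARIBBEAN EASTBOUND']
```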



Top 10 Amazing Cyberpunk Movies – YouTube

Top 10 Amazing Cyberpunk Movies
Subscribe: http://goo.gl/Q2kKrD
TIMESTAMPS BELOW
-----------------------
CHECK OUT WATCHMOJO'S NEW BOOK, LINKS BELOW!

Amazing science fiction movies that explore the cyberpunk aesthetic in all its high-tech, low-life glory! WatchMojo presents the best films that utilize the cyberpunk aesthetic. But which film will take the top spot? Will it be Ridley Scott's Blade Runner, the anime classic Akira, or the revolutionary The Matrix? Watch to find out!

The 10-Year Overnight Success: An Entrepreneur's Manifesto: How WatchMojo Built The Most Successful Media Brand On YouTube
PAPERBACK: https://goo.gl/93prjz
KINDLE: https://goo.gl/Hs1hKq

If you've never used the Kindle App before, now's your chance to CHECK it out for FREE! CLICK: https://goo.gl/WmULsn

#10. eXistenZ (1999)
#9. Tetsuo: The Iron Man (1989)
#8. Dredd (2012)
#7. Ghost in the Shell (1995)
#6. Strange Days (1995)
#5. RoboCop (1987)
#4. The Terminator (1984)
#3, #2 & #1: ????

WatchMojo's Social Media Pages:
http://www.Facebook.com/WatchMojo
http://www.Twitter.com/WatchMojo
http://instagram.com/watchmojo

Get WatchMojo merchandise at http://watchmojo.com/store/

WatchMojo's ten thousand videos on Top 10 lists, Origins, Biographies, Tips, How Tos, Reviews, Commentary and more on Pop Culture, Celebrity, Movies, Music, TV, Film, Video Games, Politics, News, Comics, Superheroes. Your trusted authority on ranking Pop Culture.


Immortality: The Quest to Live Forever and How It Drives …

Cave has produced a strikingly original and compelling exploration of the age-old conundrum: Can we live forever, and do we really want to? John Horgan, science journalist and author of The End of War

Immortality is a fascinating history of man's greatest obsession and poses a stunning theory of society. The Daily Beast

In Immortality Stephen Cave tells wonderful stories about one of humanity's oldest desires and comes to a wise conclusion. Stefan Klein, author of The Science of Happiness and The Secret Pulse of Time

A beautifully clear and entertaining look at life after death. Cave does not shrink from the hard questions. Bold and thought-provoking. Eric Olson, author of The Human Animal and What Are We?

A must-read exploration of what spurs human ingenuity. Every once in a while a book comes along that catches me by surprise and provides me with an entirely new lens through which to view the world. . . . Such is the case with Stephen Cave's book Immortality. . . . Cave presents an extremely compelling case, one that has changed my view of the driving force of civilization as much as Jared Diamond did years ago with his brilliant book Guns, Germs and Steel. S. Jay Olshansky, New Scientist magazine

Informed and metaphysically nuanced. . . . Cave presents his arguments in a brisk, engaging style, and draws effectively upon a wide-ranging stock of religious, philosophical, and scientific sources, both ancient and contemporary. Weekly Standard

In his survey of the subject, Stephen Cave, a British philosopher, argues that man's various tales of immortality can be boiled down into four basic narratives. . . . For the aspiring undying, Mr. Cave unfortunately concludes that immortality is a mirage. But his demolition project is fascinating in its own right. . . . If anything, readers might want more of Mr. Cave's crisp conversational prose. The Economist

Cave explains how the seeking of immortality is the foundation of human achievement, the wellspring of art, religion and civilization. . . . The author is rangy and recondite, searching the byways of elixirs, the surprises of alchemy, the faith in engineering and all the wonder to be found in discussions of life and death. . . . Luminous. Kirkus Reviews

A dramatic and frequently surprising story of the pursuit of immortality and its effects on human history. Booklist

Cave is smart, lucid, elegant and original. Immortality is an engaging read about our oldest obsession, and how that obsession propels some of our greatest accomplishments. Greg Critser, author of Eternity Soup

An epic inquiry into the human desire to defy death, and how to overcome it. Cave traces the histories of each of his four immortality narratives through the world's great religions, heroes, leaders, thinkers and stories. It's an epic tale of human folly, featuring a cast of characters including Gilgamesh, Dante, Frankenstein, the King of Qin, Alexander the Great and the Dalai Lama. Cave, a Berlin-based writer and former diplomat, is an admirably clear elucidator, stripping down arguments to their essences and recounting them without any unnecessary jargon. The Financial Times

Immortality plumbs the depths of the human mind and ties the quest for the infinite prolongation of life into the very nature of civilization itself. Cave reveals remarkable depth and breadth of learning, yet is always a breeze to read. I thoroughly enjoyed his book; it's a really intriguing study. David Boyd Haycock, author of Mortal Coil and A Crisis of Brilliance

[Cave's] sort of nonfiction writing is exciting. It gets the juices flowing and draws one into the material. What Cave does so well throughout Immortality is to take the reader by the hand and carefully guide her or him through each concept, ensuring understanding before exploring assorted variations and difficulties. He's writing for searchers, not people collecting knock-'em-dead refutations of positions they've already rejected. And his appeal is to intellectual curiosity. The Humanist

I loved this. Cave has set himself an enormous task and accomplished it, in spades. Establishing a four-level subject matter, he has stuck to his guns and never let up. As he left one level and went to the next, I was always a little worried: Would he be able to pull it off? This was especially true as he approached the end. There is a sense in which each level, as he left it smoking in the road, looked easy as he started the next. In fact, the last level, while it is the most difficult, is the best, the most satisfying. I am happy to live in the world Cave describes. Charles Van Doren, author of A History of Knowledge

This book by Stephen Cave offers a helpful framework for understanding the various different kinds of immortality. Cave employs this framework to analyze these types of immortality and to argue that the quest for immortality is misguided. Cave's insights throughout the book are deep, and his argumentation is compelling and well-informed by all of the relevant literature. It is also a beautifully written and highly accessible book. I recommend it highly. John Martin Fischer, leader of the Templeton Foundation's Immortality Project and author of Near-Death Experiences


Evolution: The Cutting Edge Guide to Breaking Down Mental …

A day after working out with Joe Manganiello feels like the morning after going twelve rounds with Tyson. This is Hollywood's hardest workout. (Dan Jones, editor-at-large for Men's Health UK)

A comprehensive, yet straightforward and effective roadmap to better health and fitness, not to mention a killer physique, the kind that may just have people wondering if you're not a fitness expert yourself. After reading Evolution, you will be. (Shawn Perine, editor-in-chief of Muscle & Fitness)

I'm pretty sure that Joe Manganiello's picture is next to the definition of fitness in Webster's dictionary. You'll be inspired. (Channing Tatum, People's 2012 Sexiest Man Alive)

It's incredible what kind of shape he's in; the joke on the set was that he was walking CGI. (Steven Soderbergh, director of Magic Mike)

"Joe and I have a mutual understanding of hard work and dedication and what it takes for those two aspects to pay off for you if you buy in to the process. Joe has been a positive force in my life! Turn-up!" (Marcedes Lewis All-Pro NFL tight end for the Jacksonville Jaguars)

This book will give you real results. I was able to put on 10 pounds of muscle in one month. Listen to Joe; he won't let you down! (Matt Bomer, star of USA's White Collar)

If you want to know whether or not Joe Manganiello understands the mechanics of health and fitness, JUST LOOK AT HIM. Okay, stop staring. Now you're being creepy. (Chris Hardwick, host of AMC's Talking Dead and BBC America's The Nerdist)

Joe's book is a must-have for anyone that likes getting laid! (Max Martini, star of Warner Bros. Pictures' Pacific Rim)


Evolution: It’s a Thing – Crash Course Biology #20 – YouTube

Hank gets real with us in a discussion of evolution - it's a thing, not a debate. Gene distribution changes over time, across successive generations, to give rise to diversity at every level of biological organization.

Crash Course Biology is now available on DVD! http://dft.ba/-8css

Like CrashCourse on Facebook: http://www.facebook.com/YouTubeCrashC...
Follow CrashCourse on Twitter: http://www.twitter.com/TheCrashCourse

Table of Contents:
1) The Theory of Evolution 1:49
2) Fossils 2:42
3) Homologous Structures 4:36
4) Biogeography 7:02
5) Direct Observation 8:52

References for this episode can be found in the Google document here: http://dft.ba/-2Oyu

evolution, theory, biology, science, crashcourse, genetics, gene, facts, fossil, fossil record, dinosaur, extinct, extinction, organism, dorudon, rodhocetus, vestigial, structure, similarity, homologous structure, related, relationship, morganucodon, fore limb, hind limb, vertebrate, molecule, DNA, RNA, chimpanzee, fruit fly, biogeography, marsupial, finches, direct observation, drug resistance, resistance, selective pressure, italian wall lizard
Support CrashCourse on Subbable: http://subbable.com/crashcourse


Chris Hadfield Teaches Space Exploration | MasterClass

Only a few hundred humans have ever traveled to space. Chris describes in precise detail the emotions an astronaut feels on launch day and the physical feeling of leaving Earth.

Chris breaks down the equation for drag and shows how rockets are designed to overcome the biggest hurdle of launching into space: the atmosphere.
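The equation referred to here is presumably the standard aerodynamic drag equation from basic aerodynamics, reproduced as general background rather than as material quoted from the class:

F_D = \frac{1}{2} \rho v^2 C_D A

where \rho is the air density, v is the vehicle's speed, C_D is its drag coefficient, and A is its reference cross-sectional area. Because drag grows with the square of speed but falls with air density, it dominates early in the ascent and fades as the rocket climbs into thinner air.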

Chris uses familiar situations, like driving a car and jumping off a diving board, to illustrate how the laws of orbital mechanics govern spaceflight and navigation.
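One of the simplest results of those laws, offered here as general physics background rather than course content, is the speed needed to hold a circular orbit:

v_{circ} = \sqrt{\frac{GM}{r}}

where G is the gravitational constant, M is Earth's mass, and r is the orbital radius measured from Earth's center. For a low Earth orbit a few hundred kilometers up, this works out to roughly 7.7 km/s, which is why reaching orbit is mostly about gaining horizontal speed rather than altitude.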

Chris explains the pros and cons of different types of rocket fuels including liquid fuel, solid fuel, and ionized gas.

"Rockets and spaceflight are dangerous by definition. Learn how astronauts manage their fears and cope with tragedy as Chris had to do after the loss of a friend in the Columbia Space Shuttle mission."

Learn the virtues and drawbacks of using the capsule model for human transport to space as Chris analyzes the designs of the Apollo, Gemini, Lunar Lander, and Soyuz.

Two-thirds of those who've flown to space got there on a Space Shuttle. Chris outlines the design of the Shuttle, the impact of its reusability, and how spacecraft will evolve in the future.

Learn how astronauts use stars, planets, and instruments to understand where their spaceship is, how it's oriented, and where it's going.

It's kind of like an elephant ballet. Chris talks you through the process of flying your spaceship to the ISS, docking, and beginning your adventure aboard the laboratory in the sky.

The International Space Station couldn't have been built without teams coming together from around the world. Chris details the process of constructing the ISS and explains the idea of shared exploration.

Learn about the many systems that work together to keep astronauts alive aboard the ISS and how those systems are evolving so that we can travel even further in space.

Chris outlines a few experiments currently running on the ISS and explains how astronauts learn to conduct experiments in space on behalf of scientists on Earth.

Chris describes the great honor and responsibility of commanding the ISS, ranks the commander's priorities, and outlines what it takes to reach and fulfill such an elite and difficult leadership position.

Preparing for space travel means learning massive amounts of information. Learn how Chris used a series of one-page summaries to recall complex systems and concepts on the fly during his time in space.

The first words spoken from the Moon were directed to Mission Control for a reason. Learn how Mission Control functions and why it is so critical to the success of a mission to space.

Chris gives a head-to-toe tour of an EMU (Extravehicular Mobility Unit), explaining how it keeps astronauts alive while spacewalking and conducting work outside the ship.

Chris outlines the physical and mental challenges of walking in space, describing the important roles played by support teams on Earth and inside the spacecraft during a spacewalk.

Chris describes his personal experience training for spacewalking in an underwater simulation and emphasizes the importance of gaining confidence in maneuvering and monitoring the spacesuit.

What can we learn from looking down at Earth from above? Chris explains what spaceflight means for our human perspective and how we can use what we learn in space to preserve our species and planet.

Chris teaches you the principles behind simulation setup, the mindset you need to learn as much as possible from simulations, and how astronauts prepare for worst-case scenarios.

Chris explains the technical and societal challenges we face in traveling to Mars, including the ideal flight path required, the physics of slowing down and landing, and the risk to human life.

Chris walks through the basic human needs required to live on another planet. Learn what it takes to grow food in space, protect ourselves from the elements, and readjust to gravity.

If we can safely get to Mars, in-situ resource utilization could help us sustain life there. Chris breaks down the vital Sabatier process for creating hydrogen, oxygen, and methane on Mars.
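For reference, the chemistry behind that statement is the textbook Sabatier reaction paired with water electrolysis (standard general chemistry, not equations taken from the class):

CO_2 + 4 H_2 \rightarrow CH_4 + 2 H_2O   (Sabatier reaction, run over a nickel or ruthenium catalyst)
2 H_2O \rightarrow 2 H_2 + O_2   (electrolysis)

Carbon dioxide from the Martian atmosphere supplies the carbon; the methane can serve as propellant, the oxygen as oxidizer and breathable air, and part of the hydrogen can be recycled back into the first reaction.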

Chris discusses how finding life on Mars could deepen our understanding of the universe and illuminate our place within it. Learn how we're working with robots to search for life and build an outpost on Mars.

In his parting words, Chris reflects on the cyclical nature of human exploration and Earth's place in outer space.


Space exploration – Science in space | Britannica.com

Science in space

In the decades following the first Sputnik and Explorer satellites, the ability to put their instruments into outer space gave scientists the opportunity to acquire new information about the natural universe, information that in many cases would have been unobtainable any other way. Space science added a new dimension to the quest for knowledge, complementing and extending what had been gained from centuries of theoretical speculations and ground-based observations.

After Gagarin's 1961 flight, space missions involving human crews carried out a range of significant research, from on-site geologic investigations on the Moon to a wide variety of observations and experiments aboard orbiting spacecraft. In particular, the presence in space of humans as experimenters and, in some cases, as experimental subjects facilitated studies in biomedicine and materials science. Nevertheless, most space science was, and continues to be, performed by robotic spacecraft in Earth orbit, in other locations from which they observe the universe, or on missions to various bodies in the solar system. In general, such missions are far less expensive than those involving humans and can carry sophisticated automated instruments to gather a wide variety of relevant data.

In addition to the United States and the Soviet Union, several other countries achieved the capability of developing and operating scientific spacecraft and thus carrying out their own space science missions. They include Japan, China, Canada, India, and a number of European countries such as the United Kingdom, France, Italy, and Germany, acting alone and through cooperative organizations, particularly the European Space Agency. Furthermore, many other countries became involved in space activities through the participation of their scientists in specific missions. Bilateral or multilateral cooperation between various countries in carrying out space science missions grew to be the usual way of proceeding.

Scientific research in space can be divided into five general areas: (1) solar and space physics, including study of the magnetic and electromagnetic fields in space and the various energetic particles also present, with particular attention to their interactions with Earth, (2) exploration of the planets, moons, asteroids, comets, meteoroids, and dust in the solar system, (3) study of the origin, evolution, and current state of the varied objects in the universe beyond the solar system, (4) research on nonliving and living materials, including humans, in the very low gravity levels of the space environment, and (5) study of Earth from space.

The first scientific discovery made with instruments orbiting in space was the existence of the Van Allen radiation belts, discovered by Explorer 1 in 1958. Subsequent space missions investigated Earth's magnetosphere, the surrounding region of space in which the planet's magnetic field exerts a controlling effect (see Earth: The magnetic field and magnetosphere). Of particular and ongoing interest has been the interaction of the flux of charged particles emitted by the Sun, called the solar wind, with the magnetosphere. Early space science investigations showed, for example, that luminous atmospheric displays known as auroras are the result of this interaction, and scientists came to understand that the magnetosphere is an extremely complex phenomenon.

The focus of inquiry in space physics was later extended to understanding the characteristics of the Sun, both as an average star and as the primary source of energy for the rest of the solar system, and to exploring space between the Sun and Earth and other planets (see interplanetary medium). The magnetospheres of other planets, particularly Jupiter with its strong magnetic field, also came under study. Scientists sought a better understanding of the internal dynamics and overall behaviour of the Sun, the underlying causes of variations in solar activity, and the way in which those variations propagate through space and ultimately affect Earth's magnetosphere and upper atmosphere. The concept of space weather was advanced to describe the changing conditions in the Sun-Earth region of the solar system. Variations in space weather can cause geomagnetic storms that interfere with the operation of satellites and even systems on the ground such as power grids.

To carry out the investigations required for addressing these scientific questions, the United States, Europe, the Soviet Union, and Japan developed a variety of space missions, often in a coordinated fashion. In the United States, early studies of the Sun were undertaken by a series of Orbiting Solar Observatory satellites (launched 1962-75) and the astronaut crews of the Skylab space station in 1973-74, using that facility's Apollo Telescope Mount. These were followed by the Solar Maximum Mission satellite (launched 1980). ESA developed the Ulysses mission (1990) to explore the Sun's polar regions. Solar-terrestrial interactions were the focus of many of the Explorer series of spacecraft (1958-75) and the Orbiting Geophysical Observatory satellites (1964-69). In the 1980s NASA, ESA, and Japan's Institute of Space and Astronautical Science undertook a cooperative venture to develop a comprehensive series of space missions, named the International Solar-Terrestrial Physics Program, that would be aimed at full investigation of the Sun-Earth connection. This program was responsible for the U.S. Wind (1994) and Polar (1996) spacecraft, the European Solar and Heliospheric Observatory (SOHO; 1995) and Cluster (2000) missions, and the Japanese Geotail satellite (1992). Among many other missions, NASA has launched a number of satellites, including Thermosphere, Ionosphere, Mesosphere Energetics and Dynamics (TIMED, 2001); the Japanese-U.S.-U.K. collaboration Hinode (2006); and the Solar Terrestrial Relations Observatory (STEREO, 2006), part of its Solar Terrestrial Probes program. The Solar Dynamics Observatory (2010) and the twin Van Allen Probes (2012) were part of another NASA program called Living with a Star. A two-satellite European/Chinese mission called Double Star (2003-04) studied the impact of the Sun on Earth's environment.

From the start of space activity, scientists recognized that spacecraft could gather scientifically valuable data about the various planets, moons, and smaller bodies in the solar system. Both the United States and the U.S.S.R. attempted to send robotic missions to the Moon in the late 1950s. The first four U.S. Pioneer spacecraft, Pioneer 0 through 3, launched in 1958, were not successful in returning data about the Moon. The fifth mission, Pioneer 4 (1959), was the first U.S. spacecraft to escape Earth's gravitational pull; it flew by the Moon at twice the planned distance but returned some useful data. Three Soviet missions, Luna 1, 2, and 3, explored the vicinity of the Moon in 1959, confirming that it had no appreciable magnetic field and sending back the first-ever images of its far side. Luna 1 was the first spacecraft to fly past the Moon, beating Pioneer 4 by two months. Luna 2, in making a hard landing on the lunar surface, was the first spacecraft to strike another celestial object. Later, in the 1960s and early 1970s, Luna and Lunokhod spacecraft soft-landed on the Moon, and some gathered soil samples and returned them to Earth.

In the 1960s the United States became the first country to send a spacecraft to the vicinity of other planets; Mariner 2 flew by Venus in December 1962, and Mariner 4 flew past Mars in July 1965. Among significant accomplishments of planetary missions in succeeding decades were the U.S. Viking landings on Mars in 1976 and the Soviet Venera explorations of the atmosphere and surface of Venus from the mid-1960s to the mid-1980s. In the years since, the United States has continued an active program of solar system exploration, as did the Soviet Union until its dissolution in 1991. Japan launched missions to the Moon, Mars, and Halley's Comet and returned minute samples from the asteroid Itokawa. Europe's first independent solar system mission, Giotto, also flew by Halley. After the turn of the 21st century, it sent missions to the Moon and Mars and an orbiter-lander, Rosetta-Philae, to a comet. India and China also sent the Chandrayaan-1 (2008) mission and two Chang'e missions (2007, 2010), respectively, to orbit the Moon. China's Chang'e 3 mission also landed a small rover, Yutu, on the Moon in 2013. NASA's Dawn mission (2007) orbited the large asteroid Vesta from 2011 to 2012 and entered orbit around the dwarf planet Ceres in 2015.

Early on, scientists planned to conduct solar system exploration in three stages: initial reconnaissance from spacecraft flying by a planet, comet, or asteroid; detailed surveillance from a spacecraft orbiting the object; and on-site research after landing on the object or, in the case of a giant gas planet, by sending a probe into its atmosphere. By the early 21st century, all three of those stages had been carried out for the Moon, Venus, Mars, Jupiter, Saturn, and a near-Earth asteroid. Several Soviet and U.S. robotic spacecraft have landed on Venus and the Moon, and the United States has landed spacecraft on the surface of Mars. A long-term, detailed surveillance of Jupiter and its moons began in 1995 when the U.S. Galileo spacecraft took up orbit around the planet, at the same time releasing a probe into the turbulent Jovian atmosphere. In 2001 the U.S. Near Earth Asteroid Rendezvous (NEAR) spacecraft landed on the asteroid Eros and transmitted information from its surface for more than two weeks. Among the rocky inner planets, only Mercury was for some time relatively neglected. In the first half century of space exploration, Mercury was visited just once; the U.S. Mariner 10 probe made three flybys of the planet in 1974–75. In 2004 the U.S. Messenger spacecraft was launched to Mercury for a series of flybys beginning in 2008 and entered orbit around the planet in 2011.

As of 2017, the exploration of the two outer giant gas planets, Uranus and Neptune, remained at the first stage. In a series of U.S. missions launched in the 1970s, Pioneer 10 flew by Jupiter, whereas Pioneer 11 and Voyager 1 and 2 flew by both Jupiter and Saturn. Voyager 2 then went on to travel past Uranus and Neptune. On August 25, 2012, Voyager 1 became the first space probe to enter interstellar space when it crossed the heliopause, the outer limit of the Sun's magnetic field and solar wind. The Voyagers were expected to still be returning data through 2020. The U.S. Cassini spacecraft, launched in 1997, began a long-term surveillance mission in the Saturnian system in 2004; the following year its European-built Huygens probe descended to the surface of Titan, Saturn's largest moon. In 2017 the Cassini mission ended when the spacecraft burned up in Saturn's atmosphere. In 2011 the United States launched the Juno mission, which studied the origin and evolution of Jupiter after it arrived at the giant planet in 2016. Thus, every significant body in the solar system, even the dwarf planet Pluto and its largest moon, Charon, has been visited at least once by a spacecraft. (The U.S. New Horizons spacecraft, launched in 2006, flew by Pluto and Charon in 2015.)

These exploratory missions sought information on the origin and evolution of the solar system and on the various objects that it comprises, including chemical composition; surface topography; data on magnetic fields, atmospheres, and volcanic activity; and, particularly for Mars, evidence of water in the present or past and perhaps even of extraterrestrial life in some form.

What has been learned to date confirms that Earth and the rest of the solar system formed at about the same time from the same cloud of gas and dust surrounding the Sun. The four outer giant gas planets are roughly similar in size and chemical composition, but each has a set of moons that differ widely in their characteristics, and in some ways they and their satellites resemble miniature solar systems. The four rocky inner planets had a common origin but followed very different evolutionary paths and today have very different surfaces, atmospheres, and internal activity. Ongoing comparative study of the evolution of Venus, Mars, and Earth could provide important insights into Earths future and its continued ability to support life.

The question of whether life has ever existed elsewhere in the solar system continues to intrigue both scientists and the general public. The United States sent two Viking spacecraft to land on the surface of Mars in 1976. Each contained three experiments intended to search for traces of organic material that might indicate the presence of past or present life-forms; none of the experiments produced positive results. Twenty years later, a team of scientists studying a meteorite of Martian origin found in Antarctica announced the discovery of possible microscopic fossils resulting from past organic life. Their claim was not universally accepted, but it led to an accelerated program of Martian exploration focused on the search for evidence of the action of liquid water, thought necessary for life to have evolved. Beginning in 2001, the United States sent a series of "follow the water" missions to orbit or land on Mars, including 2001 Mars Odyssey (2001), two Mars Exploration Rovers, Spirit and Opportunity (2003), Mars Reconnaissance Orbiter (2005), and the Curiosity rover (2011). Europe also launched the Mars Express mission in 2003. A major long-term goal of the Mars exploration program is to return samples of the Martian surface to Earth for laboratory analysis.

There are indications that water may be present in the outer solar system. The Galileo mission provided images and other data related to Jupiter's moon Europa that suggest the presence of a liquid water ocean beneath its icy crust. Future missions are needed to confirm the existence of this ocean and search for evidence of organic or biological processes in it. The Cassini-Huygens mission confirmed the presence of lakes of liquid methane on Saturn's moon Titan and suggested the likely existence of liquid water underneath the surface of another moon, Enceladus.

See the original post:

Space exploration - Science in space | Britannica.com

Space Exploration | Encyclopedia.com

space exploration, the investigation of physical conditions in space and on stars, planets, and other celestial bodies through the use of artificial satellites (spacecraft that orbit the earth), space probes (spacecraft that pass through the solar system and that may or may not orbit another celestial body), and spacecraft with human crews.

Satellites and Probes

Although studies from earth using optical and radio telescopes had accumulated much data on the nature of celestial bodies, it was not until after World War II that the development of powerful rockets made direct space exploration a technological possibility. The first artificial satellite, Sputnik I, was launched by the USSR (now Russia) on Oct. 4, 1957, and spurred the dormant U.S. program into action, leading to an international competition popularly known as the "space race." Explorer I, the first American satellite, was launched on Jan. 31, 1958. Although earth-orbiting satellites have by far accounted for the great majority of launches in the space program, even more information on the moon, other planets, and the sun has been acquired by space probes.

Lunar Probes

In the decade following Sputnik I, the United States and the USSR between them launched about 50 space probes to explore the moon. The first probes were intended either to pass very close to the moon (flyby) or to crash into it (hard landing). Later probes made soft landings with instruments intact and achieved stable orbits around the moon. Each of these four objectives required increasingly greater rocket power and more precise maneuvering; successive launches in the Soviet Luna series were the first to accomplish each objective. Luna 2 made a hard lunar landing in Sept., 1959, and Luna 3 took pictures of the moon's far side as the probe flew by in Nov., 1959. Luna 9 soft-landed in Feb., 1966, and Luna 10 orbited the moon in Apr., 1966; both sent back many television pictures to earth. Beginning with Luna 16, which was launched in Sept., 1970, the USSR sent several probes to the moon that either returned lunar soil samples to earth or deployed Lunokhod rovers. In addition to the 24 lunar probes in the Luna program, the Soviets also launched five circumlunar probes in the Zond program.

Early American successes generally lagged behind Soviet accomplishments by several months but provided more detailed scientific information. The U.S. program did not bear fruit until 1964, when Rangers 7, 8, and 9 transmitted thousands of pictures, many taken at altitudes of less than 1 mi (1.6 km) just before impact and showing craters only a few feet in diameter. Two years later, the Surveyor series began a program of soft landings on the moon. Surveyor 1 touched down in June, 1966; in addition to television cameras, it carried instruments to measure soil strength and composition. The Surveyor program established that the moon's surface was solid enough to support a spacecraft carrying astronauts.

In Aug., 1966, the United States successfully launched the first Lunar Orbiter, which took pictures of both sides of the moon as well as the first pictures of the earth from the moon's vicinity. The Orbiter's primary mission was to locate suitable landing sites for the Apollo Lunar Module, but in the process it also discovered the lunar mascons, regions of large concentration of mass on the moon's surface. Between May, 1966, and Nov., 1968, the United States launched seven Surveyors and five Lunar Orbiters. Clementine, launched in 1994, engaged in a systematic mapping of the lunar surface. In 1998, Lunar Prospector orbited the moon in a low polar orbit investigating possible polar ice deposits, but a controlled crash near the south pole detected no water. The U.S. Lunar Reconnaissance Orbiter, launched in 2009, was designed to collect data that can be used to prepare for future missions to the moon; information from it has been used to produce a relatively detailed, nearly complete topographic map of the moon.

China became the third nation to send a spacecraft to the moon when Chang'e 1, which was launched in 2007, orbited and mapped the moon until it was crash-landed on the lunar surface in 2009. Chang'e 2 also orbited and mapped the moon (2010–11) and later conducted a flyby of an asteroid (2012). In Dec., 2013, Chang'e 3 landed on the moon and deployed a rover, Yutu.

Interplanetary Probes

While the bulk of space exploration initially was directed at the earth-moon system, the focus gradually shifted to other members of the solar system. The U.S. Mariner program studied Venus and Mars, the two planets closest to the earth; the Soviet Venera series also studied Venus. From 1962 to 1971, these probes confirmed the high surface temperature and thick atmosphere of Venus and discovered signs of recent volcanism and possible water erosion on Mars; Mariner 10 later investigated Mercury. Between 1971 and 1973 the Soviet Union launched six successful probes as part of its Mars program. Exploration of Mars continued with the U.S. Viking landings on the Martian surface. Two Viking spacecraft arrived on Mars in 1976. Their mechanical arms scooped up soil samples for automated tests that searched for photosynthesis, respiration, and metabolism by any microorganisms that might be present; one test suggested at least the possibility of organic activity. The Soviet Phobos 1 and 2 missions were unsuccessful in 1988. The U.S. Magellan spacecraft succeeded in orbiting Venus in 1990, returning a complete radar map of the planet's hidden surface. The Japanese probes Sakigake and Suisei and the European Space Agency's probe Giotto all rendezvoused with Halley's comet in 1986, and Giotto also came within 125 mi (200 km) of the nucleus of the comet Grigg-Skjellerup in 1992. The U.S. probe Ulysses returned data about the poles of the sun in 1994, and the ESA Solar and Heliospheric Observatory (SOHO) was put into orbit in 1995. Launched in 1996 to study asteroids and comets, the Near Earth Asteroid Rendezvous (NEAR) probe made flybys of the asteroids Mathilde (1997) and Eros (1999) and began orbiting the latter in 2000. The Mars Pathfinder and Mars Global Surveyor, both of which reached Mars in 1997, were highly successful, the former in analyzing the Martian surface and the latter in mapping it. The ESA Mars Express, launched in 2003, began orbiting Mars later that year, and although its Beagle 2 lander failed to establish contact, the orbiter has sent back data. Spirit and Opportunity, NASA rovers, landed successfully on Mars in 2004, as did the NASA rover Curiosity in 2012. Messenger, also launched by NASA, became the first space probe to orbit Mercury in 2011; its mission ended in 2015. In 2014 the ESA's Rosetta became the first probe to orbit a comet (Comet 67P); prior to that rendezvous the space probe had made flybys of Mars and two asteroids.

Space probes have also been aimed at the outer planets, with spectacular results. One such probe, Pioneer 10, passed through the asteroid belt in 1973, then became the first object made by human beings to move beyond the orbits of the planets. In 1974, Pioneer 11 photographed Jupiter's equatorial latitudes and its moons, and in 1979 it made the first direct observations of Saturn. Voyagers 1 and 2, which were launched in 1977, took advantage of a rare alignment of Jupiter, Saturn, Uranus, and Neptune to explore all four planets. Passing as close as 3,000 mi (4,800 km) to each planet's surface, the Voyagers discovered new rings, explored complex magnetic fields, and returned detailed photographs of the outer planets and their unique moons. They subsequently moved toward the heliopause, the boundary between the influence of the sun's magnetic field and the interstellar magnetic field, and in 2013 NASA reported that Voyager 1 most likely crossed the heliopause in 2012 and entered interstellar space, becoming the first spacecraft to do so.

Launched in 1989, the Galileo spacecraft followed a circuitous route that enabled it to return data about Venus (1990), the moon (1992), and the asteroids 951 Gaspra (1991) and 243 Ida (1993) before it orbited Jupiter (1995–2003); it also returned data about Jupiter's atmosphere and its largest moons (Io, Ganymede, Europa, and Callisto). The joint U.S.-ESA Cassini mission, launched in 1997, began exploring Saturn, its rings, and some of its moons upon arriving in 2004. It deployed Huygens, which landed on the surface of Saturn's moon Titan in early 2005.

Human Space Exploration

Human spaceflight has progressed from the simple to the complex, starting with suborbital flights; subsequent highlights included the launching of a single astronaut in orbit, the launching of several astronauts in a single capsule, the rendezvous and docking of two spacecraft, the attainment of lunar orbit, and the televised landing of an astronaut on the moon. The first person in earth orbit was a Soviet cosmonaut, Yuri Gagarin, in Vostok 1 on Apr. 12, 1961. The American Mercury program had its first orbital success in Feb., 1962, when John Glenn circled the earth three times; a flight of 22 orbits was achieved by Mercury in May, 1963. In Oct., 1964, three Soviet cosmonauts were launched in a Voskhod spacecraft. During the second Voskhod flight in Mar., 1965, a cosmonaut left the capsule to make the first "walk in space."

The first launch of the Gemini program, carrying two American astronauts, occurred a few days after the Soviet spacewalk. The United States made its first spacewalk during Gemini 4, and subsequent flights established techniques for rendezvous and docking in space. The first actual docking of two craft in space was achieved in Mar., 1966, when Gemini 8 docked with a crewless vehicle. In Oct., 1967, two Soviet Cosmos spacecraft performed the first automatic crewless rendezvous and docking. Gemini and Voskhod were followed by the American Apollo and the Soviet Soyuz programs, respectively.

The Apollo Program

In 1961, President Kennedy had committed the United States to the goal of landing astronauts on the moon and bringing them safely back to earth by the end of the decade. The resulting Apollo program was the largest scientific and technological undertaking in history. Apollo 8 was the first craft to orbit both the earth and the moon (Dec., 1968); on July 20, 1969, astronauts Neil A. Armstrong and Edwin E. ("Buzz") Aldrin, Jr., stepped out onto the moon, while a third astronaut, Michael Collins, orbited the moon in the command ship. In all, there were 17 Apollo missions and 6 lunar landings (1969–72). Apollo 15 marked the first use of the Lunar Rover, a jeeplike vehicle. The scientific mission of Apollo centered around an automated geophysical laboratory, ALSEP (Apollo Lunar Surface Experimental Package). Much was learned about the physical constitution and early history of the moon, including information about magnetic fields, heat flow, volcanism, and seismic activity. The total lunar rock sample returned to earth weighed nearly 900 lb (400 kg).

Apollo moon flights were launched by the three-stage Saturn V rocket, which developed 7.5 million lb (3.4 million kg) of thrust at liftoff. At launch, the total assembly stood 363 ft (110 m) high and weighed more than 3,000 tons. The Apollo spacecraft itself weighed 44 tons and stood nearly 60 ft (20 m) high. It was composed of three sections: the command, service, and lunar modules. In earth orbit, the lunar module (LM) was freed from its protective compartment and docked to the nose of the command module. Once in lunar orbit, two astronauts transferred to the LM, which then detached from the command module and descended to the lunar surface. After lunar exploration, the descent stage of the LM remained on the moon, while the ascent stage was jettisoned after returning the astronauts to the command module. The service module was jettisoned just before reentering the earth's atmosphere. Thus, of the huge craft that left the earth, only the cone-shaped command module returned.

The Soyuz Program

Until late 1969 it appeared that the USSR was also working toward landing cosmonauts on the moon. In Nov., 1968, a Soviet cosmonaut in Soyuz 3 participated in an automated rendezvous and manual approach sequence with the crewless Soyuz 2. Soyuz 4 and 5 docked in space in Jan., 1969, and two cosmonauts transferred from Soyuz 5 to Soyuz 4; it was the first transfer of crew members in space from separately launched vehicles. But in July, 1969, the rocket that was to power the lunar mission exploded, destroying an entire launch complex, and the USSR abandoned the goal of human lunar exploration to concentrate on orbital flights. The program suffered a further setback in June, 1971, when Soyuz 11 accidentally depressurized during reentry, killing all three cosmonauts. In July, 1975, the United States and the USSR carried out the first internationally crewed spaceflight, when an Apollo and a Soyuz spacecraft docked while in earth orbit. Later Soyuz spacecraft have been used to ferry crew members to and from Salyut, Mir, and the International Space Station.

Space Stations

After the geophysical exploration of the moon via the Apollo program was completed, the United States continued human space exploration with Skylab, an earth-orbiting space station that served as workshop and living quarters for three astronauts. The main capsule was launched by a booster; the crews arrived later in an Apollo-type craft that docked to the main capsule. Skylab had an operational lifetime of eight months, during which three three-astronaut crews remained in the space station for periods of about one month, two months, and three months. The first crew reached Skylab in May, 1973.

Skylab's scientific mission alternated between predominantly solar astrophysical research and study of the earth's natural resources; in addition, the crews evaluated their response to prolonged conditions of weightlessness. The solar observatory contained eight high-resolution telescopes, each designed to study a different part of the spectrum (e.g., visible, ultraviolet, X-ray, or infrared light). Particular attention was given to the study of solar flares (see sun). The earth applications, which involved remote sensing of natural resources, relied on visible and infrared light in a technique called multispectral scanning (see space science). The data collected helped scientists to forecast crop and timber yields, locate potentially productive land, detect insect infestation, map deserts, measure snow and ice cover, locate mineral deposits, trace marine and wildlife migrations, and detect the dispersal patterns of air and water pollution. In addition, radar studies yielded information about the surface roughness and electrical properties of the sea on a global basis. Skylab fell out of orbit in July, 1979; despite diligent efforts, several large pieces of debris fell on land.

After that time the only continuing presence of humans in earth orbit were the Soviet Salyut and Mir space stations, in which cosmonauts worked for periods ranging to more than 14 months. In addition to conducting remote sensing and gathering medical data, cosmonauts used their microgravity environment to produce electronic and medical artifacts impossible to create on earth. In preparation for the International Space Station (ISS), a cooperative program of the United States, Russia, Japan, Canada, Brazil, and the ESA, astronauts and cosmonauts from Afghanistan, Austria, Britain, Bulgaria, France, Germany, Japan, Kazakhstan, Syria, and the United States worked on Mir alongside their Russian counterparts. Assembly of the ISS began in Dec., 1998, with the linking of an American and a Russian module (see space station). Once the ISS was manned in 2000, maintaining Mir in orbit was no longer necessary, and it was made to decay out of orbit in Mar., 2001.

The Space Shuttle

After the Skylab space station fell out of orbit in 1979, the United States did not resume sending astronauts into space until 1981, when the space shuttle, capable of ferrying people and equipment into orbit and back to earth, was launched. The shuttle itself was a hypersonic delta-wing airplane about the size of a DC-9. Takeoff was powered by three liquid-fuel engines fed from an external tank and two solid-fuel engines; the latter were recovered by parachute. The shuttle itself returned to earth in a controlled glide, landing either in California or in Florida.

The shuttle put a payload of up to 25 tons (22,700 kg) in earth orbit below 600 mi (970 km); the payload was then boosted into final orbit by its own attached rocket. The Galileo probe, designed to investigate Jupiter's upper atmosphere, was launched from the space shuttle. Astronauts also used the shuttle to retrieve and repair satellites, to experiment with construction techniques needed for a permanent space station, and to conduct scientific experiments during extended periods in space.

At first it was hoped that shuttle flights could operate on a monthly basis, but schedule pressures contributed to the explosion of the Challenger shuttle in 1986, when cold launch conditions led to the failure of a rubber O-ring, and the resulting flame ruptured the main fuel tank. The shuttle program was suspended for three years while the entire system was redesigned. The shuttle fleet subsequently operated on approximately a bimonthly schedule. A second accident occurred in 2003, when Columbia was lost during reentry: heat shielding on the left wing, damaged by insulation shed from the external fuel tank, failed to prevent superheated gas from entering the wing; the hot gas structurally weakened the wing and caused the shuttle to break up. Shuttle flights resumed in July, 2005, but new problems with fuel tank insulation led NASA to suspend shuttle launches for a year. The last shuttle flight was in July, 2011.

In 2004, President George W. Bush called for a return to the moon by 2020 and the establishment of a base there that would be used to support the human exploration of Mars. The following year NASA unveiled a $104 billion plan for a lunar expedition that resembled the Apollo program in many respects, except that two rockets would be used to launch the crew and lunar lander separately.

In June, 2004, SpaceShipOne, a privately financed reusable spacecraft somewhat similar in concept to the shuttle, was launched into suborbital flight from the Mojave Desert in California. Unlike the shuttle, SpaceShipOne was carried aloft by a reusable jet mothership (White Knight) to 46,000 ft (13.8 km), where it was released and fired its rocket engine. The spacecraft was designed by Burt Rutan and built by his company, Scaled Composites. The vehicle's 90-minute flight was the first successful nongovernmental spaceflight. SpaceShipTwo, based on SpaceShipOne, is being developed for commercial tourist flights; it made its first powered flight in 2013. Another spacecraft was privately developed by Space Exploration Technologies, or SpaceX, in coordination with NASA. The company's Falcon 9 rocket had its first successful launch, from Cape Canaveral, in June, 2010. In Dec., 2010, SpaceX launched the Dragon space capsule, using a Falcon 9 rocket, and successfully returned the capsule to earth after almost two orbits. In May, 2012, the Dragon made its first resupply trip to the space station, returning with experiments and other items. Orbital Sciences Corp. (OSC) also developed a cargo capsule, Cygnus, in cooperation with NASA. OSC's Antares rocket, which is used to launch Cygnus, had its first test in Apr., 2013, and Cygnus had its first resupply flight later that year.

The Chinese Space Program

China launched its first satellite in 1970 and then began the Shuguang program to put an astronaut into space, but the program was twice halted, ending in 1980. In the 1990s, however, China began a new program, and launched the crewless Shenzhou 1, based on the Soyuz, in 1999. The Shenzhou, like the Soyuz, is capable of carrying a crew of three. In Oct., 2003, Shenzhou 5 carried a single astronaut, Yang Liwei, on a 21-hr, 14-orbit flight, making China only the third nation to place a person in orbit. A second mission, involving two astronauts, occurred in Oct., 2005. China also launched an unmanned moon mission in Oct., 2007. In June, 2012, the three-person Shenzhou 9, which included China's first woman astronaut, manually docked with the Tiangong 1 laboratory module.

Bibliography

See T. Wolfe, The Right Stuff (repr. 1983); B. C. Murray, Journey into Space (repr. 1990); V. Neal, Where Next, Columbus?: The Future of Space Exploration (1994); J. Harford, Korolev: How One Man Masterminded the Soviet Drive to Beat America to the Moon (1997); T. A. Heppenheimer, Countdown: A History of Space Flight (1997); F. J. Hale, Introduction to Space Flight (1998); R. D. Launius, Frontiers of Space Exploration (1998); C. Nelson, Rocket Men: The Epic Story of the First Men on the Moon (2009); A. Chaikin with V. Kohl, Voices from the Moon (2009).

Read the rest here:

Space Exploration | Encyclopedia.com

Supercomputer – Wikipedia

A supercomputer is a computer with a high level of performance compared to a general-purpose computer. Performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). As of 2017, there are supercomputers which can perform up to nearly a hundred quadrillion FLOPS.[3] As of November 2017, all of the world's fastest 500 supercomputers run Linux-based operating systems.[4] Additional research is being conducted in China, the United States, the European Union, Taiwan and Japan to build even faster, more powerful, and technologically superior exascale supercomputers.[5]

Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, they have been essential in the field of cryptanalysis.[6]

Supercomputers were introduced in the 1960s, and for several decades the fastest were made by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram. The first such machines were highly tuned conventional designs that ran faster than their more general-purpose contemporaries. Through the 1960s, they began to add increasing amounts of parallelism with one to four processors being typical. From the 1970s, the vector computing concept with specialized math units operating on large arrays of data came to dominate. A notable example is the highly successful Cray-1 of 1976. Vector computers remained the dominant design into the 1990s. From then until today, massively parallel supercomputers with tens of thousands of off-the-shelf processors became the norm.[7][8]

The US has long been a leader in the supercomputer field, first through Cray's almost uninterrupted dominance of the field, and later through a variety of technology companies. Japan made major strides in the field in the 1980s and 90s, but since then China has become increasingly active in the field. As of June 2018, the fastest supercomputer on the TOP500 supercomputer list is Summit, in the United States, with a LINPACK benchmark score of 122.3 PFLOPS, exceeding the previous record holder, Sunway TaihuLight, by around 29 PFLOPS.[3][9] Sunway TaihuLight is notable for its use of indigenous chips and is the first Chinese computer to enter the TOP500 list without using hardware from the United States. As of June 2018, China had more computers (206) on the TOP500 list than the United States (124); however, U.S.-built computers held eight of the top 20 positions;[10][11] the U.S. has six of the top 10 and China has two.

The history of supercomputing goes back to the 1960s, with the Atlas at the University of Manchester, the IBM 7030 Stretch and a series of computers at Control Data Corporation (CDC), designed by Seymour Cray. These used innovative designs and parallelism to achieve superior computational peak performance.[12]

The Atlas was a joint venture between Ferranti and Manchester University and was designed to operate at processing speeds approaching one microsecond per instruction, about one million instructions per second.[13] The first Atlas was officially commissioned on 7 December 1962 as one of the world's first supercomputers; it was considered to be the most powerful computer in the world at that time by a considerable margin, and equivalent to four IBM 7094s.[14]

For the CDC 6600 (which Cray designed), released in 1964, a switch from germanium to silicon transistors was implemented; the faster silicon transistors, together with refrigeration to solve the resulting overheating problem,[15] helped to make it the fastest computer in the world. Given that the 6600 outperformed all the other contemporary computers by about 10 times, it was dubbed a supercomputer and defined the supercomputing market; one hundred computers were sold at $8 million each.[16][17][18][19]

Cray left CDC in 1972 to form his own company, Cray Research.[17] Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, and it became one of the most successful supercomputers in history.[20][21] The Cray-2, released in 1985, was an 8-processor liquid-cooled computer; Fluorinert was pumped through it as it operated. It performed at 1.9 gigaFLOPS and was the world's second fastest, after the M-13 supercomputer in Moscow.[22]

In 1982, Osaka University's LINKS-1 Computer Graphics System used a massively parallel processing architecture, with 514 microprocessors, including 257 Zilog Z8001 control processors and 257 iAPX 86/20 floating-point processors. It was mainly used for rendering realistic 3D computer graphics.[23]

While the supercomputers of the 1980s used only a few processors, in the 1990s machines with thousands of processors began to appear in Japan and the United States, setting new computational performance records. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994 with a peak speed of 1.7 gigaFLOPS (GFLOPS) per processor.[24][25] The Hitachi SR2201 obtained a peak performance of 600 GFLOPS in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network.[26][27][28] The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes, communicating via the Message Passing Interface.[29]

Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s.

Early supercomputer architectures pioneered by Seymour Cray relied on compact designs and local parallelism to achieve superior computational performance.[12] Cray had noted that increasing processor speeds did little if the rest of the system did not also improve; the CPU would end up waiting longer for data to arrive from the offboard storage units. The CDC 6600, the first mass-produced supercomputer, solved this problem by providing ten simple computers whose only purpose was to read and write data to and from main memory, allowing the CPU to concentrate solely on processing the data. This made both the main CPU and the ten "PPU" units much simpler. As such, they were physically smaller and reduced the amount of wiring between the various parts. This reduced the electrical signaling delays and allowed the system to run at a higher clock speed. The 6600 outperformed all other machines by an average of 10 times when it was introduced.

The CDC 6600's spot as the fastest computer was eventually replaced by its successor, the CDC 7600. This design was very similar to the 6600 in general organization but added instruction pipelining to further improve performance. Generally speaking, every computer instruction required several steps to process; first, the instruction is read from memory, then any required data it refers to is read, the instruction is processed, and the results are written back out to memory. Each of these steps is normally accomplished by separate circuitry. In most early computers, including the 6600, each of these steps runs in turn, and while any one unit is currently active, the hardware handling the other parts of the process is idle. In the 7600, as soon as one instruction cleared a particular unit, that unit began processing the next instruction. Although each instruction takes the same time to complete, there are parts of several instructions being processed at the same time, offering much-improved overall performance. This, combined with further packaging improvements and improvements in the electronics, made the 7600 about four to ten times as fast as the 6600.
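
To make the arithmetic behind that improvement concrete, the following is a minimal sketch of idealized pipeline throughput; the stage count and cycle time are illustrative assumptions, not actual CDC 7600 figures.

```python
# Idealized pipeline throughput arithmetic (illustrative numbers, not CDC specs).
STAGES = 4            # assumed number of equal-length steps per instruction
CYCLE_NS = 27.5       # assumed time for one step, in nanoseconds
N_INSTRUCTIONS = 1_000

# Without pipelining, each instruction must finish all steps before the next begins.
unpipelined_ns = N_INSTRUCTIONS * STAGES * CYCLE_NS

# With pipelining, stages overlap: once the pipeline is full, one instruction
# completes every cycle.
pipelined_ns = (STAGES + N_INSTRUCTIONS - 1) * CYCLE_NS

print(f"unpipelined: {unpipelined_ns:,.0f} ns")
print(f"pipelined:   {pipelined_ns:,.0f} ns")
print(f"speedup:     {unpipelined_ns / pipelined_ns:.2f}x")  # approaches STAGES for long runs
```

For long instruction streams the speedup approaches the number of stages, which is why the 7600's overall gain over the 6600 depended on packaging and electronics improvements as well as the pipeline itself.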

The 7600 was intended to be replaced by the CDC 8600, which was essentially four 7600's in a small box. However, this design ran into intractable problems and was eventually canceled in 1974 in favor of another CDC design, the CDC STAR-100. The STAR was essentially a simplified and slower version of the 7600, but it was combined with new circuits that could rapidly process sequences of math instructions. The basic idea was similar to the pipeline in the 7600 but geared entirely toward math, and in theory, much faster. In practice, the STAR proved to have poor real-world performance, and ultimately only two or three were built.

Cray, meanwhile, had left CDC and formed his own company. Considering the problems with the STAR, he designed an improved version of the same basic concept but replaced the STAR's memory-based vectors with ones that ran in large registers. Combining this with his famous packaging improvements produced the Cray-1. This outperformed every computer in the world and would ultimately sell about 80 units, making it one of the most successful supercomputer systems in history. Through the 1970s, 80s, and 90s a series of machines from Cray further improved on these basic concepts.

The basic concept of using a pipeline dedicated to processing large data units became known as vector processing, and came to dominate the supercomputer field. A number of Japanese firms also entered the field, producing similar concepts in much smaller machines. Three main lines were produced by these companies, the Fujitsu VP, Hitachi HITAC and NEC SX series, all announced in the early 1980s and updated continually into the 1990s. CDC attempted to re-enter this market with the ETA10 but this was not very successful. Convex Computer took another route, introducing a series of much smaller vector machines aimed at smaller businesses.

The only computer to seriously challenge the Cray-1's performance in the 1970s was the ILLIAC IV. This machine was the first realized example of a true massively parallel computer, in which many processors worked together to solve different parts of a single larger problem. In contrast with the vector systems, which were designed to run a single stream of data as quickly as possible, in this concept the computer instead feeds separate parts of the data to entirely different processors and then recombines the results. The ILLIAC's design was finalized in 1966 with 256 processors and offered speeds of up to 1 GFLOPS, compared to the 1970s Cray-1's peak of 250 MFLOPS. However, development problems led to only 64 processors being built, and the system could never operate faster than about 200 MFLOPS while being much larger and more complex than the Cray. Another problem was that writing software for the system was difficult, and getting peak performance from it was a matter of serious effort.

But the partial success of the ILLIAC IV was widely seen as pointing the way to the future of supercomputing. Cray argued against this, famously quipping that "If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?"[30] But by the early 1980s, several teams were working on parallel designs with thousands of processors, notably the Connection Machine (CM) that developed from research at MIT. The CM-1 used as many as 65,536 simplified custom microprocessors connected together in a network to share data. Several updated versions followed; the CM-5 supercomputer is a massively parallel processing computer capable of many billions of arithmetic operations per second.[31]

Software development remained a problem, but the CM series sparked off considerable research into this issue. Similar designs using custom hardware were made by many companies, including the Evans & Sutherland ES-1, MasPar, nCUBE, Intel iPSC and the Goodyear MPP. But by the mid-1990s, general-purpose CPU performance had improved so much that a supercomputer could be built using commodity CPUs as the individual processing units, instead of using custom chips. By the turn of the 21st century, designs featuring tens of thousands of commodity CPUs were the norm, with later machines adding graphics units to the mix.[7][8]

Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers.[32][33][34] The large amount of heat generated by a system may also have other effects, e.g. reducing the lifetime of other system components.[35] There have been diverse approaches to heat management, from pumping Fluorinert through the system, to a hybrid liquid-air cooling system or air cooling with normal air conditioning temperatures.[36][37]

Systems with a massive number of processors generally take one of two paths. In the grid computing approach, the processing power of many computers, organised as distributed, diverse administrative domains, is opportunistically used whenever a computer is available.[38] In another approach, a large number of processors are used in proximity to each other, e.g. in a computer cluster. In such a centralized massively parallel system the speed and flexibility of the interconnect becomes very important and modern supercomputers have used various approaches ranging from enhanced Infiniband systems to three-dimensional torus interconnects.[39][40] The use of multi-core processors combined with centralization is an emerging direction, e.g. as in the Cyclops64 system.[41][42]

As the price, performance and energy efficiency of general purpose graphic processors (GPGPUs) have improved,[43] a number of petaFLOPS supercomputers such as Tianhe-I and Nebulae have started to rely on them.[44] However, other systems such as the K computer continue to use conventional processors such as SPARC-based designs and the overall applicability of GPGPUs in general-purpose high-performance computing applications has been the subject of debate, in that while a GPGPU may be tuned to score well on specific benchmarks, its overall applicability to everyday algorithms may be limited unless significant effort is spent to tune the application towards it.[45][46] However, GPUs are gaining ground and in 2012 the Jaguar supercomputer was transformed into Titan by retrofitting CPUs with GPUs.[47][48][49]

High-performance computers have an expected life cycle of about three years before requiring an upgrade.[50]

A number of "special-purpose" systems have been designed, dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom ASICs, allowing better price/performance ratios by sacrificing generality. Examples of special-purpose supercomputers include Belle,[51] Deep Blue,[52] and Hydra,[53] for playing chess, Gravity Pipe for astrophysics,[54] MDGRAPE-3 for protein structure computationmolecular dynamics[55] and Deep Crack,[56] for breaking the DES cipher.

A typical supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts (MW) of electricity.[57] The cost to power and cool the system can be significant, e.g. 4 MW at $0.10/kWh is $400 an hour or about $3.5 million per year.
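
That cost estimate follows from simple arithmetic; the sketch below reproduces it using the power draw and electricity rate quoted above.

```python
# Reproduce the power-cost estimate quoted above (4 MW at $0.10 per kWh).
power_mw = 4.0
rate_per_kwh = 0.10

kwh_per_hour = power_mw * 1_000          # 4 MW -> 4,000 kWh each hour
cost_per_hour = kwh_per_hour * rate_per_kwh
cost_per_year = cost_per_hour * 24 * 365

print(f"${cost_per_hour:,.0f} per hour")   # $400
print(f"${cost_per_year:,.0f} per year")   # about $3.5 million
```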

Heat management is a major issue in complex electronic devices and affects powerful computer systems in various ways.[58] The thermal design power and CPU power dissipation issues in supercomputing surpass those of traditional computer cooling technologies. The supercomputing awards for green computing reflect this issue.[59][60][61]

The packing of thousands of processors together inevitably generates significant amounts of heat density that need to be dealt with. The Cray 2 was liquid cooled, and used a Fluorinert "cooling waterfall" which was forced through the modules under pressure.[36] However, the submerged liquid cooling approach was not practical for the multi-cabinet systems based on off-the-shelf processors, and in System X a special cooling system that combined air conditioning with liquid cooling was developed in conjunction with the Liebert company.[37]

In the Blue Gene system, IBM deliberately used low power processors to deal with heat density.[62] The IBM Power 775, released in 2011, has closely packed elements that require water cooling.[63] The IBM Aquasar system uses hot water cooling to achieve energy efficiency, the water being used to heat buildings as well.[64][65]

The energy efficiency of computer systems is generally measured in terms of "FLOPS per watt". In 2008, IBM's Roadrunner operated at 3.76 MFLOPS/W.[66][67] In November 2010, the Blue Gene/Q reached 1,684 MFLOPS/W.[68][69] In June 2011 the top 2 spots on the Green 500 list were occupied by Blue Gene machines in New York (one achieving 2,097 MFLOPS/W) with the DEGIMA cluster in Nagasaki placing third with 1,375 MFLOPS/W.[70]
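
The metric itself is simply delivered floating-point performance divided by electrical power; the following minimal sketch shows the conversion with made-up numbers rather than any listed system's figures.

```python
# FLOPS-per-watt efficiency metric: sustained performance divided by power draw.
# The inputs below are hypothetical, for illustration only.
sustained_pflops = 1.5          # assumed sustained performance, in PFLOPS
power_mw = 2.0                  # assumed power draw, in megawatts

flops = sustained_pflops * 1e15
watts = power_mw * 1e6
mflops_per_watt = flops / watts / 1e6
print(f"{mflops_per_watt:,.0f} MFLOPS/W")   # 750 MFLOPS/W for these numbers
```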

Because copper wires can transfer energy into a supercomputer with much higher power densities than forced air or circulating refrigerants can remove waste heat,[71] the ability of the cooling systems to remove waste heat is a limiting factor.[72][73] As of 2015, many existing supercomputers have more infrastructure capacity than the actual peak demand of the machine: designers generally conservatively design the power and cooling infrastructure to handle more than the theoretical peak electrical power consumed by the supercomputer. Designs for future supercomputers are power-limited: the thermal design power of the supercomputer as a whole, the amount that the power and cooling infrastructure can handle, is somewhat more than the expected normal power consumption, but less than the theoretical peak power consumption of the electronic hardware.[74]

Since the end of the 20th century, supercomputer operating systems have undergone major transformations, based on the changes in supercomputer architecture.[75] While early operating systems were custom tailored to each supercomputer to gain speed, the trend has been to move away from in-house operating systems to the adaptation of generic software such as Linux.[76]

Since modern massively parallel supercomputers typically separate computations from other services by using multiple types of nodes, they usually run different operating systems on different nodes, e.g. using a small and efficient lightweight kernel such as CNK or CNL on compute nodes, but a larger system such as a Linux-derivative on server and I/O nodes.[77][78][79]

While in a traditional multi-user computer system job scheduling is, in effect, a tasking problem for processing and peripheral resources, in a massively parallel system, the job management system needs to manage the allocation of both computational and communication resources, as well as gracefully deal with inevitable hardware failures when tens of thousands of processors are present.[80]

Although most modern supercomputers use the Linux operating system, each manufacturer has its own specific Linux-derivative, and no industry standard exists, partly due to the fact that the differences in hardware architectures require changes to optimize the operating system to each hardware design.[75][81]

The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open source-based software solutions such as Beowulf.

In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are used. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA or OpenCL.
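
As a minimal illustration of the message-passing style mentioned above, here is a hedged sketch using the mpi4py Python bindings (assuming mpi4py and an MPI runtime are installed); each process computes a partial sum and the results are combined with a reduction, so no CPU sits idle waiting on another's data.

```python
# Minimal message-passing sketch with mpi4py: each process (rank) computes a
# partial sum, then the partial results are combined on rank 0.
# Run with, e.g.: mpiexec -n 4 python partial_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank takes a strided slice of the work so the load is balanced.
local_sum = sum(range(rank, 1_000_000, size))

total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("total:", total)
```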

Moreover, it is quite difficult to debug and test parallel programs. Special techniques need to be used for testing and debugging such applications.

Opportunistic Supercomputing is a form of networked grid computing whereby a "super virtual computer" of many loosely coupled volunteer computing machines performs very large computing tasks. Grid computing has been applied to a number of large-scale embarrassingly parallel problems that require supercomputing performance scales. However, basic grid and cloud computing approaches that rely on volunteer computing cannot handle traditional supercomputing tasks such as fluid dynamic simulations.

The fastest grid computing system is the distributed computing project Folding@home (F@h). F@h reported 101 PFLOPS of x86 processing power as of October 2016. Of this, over 100 PFLOPS are contributed by clients running on various GPUs, and the rest from various CPU systems.[83]

The Berkeley Open Infrastructure for Network Computing (BOINC) platform hosts a number of distributed computing projects. As of February 2017, BOINC recorded a processing power of over 166 petaFLOPS through over 762,000 active computers (hosts) on the network.[84]

As of October 2016, the Great Internet Mersenne Prime Search's (GIMPS) distributed Mersenne prime search achieved about 0.313 PFLOPS through over 1.3 million computers.[85] The Internet PrimeNet Server has supported GIMPS's grid computing approach, one of the earliest and most successful grid computing projects, since 1997.

Quasi-opportunistic supercomputing is a form of distributed computing whereby the super virtual computer of many networked, geographically dispersed computers performs computing tasks that demand huge processing power.[86] Quasi-opportunistic supercomputing aims to provide a higher quality of service than opportunistic grid computing by achieving more control over the assignment of tasks to distributed resources and the use of intelligence about the availability and reliability of individual systems within the supercomputing network. However, quasi-opportunistic distributed execution of demanding parallel computing software in grids should be achieved through implementation of grid-wise allocation agreements, co-allocation subsystems, communication topology-aware allocation mechanisms, fault tolerant message passing libraries and data pre-conditioning.[86]

Cloud computing, with its recent and rapid expansion and development, has grabbed the attention of HPC users and developers in recent years. Cloud computing attempts to provide HPC-as-a-service exactly like other forms of services currently available in the cloud, such as software-as-a-service, platform-as-a-service, and infrastructure-as-a-service. HPC users may benefit from the cloud in different ways, such as scalability and resources being on-demand, fast, and inexpensive. On the other hand, moving HPC applications to the cloud has a set of challenges too. Good examples of such challenges are virtualization overhead in the cloud, multi-tenancy of resources, and network latency issues. Much research[87][88][89][90] is currently being done to overcome these challenges and make HPC in the cloud a more realistic possibility.

Supercomputers generally aim for the maximum in capability computing rather than capacity computing. Capability computing is typically thought of as using the maximum computing power to solve a single large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can, e.g., a very complex weather simulation application.[91]

Capacity computing, in contrast, is typically thought of as using efficient cost-effective computing power to solve a few somewhat large problems or many small problems.[91] Architectures that lend themselves to supporting many users for routine everyday tasks may have a lot of capacity but are not typically considered supercomputers, given that they do not solve a single very complex problem.[91]

In general, the speed of supercomputers is measured and benchmarked in "FLOPS" (FLoating point Operations Per Second), and not in terms of "MIPS" (Million Instructions Per Second), as is the case with general-purpose computers.[92] These measurements are commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10^12 FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10^15 FLOPS, pronounced petaflops). "Petascale" supercomputers can process one quadrillion (10^15) (1,000 trillion) FLOPS. Exascale is computing performance in the exaFLOPS (EFLOPS) range. An EFLOPS is one quintillion (10^18) FLOPS (one million TFLOPS).

No single number can reflect the overall performance of a computer system, yet the goal of the Linpack benchmark is to approximate how fast the computer solves numerical problems and it is widely used in the industry.[93] The FLOPS measurement is either quoted based on the theoretical floating point performance of a processor (derived from manufacturer's processor specifications and shown as "Rpeak" in the TOP500 lists), which is generally unachievable when running real workloads, or the achievable throughput, derived from the LINPACK benchmarks and shown as "Rmax" in the TOP500 list.[94] The LINPACK benchmark typically performs LU decomposition of a large matrix.[95] The LINPACK performance gives some indication of performance for some real-world problems, but does not necessarily match the processing requirements of many other supercomputer workloads, which for example may require more memory bandwidth, or may require better integer computing performance, or may need a high performance I/O system to achieve high levels of performance.[93]
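
For a rough sense of where an "Rpeak" figure comes from, the sketch below multiplies node count, cores per node, clock rate, and floating-point operations per cycle; all of the inputs are hypothetical, chosen only to show the arithmetic, not taken from any vendor specification.

```python
# Back-of-the-envelope theoretical peak (the kind of figure reported as "Rpeak").
# All inputs below are hypothetical, chosen only to demonstrate the arithmetic.
nodes = 4_608
cores_per_node = 44
clock_ghz = 3.0
flops_per_cycle = 16          # e.g. wide SIMD units with fused multiply-add

rpeak_gflops = nodes * cores_per_node * clock_ghz * flops_per_cycle
print(f"Rpeak ~ {rpeak_gflops / 1e6:.2f} PFLOPS")
```

The measured Rmax from running LINPACK on the full machine is always lower than this product, since no real workload keeps every floating-point unit busy on every cycle.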

Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.

This is a recent list of the computers which appeared at the top of the TOP500 list,[96] and the "Peak speed" is given as the "Rmax" rating.

Source: TOP500

In 2018, Lenovo became the world's largest provider (117) of TOP500 supercomputers.[97]

The stages of supercomputer application may be summarized in the following table:

The IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to simulate a number of artificial neurons equivalent to the entirety of a rat's brain.[104]

Modern-day weather forecasting also relies on supercomputers. The National Oceanic and Atmospheric Administration uses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate.[105]

In 2011, the challenges and difficulties in pushing the envelope in supercomputing were underscored by IBM's abandonment of the Blue Waters petascale project.[106]

The Advanced Simulation and Computing Program currently uses supercomputers to maintain and simulate the United States nuclear stockpile.[107]

Currently, China, the United States, the European Union, and others are competing to be the first to create a 1 exaFLOPS (10^18, or one quintillion, FLOPS) supercomputer, with estimates of completion ranging from 2019 to 2022.[108]

Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaFLOPS (10^21, or one sextillion, FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately.[109][110][111] Such systems might be built around 2030.[112]

Many Monte Carlo simulations use the same algorithm to process a randomly generated data set; particularly, integro-differential equations describing physical transport processes, the random paths, collisions, and energy and momentum depositions of neutrons, photons, ions, electrons, etc. The next step for microprocessors may be into the third dimension; and specializing to Monte Carlo, the many layers could be identical, simplifying the design and manufacture process.[113]
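
As a toy illustration of the Monte Carlo pattern, in which the same simple algorithm is applied to a large randomly generated data set and the work divides trivially across processors, here is a short sketch that estimates pi; it stands in for the far more elaborate particle-transport simulations described above.

```python
# Toy Monte Carlo: estimate pi by sampling random points in the unit square.
# The same per-sample algorithm repeats over random data, so the work splits
# trivially across as many processors as are available.
import random

def estimate_pi(samples: int) -> float:
    inside = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples

print(estimate_pi(1_000_000))   # approaches 3.14159... as the sample count grows
```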

There are several international efforts to understand how supercomputing will develop over the next decade. The ETP4HPC Strategic Research Agenda (SRA) outlines a technology roadmap for exascale in Europe.[114] The Eurolab4HPC Vision provides a long-term roadmap (2023–2030) for academic excellence in HPC.[115]

High performance supercomputers usually require high energy, as well. However, Iceland may be a benchmark for the future with the world's first zero-emission supercomputer. Located at the Thor Data Center in Reykjavik, Iceland, this supercomputer relies on completely renewable sources for its power rather than fossil fuels. The colder climate also reduces the need for active cooling, making it one of the greenest facilities in the world of computers.[116]

Many science-fiction writers have depicted supercomputers in their works, both before and after the historical construction of such computers. Much of such fiction deals with the relations of humans with the computers they build and with the possibility of conflict eventually developing between them. Some scenarios of this nature appear on the AI-takeover page.

Examples of supercomputers in fiction include HAL-9000, Multivac, The Machine Stops, GLaDOS, The Evitable Conflict and Vulcan's Hammer.

More:

Supercomputer - Wikipedia

Euthanasia, Assisted Suicide & Health Care Decisions …

Euthanasia, Assisted Suicide & Health Care Decisions: Protecting Yourself & Your Family

Table of Contents | Part 1 | Part 2

by Rita L. Marker

INTRODUCTION

The words euthanasia and assisted suicide are often used interchangeably. However, they are different and, in the law, they are treated differently. In this report, euthanasia is defined as intentionally, knowingly and directly acting to cause the death of another person (e.g., giving a lethal injection). Assisted suicide is defined as intentionally, knowingly and directly providing the means of death to another person so that the person can use that means to commit suicide (e.g., providing a prescription for a lethal dose of drugs).

Part I of this report discusses the reasons used by activists to promote changes in the law; the contradictions that the actual proposals have with those reasons; and the logical progression that occurs when euthanasia and assisted suicide are transformed into medical treatments. It explores the failure of so-called safeguards and outlines the impact that euthanasia and assisted suicide have on families and society in general.

Withholding and withdrawing medical treatment and care are not legally considered euthanasia or assisted suicide. Withholding or withdrawing food and fluids is considered acceptable removal of a medical treatment.

Part II of this report includes information about practical ways to protect oneself and loved ones during any time of incapacity and a discussion of some of the policies that have led to patients being denied care that they or their decision-makers have requested. It concludes with an examination of the ethical distinction between treatment and care.

PART I

EUTHANASIA & ASSISTED SUICIDE

MOVING THE BOUNDARIES

In 2002, the International Task Force report, Assisted Suicide: Not for Adults Only? (1) discussed euthanasia and assisted suicide for children and teens. At that time, such concerns were largely considered outside the realm of possibility.

Then, as now, assisted-suicide advocates claimed that they were only trying to offer compassionate options for competent, terminally ill adults who were suffering unbearably. By and large, their claims went unchallenged.

A crack in that carefully honed image appeared in 2004 when the Groningen Protocol elicited worldwide outrage. The primary purpose of that protocol, formulated by doctors at the Groningen Academic Hospital in the Netherlands, was to legally and professionally protect Dutch doctors who kill severely disabled newborns. (2)

While euthanasia for infants (infanticide) was not new, widespread discussion of it was. Dutch doctors were now explaining that it was a necessary part of pediatric care.

Also in 2004, Holland's most prestigious medical society (KNMG) urged the Health Ministry to set up a board to review euthanasia for people who had "no free will," including children and individuals with mental retardation or severe brain damage following accidents. (3)

At first, it seemed that these revelations would be harmful to the euthanasia movement, but the opposite was true.

Why?

Awareness of infanticide and euthanasia deaths of other incompetent patients moved the boundaries.

Prior to the widespread realization that involuntary euthanasia was taking place, advocacy of assisted suicide for those who request it seemed to be on one end of the spectrum. Opposition to it was on the other end.

Now the practice of involuntary euthanasia took its place as one extreme and opposition to it as the other, while assisted suicide for terminally ill competent adults appeared to occupy the "moderate middle," a very advantageous political position. Expansion of the practice to others had entered the realm of respectable debate.

This repositioning has become a tool in the assisted-suicide arsenal. In May 2006, an assisted-suicide bill, patterned after Oregon's law permitting assisted suicide, failed to gain approval in the British Parliament. The bill's supporters immediately declared that they would reintroduce it during the next parliamentary session.

Within two weeks, Professor Len Doyal, a former member of the British Medical Association's ethics committee who is considered one of England's leading experts on medical ethics, called for doctors to be able to end the lives of some patients "swiftly, humanely and without guilt," even without the patient's consent. (4) Doyal's proposal was widely reported and, undoubtedly, when the next assisted-suicide bill is introduced in England, a measure that would permit assisted suicide only for consenting adults will appear less radical than it might have seemed prior to Doyal's suggestion.

Currently, euthanasia is a medical treatment in the Netherlands and Belgium. Assisted suicide is a medical treatment in the Netherlands, Belgium and Oregon. Their advocates erroneously portray both practices as personal, private acts. However, legalization is not about the private and the personal. It is about public policy, and it affects ethics, medicine, law, families and children.

A FAMILY AFFAIR

In December 2005, ABC News World News Tonight reported, "Anita and Frank go often to the burial place of their daughter Chanou." Chanou died when, with her parents' consent, doctors gave her a lethal dose of morphine. "I'm convinced that if we meet again somewhere in heaven," her father said, "she'll tell us we reached the most perfect solution." (5)

The report about the six-month-old Dutch child's death was introduced as a report on the debate over euthanizing infants. A Dutch legislator who agrees that doctors who intentionally end their tiny patients' lives should not be prosecuted said, "I'm certainly pro-life. But I'm also a human being. I think when there is extreme, unbearable suffering, then there can be extreme relief." (6)

Gone was the previous year's outrage over the Groningen Protocol. Infanticide had entered the realm of respectable debate in the mainstream media. The message given to viewers was that loving parents, compassionate doctors and caring legislators favor infanticide. It left the impression that opposing such a death would be cold, unfeeling and, perhaps, intentionally cruel.

In Oregon, some assisted-suicide deaths have become family or social events.

Oregon's law does not require family members to know that a loved one is planning to commit suicide with a doctor's help. (7) Thus, the first knowledge of those plans could come when a family member finds the body. However, as two news features illustrate, some Oregonians who die from assisted suicide make it a teachable moment for children or a party event for friends and family.

According to the Mail Tribune (Medford, Oregon), on a sunny afternoon Joan Lucas rode around looking at houses, then sat in a park eating an ice cream cone. A few hours later, she committed suicide with a prescribed deadly drug overdose. Grandchildren were made to understand that Grandma Joan would be going away soon. Those who were old enough to understand were told what was happening. (8)

Did these children learn from Grandma Joan that suicide is a good thing?

UCLA's student newspaper, the Daily Bruin, carried an article favoring assisted suicide. It described how Karen Janoch, who committed suicide under the Oregon law, sent invitations for her suicide to about two dozen of her closest friends and family. The invitation read, "You are invited to attend the actual ending of my life." (9) At the same time that California's legislature was considering an assisted-suicide bill virtually identical to Oregon's law, UCLA students learned that suicide can be the occasion for a party.

In Oregon, assisted suicide has gone from the appalling to the appealing, from the tragic to the banal.

During the last half of 2005 and the first half of 2006, bills to legalize assisted suicide were under consideration in various states and countries including, but not limited to, Canada, Great Britain, California, Hawaii, Vermont, and Washington. All had met failure by the end of June 2006. But plans to reintroduce them with some cosmetic changes are currently underway. A brief examination of arguments used to promote them illustrates the small world nature of assisted-suicide advocacy.

TWO PILLARS OF ADVOCACY

Wherever an assisted-suicide measure is proposed, proponents' arguments and strategies are similar. Invariably, promotion rests on two pillars: autonomy and the elimination of suffering.

Autonomy

Autonomy (independence and the right of self-determination) is certainly valued in modern society and patients do, and should, have the right to accept or reject medical treatment. However, those who favor assisted suicide claim that autonomy extends to the right of a patient to decide when, where, how and why to die as the following examples illustrate.

During debate over an assisted-suicide measure then pending before the British Parliament, proponents emphasized personal choice. The bill, titled the Assisted Dying for the Terminally Ill Bill, was introduced by Lord Joel Joffe. Dr. Margaret Branthwaite, a physician, barrister and former head of England's Voluntary Euthanasia Society (recently renamed Dignity in Dying (10)), called for passage of the Joffe bill in an article in the British Medical Journal. "As a matter of principle," she wrote, "it reinforces current trends towards greater respect for personal autonomy." (11)

The focus on autonomy was also reflected in remarks about a plan to introduce an assisted-suicide initiative in Washington. Booth Gardner, former governor of Washington, said he plans to promote the initiative because it should be his decision when and how he dies. He told the Seattle Post-Intelligencer, "When I go, I want to decide." (12)

The rationale is that when, where, why and how one dies should be a matter of self-determination, a matter of independent choice, and a matter of personal autonomy.

Elimination of suffering

The second pillar of assisted-suicide advocacy is elimination of suffering. During each and every attempt to permit euthanasia and assisted suicide, its advocates stress that ending suffering justifies legalization of the practices.

California Assemblywoman Patty Berg, the co-sponsor of California's euphemistically named Compassionate Choices Act, (13) said the assisted-suicide measure was necessary so that people would have the comfort of knowing they could escape unbearable suffering if that were to occur. (14)

In an opinion piece supporting the failed 1998 assisted-suicide initiative in Michigan, a spokesperson for those favoring the measure wrote that the patients targeted by the proposal were those who were tortured by the unbearable suffering of a slow and agonizing death. (15)

In the United Kingdom, Lord Joffe said his bill would enable those who are suffering unbearably to get medical assistance to die. (16) Testimony before the British House of Lords Select Committee studying the bill noted that, where assisted dying has been legalized, it has done so as a response to patients who were suffering. (17)

The centerpiece of the 1994 Measure 16 campaign that resulted in Oregon's assisted-suicide law was a television commercial featuring Patti Rosen. Describing her daughter, who had cancer, Rosen said, "The pain was so great that she couldn't bear to be touched. Measure 16 would have allowed my daughter to die with dignity." (18)

When an assisted-suicide proposal that later failed was being considered by the Hawaiian legislature in 2002, a public relations consultant who was working on behalf of the bill e-mailed a template for use in written or oral testimony. The template suggested inclusion of the phrases "agonizingly painful," "pain was uncontrollable," and "pain beyond my understanding." (19)

During consideration of an assisted-suicide bill in Vermont, the state's former governor Philip Hoff said, "The last thing I would want in this world is to be around and be in pain, and have no quality of life, and be a burden to my family and others." (20) Dick Walters, chairman of Death with Dignity Vermont, said the proposal would permit a person to "peacefully end suffering and hasten death." (21)

Thus, the rationale given by euthanasia and assisted-suicide proponents for legalization always includes autonomy and/or elimination of suffering. However, the laws they propose actually contradict this rationale.

CONTRADICTIONS

When proposed, laws such as those now in existence in Oregon and similar measures introduced elsewhere include conditions or requirements limiting assisted suicide to certain groups of qualified patients. A patient qualified to receive the treatment of assisted suicide must be an adult who is capable of making decisions and must be diagnosed with a terminal condition.

If one accepts the premise that assisted suicide is a good medical treatment that should be permitted on the basis of personal autonomy or elimination of suffering, other questions must be raised.

If the reason for permitting assisted suicide is autonomy, why should assisted suicide be limited to the terminally ill?

Does one's autonomy depend upon a doctor's diagnosis (or misdiagnosis) of a terminal illness? If a person is not terminally ill but is suffering, whether physically, psychologically or emotionally, why isn't it up to that person to decide when, why and how to die? Does a person only have autonomy if he or she has a particular condition or illness? Is autonomy a basis for the law?

If assisted suicide is a good and acceptable medical treatment for the purpose of ending suffering, why should it be limited to adults who are capable of decision-making?

Isn't it both discriminatory and cruel to deny that good and acceptable medical treatment to a child or an incompetent adult? Why is a medical treatment that has been deemed appropriate to end suffering available to an 18-year-old, but not to a 16-year-old or 17-year-old? Why is a person only eligible to have his or her suffering ended if he or she has reached an arbitrary age?

And, what of the adult who never was, or no longer is, capable of decision-making? Should that person be denied medical treatment that ends suffering? Are euthanasia and assisted-suicide laws based on the need to eliminate suffering, or not?

Establishing arbitrary requirements that must be met prior to qualifying for the medical treatment of euthanasia or assisted suicide does, without doubt, contradict the two pillars on which justification for the practices is based.

The question then must be asked: Why are those arbitrary requirements included in Oregon's law and other similar proposals? The answer is simple. After a series of defeats, euthanasia and assisted-suicide proponents learned that they had to propose laws that appeared palatable.

In April 2005, Lord Joffe, the British bill's sponsor, acknowledged that his bill was intended to be only the first step. During hearings regarding the measure, he said that "this is the first stage" and went on to explain that "one should go forward in incremental stages. I believe that this bill should initially be limited." (22)

He repeated his remarks a year later when discussing hearings about his bill. "I can assure you that I would prefer that the [proposed] law did apply to patients who were younger and who were not terminally ill but who were suffering unbearably," he said, and added, "I believe that this bill should initially be limited." (23)

STEP-BY-STEP APPROACH

Proposals for euthanasia and assisted suicide have always emanated from advocacy groups, not from any grassroots desire. Those groups learned that attempting to go too far, too fast, leads to certain defeat.

After many failed attempts, most recently those in the early 1990s in Washington and California, when ballot initiatives that would have permitted both euthanasia by lethal injection and assisted suicide by lethal prescription were resoundingly defeated, "death with dignity" activists changed their strategy. They decided to take a step-by-step approach, proposing an assisted-suicide-only bill which, when passed, would serve as a model for subsequent laws. Only after several such laws were passed would they begin to expand them. That was the strategy that led to Oregon's Measure 16, the Oregon Death with Dignity Act.

Those who were most involved in the successful Oregon strategy were not new to the scene.

Cheryl K. Smith, who wrote the first draft of Oregon's law, had served as a special counsel to the political action group Oregon Right to Die (ORD). Smith had been the National Hemlock Society's legal advisor after her graduation from law school in 1989 and had been a top aide to Hemlock's co-founder, Derek Humphry. While a student at the University of Iowa College of Law, Smith helped draft a Model Aid-in-Dying Act that provided for children's lives to be terminated either at their own request or, if under 6 years of age, by parental request. (24)

Barbara Coombs Lee was Measure 16's chief petitioner. At the time, she was a vice president for a large Oregon managed care program. After the law's passage, she took over the leadership of Compassion in Dying. (25) [Note: In early 2005, Compassion in Dying merged with the Hemlock Society. The combined organization is now called Compassion and Choices.]

Coombs Lee's promotion of assisted suicide and euthanasia began prior to her involvement with the Death with Dignity Act. As a legislative aide to Oregon Senator Frank Roberts in 1991, she worked on Senate Bill 114, which would have permitted euthanasia on request of a patient and, if the patient was not competent, would have authorized a designated representative to request the patient's death. (26)

Upon passage of the Oregon law in 1994, many assisted-suicide supporters were certain that other states would immediately fall in line. However, that did not occur. Between 1994 and mid-2006, assisted-suicide measures were introduced in state after state. (27) Each and every proposal failed. All of the proposals were assisted-suicide-only bills and, with one exception, (28) every one was virtually identical to the Oregon law.

Among supporters of assisted suicide and euthanasia, though, the Oregon law is seen as the model for success and is referred to in debates about assisted suicide throughout the world. For that reason, a careful examination of the Oregon experience is vital to understanding the problems with legalized assisted suicide.

OREGON

Under Oregon's law permitting physician-assisted suicide, the Oregon Department of Human Services (DHS), previously called the Oregon Health Division (OHD), is required to collect information, review a sample of cases and publish a yearly statistical report. (29)

However, due to major flaws in the law and the states reporting system, there is no way to know for sure how many or under what circumstances patients have died from physician-assisted suicide. Statistics from official reports are particularly questionable and have left some observers skeptical about their validity.

For example, when a similar proposal was under consideration in the British Parliament, members of a House of Lords Committee traveled to Oregon seeking information regarding Oregons law for use in their deliberations. The public and press were not present during the closed-door hearings. However, the House of Lords published the committees proceedings in three lengthy volumes, which included the exact wording of questions and answers.

After hearing witnesses claim that there have been no complications associated with more than 200 assisted-suicide deaths, committee member Lord McColl of Dulwich, a surgeon, said, "If any surgeon or physician had told me that he did 200 procedures without any complications, I knew that he possibly needed counseling and had no insight. We come here and I am told there are no complications. There is something strange going on." (30)

The following includes statistical data from official reports and other published information dealing with troubling aspects of the practice of assisted suicide in Oregon. Statements from the 744-page second volume of the House of Lords committee proceedings are also included. None of the included statements from the committee hearings were made by opponents of Oregon's law.

OFFICIAL REPORTS

Assisted-suicide deaths reported during the first eight years

Official Reports: 246

Actual Number: Unknown

The latest annual report indicates that reported assisted-suicide deaths have increased by more than 230% since the first year of legal assisted suicide in Oregon. (31) The numbers, however, could be far greater. From the time the law went into effect, Oregon officials in charge of formulating annual reports have conceded there's no way to know if additional deaths went unreported, because Oregon DHS has no regulatory authority or resources to ensure compliance with the law. (32)

The DHS has to rely on the word of doctors who prescribe the lethal drugs. (33) Referring to physicians' reports, the reporting division admitted: "For that matter the entire account [received from a prescribing doctor] could have been a cock-and-bull story. We assume, however, that physicians were their usual careful and accurate selves." (34)

The Death with Dignity law contains no penalties for doctors who do not report prescribing lethal doses for the purpose of suicide.

Complications occurring during assisted suicide

Official Reports: 13 (12 instances of vomiting & one patient who did not die from the lethal dose)

Actual number: Unknown

Prescribing doctors may not know about all complications since, over the course of eight years, physicians who prescribed the lethal drugs for assisted suicide were present at only 19.5% of reported deaths. (35) Information they provide might come from secondhand accounts of those present at the deaths (36) or may be based on guesswork.

When asked if there is any systematic way of finding out and recording complications, Dr. Katrina Hedberg, who was a lead author of most of Oregon's official reports, said, "Not other than asking physicians." (37) She acknowledged that after they write the prescription, the physician may not keep track of the patient. (38) Dr. Melvin Kohn, a lead author of the eighth annual report, noted that, in every case that they hear about, "it is the self-report, if you will, of the physician involved." (39)

Complications contained in news reports are not included in official reports

Patrick Matheny received his lethal prescription from Oregon Health Science University via Federal Express. He had difficulty when he tried to take the drugs four months later. His brother-in-law, Joe Hayes, said he had to help Matheny die. According to Hayes, "It doesn't go smoothly for everyone. For Pat it was a huge problem. It would have not worked without help." (40) The annual report did not make note of this situation.

Speaking at Portland Community College, pro-assisted-suicide attorney Cynthia Barrett described a botched assisted suicide. "The man was at home. There was no doctor there," she said. "After he took it [the lethal dose], he began to have some physical symptoms. The symptoms were hard for his wife to handle. Well, she called 911. The guy ended up being taken by 911 to a local Portland hospital. Revived. In the middle of it. And taken to a local nursing facility. I don't know if he went back home. He died shortly, some period of time after that." (41)

Overdoses of barbiturates are known to cause vomiting as a person begins to lose consciousness. The patient then inhales the vomit. In other cases, panic, feelings of terror and assaultive behavior can occur from the drug-induced confusion. (42) But Barrett would not say exactly which symptoms had taken place in this instance. She has refused any further discussion of the case.

Complications are not investigated

David Prueitt took the prescribed lethal dose in the presence of his family and members of Compassion & Choices. After being unconscious for 65 hours, he awoke. It was only after his family told the media about the botched assisted suicide that Compassion & Choices publicly acknowledged the case. (43) DHS issued a release saying it has no authority to investigate individual Death with Dignity cases. (44)

Referring to DHS's ability to look into complications, Dr. Hedberg explained that "we are not given the resources to investigate" and that "not only do we not have the resources to do it, but we do not have any legal authority to insert ourselves." (45)

David Hopkins, Data Analyst for the Eighth Annual Report, said, "We do not report to the Board of Medical Examiners if complications occur; no, it is not required by law and it is not part of our duty." (46)

Jim Kronenberg, the Oregon Medical Association's (OMA) Chief Operating Officer, explained that the way the law is set up, there is really no way to determine that [complications occurred] "unless there is some kind of disaster. [P]ersonally I have never had a report where there was a true disaster," he said. "Certainly that does not mean that you should infer there has not been, I just do not know." (47)

In the Netherlands, assisted-suicide complications and problems are not uncommon. One Dutch study found that, because of problems or complications, doctors in the Netherlands felt compelled to intervene (by giving a lethal injection) in 18% of cases.(48)

This led Dr. Sherwin Nuland of Yale University School of Medicine to question the credibility of Oregon's lack of reported complications. Nuland, who favors physician-assisted suicide, noted that the Dutch have had years of practice to learn ways to overcome complications, yet complications are still reported. "The Dutch findings seem more credible [than the Oregon reports]," he wrote. (49)

Read the original post:

Euthanasia, Assisted Suicide & Health Care Decisions ...

Satoshi Nakamoto was interested in joining Tron's Atlas …

The CEO and founder of the Tron Foundation has revealed that Satoshi Nakamoto, the unknown person who developed bitcoin, was himself interested in joining hands with Tron's Atlas project.

Sun wrote on Twitter on Wednesday, "It seems that Satoshi Nakamoto himself was interested in joining our Atlas project since Nov. 03 2008."

The CEO revealed Tron's secret project, Atlas, on July 30. Tron enthusiasts had been anticipating its announcement for quite some time.

Sources report that the Atlas project is an amalgamation of BitTorrent and Tron. Its initial phase has been completed and a roadmap for the upcoming three months has been charted out. Further details will be revealed by the end of August.

The CEO revealed, "Finally, I would like to say a few things about our project, our secret project. Now we name it Atlas, after the BitTorrent acquisition."

"Bitcoin: A Peer-to-Peer Electronic Cash System," created by Satoshi Nakamoto, says: "for transferrable proof of work tokens to have value, they must have monetary value. To have monetary value, they must be transferred within a very large network - for example a file-trading network akin to BitTorrent."
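
For readers unfamiliar with the term in that quote, a "proof of work token" is a piece of data whose creation demonstrably consumed computation: expensive to produce, but cheap for any peer in the network to verify. The sketch below is a generic, hashcash-style illustration of that idea, not Bitcoin's or BitTorrent's actual code; the function names and the difficulty setting are invented for the example.

import hashlib
from itertools import count

DIFFICULTY_BITS = 18  # illustrative; higher means more work to mint a token

def mint_token(payload: str, difficulty_bits: int = DIFFICULTY_BITS) -> int:
    """Search for a nonce whose SHA-256 digest of "payload:nonce" falls below
    a target, i.e. starts with `difficulty_bits` zero bits. Finding the nonce
    takes roughly 2**difficulty_bits hash attempts on average."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(f"{payload}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_token(payload: str, nonce: int, difficulty_bits: int = DIFFICULTY_BITS) -> bool:
    """Verification needs only a single hash, so any peer can check a token."""
    digest = hashlib.sha256(f"{payload}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

if __name__ == "__main__":
    nonce = mint_token("example-file-chunk")
    print("nonce:", nonce, "valid:", verify_token("example-file-chunk", nonce))

The point of the quote is economic rather than cryptographic: tokens minted this way only become worth anything if a very large network of peers, such as a file-trading swarm, is willing to accept and re-verify them.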

In order to make BitTorrent swarms faster and longer-lived, the foundation is exploring the opportunity to add more features to the BitTorrent protocol with the help of the Tron protocol. The Tron network will act as the underlying protocol for the Atlas project.

The founder and CEO of the foundation said, "Starting from today, Tron will enter a new phase of expanding our current ecosystem. We are grateful for what we have achieved in the past and we are looking forward to seeing what the future holds for us."

Tron has witnessed a series of new developments in the past few weeks. In fact, on July 30, Tron launched the Tron Virtual Machine and introduced a new project known as Project Atlas with BitTorrent. Now, blockchain founder Justin Sun's next move is to set up an office in India, the South China Morning Post reported.

Sun recently moved to his new office in Beijing and he is already making expansion plans. His next move is to set up an office in India. Currently, crypto enthusiasts in India are waiting for the Supreme Court of India's final verdict on the fate of cryptocurrency exchanges in the country.


Read the original here:

Satoshi Nakamoto was interested in joining Tron's Atlas ...

Censorship Synonyms, Censorship Antonyms | Thesaurus.com

Patricia forgot her censorship as the spirit of the explorer rose in her.

The Duc wondered what a censorship would let pass if there were one.

No: she had heard too much of it; it made you almost wish for a Censorship of the Press.

The newsletters, of course, might be under the censorship of Rome and Naples.

The discovery of a new spot on the sun is evidently a case for the censorship.

I call the censorship chaotic because of the chaos in its administration.

He got the impression that she put off all censorship from either her feeling or her expression.

A few voices, however, were raised in favour of a censorship.

I wish to claim no censorship over the style and diction of your letters.

How absurd, how inadequate this all is we see from the existence of the Censorship on Drama.

More here:

Censorship Synonyms, Censorship Antonyms | Thesaurus.com