Ascension | Call of Duty Wiki | Fandom powered by Wikia

"The risen dead have overtaken a Soviet Cosmodrome and all Hell has broken loose. The countdown to the zombie apocalypse has begun." Ascension level description.

Ascension is the eighth Zombies map, included in the First Strike downloadable content pack for Call of Duty: Black Ops. It was released for $14.99 USD on February 1, 2011 for the Xbox 360, on March 3, 2011 for the PlayStation 3, and on March 25, 2011 for PC.

Ascension takes place in an abandoned Soviet Cosmodrome. The map features Tank Dempsey, Nikolai Belinski, Takeo Masaki, and Edward Richtofen as playable characters.

Among the new additions are two Wonder Weapons, the Gersch Device and the Matryoshka Dolls (replacing the Monkey Bombs). Double Tap Root Beer does not appear, but two new perks are introduced: PhD Flopper and Stamin-Up.

A new enemy, the Space Monkey, also appears in this map, replacing the Hellhounds and the Pentagon Thief. A new power-up, the Random Perk Bottle, is also available.

Ascension features two new perks, both costing 2000 points. Each player can normally hold only four perks at a time; the one exception is a player who already has four perks and obtains the Random Perk Bottle, which grants all five (later six) perks at once.

PhD Flopper is located outside, near the "D" Lunar Lander launch pad by the Fragmentation Grenades. The perk creates a small "nuke" when the player dives to prone, but only if the dive would normally hurt the player (almost any dive from above flat ground); landing directly on top of a zombie appears to neutralize the effect. The nuke kills all nearby zombies outright until roughly the round 20s. The perk also removes all fall damage and makes the player immune to self-inflicted explosive damage, including the Ray Gun's splash damage, launchers fired at close range (such as the M72 LAW or the China Lake), the Mustang & Sally, fragmentation grenades, and Matryoshka Dolls; the player can even overcook their own grenade without being harmed. However, the player still takes explosive damage from a grenade tossed back by a Space Monkey.
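Those trigger rules amount to a small piece of branching logic. Here is a minimal sketch of them in Python (all names are hypothetical; this is an illustration of the description above, not the game's actual code):

```python
# Hypothetical model of the PhD Flopper rules described above.

def flopper_blast_triggers(dive_causes_fall_damage: bool,
                           landed_on_zombie: bool) -> bool:
    """Return True if a dive-to-prone should produce the 'nuke' blast."""
    if not dive_causes_fall_damage:
        return False   # dives over flat ground never trigger the blast
    if landed_on_zombie:
        return False   # landing directly on a zombie neutralizes the effect
    return True

def blast_kills_outright(current_round: int) -> bool:
    """The blast only kills reliably until roughly round 20."""
    return current_round <= 20

def takes_explosive_damage(source: str) -> bool:
    """Self-inflicted explosive splash is ignored, but a grenade
    thrown back by a Space Monkey still hurts the player."""
    immune_sources = {"ray_gun_splash", "close_range_launcher",
                      "mustang_and_sally", "frag_grenade",
                      "matryoshka_dolls", "overcooked_own_grenade"}
    return source not in immune_sources

# Example: a dive off a ledge onto open ground at round 12.
print(flopper_blast_triggers(True, False) and blast_kills_outright(12))  # True
print(takes_explosive_damage("monkey_returned_grenade"))                 # True
```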

Stamin-Up is located near the AK-74u, towards another Lunar Lander launch pad. The perk gives the player increased movement speed and sprint duration, similar to Marathon and Lightweight combined. Since Ascension is a rather large map, the perk is more useful than it might first appear, especially during Space Monkey rounds, when a player may need to reach a Perk-a-Cola machine under attack. It also helps when running from zombies, and it is even more useful in Call of the Dead.

The man speaking at the beginning is Gersch, who gives the players a mission: complete the node puzzle to repair the Casimir Mechanism. Once the mechanism is repaired, each player receives a Death Machine that lasts for 90 seconds. This is the map's major Easter egg and requires four players to complete.

Chimp on the Barbie (35G / Bronze Trophy) - Kill a space monkey with a fire pit.

The Eagle has Landers (35G / Bronze Trophy) - Use all three lunar landers.

They are going THROUGH! (35G / Bronze Trophy) - Kill at least 5 zombies with one Gersch Device.

Space Race (45G / Silver Trophy) - Pack-a-Punch a weapon before round 8.

Gallery: the Ascension leak in the iTunes album; the old thumbnail (used on PC) and the new thumbnail (used on consoles); zombies, two of them wearing gas masks in the front; the loading screen (note the 115 in the rocket's shadow); "Ascension" on the numbers broadcast paper; another view of the map, similar to its thumbnail; and the rocket rising, being targeted (note the red reticles), and being destroyed.

Videos: the official Ascension trailer; "Abracadavre" by Elena Siegman and Kevin Sherwood, the map's music easter egg; the Game Over song, shared with Call of the Dead; the PhD Flopper and Stamin-Up jingles; destroying the rocket; the loading screen song; and round 44 gameplay.

See more here:

Ascension | Call of Duty Wiki | Fandom powered by Wikia

Space | National Archives

Information about the United States space flight programs, including NASA missions and the astronauts who participate in the efforts to explore space.


NARA Resources

Finding Aids for NARA Records on Space Exploration


Presidential Libraries

The Dwight D. Eisenhower Library and Museum: Space Sources

John F. Kennedy Library & Museum: Space Sources

Lyndon Baines Johnson Library and Museum: Space Resources

Richard Nixon Library: Space Resources

Gerald R. Ford Library and Museum: Space Resources


Jimmy Carter Library and Museum: Space Resources

Ronald Reagan Presidential Library: Space Resources

George Bush Presidential Library and Museum: Space Resources

William J. Clinton Presidential Library: Space Resources

General Space Exploration Resources

Visit link:

Space | National Archives

NATO bombing of Yugoslavia – Wikipedia

Operation Allied Force (part of the Kosovo War). Pictured in the original infobox: Novi Sad on fire, 1999.

Belligerents: NATO vs. the Federal Republic of Yugoslavia.

NATO commanders: Wesley Clark (SACEUR), Rupert Smith, Javier Solana.

NATO strength: over 1,031 aircraft.[11][12]

Civilian deaths: Human Rights Watch verified that around 500 civilians died as a result of air attacks, nearly 60% of whom were in Kosovo.[16][17] Serbian sources estimated between 1,200 and 5,700 civilian deaths.[16]

The NATO bombing of Yugoslavia was the North Atlantic Treaty Organisation's (NATO) military operation against the Federal Republic of Yugoslavia (FRY) during the Kosovo War. The air strikes lasted from March 24, 1999 to June 10, 1999. The official NATO operation code name was Operation Allied Force; the United States called it Operation Noble Anvil,[18] while in Yugoslavia the operation was incorrectly called "Merciful Angel" (Serbian Cyrillic: Милосрдни анђео), as a result of a misunderstanding or mistranslation.[19] The bombings continued until an agreement was reached that led to the withdrawal of Yugoslav armed forces from Kosovo and the establishment of the United Nations Interim Administration Mission in Kosovo (UNMIK), a UN peacekeeping mission.

NATO claimed that the Albanian population in Kosovo was being persecuted by FRY forces, Serbian police, and Serb paramilitary forces, and that military action was needed to force the FRY to stop. NATO countries attempted to gain authorization from the United Nations Security Council for military action, but were opposed by China and Russia, which indicated they would veto such a proposal. NATO instead launched its campaign without UN authorization, describing it as a humanitarian intervention. The FRY described the NATO campaign as an illegal war of aggression against a sovereign country, in violation of international law because it lacked UN Security Council support.

The bombing killed between 489 and 528 civilians, and destroyed bridges, industrial plants, public buildings, private businesses, as well as barracks and military installations.

The NATO bombing marked the second major combat operation in the alliance's history, following the 1995 NATO bombing campaign in Bosnia and Herzegovina. It was the first time that NATO had used military force without the approval of the UN Security Council.[20]

Moscow attacked it as a breach of international law and a challenge to Russia's status.[21]

After its autonomy was quashed, Kosovo was faced with state-organized oppression: from the early 1990s, Albanian-language radio and television were restricted and newspapers shut down. Kosovar Albanians were fired in large numbers from public enterprises and institutions, including banks, hospitals, the post office and schools.[22] In June 1991 the University of Priština assembly and several faculty councils were dissolved and replaced by Serbs. Kosovar Albanian teachers were prevented from entering school premises for the new school year beginning in September 1991, forcing students to study at home.[22]

Kosovar Albanians later started an insurgency against Belgrade when the Kosovo Liberation Army was founded in 1996. Armed clashes between the two sides broke out in early 1998. A NATO-facilitated ceasefire was signed on 15 October, but both sides broke it two months later and fighting resumed. When the killing of 45 Kosovar Albanians in the Račak massacre was reported in January 1999, NATO decided that the conflict could only be settled by introducing a military peacekeeping force to forcibly restrain the two sides. After the Rambouillet Accords broke down on 23 March with Yugoslav rejection of an external peacekeeping force, NATO prepared to install the peacekeepers by force.

NATO's objectives in the Kosovo conflict were stated at the North Atlantic Council meeting held at NATO headquarters in Brussels on April 12, 1999:[23]

Operation Allied Force relied predominantly on a large-scale air campaign to destroy Yugoslav military infrastructure from high altitudes. After the third day of aerial bombing, NATO had destroyed almost all of its designated strategic military targets in Yugoslavia. Despite this, the Yugoslav Army continued to function and to attack Kosovo Liberation Army (KLA) insurgents inside Kosovo, mostly in northern and southwestern Kosovo. NATO bombed strategic economic and societal targets, such as bridges, military facilities, official government facilities, and factories, using long-range cruise missiles to hit heavily defended targets, such as strategic installations in Belgrade and Pristina. The NATO air forces also targeted infrastructure, such as power plants (using the BLU-114/B "Soft-Bomb"), water-processing plants and the state-owned broadcaster, causing much environmental and economic damage throughout Yugoslavia.[citation needed]

Commentators[who?] have debated whether the capitulation of Yugoslavia in the Kosovo War of 1999 resulted solely from the use of air power, or whether other factors contributed.[clarification needed][citation needed]

Due to restrictive media laws, media in Yugoslavia carried little coverage of what its forces were doing in Kosovo, or of other countries' attitudes to the humanitarian crisis; so, few members of the public expected bombing, instead thinking that a diplomatic deal would be made.[24]

According to John Keegan, the capitulation of Yugoslavia in the Kosovo War marked a turning point in the history of warfare. It "proved that a war can be won by air power alone". By comparison, diplomacy had failed before the war, and the deployment of a large NATO ground force was still weeks away when Slobodan Milošević agreed to a peace deal.[25]

As for why air power should have been capable of acting alone, it has been argued[by whom?] that there are several factors required. These normally come together only rarely, but all occurred during the Kosovo War:[26]

On 20 March 1999 OSCE Kosovo Verification Mission monitors withdrew from Kosovo, citing a "steady deterioration in the security situation",[38][39] and on 23 March 1999 Richard Holbrooke returned to Brussels and announced that peace talks had failed.[40] Hours before the announcement, Yugoslavia announced on national television that it had declared a state of emergency, citing an "imminent threat of war ... against Yugoslavia by NATO", and began a huge mobilization of troops and resources.[40][41] On 23 March 1999 at 22:17 UTC the Secretary General of NATO, Javier Solana, announced he had directed the Supreme Allied Commander Europe (SACEUR), General Wesley Clark, to "initiate air operations in the Federal Republic of Yugoslavia."[41][42] On 24 March at 19:00 UTC NATO started the bombing campaign against Yugoslavia.[43][44]

NATO's bombing campaign involved 1,000 aircraft operating from air bases in Italy and Germany, and from the aircraft carrier USS Theodore Roosevelt stationed in the Adriatic Sea. At dusk,[when?] F/A-18 Hornets of the Spanish Air Force were the first NATO planes to bomb Belgrade and perform SEAD operations. BGM-109 Tomahawk cruise missiles were fired from ships and submarines. The U.S. was the dominant member of the coalition against Yugoslavia, although other NATO members were involved. During the ten weeks of the conflict, NATO aircraft flew over 38,000 combat missions. For the German Air Force, this was its first conflict participation since World War II. In addition to air power, one battalion of Apache helicopters from the U.S. Army's 11th Aviation Regiment was deployed to support combat missions. The regiment was augmented by pilots from Fort Bragg's 82nd Airborne Attack Helicopter Battalion. The battalion secured AH-64 Apache attack helicopter refueling sites, and a small team forward-deployed to the Albania-Kosovo border to identify targets for NATO air strikes.

The campaign was initially designed to destroy Yugoslavian air defences and high-value military targets.[citation needed]

NATO military operations increasingly attacked Yugoslav units on the ground, as well as continuing the strategic bombardment. Montenegro was bombed several times, and NATO refused to prop up the precarious position of its anti-Milošević leader, Milo Đukanović. "Dual-use" targets, used by both civilians and the military, were attacked; these included bridges across the Danube, factories, power stations, telecommunications facilities, the headquarters of the Yugoslav Left, a political party led by Milošević's wife, and the Avala TV Tower. Some protested that these actions were violations of international law and the Geneva Conventions. NATO argued that these facilities were potentially useful to the Yugoslav military and that their bombing was justified.

On April 14, NATO planes bombed ethnic Albanians near Koriša who had been used by Yugoslav forces as human shields.[45][46] Yugoslav troops took TV crews to the scene shortly after the bombing.[47] The Yugoslav government insisted that NATO had targeted civilians.[48][49][50]

On May 7, NATO bombed the Chinese embassy in Belgrade, killing three Chinese journalists. NATO had aimed at a Yugoslav military target, but navigational errors led to the wrong building being targeted.[51] The United States and NATO apologized for the bombing, saying it occurred because of an outdated map provided by the Central Intelligence Agency. The bombing strained relations between the People's Republic of China and NATO, provoking angry demonstrations outside Western embassies in Beijing.[52]

Solana directed Clark to "initiate air operations in the Federal Republic of Yugoslavia." Clark then delegated responsibility for the conduct of Operation Allied Force to the Commander-in-Chief of Allied Forces Southern Europe, who in turn delegated control to the Commander of Allied Air Forces Southern Europe, Lieutenant-General Michael C. Short, USAF.[53] Operationally, day-to-day responsibility for executing missions was delegated to the Commander of the 5th Allied Tactical Air Force.[54]

The Hague Tribunal ruled that over 700,000 Kosovo Albanians were forcibly displaced by Yugoslav forces into neighbouring Albania and Macedonia, with many thousands displaced within Kosovo.[55] By April, the United Nations reported 850,000 refugees had left from Kosovo.[56] Another 230,000 were listed as internally displaced persons (IDPs): driven from their homes, but still inside Kosovo. German Foreign Minister Joschka Fischer claimed the refugee crisis was produced by a Yugoslav plan codenamed "Operation Horseshoe".

Serbian television claimed that huge columns of refugees were fleeing Kosovo because of NATO's bombing, not Yugoslav military operations.[57][58] The Yugoslav side and its Western supporters claimed the refugee outflows were caused by a mass panic in the Kosovo Albanian population, and that the exodus was generated principally by fear of NATO bombs.

The United Nations and international human rights organizations were convinced the crisis resulted from a policy of ethnic cleansing. Many accounts from both Serbs and Albanians identified Yugoslav security forces and paramilitaries as the culprits, responsible for systematically emptying towns and villages of their Albanian inhabitants by forcing them to flee.[59]

Atrocities against civilians in Kosovo were the basis of United Nations war crimes charges against Milošević and other officials responsible for directing the Kosovo conflict.

An important portion of the war involved combat between the Yugoslav Air Force and the opposing air forces. United States Air Force F-15s and F-16s, flying mainly from Italian air bases, attacked the defending Yugoslav fighters, mainly MiG-29s, which were in poor condition due to a lack of spare parts and maintenance. Other NATO forces also contributed to the air war.

Air combat incidents:

By the start of April, the conflict seemed closer to resolution. NATO countries began to deliberate about invading Kosovo with ground units. U.S. President Bill Clinton was reluctant to commit US forces for a ground offensive. At the same time, Finnish and Russian negotiators continued to try to persuade Milošević to back down. Faced with little alternative, Milošević accepted the conditions offered by a Finnish-Russian mediation team and agreed to a military presence within Kosovo headed by the UN, but incorporating NATO troops.

On June 12, after Milošević accepted the conditions, KFOR began entering Kosovo. KFOR, a NATO force, had been preparing to conduct combat operations, but in the end its mission was only peacekeeping. It was based upon the Allied Rapid Reaction Corps headquarters commanded by then-Lieutenant General Mike Jackson of the British Army. It consisted of British forces (a brigade built from the 4th Armoured and 5th Airborne Brigades), a French Army brigade, a German Army brigade (which entered from the west while all the other forces advanced from the south), and Italian Army and US Army brigades. The U.S. contribution, known as the Initial Entry Force, was led by the U.S. 1st Armored Division. Subordinate units included TF 1-35 Armor from Baumholder, Germany, the 2nd Battalion, 505th Parachute Infantry Regiment from Fort Bragg, North Carolina, the 26th Marine Expeditionary Unit from Camp Lejeune, North Carolina, the 1st Battalion, 26th Infantry Regiment from Schweinfurt, Germany, and Echo Troop, 4th Cavalry Regiment, also from Schweinfurt, Germany. Also attached to the U.S. force was the Greek Army's 501st Mechanized Infantry Battalion. The initial U.S. forces established their area of operation around the towns of Uroševac, the future Camp Bondsteel, and Gnjilane, at Camp Monteith, and spent four months (the start of a stay which continues to date) establishing order in the southeast sector of Kosovo.

The first NATO troops to enter Pristina on 12 June 1999 were Norwegian special forces from FSK (Forsvarets Spesialkommando) and soldiers from the British Special Air Service (22 SAS), although, to NATO's diplomatic embarrassment, Russian troops arrived first at the airport. The Norwegian soldiers from FSK were the first to come in contact with the Russian troops at the airport. FSK's mission was to level the negotiating field between the belligerent parties, and to fine-tune the detailed local deals needed to implement the peace deal between the Serbians and the Kosovo Albanians.[77][78][79][80]

During the initial incursion, the U.S. soldiers were greeted by Albanians cheering and throwing flowers as U.S. soldiers and KFOR rolled through their villages.[citation needed] Although no resistance was met, three U.S. soldiers from the Initial Entry Force lost their lives in accidents.[81]

Following the military campaign, the involvement of Russian peacekeepers proved to be tense and challenging to the NATO Kosovo force. The Russians expected to have an independent sector of Kosovo, only to be unhappily surprised with the prospect of operating under NATO command. Without prior communication or coordination with NATO, Russian peacekeeping forces entered Kosovo from Bosnia and seized Pristina International Airport.

In a 2010 interview, James Blunt described how his unit was given the assignment of securing the Pristina airfield in advance of the 30,000-strong peacekeeping force, and how the Russian army had moved in and taken control of the airport before his unit's arrival. As the first officer on the scene, Blunt shared in the difficult task of defusing the potentially violent international incident. His own account tells of how he refused to follow orders from NATO command to attack the Russians.[82]

Outpost Gunner was established on a high point in the Preševo Valley by Echo Battery 1/161 Field Artillery in an attempt to monitor and assist with peacekeeping efforts in the Russian Sector. Operating under the support of 2/3 Field Artillery, 1st Armored Division, the battery was able to successfully deploy and continuously operate a Firefinder radar, which allowed the NATO forces to keep a closer watch on activities in the sector and the Preševo Valley. Eventually a deal was struck whereby Russian forces operated as a unit of KFOR but not under the NATO command structure.[83]

While not directly related to the hostilities, on 12 March 1999 the Czech Republic, Hungary, and Poland joined NATO by depositing instruments of accession in accordance with Article 10 of the North Atlantic Treaty at a ceremony in Independence, Missouri.[84] These nations did not participate directly in hostilities.

A large element of the operation was NATO's air forces, relying heavily on the US Air Force and Navy. The French Navy and Air Force operated the Super Étendard and the Mirage 2000. The Italian Air Force operated 34 Tornados, 12 F-104s, 12 AMXs and 2 B-707s; the Italian Navy operated the Harrier II. The British Royal Air Force operated Harrier GR7 and Tornado ground-attack jets as well as an array of support aircraft. Belgian, Danish, Dutch, Norwegian and Turkish Air Forces operated F-16s. The Spanish Air Force deployed EF-18s and KC-130s. The Canadian Air Force deployed a total of 18 CF-18s, making them responsible for 10% of all bombs dropped in the operation. The fighters were armed with both guided and unguided "dumb" munitions, including the Paveway series of laser-guided bombs.[citation needed] The bombing campaign marked the first time the German Air Force actively participated in combat operations since the end of World War II.[85]

However, NATO forces relied mostly upon the Americans and the proven effectiveness of its air power by using the F-16, F-15, F-117, F-14, F/A-18, EA-6B, B-52, KC-135, KC-10, AWACS, and JSTARS from bases throughout Europe and from aircraft carriers in the region. The American B-2 Spirit stealth bomber also saw its first successful combat role in Operation Allied Force, all while striking from its home base in the continental United States.

Even with this air power, noted a RAND Corporation study, "NATO never fully succeeded in neutralizing the enemy's radar-guided SAM threat".[86]

Operation Allied Force incorporated the first large-scale use of satellites as a direct method of weapon guidance. The bombing was the first combat use of the Joint Direct Attack Munition (JDAM) kit, which uses an inertial guidance system and a GPS-guided tail fin to increase the accuracy of conventional gravity munitions by up to 95%. The JDAM kits were fitted to the B-2s. The AGM-154 Joint Standoff Weapon (JSOW) had previously been used in Operation Southern Watch earlier in 1999.

NATO naval forces operated in the Adriatic Sea. The Royal Navy sent a substantial task force that included the aircraft carrier HMS Invincible, which operated Sea Harrier FA2 fighter jets. The RN also deployed destroyers and frigates, and the Royal Fleet Auxiliary (RFA) provided support vessels, including the aviation training/primary casualty receiving ship RFA Argus. It was the first time the RN used cruise missiles in combat, operated from the nuclear fleet submarine HMS Splendid. The Italian Navy provided a naval task force that included the aircraft carrier Giuseppe Garibaldi, a frigate (Maestrale) and a submarine (Sauro-class). The United States Navy provided a naval task force that included the aircraft carrier USS Theodore Roosevelt, USS Vella Gulf, and the amphibious assault ship USS Kearsarge. The French Navy provided the aircraft carrier Foch and escorts. The German Navy deployed the frigate Rheinland-Pfalz and Oker, an Oste-class fleet service ship, in the naval operations.

U.S. ground forces included a battalion from the 505th Parachute Infantry Regiment, 82nd Airborne Division. The unit was deployed in March 1999 to Albania in support of the bombing campaign where the battalion secured the Tirana airfield, Apache helicopter refueling sites, established a forward-operating base to prepare for Multiple Launch Rocket System (MLRS) strikes and offensive ground operations, and deployed a small team with an AN/TPQ-36 Firefinder radar system to the Albania/Kosovo border where it acquired targets for allied/NATO air strikes. Immediately after the bombing campaign, the battalion was refitted back at Tirana airfield and issued orders to move into Kosovo as the initial entry force in support of Operation Joint Guardian. Task Force Hawk was also deployed.

Human Rights Watch "concludes that as few as 489 and as many as 528 Yugoslav civilians were killed in the ninety separate incidents in Operation Allied Force". Refugees were among the victims. Between 278 and 317 of the dead, between 56 and 60 percent of the total number of deaths, were in Kosovo. In Serbia, 201 civilians were killed (five in Vojvodina) and eight died in Montenegro. Almost two thirds (303 to 352) of the total registered civilian deaths occurred in twelve incidents where ten or more civilian deaths were confirmed.[87]

Military casualties on the NATO side were limited. According to official reports, the alliance suffered no fatalities from combat operations. However, on May 5, an American AH-64 Apache crashed and exploded during a night-time mission in Albania.[88][89] The Yugoslavs claimed they shot it down, but NATO claimed it crashed due to a technical malfunction. It crashed 40 miles from Tirana,[90] killing the two crewmen, Army Chief Warrant Officers David Gibbs and Kevin Reichert.[91] It was one of two Apache helicopters lost in the war.[92] A further three American soldiers were taken prisoner of war by Yugoslav special forces while riding in a Humvee on a surveillance mission along the Macedonian border.[93] A study of the campaign reports that Yugoslav air defenses may have fired up to 700 missiles at NATO aircraft, and that B-1 bomber crews counted at least 20 surface-to-air missiles fired at them during their first 50 missions.[91] Despite this, only two NATO aircraft (one F-16C[94][95][96] and one F-117A Nighthawk[97][98]) were shot down.[99] A further F-117A Nighthawk was damaged,[70][71] as were two A-10 Thunderbolt IIs.[100][101] One AV-8B Harrier crashed due to technical failure.[102] NATO also lost 25 UAVs, either to enemy action or mechanical failure.[103]

In 2013, Serbia's then-Defence Minister Aleksandar Vučić announced that Yugoslavia's military and police losses during the air campaign amounted to 956 killed and 52 missing. Vučić stated that 631 soldiers were killed and a further 28 went missing, and that 325 police officers were also among the dead, with a further 24 listed as missing.[104] The Government of Serbia also lists 5,173 combatants as having been wounded.[105][106] In early June 1999, while the bombing was still in progress, NATO officials claimed that 5,000 Yugoslav troops had been killed in the bombing and a further 10,000 wounded.[107][108][109] NATO later revised this estimate to 1,200 soldiers and policemen killed.[110]

Throughout the war, 181 NATO strikes were reported against tanks, 317 against armored personnel carriers, 800 against other military vehicles, and 857 against artillery and mortars,[111] over a total of 38,000 sorties, or 200 sorties per day at the beginning of the conflict and over 1,000 at the end.[112] As for alleged hits, 93 tanks, 153 APCs, 339 other vehicles, and 389 artillery systems were believed to have been disabled or destroyed with certainty.[113] The Department of Defense and the Joint Chiefs of Staff had earlier provided a figure of 120 tanks, 220 APCs, and 450 artillery systems, and a Newsweek piece published around a year later stated that only 14 tanks, 18 APCs, and 20 artillery systems had actually been destroyed,[113] not far from the Serbs' own estimates of 13 tanks, 6 APCs, and 6 artillery pieces.[114] However, this reporting was heavily criticised, as it was based on the number of vehicles found by the Munitions Effectiveness Assessment Team, which wasn't interested in the effectiveness of anything but the ordnance, and which surveyed sites that hadn't been visited in nearly three months, at a time when the most recent strikes were four weeks old.[114] The Yugoslav Air Force also sustained serious damage, with 121 aircraft destroyed.[115]
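A quick calculation on the figures quoted above shows the scale of the discrepancy; this sketch simply divides the Newsweek counts by NATO's "with certainty" assessments (the numbers come from this paragraph, not an independent source):

```python
# Fraction of NATO's "with certainty" kill assessments supported by the
# later Newsweek figures quoted above.
nato_assessed = {"tanks": 93, "APCs": 153, "artillery": 389}
newsweek      = {"tanks": 14, "APCs": 18,  "artillery": 20}

for category, claimed in nato_assessed.items():
    confirmed = newsweek[category]
    print(f"{category}: {confirmed}/{claimed} = {confirmed / claimed:.0%}")
# tanks: 14/93 = 15%
# APCs: 18/153 = 12%
# artillery: 20/389 = 5%
```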

Operation Allied Force inflicted less damage on the Yugoslav military than originally thought due to the use of camouflage. Other misdirection techniques were used to disguise military targets. It was only in the later stages of the campaign that strategic targets such as bridges and buildings were attacked in any systematic way, causing significant disruption and economic damage. This stage of the campaign led to controversial incidents, most notably the bombing of the People's Republic of China embassy in Belgrade where three Chinese reporters were killed and twenty injured, which NATO claimed was a mistake.[51]

Relatives of Italian soldiers believe 50 of them have died since the war due to exposure to depleted uranium weapons.[116] UNEP tests found no evidence of harm from depleted uranium weapons, even among cleanup workers,[117] but those tests and UNEP's report were questioned in an article in Le Monde diplomatique.[118]

In April 1999, during the NATO bombing, officials in Yugoslavia said the damage from the bombing campaign had cost around $100 billion up to that time.[119]

In 2000, a year after the bombing ended, Group 17 published a survey dealing with damage and economic restoration. The report concluded that direct damage from the bombing totalled $3.8 billion, not including Kosovo, of which only 5% had been repaired at that time.[120]

In 2006, a group of economists from the G17 Plus party estimated the total economic losses resulting from the bombing were about $29.6 billion.[121] This figure included indirect economic damage, loss of human capital, and loss of GDP.[citation needed]

When NATO agreed that Kosovo would be politically supervised by the United Nations, and that there would be no independence referendum for three years, the Yugoslav government agreed to withdraw its forces from Kosovo, under strong diplomatic pressure from Russia, and the bombing was suspended on June 10. The war ended on June 11, and Russian paratroopers seized Slatina airport to become the first peacekeeping force in the war zone.[122] As British troops were still massed on the Macedonian border, planning to enter Kosovo at 5 am, the Serbs hailed the Russian arrival as proof that the war was a UN operation, not a NATO operation. After hostilities ended, on June 12 the U.S. Army's 82nd Airborne Division's 2-505th Parachute Infantry Regiment entered war-torn Kosovo as part of Operation Joint Guardian.

Yugoslav President Milošević survived the conflict and declared its outcome a major victory for Yugoslavia. He was, however, indicted for war crimes by the International Criminal Tribunal for the Former Yugoslavia, along with a number of other senior Yugoslav political and military figures. His indictment led to Yugoslavia as a whole being treated as a pariah by much of the international community, because Milošević was subject to arrest if he left Yugoslavia. The country's economy was badly affected by the conflict, and, in addition to electoral fraud, this was a factor in the overthrow of Milošević.

Thousands were killed during the conflict, and hundreds of thousands more fled from the province to other parts of the country and to the surrounding countries. Most of the Albanian refugees returned home within a few weeks or months. However, much of the non-Albanian population again fled to other parts of Serbia or to protected enclaves within Kosovo following the operation.[123][124][125][126][127] Albanian guerrilla activity spread into other parts of Serbia and to neighbouring Republic of Macedonia, but subsided in 2001. The non-Albanian population has since diminished further following fresh outbreaks of inter-communal conflict and harassment.[citation needed]

In December 2002, Elizabeth II approved the awarding of the Battle Honour "Kosovo" to squadrons of the RAF that participated in the conflict. These were: Nos 1, 7, 8, 9, 14, 23, 31, 51, 101, and 216 squadrons. This was also extended to the Canadian squadrons deployed to the operation, 425 and 441.

Nearly ten years after the operation, in February 2008, the Republic of Kosovo declared independence with a new Republic of Kosovo government.

Those who were involved in the NATO airstrikes have stood by the decision to take such action. U.S. President Bill Clinton's Secretary of Defense, William Cohen, said, "The appalling accounts of mass killing in Kosovo and the pictures of refugees fleeing Serb oppression for their lives makes it clear that this is a fight for justice over genocide."[128] On CBS' Face the Nation, Cohen claimed, "We've now seen about 100,000 military-aged men missing. ... They may have been murdered."[129] Clinton, citing the same figure, spoke of "at least 100,000 (Kosovar Albanians) missing".[130] Later, Clinton said about Yugoslav elections, "they're going to have to come to grips with what Mr. Milošević ordered in Kosovo. ... They're going to have to decide whether they support his leadership or not; whether they think it's OK that all those tens of thousands of people were killed. ..."[131] In the same press conference, Clinton also claimed "NATO stopped deliberate, systematic efforts at ethnic cleansing and genocide."[131] Clinton compared the events of Kosovo to the Holocaust. CNN reported, "Accusing Serbia of 'ethnic cleansing' in Kosovo similar to the genocide of Jews in World War II, an impassioned Clinton sought Tuesday to rally public support for his decision to send U.S. forces into combat against Yugoslavia, a prospect that seemed increasingly likely with the breakdown of a diplomatic peace effort."[132] President Clinton's State Department also claimed Serbian troops had committed genocide. The New York Times reported that "the Administration said evidence of 'genocide' by Serbian forces was growing to include 'abhorrent and criminal action' on a vast scale. The language was the State Department's strongest up to that time in denouncing Yugoslav President Slobodan Milošević."[133] The State Department also gave the highest estimate of dead Albanians. In May 1999, Defense Secretary William Cohen suggested that there might be up to 100,000 Albanian fatalities.[134]

Five months after the conclusion of the NATO bombing, when around one third of reported gravesites had been visited, 2,108 bodies had been found, with an estimated total of between 5,000 and 12,000 at that time;[135] Serb forces had systematically concealed grave sites and moved bodies.[136][137]

The United States House of Representatives passed a non-binding resolution on March 11, 1999 by a vote of 219-191 conditionally approving President Clinton's plan to commit 4,000 troops to the NATO peacekeeping mission.[138] In late April the House Appropriations Committee approved $13 billion in emergency spending to cover the cost of the air war, but a second non-binding resolution approving of the mission failed in the full House by a tied vote of 213-213.[139] The Senate had passed the second resolution in late March by a vote of 58-41.[140]

There has also been criticism of the campaign. Joseph Farah accused the coalition of exaggerating the casualty numbers to make a claim of potential genocide to justify the bombings.[141] The Clinton administration were accused of inflating the number of Kosovar Albanians killed by Serbians.[142]

In an interview with Radio-Television Serbia journalist Danilo Mandic on April 25, 2006, Noam Chomsky claimed that Strobe Talbott, the Deputy Secretary of State under President Clinton and the leading U.S. negotiator during the war, had written in his foreword to John Norris' 2005 book Collision Course: NATO, Russia, and Kosovo that "the real purpose of the war had nothing to do with concern for Kosovar Albanians", but rather "It was because Serbia was not carrying out the required social and economic reforms, meaning it was the last corner of Europe which had not subordinated itself to the US-run neoliberal programs, so therefore it had to be eliminated".[143] On May 31, 2006, Brad DeLong rebutted Chomsky's allegation and noted that in the original passage which Chomsky had cited,[144] Talbott claimed that "the Kosovo crisis was fueled by frustration with Milosevic and the legitimate fear that instability and conflict might spread further in the region" and also that "Only a decade of death, destruction, and Milosevic brinkmanship pushed NATO to act when the Rambouillet talks collapsed. Most of the leaders of NATO's major powers were proponents of 'third way' politics and headed socially progressive, economically centrist governments. None of these men were particularly hawkish, and Milosevic did not allow them the political breathing room to look past his abuses."[144][145]

The United Nations Charter does not allow military interventions in other sovereign countries, with few exceptions which, in general, must be decided upon by the United Nations Security Council. The issue was brought before the UNSC by Russia, in a draft resolution which, inter alia, would have affirmed "that such unilateral use of force constitutes a flagrant violation of the United Nations Charter". China, Namibia and Russia voted for the resolution; the other members voted against, so it failed to pass.[146][147][dead link]

On April 29, 1999, Yugoslavia filed a complaint at the International Court of Justice (ICJ) at The Hague against ten NATO member countries (Belgium, Germany, France, the United Kingdom, Italy, Canada, the Netherlands, Portugal, Spain, and the United States), alleging that the military operation had violated Article 9 of the 1948 Genocide Convention and that Yugoslavia had jurisdiction to sue through Article 38, para. 5 of the Rules of Court.[148] On June 2, the ICJ ruled in an 8-4 vote that Yugoslavia had no such jurisdiction.[149] Four of the ten nations (the United States, France, Italy and Germany) had withdrawn entirely from the court's optional clause. Because Yugoslavia filed its complaint only three days after accepting the terms of the court's optional clause, the ICJ ruled that there was no jurisdiction to sue either Britain or Spain, as the two nations had only agreed to submit to ICJ lawsuits if a suing party had filed its complaint a year or more after accepting the terms of the optional clause.[149] Despite objections that Yugoslavia had legal jurisdiction to sue Belgium, the Netherlands, Canada and Portugal,[149] the ICJ majority also determined that the NATO bombing was an instance of "humanitarian intervention" and thus did not violate Article 9 of the Genocide Convention.[149]

Amnesty International released a report which stated that NATO forces had deliberately targeted a civilian object (NATO bombing of the Radio Television of Serbia headquarters), and had bombed targets at which civilians were certain to be killed.[150][151] The report was rejected by NATO as "baseless and ill-founded". A week before the report was released, Carla Del Ponte, the chief prosecutor for the International Criminal Tribunal for the former Yugoslavia had told the United Nations Security Council that her investigation into NATO actions found no basis for charging NATO or its leaders with war crimes.[152]

A majority of U.S. House Republicans voted against two resolutions, both of which expressed approval for American involvement in the NATO mission.[153][154]

Excerpt from:

NATO bombing of Yugoslavia - Wikipedia

FIBA Oceania – FIBA.com


Excerpt from:

FIBA Oceania - FIBA.com

FIBA Oceania Championship – Wikipedia

FIBA Oceania Championship is the name commonly used to refer to the Oceania basketball championships that take place every two years between national teams of the continent. Through the 2015 edition, the Oceania Championships were also a qualifying tournament for the Basketball World Cups and Olympic Games. Beginning in 2017, all FIBA continental championships for men will be held on a four-year cycle, and the continental championships will no longer be part of the qualifying process for either the World Cup or the Olympics. The 2017 Oceania Championship will also be the last ever held: starting in 2021, the tournament will merge with the FIBA Asia Championship to give way to the FIBA Asia-Pacific Championship.[1]

When only Australia and New Zealand compete, the tournament is usually a best-of-three playoff; if other teams compete, a round-robin followed by a knockout stage is employed. In 2009, the Oceania Basketball Federation changed this format to a two-game, home-and-away playoff between the two countries, with aggregate score as the tiebreaker should the teams split the series, as sketched below.
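That aggregate rule is easy to express in code. A minimal sketch in Python, assuming a split series is broken purely on total points (the function name and example scores are hypothetical):

```python
# Decide a two-game, home-and-away series: a team that wins both legs takes
# the series; if the teams split, the aggregate score is the tiebreaker.

def series_winner(team_a, team_b, game1, game2):
    """game1 and game2 are (team_a_points, team_b_points) tuples."""
    a_wins = sum(1 for a, b in (game1, game2) if a > b)
    b_wins = sum(1 for a, b in (game1, game2) if b > a)
    if a_wins != b_wins:                    # one team won both legs
        return team_a if a_wins > b_wins else team_b
    a_total = game1[0] + game2[0]           # split series: compare aggregates
    b_total = game1[1] + game2[1]
    if a_total == b_total:                  # a level aggregate would need
        return None                         # extra time; not modeled here
    return team_a if a_total > b_total else team_b

# Split series decided on aggregate: 84+78 = 162 vs 80+81 = 161.
print(series_winner("Australia", "New Zealand", (84, 80), (78, 81)))
# -> Australia
```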

In the results table, entries highlighted in blue are Olympic qualifiers; those which aren't are World Championship qualifiers.

In 1997 basketball was included in the Pacific Mini Games, so the Oceania Tournament was not played. The South Pacific Mini Games are held every four years for island teams in the two years between the main Pacific Games. These Games are held in countries with limited facilities, and because of the large number of basketball entries the sport had not been included in previous Mini Games. Normally the Oceania Basketball Confederation conducts the Oceania Tournament at a similar time so as to provide competition for all countries. As a result, no Australian or New Zealand teams participated in 1997.

There was no Oceania Basketball Tournament in 2005 because the Mini Games included basketball that year in Palau.

Read the original:

FIBA Oceania Championship - Wikipedia

Buck Island – BVI, Caribbean – Private Islands for Sale

The Main House has 2 bedroom suites that can accommodate up to 4 guests.

The Master Suite includes 1 bedroom with 2 large porches, a study with attached conference room, and a drawing room with porch.

The Two-Bedroom Suite has a kitchen and a sitting and dining area on the upper level, as well as the garage, wine cellar, and staff office.

The two guest cottages sit beside the infinity-edge pool that overlooks the Sir Francis Drake Channel.

Other areas include a large family room, 4 half-baths, a dinette area, a computer/library room, a gallery, a butler's pantry, and laundry facilities. Beneath the main house is a workout/spa room with beautiful views.

The Boat House is stocked with kayaks, Laser sailboats, dinghies, land and water recreational equipment, snorkel gear, life jackets/vests, and other water sports equipment.

There are several covered and uncovered sitting areas throughout the property.

On the island you will find a wide variety of thriving flora such as cactuses and wild flowers, as well as a range of land and sea birds including hawks, cranes, herons and hummingbirds. Even more abundant are the beautiful coral reefs and marine life that populate the surrounding waters.

A helipad is available on the island for guests arriving by private helicopters. Transport options to Buck Island are flexible and can be catered to your preference and convenience.

Year-round tradewinds average 15 mph, bringing in clean, pure ionized air direct from West Africa.

Temperatures vary little throughout the year

Average daily maxima: ~32 °C (90 °F) in the summer and 29 °C (84 °F) in the winter

Average daily minima: ~24 °C (75 °F) in the summer and 21 °C (70 °F) in the winter

Wettest months: September to November

Driest months: February to March

Continued here:

Buck Island - BVI, Caribbean - Private Islands for Sale

Grasshopper Island – Ontario, Canada – Private Islands for …

Grasshopper Island, on Rice Lake, is a private island getaway waiting for you less than two hours from Toronto. Your adventure starts with a five-minute ferry ride on the Spirit of the Loon across Rice Lake to an exclusive 25-acre island offering you tranquility and relaxation. We offer a private escape from life's hectic distractions, where you can kayak and canoe all day long, have fantastic photos taken of you and your children petting a newborn lamb, pick your first free-range eggs at the chicken coop, or bake homemade bread and pizzas in the 100-year-old bread oven. Enjoy the call of the loon, the painted turtles, the jumping bass, and the blue herons; in the evening, take in beautiful Rice Lake sunsets before you settle in for a night of stargazing around a crackling campfire!

Inclusions: Kayaks, canoes, unlimited campfire wood and ferry boat ride from mainland to island.

*Please note ferry is pedestrian only.

peace, paradise, unique, awesome, serenity

6 twin beds (brand-new Simmons Beautyrest mattresses; can be made up into three king-sized beds)

Queen-sized futon

Solar indoor lights

We supply pillows, a dishpan, a can opener, pots and pans, steel plates, steel mugs, plastic glasses, cutlery, BBQ utensils, oven mitts, and lanterns. (NO candles allowed inside the cabin!)

6 recycled Muskoka lawn chairs, so comfy

Picnic table

Cedar deck

Fire pit, unlimited campfire wood

Propane BBQ (Filled tank included)

Outdoor privy / outdoor rain water shower

Toilet paper (2 rolls)

Canoe + kayaks + Paddle Boats

Swimming (water shoes a must for everyone going in the water, including kayaking and canoeing!)

Adult lifejackets (we recommend you bring your own if you have them)

Walking, hiking and biking trails

Sandy Play areas

Sand volleyball court, badminton court, horseshoe pits, reflection areas

Gigantic checkers

Books and board games

100-year-old island fireplace, retrofitted with bread ovens (bring your bread mix, etc.)

Baby sheep, piglets, free-range laying hens... how cool is that? Just imagine the photos, and the island memories!

Follow this link:

Grasshopper Island - Ontario, Canada - Private Islands for ...

What is Singularity (the)? – Definition from WhatIs.com

The Singularity is the hypothetical future creation of superintelligent machines. Superintelligence is defined as a technologically-created cognitive capacity far beyond that possible for humans. Should the Singularity occur, technology will advance beyond our ability to foresee or control its outcomes and the world will be transformed beyond recognition by the application of superintelligence to humans and/or human problems, including poverty, disease and mortality.

Revolutions in genetics, nanotechnology and robotics (GNR) in the first half of the 21st century are expected to lay the foundation for the Singularity. According to Singularity theory, superintelligence will be developed by self-directed computers and will increase exponentially rather than incrementally.

Lev Grossman explains the prospective exponential gains in capacity enabled by superintelligent machines in an article in Time:

Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks...

Proposed mechanisms for adding superintelligence to humans include brain-computer interfaces, biological alteration of the brain, artificial intelligence (AI) brain implants and genetic engineering. Post-singularity, humanity and the world would be quite different. A human could potentially scan his consciousness into a computer and live eternally in virtual reality or as a sentient robot. Futurists such as Ray Kurzweil (author of The Singularity is Near) have predicted that in a post-Singularity world, humans would typically live much of the time in virtual reality -- which would be virtually indistinguishable from normal reality. Kurzweil predicts, based on mathematical calculations of exponential technological development, that the Singularity will come to pass by 2045.
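The "exponentially rather than incrementally" claim above is ordinary compound doubling. As a back-of-the-envelope sketch (the 18-month doubling period is an assumption chosen purely for illustration, loosely in the spirit of Kurzweil's extrapolations, not his actual model):

```python
# Compound doubling: capacity multiplier after `years` with a fixed
# doubling period. An 18-month period is a Moore's-law-style assumption.

def capacity_multiplier(years, doubling_period_years):
    return 2 ** (years / doubling_period_years)

# 45 years at one doubling every 1.5 years = 30 doublings, roughly a
# billionfold gain by 2045.
print(f"{capacity_multiplier(2045 - 2000, 1.5):.3e}")  # -> 1.074e+09
```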

Most arguments against the possibility of the Singularity involve doubts that computers can ever become intelligent in the human sense. The human brain and cognitive processes may simply be more complex than a computer could be. Furthermore, because the human brain is analog, with theoretically infinite values for any process, some believe that it cannot ever be replicated in a digital format. Some theorists also point out that the Singularity may not even be desirable from a human perspective because there is no reason to assume that a superintelligence would see value in, for example, the continued existence or well-being of humans.

Science-fiction writer Vernor Vinge first used the term "the Singularity" in this context in the 1980s, in reference to the British mathematician I. J. Good's concept of an intelligence explosion brought about by the advent of superintelligent machines. The term is borrowed from physics; in that context a singularity is a point where the known physical laws cease to apply.

See also: Asimov's Three Laws of Robotics, supercomputer, cyborg, gray goo, IBM's Watson supercomputer, neural networks, smart robot

Video: Neil deGrasse Tyson vs. Ray Kurzweil on the Singularity.

This was last updated in February 2016

Read the rest here:

What is Singularity (the)? - Definition from WhatIs.com

LSD – Psychedelic Effects – The Good Drugs Guide

The effects below describe the common physical, mental and emotional effects which comprise the psychedelic experience.

This information has been compiled from two sources: the decades of observation and study by psychiatrists in a clinical setting before LSD and other psychedelics were outlawed in the late 1960s, and books and anecdotal trip reports written by users. A list of sources appears at the end of this section.

The most important thing to realize is that no two trips are the same. The intensity and effects of a drug like LSD vary dramatically from person to person. If different people take the same amount in the same circumstances, each will have a distinctly different experience. If the same person takes LSD repeatedly, each experience is usually completely different in its flavor and content. (1)

The nature of the psychedelic experience is strongly determined by set and setting. Set is your mindset (how you're feeling, issues in your life, your psychological makeup) when taking the drug; setting is where you are - that includes who you're with and how relaxed you feel. Dosage and previous experience with the drug are also important factors.

Basically, if you take LSD, you will experience some or none of the effects on the following scale:


Very mild effect. Relaxation. Giggling. Like being stoned but with enhanced visual perception: colors may seem brighter, patterns recognition enhanced, colors and details more eye-grabbing.

Physically, a feeling of lightness and euphoria, and a slight tingling in the body. Energy. A sense of urgency. Music sounds better.


Stronger visual hallucinations. Radiant colors. Objects and surfaces appear to ripple or breathe. Colored patterns behind the eyes are vivid and more active. Moments of reflection and distractive thought patterns. Thoughts and thinking become enhanced. Creative urges. Euphoria. Connection with others, empathy. Ability to talk or interact with others slightly impaired. Sense of time distorted or lost. Sexual arousal. "Flight of ideas" and "ambitious designs". You're tripping.

Very obvious visual effects. Curved or warped patterns. Familiar objects appear strange as surface details distract the eye. Imagination and 'mind's eye' images vivid, three dimensional. Geometric patterns behind closed eyes. Some confusion of the senses.

Distortion rather than deterioration of mental processes. Some awareness of background brain functioning, such as balance systems or auditory and visual perception. Deep-store memory becomes accessible. Images or experiences may rise to the fore. Music is powerful and can affect mood. Sense of time lost. Occasional trance states. Paranoia and distortions of body image possible.

Physical symptoms may include: stiffness, cramp, and muscular tension. Nausea, fever, feeling of illness. You're loaded.


Lying down. Difficult to interact with other people and 'consensus reality' in general. You should really be somewhere safe.

Very strong hallucinations such as objects morphing into other objects. Tracers, lingering after-images, and visual echoes.

Intense depersonalization. Category enscramblement. The barriers between you and the universe begin to break down. Connection with everything around you. Experiencing contradictory feelings simultaneously. Some loss of reality. Time meaningless. Senses blend into one. Sensations of being born. Multiple splitting of the ego. Powerful awareness of mental processes and senses. Lengthy trances often featuring highly symbolic, often mythical visions when eyes are closed. Powerful, and sometimes brutal psycho-physical reactions described by users as reliving their own birth. Direct experience of group or collective consciousness, ancestral memories, recall of past-lives, and other mystical experiences. Ecstasy.

Music extremely powerful, perhaps overwhelming. Emotional sensitivity increased (often massively). Crying or laughing, or both simultaneously.

Tremors, twitches, twisting movements, sweating, chills, hot flushes - all common. You're essentially out of it.

A very rare experience. Total loss of visual connection with reality. The senses cease to function in the normal way. Total loss of self. Transcendental experiences of cosmic unity, merging with space, other objects, or the universe. Out of body experience. Ecstasy. "Entity contact". The loss of reality becomes so severe that it defies explanation. Pure white light. Difficult to put into words.

- The Varieties Of Psychedelic Experience, Robert Masters Ph.D & Jean Houston Ph.D (Park Street Press, 2000)

- Savage, C. "Lysergic acid diethylamide (LSD-25): a clinical-psychological study." Amer. J. Psychiat., 1952; 108:896.

Original post:

LSD - Psychedelic Effects - The Good Drugs Guide

Amrita – Wikipedia

Amrit (Sanskrit, IAST: amṛta) or Amata (Pali) is a word that literally means "immortality" and is often referred to in texts as nectar. Amṛta is etymologically related to the Greek ambrosia[1] and carries the same meaning.[2] The word's earliest occurrence is in the Rigveda, where it is one of several synonyms for soma, the drink which confers immortality upon the gods.

Amrit has varying significance in different Indian religions.

Amrit is also a common first name for Hindus; the feminine form is Amrita.

Amrit is repeatedly referred to as the drink of the devas which grants them immortality.

Amrit features in the samudra manthan legend, which describes how the devas, because of a curse from the sage Durvasa, begin to lose their immortality. Assisted by their mortal enemies, the asuras, they churn the ocean and release (among other auspicious objects and beings) amrit, the nectar of immortality.[3]

Amrit is sometimes said to miraculously form on, or flow from, statues of Hindu gods. The substance is consumed by worshippers and is alleged to be sweet-tasting and not at all similar to honey or sugar water.

Amrit was the last of the fourteen treasure jewels that emerged from the churning of the ocean, and was contained in a pot borne by Dhanvantari, the physician of the gods.

Amrit is the name of the holy water used in the baptism ceremony, or Amrit Sanchar, in Sikhism. This ceremony is observed to initiate the Sikhs into the Khalsa and requires drinking amrit. This is created by mixing a number of soluble ingredients, including sugar, and is then rolled with a khanda with the accompaniment of scriptural recitation of five sacred verses.

Metaphorically, God's name is also referred to as a nectar:

Amrit sabad amrit har bani (The Shabad is Amrit; the Lord's Bani is Amrit). Satgur seviai ridai samai (Serving the True Guru, it permeates the heart). Nanak amrit nam sad sukhdata pi amrit sabh bhukh lah javaia (O Nanak, the Ambrosial Naam is forever the Giver of peace; drinking in this Amrit, all hunger is satisfied).[4]

In Buddhism, according to Thanissaro Bhikkhu, "the deathless" refers to the deathless dimension of the mind, which is dwelled in permanently after nibbana.[5]

In the Amata Sutta, the Buddha advises monks to stay with the four Satipatthana: "Monks, remain with your minds well-established in these four establishings of mindfulness. Don't let the deathless be lost to you."[6]

In the questions for Nagasena, King Milinda asks for evidence that the Buddha once lived, wherein Nagasena describes evidence of the Dhamma in a simile:

"Revered Nagasena, what is the nectar shop of the Buddha, the Blessed One?"

"Nectar, sire, has been pointed out by the Blessed One. With this nectar the Blessed One sprinkles the world with the devas; when the devas and the humans have been sprinkled with this nectar, they are set free from birth, aging, disease, death, sorrow, lamentation, pain, grief and despair. What is this nectar? It is mindfulness occupied with the body. And this too, sire, was said by the Blessed One: 'Monks, they partake of nectar (the deathless) who partake of mindfulness that is occupied with the body.' This, sire, is called the Blessed One's nectar shop."

Miln 335[7]

Amrit (Wylie: bdud rtsi, THL: dütsi) also plays a significant role in Vajrayana Buddhism as a sacramental drink which is consumed at the beginning of all important rituals such as the abhisheka, ganachakra, and homa. In the Tibetan tradition, dütsi is made during drubchens - lengthy ceremonies involving many high lamas. It usually takes the form of small, dark-brown grains that are taken with water or dissolved in very weak solutions of alcohol, and is said to improve physical and spiritual well-being.[8]

The foundational text of traditional Tibetan medicine, the Four Tantras, is also known by the name The Heart of Amrita (Wylie: snying po bsdus pa).

The Immaculate Crystal Garland (Wylie: dri med zhal phreng) describes the origin of amrita in a version of the samudra manthan legend retold in Buddhist terms. In this Vajrayana version, the monster Rahu steals the amrita and is blasted by Vajrapani's thunderbolt. As Rahu has already drunk the amrita he cannot die, but his blood, dripping onto the surface of this earth, causes all kinds of medicinal plants to grow. At the behest of all the Buddhas, Vajrapani reassembles Rahu who eventually becomes a protector of Buddhism according to the Nyingma school of Tibetan Buddhism.

Chinese Buddhism describes Amrita (Chinese: 甘露; pinyin: gānlù) as blessed water, food, or other consumable objects often produced through merits of chanting mantras.

Read the rest here:

Amrita - Wikipedia

Alternative Medicine, Holistic Doctors, Naturopathic …

Advanced Health & Wellness
Dr. Geoffrey Channon Reed, Chiropractic Physician. Clinton, NJ. 908-735-8988. Services include chiropractic, rehabilitation & massage.

A Life in Balance Nutrition
Katie Vnenchak, Holistic Nutritionist and Meditation Coach. 1 Stangl Rd., Flemington, NJ 08822. 732-864-6063. alifeinbalancept.com. Weight loss, kids' nutrition, meditation, nutritional therapy.

Bellewood Wellness Center
Rt. 614, Pattenburg, NJ. Services include massage, yoga, reiki, acupuncture & more.

Creative Alternatives of NJ, LLC
Karolyn Saracino, BA, CMT. Califon, NJ. Craniosacral therapy, feng shui & integrative bodywork.

Divine Health, LLC
1390 Rt. 22 West #204, Lebanon, NJ. 908-236-8042. Whole food nutrition, health & wellness, and nutrition response testing.

Dr. Fuhrman's Medical Associates
4 Walter E. Foran Blvd., Flemington, NJ. Joel Fuhrman, M.D.; Jay Benson, D.O.; Kathleen Mullin, M.D.; Jyoti Matthews, M.D.; Michael Klaper, M.D. Continuing and comprehensive health care for adults and children. Dr. Fuhrman specializes in preventing and reversing disease through a nutrient-rich diet. He has also created The Nutritional Education Institute to provide education and training to those interested in pursuing nutritional science as a therapeutic intervention for disease reversal and prevention.

Eat Holistic, LLC
Kirstin Nussgruber, C.N.C., EMB. Holistic cancer-fighting nutritional consulting, with special attention given to cancer patients, cancer survivors and cancer prevention education. eatholistic@gmail.com. 908.512.2220.

Family Chiropractic Center
Dr. John Dowling, D.C. Flemington, NJ. 908-788-5050. Gentle low-force chiropractic.

This great little gadget will suppress the excess high-frequency electromagnetic fields (EMF) leaking into your home!

More here:

Alternative Medicine, Holistic Doctors,Naturopathic ...

Income inequality in the United States – Wikipedia

Income inequality in the United States has increased significantly since the 1970s after several decades of stability, meaning the share of the nation's income received by higher income households has increased. This trend is evident with income measured both before taxes (market income) as well as after taxes and transfer payments. Income inequality has fluctuated considerably since measurements began around 1915, moving in an arc between peaks in the 1920s and 2000s, with a 30-year period of relatively lower inequality between 1950 and 1980.[1][2]

Measured for all households, U.S. income inequality is comparable to other developed countries before taxes and transfers, but is among the highest after taxes and transfers, meaning the U.S. shifts relatively less income from higher income households to lower income households. Measured for working-age households, market income inequality is comparatively high (rather than moderate) and the level of redistribution is moderate (not low). These comparisons indicate that Americans shift from reliance on market income to reliance on income transfers later in life, and to a lesser degree, than households in other developed countries do.[2][3]

The U.S. ranks around the 30th percentile in income inequality globally, meaning 70% of countries have a more equal income distribution.[4] U.S. federal tax and transfer policies are progressive and therefore reduce income inequality measured after taxes and transfers.[5] Tax and transfer policies together reduced income inequality slightly more in 2011 than in 1979.[1]

While there is strong evidence that it has increased since the 1970s, there is active debate in the United States regarding the appropriate measurement, causes, effects and solutions to income inequality.[5] The two major political parties have different approaches to the issue, with Democrats historically emphasizing that economic growth should result in shared prosperity (i.e., a pro-labor argument advocating income redistribution), while Republicans tend to downplay the validity or feasibility of positively influencing the issue (i.e., a pro-capital argument against redistribution).[6]

U.S. income inequality has grown significantly since the early 1970s,[8][9][10][11][12][13] after several decades of stability,[14][15][16] and has been the subject of study of many scholars and institutions. The U.S. consistently exhibits higher rates of income inequality than most developed nations due to the nation's enhanced support of free market capitalism and less progressive spending on social services.[17][18][19][20][21]

The top 1% of income earners received approximately 20% of the pre-tax income in 2013,[7] versus approximately 10% from 1950 to 1980.[2][22][23] The top 1% is not homogeneous, with the very top income households pulling away from others in the top 1%. For example, the top 0.1% of households received approximately 10% of the pre-tax income in 2013, versus approximately 3–4% between 1951 and 1981.[7][24] According to IRS data, adjusted gross income (AGI) of approximately $430,000 was required to be in the top 1% in 2013.[25]

Most of the growth in income inequality has been between the middle class and top earners, with the disparity widening the further one goes up in the income distribution.[26] The bottom 50% earned 20% of the nation's pre-tax income in 1979; this fell steadily to 14% by 2007 and 13% by 2014. Income for the middle 40% group, a proxy for the middle class, fell from 45% in 1979 to 41% in both 2007 and 2014.[27]

To put this change into perspective, if the US had the same income distribution it had in 1979, each family in the bottom 80% of the income distribution would have $11,000 more per year in income on average, or $916 per month.[28] Half of the U.S. population lives in poverty or is low-income, according to U.S. Census data.[29]

The trend of rising income inequality is also apparent after taxes and transfers. A 2011 study by the CBO[30] found that the top-earning 1 percent of households increased their income by about 275% after federal taxes and income transfers between 1979 and 2007, compared to a gain of just under 40% for the 60 percent in the middle of America's income distribution.[30] U.S. federal tax and transfer policies are progressive and therefore substantially reduce income inequality measured after taxes and transfers. They became moderately less progressive between 1979 and 2007[5] but slightly more progressive measured between 1979 and 2011. Income transfers had a greater impact on reducing inequality than taxes from 1979 to 2011.[1]

Americans are not generally aware of the extent of inequality or recent trends.[31] There is a direct relationship between actual income inequality and the public's views about the need to address the issue in most developed countries, but not in the U.S., where income inequality is worse but the concern is lower.[32] The U.S. was ranked the 6th worst among 173 countries (4th percentile) on income equality measured by the Gini index.[33]
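For concreteness, the Gini index used in rankings like the one above can be computed from a list of household incomes as the mean absolute difference between all pairs of incomes, normalized by twice the mean. A minimal Python sketch, with made-up incomes rather than census data:

def gini(incomes):
    # 0 = perfect equality, 1 = maximal inequality.
    n = len(incomes)
    mean = sum(incomes) / n
    total_diff = sum(abs(x - y) for x in incomes for y in incomes)
    return total_diff / (2 * n * n * mean)

print(gini([30, 30, 30, 30, 30]))   # 0.0: everyone has the same income
print(gini([10, 20, 30, 50, 200]))  # ~0.53: income concentrated at the top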

There is significant and ongoing debate as to the causes, economic effects, and solutions regarding income inequality. While before-tax income inequality is subject to market factors (e.g., globalization, trade policy, labor policy, and international competition), after-tax income inequality can be directly affected by tax and transfer policy. U.S. income inequality is comparable to other developed nations before taxes and transfers, but is among the worst after taxes and transfers.[2][34] Income inequality may contribute to slower economic growth, reduced income mobility, higher levels of household debt, and greater risk of financial crises and deflation.[35][36]

Labor (workers) and capital (owners) have always battled over the share of the economic pie each obtains. The influence of the labor movement has waned in the U.S. since the 1960s along with union participation and more pro-capital laws.[22] The share of total worker compensation has declined from 58% of national income (GDP) in 1970 to nearly 53% in 2013, contributing to income inequality.[37] This has led to concerns that the economy has shifted too far in favor of capital, via a form of corporatism, corpocracy or neoliberalism.[38][39][40][41][42][43][44]

Although some have spoken out in favor of moderate inequality as a form of incentive,[45][46] others have warned against the current high levels of inequality, including Yale economist and Nobel laureate Robert J. Shiller (who called rising economic inequality "the most important problem that we are facing now today"),[47] former Federal Reserve Board chairman Alan Greenspan ("This is not the type of thing which a democratic society, a capitalist democratic society, can really accept without addressing"),[48] and President Barack Obama (who referred to the widening income gap as the "defining challenge of our time").[49]

The level of concentration of income in the United States has fluctuated throughout its history. Going back to the early 20th century, when income statistics started to become available, there has been a "great economic arc" from high inequality "to relative equality and back again," in the words of Nobel laureate economist Paul Krugman.[50] In 1915, an era in which the Rockefellers and Carnegies dominated American industry, the richest 1% of Americans earned roughly 18% of all income. By 2007, the top 1 percent accounted for 24% of all income.[51] In between, their share fell below 10% for three decades.

The first era of inequality lasted roughly from the post-Civil War era ("the Gilded Age") to sometime around 1937. But from about 1937 to 1947, a period that has been dubbed the "Great Compression",[52] income inequality in the United States fell dramatically. Highly progressive New Deal taxation, the strengthening of unions, and regulation by the National War Labor Board during World War II raised the income of the poor and working class and lowered that of top earners.[53] This "middle class society" of relatively low inequality remained fairly steady for about three decades, ending in the early 1970s,[14][52][54] the product of relatively high wages for the US working class and political support for income-leveling government policies.

Wages remained relatively high because of a lack of foreign competition for American manufacturing and strong trade unions. By 1947 more than a third of non-farm workers were union members,[55] and unions raised average wages for their membership and, indirectly and to a lesser extent, raised wages for workers in similar occupations not represented by unions.[56] Scholars believe political support for equalizing government policies was provided by high voter turnout from union voting drives, the support of the otherwise conservative South for the New Deal, and the prestige that the massive mobilization and victory of World War II had given the government.[57]

The return to high inequality, or what Krugman and journalist Timothy Noah have referred to as the "Great Divergence",[51] began in the 1970s. Studies have found income grew more unequal almost continuously except during the economic recessions of 1990–91, 2001 (the dot-com bubble), and the 2007 sub-prime bust.[58][59]

The Great Divergence differs in some ways from the pre-Depression era inequality. Before 1937, a larger share of top earners' income came from capital (interest, dividends, income from rent, capital gains). After 1970, income of high-income taxpayers has come predominantly from labor: employment compensation.[60]

Until 2011, the Great Divergence had not been a major political issue in America, but stagnation of middle-class income was. In 2009, the Obama administration convened the White House Middle Class Working Families Task Force to focus on economic issues specifically affecting middle-income Americans. In 2011, the Occupy movement drew considerable attention to income inequality in the country.

The CBO reported that for the 1979–2007 period, after-tax income of households in the top 1 percent of earners grew by 275%, compared to 65% for the next 19%, just under 40% for the next 60%, and 18% for the bottom fifth of households. "As a result of that uneven income growth," the report noted, "the share of total after-tax income received by the 1 percent of the population in households with the highest income more than doubled between 1979 and 2007, whereas the share received by low- and middle-income households declined.... The share of income received by the top 1 percent grew from about 8% in 1979 to over 17% in 2007. The share received by the other 19 percent of households in the highest income quintile (one fifth of the population as divided by income) was fairly flat over the same period, edging up from 35% to 36%."[5][61]

According to the CBO,[62] the major reason for the observed rise in unequal distribution of after-tax income was an increase in market income, that is, household income before taxes and transfers. Market income for a household is a combination of labor income (such as cash wages, employer-paid benefits, and employer-paid payroll taxes), business income (such as income from businesses and farms operated solely by their owners), capital gains (profits realized from the sale of assets and stock options), capital income (such as interest from deposits, dividends, and rental income), and other income. Of these, capital gains accounted for 80% of the increase in market income for households in the top 20% in the 2000–2007 period. Even over the 1991–2000 period, according to the CBO, capital gains accounted for 45% of the market income for the top 20% of households.
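As a toy illustration of that decomposition (the component names and dollar figures below are invented for illustration, not CBO data), market income is simply the sum of these sources:

# Hypothetical household, in dollars per year.
market_income = {
    "labor": 85_000,         # cash wages, employer-paid benefits, employer payroll taxes
    "business": 10_000,      # solely owned business or farm income
    "capital_gains": 5_000,  # realized profits from selling assets or stock options
    "capital": 3_000,        # interest, dividends, rental income
    "other": 1_000,
}
print(sum(market_income.values()))  # 104000; taxes and transfers apply afterwards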

In a July 2015 op-ed article, Martin Feldstein, Professor of Economics at Harvard University, stated that the CBO found that from 1980 to 2010 real median household income rose by 15%. However, when the definition of income was expanded to include benefits and to subtract taxes, the CBO found that the median household's real income rose by 45%. Adjusting for household size, the gain increased to 53%.[63]

Just as higher-income groups are more likely to enjoy financial gains when economic times are good, they are also likely to suffer more significant income losses during economic downturns and recessions when they are compared to lower income groups. Higher-income groups tend to derive relatively more of their income from more volatile sources related to capital income (business income, capital gains, and dividends), as opposed to labor income (wages and salaries). For example, in 2011 the top 1% of income earners derived 37% of their income from labor income, versus 62% for the middle quintile. On the other hand, the top 1% derived 58% of their income from capital as opposed to 4% for the middle quintile. Government transfers represented only 1% of the income of the top 1% but 25% for the middle quintile; the dollar amounts of these transfers tend to rise in recessions.[1]

This effect occurred during the Great Recession of 2007–2009, when total income going to the bottom 99 percent of Americans declined by 11.6%, but fell by 36.3% for the top 1%. Declines were especially steep for capital gains, which fell by 75% in real (inflation-adjusted) terms between 2007 and 2009. Other sources of capital income also fell: interest income by 40% and dividend income by 33%. Wages, the largest source of income, fell by a more modest 6%.

The share of pre-tax income received by the top 1% fell from 18.7% in 2007 to 16.0% in 2008 and 13.4% in 2009, while the bottom four quintiles all had their share of pre-tax income increase from 2007 to 2009.[64][65] The share of after-tax income received by the top 1% income group fell from 16.7% in 2007 to 11.5% in 2009.[1]

The distribution of household incomes has become more unequal during the post-2008 economic recovery as the effects of the recession reversed.[66][67][68] CBO reported in November 2014 that the share of pre-tax income received by the top 1% had risen from 13.3% in 2009 to 14.6% in 2011.[1] During 2012 alone, incomes of the wealthiest 1 percent rose nearly 20%, whereas the income of the remaining 99 percent rose 1% in comparison.[22]

"If the United States had the same income distribution it had in 1979, the bottom 80 percent of the population would have $1 trillion or $11,000 per family more. The top 1 percent would have $1 trillion or $750,000 less." - Larry Summers[69]
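Summers's per-family figures follow from simple division, assuming roughly 120 million U.S. households (my round number, not his):

households = 120e6
print(1e12 / (0.80 * households))  # ~ $10,400 per bottom-80% family, near the quoted $11,000
print(1e12 / (0.01 * households))  # ~ $833,000 per top-1% family, near the quoted $750,000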

According to an article in The New Yorker, by 2012, the share of pre-tax income received by the top 1% had returned to its pre-crisis peak, at around 23% of the pre-tax income.[2] This is based on widely cited data from economist Emmanuel Saez, which uses "market income" and relies primarily on IRS data.[67] The CBO uses both IRS data and Census data in its computations and reports a lower pre-tax figure for the top 1%.[1] The two series were approximately 5 percentage points apart in 2011 (Saez at about 19.7% versus CBO at 14.6%), which would imply a CBO figure of about 18% in 2012 if that relationship holds, a significant increase versus the 14.6% CBO reported for 2011. The share of after-tax income received by the top 1% rose from 11.5% in 2009 to 12.6% in 2011.[1]
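The implied 2012 figure in that paragraph is straightforward subtraction, assuming the gap between the Saez and CBO series stays constant:

saez_2011, cbo_2011 = 19.7, 14.6
gap = saez_2011 - cbo_2011  # ~ 5.1 percentage points between the two series in 2011
print(23.0 - gap)           # ~ 17.9: the roughly 18% CBO-basis estimate for 2012 cited above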

Inflation-adjusted pre-tax income for the bottom 90% of American families fell between 2010 and 2013, with the middle income groups dropping the most, about 6% for the 40th-60th percentiles and 7% for the 20th-40th percentiles. Incomes in the top decile rose 2%.[34]

The top 1% captured 91% of the real income growth per family during the 2009–2012 recovery period, with their pre-tax incomes growing 34.7% adjusted for inflation while the pre-tax incomes of the bottom 99% grew 0.8%. Measured over 2009–2015, the top 1% captured 52% of the total real income growth per family, indicating the recovery was becoming less "lopsided" in favor of higher income families. By 2015, the top 10% (top decile) had a 50.5% share of the pre-tax income, close to its highest-ever level.[70]

Tax increases on higher income earners were implemented in 2013 due to the Affordable Care Act and American Taxpayer Relief Act of 2012. CBO estimated that "average federal tax rates under 2013 law would be higher relative to tax rates in 2011 across the income spectrum. The estimated rates under 2013 law would still be well below the average rates from 1979 through 2011 for the bottom four income quintiles, slightly below the average rate over that period for households in the 81st through 99th percentiles, and well above the average rate over that period for households in the top 1 percent of the income distribution."[1] In 2016, the economists Peter H. Lindert and Jeffrey G. Williamson contended that inequality is the highest it has been since the nation's founding.[71] French economist Thomas Piketty attributed the victory of Donald Trump in the 2016 presidential election, which he characterizes as an "electoral upset," to "the explosion in economic and geographic inequality in the United States over several decades and the inability of successive governments to deal with this."[72]


According to the CBO and others, "the precise reasons for the [recent] rapid growth in income at the top are not well understood",[60][75] but "in all likelihood" an "interaction of multiple factors" was involved.[76] Researchers have offered several potential rationales, some of which conflict and some of which overlap.[60][77][78]

Paul Krugman put several of these factors into context in January 2015: "Competition from emerging-economy exports has surely been a factor depressing wages in wealthier nations, although probably not the dominant force. More important, soaring incomes at the top were achieved, in large part, by squeezing those below: by cutting wages, slashing benefits, crushing unions, and diverting a rising share of national resources to financial wheeling and dealing... Perhaps more important still, the wealthy exert a vastly disproportionate effect on policy. And elite priorities (obsessive concern with budget deficits, with the supposed need to slash social programs) have done a lot to deepen [wage stagnation and income inequality]."[92]

There is an ongoing debate as to the economic effects of income inequality. For example, Alan B. Krueger, President Obama's Chairman of the Council of Economic Advisers, summarized the conclusions of several research studies in a 2012 speech on what happens, in general, as income inequality worsens.

Among economists and related experts, many believe that America's growing income inequality is "deeply worrying",[48] unjust,[84] a danger to democracy/social stability,[96][97][98] or a sign of national decline.[99] Yale professor Robert Shiller, who was among three Americans who won the Nobel prize for economics in 2013, said after receiving the award, "The most important problem that we are facing now today, I think, is rising inequality in the United States and elsewhere in the world."[100] Economist Thomas Piketty, who has spent nearly 20 years studying inequality primarily in the US, warns that "The egalitarian pioneer ideal has faded into oblivion, and the New World may be on the verge of becoming the Old Europe of the twenty-first century's globalized economy."[101]

On the other side of the issue are those who have claimed that the increase is not significant,[102] that it doesn't matter[98] because America's economic growth and/or equality of opportunity are what's important,[103] that it is a global phenomenon which would be foolish to try to change through US domestic policy,[104] that it "has many economic benefits and is the result of ... a well-functioning economy",[105][106] that it has become or may become an excuse for "class-warfare rhetoric",[102] and that it may lead to policies that "reduce the well-being of wealthier individuals".[105][107]

Economist Alan B. Krueger wrote in 2012: "The rise in inequality in the United States over the last three decades has reached the point that inequality in incomes is causing an unhealthy division in opportunities, and is a threat to our economic growth. Restoring a greater degree of fairness to the U.S. job market would be good for businesses, good for the economy, and good for the country." Krueger wrote that the significant shift in the share of income accruing to the top 1% over the 1979 to 2007 period represented nearly $1.1 trillion in annual income. Since the wealthy tend to save nearly 50% of their marginal income while the remainder of the population saves roughly 10%, other things equal this would reduce annual consumption (the largest component of GDP) by as much as 5%. Krueger wrote that borrowing likely helped many households make up for this shift, which became more difficult in the wake of the 2007–2009 recession.[95]
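Krueger's consumption estimate follows from the savings-rate gap. A back-of-the-envelope sketch; the total-consumption figure is my rough assumption, not Krueger's:

income_shift = 1.1e12      # annual income shifted to the top 1% (per Krueger)
savings_gap = 0.50 - 0.10  # wealthy save ~50% of marginal income, others ~10%
lost_consumption = income_shift * savings_gap  # ~ $440 billion less consumption per year
us_consumption = 10e12     # rough annual U.S. consumption around 2012 (assumption)
print(lost_consumption / us_consumption)       # ~ 0.044, on the order of the ~5% cited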

Inequality in land and income ownership is negatively correlated with subsequent economic growth. A strong demand for redistribution will occur in societies where a large section of the population does not have access to the productive resources of the economy. Rational voters must internalize such issues.[108] High unemployment rates have a significant negative effect when interacting with increases in inequality. Increasing inequality harms growth in countries with high levels of urbanization. High and persistent unemployment also has a negative effect on subsequent long-run economic growth. Unemployment may seriously harm growth because it is a waste of resources, because it generates redistributive pressures and distortions, because it depreciates existing human capital and deters its accumulation, because it drives people to poverty, because it results in liquidity constraints that limit labor mobility, and because it erodes individual self-esteem and promotes social dislocation, unrest and conflict. Policies to control unemployment and reduce its inequality-associated effects can strengthen long-run growth.[109]

Concern extends even to supporters (or former supporters) of laissez-faire economics and private-sector financiers. Former Federal Reserve Board chairman Alan Greenspan has stated, in reference to growing inequality: "This is not the type of thing which a democratic society, a capitalist democratic society, can really accept without addressing."[48] Some economists (David Moss, Paul Krugman, Raghuram Rajan) believe the "Great Divergence" may be connected to the financial crisis of 2008.[105][110] Money manager William H. Gross, former managing director of PIMCO, criticized the shift in distribution of income from labor to capital that underlies some of the growth in inequality as unsustainable, saying:

Even conservatives must acknowledge that return on capital investment, and the liquid stocks and bonds that mimic it, are ultimately dependent on returns to labor in the form of jobs and real wage gains. If Main Street is unemployed and undercompensated, capital can only travel so far down Prosperity Road.

He concluded: "Investors/policymakers of the world, wake up: you're killing the proletariat goose that lays your golden eggs."[111][112]

Among the economists and reports finding that inequality harms economic growth are a December 2013 Associated Press survey of three dozen economists,[114] a 2014 report by Standard and Poor's,[115] and economists Gar Alperovitz, Robert Reich, Joseph Stiglitz, and Branko Milanovic.

A December 2013 Associated Press survey of three dozen economists found that the majority believe that widening income disparity is harming the US economy. They argue that wealthy Americans are receiving higher pay, but they spend less per dollar earned than middle class consumers, the majority of the population, whose incomes have largely stagnated.[114]

A 2014 report by Standard and Poor's concluded that diverging income inequality has slowed the economic recovery and could contribute to boom-and-bust cycles in the future as more and more Americans take on debt in order to consume. Higher levels of income inequality increase political pressures, discouraging trade, investment, hiring, and social mobility according to the report.[115]

Economists Gar Alperovitz and Robert Reich argue that too much concentration of wealth prevents there being sufficient purchasing power to make the rest of the economy function effectively.[116][117]

Joseph Stiglitz argues that concentration of wealth and income leads the politically powerful economic elite to seek to protect themselves from redistributive policies by weakening the state, and this leads to less public investment by the state in roads, technology, education, and other public goods that are essential for economic growth.[118][119]

According to economist Branko Milanovic, while traditionally economists thought inequality was good for growth, "The view that income inequality harms growth or that improved equality can help sustain growth has become more widely held in recent years. The main reason for this shift is the increasing importance of human capital in development. When physical capital mattered most, savings and investments were key. Then it was important to have a large contingent of rich people who could save a greater proportion of their income than the poor and invest it in physical capital. But now that human capital is scarcer than machines, widespread education has become the secret to growth." He continued that "Broadly accessible education" is both difficult to achieve when income distribution is uneven and tends to reduce "income gaps between skilled and unskilled labor."[120]

Robert Gordon wrote that such issues as "rising inequality; factor price equalization stemming from the interplay between globalization and the Internet; the twin educational problems of cost inflation in higher education and poor secondary student performance; the consequences of environmental regulations and taxes..." make economic growth harder to achieve than in the past.[121]

In response to the Occupy movement, Richard A. Epstein defended inequality in a free market society, maintaining that "taxing the top one percent even more means less wealth and fewer jobs for the rest of us." According to Epstein, "the inequalities in wealth ... pay for themselves by the vast increases in wealth", while "forced transfers of wealth through taxation ... will destroy the pools of wealth that are needed to generate new ventures."[122] Some researchers have found a connection between lowering high marginal tax rates on high income earners (a common measure against inequality) and higher rates of employment growth.[123][124] Government free-market strategy also plays a role: there has been a failure in the US political system to counterbalance the rise in the unequal distribution of income among citizens.[125]

Economic sociologist Lane Kenworthy has found no correlation between levels of inequality and economic growth among developed countries, among states of the US, or in the US over the years from 1947 to 2005.[126] Jared Bernstein found a nuanced relation he summed up as follows: "In sum, I'd consider the question of the extent to which higher inequality lowers growth to be an open one, worthy of much deeper research".[127] Tim Worstall commented that capitalism would not seem to contribute to an inherited-wealth stagnation and consolidation, but instead appears to promote the opposite: a vigorous, ongoing turnover and creation of new wealth.[128][129]

Income inequality was cited as one of the causes of the Great Depression by Supreme Court Justice Louis D. Brandeis in 1933. In his dissent in the Louis K. Liggett Co. v. Lee (288 U.S. 517) case, he wrote: "Other writers have shown that, coincident with the growth of these giant corporations, there has occurred a marked concentration of individual wealth; and that the resulting disparity in incomes is a major cause of the existing depression."[130]

Economist Raghuram Rajan, a former chief economist of the IMF, argues that "systematic economic inequalities, within the United States and around the world, have created deep financial 'fault lines' that have made [financial] crises more likely to happen than in the past", the financial crisis of 2007–08 being the most recent example.[131] To compensate for stagnating and declining purchasing power, political pressure has developed to extend easier credit to lower and middle income earners, particularly to buy homes, and easier credit in general to keep unemployment rates low. This has given the American economy a tendency to go "from bubble to bubble" fueled by unsustainable monetary stimulation.[132]

Greater income inequality can lead to monopolization of the labor force, resulting in fewer employers requiring fewer workers.[133][134] Remaining employers can consolidate and take advantage of the relative lack of competition, leading to less consumer choice, market abuses, and relatively higher prices.[109][134]

Income inequality lowers aggregate demand, leading to increasingly large segments of formerly middle class consumers unable to afford as many luxury and essential goods and services.[133] This pushes production and overall employment down.[109]

Deep debt may lead to bankruptcy; researchers Elizabeth Warren and Amelia Warren Tyagi found a fivefold increase in the number of families filing for bankruptcy between 1980 and 2005.[135] The bankruptcies came not from increased spending "on luxuries" but from "increased spending on housing, largely driven by competition to get into good school districts." Intensifying inequality may mean a dwindling number of ever more expensive school districts that compel middle-class or would-be middle-class families to "buy houses they can't really afford, taking on more mortgage debt than they can safely handle".[136]

The ability to move from one income group into another (income mobility) is a means of measuring economic opportunity. A higher probability of upward income mobility theoretically would help mitigate higher income inequality, as each generation has a better chance of achieving higher income groups. Conservatives and libertarians such as economist Thomas Sowell, and Congressman Paul Ryan (R., Wisc.)[137] argue that more important than the level of equality of results is America's equality of opportunity, especially relative to other developed countries such as western Europe.

Nonetheless, results from various studies suggest that endogenous regulations and other institutional rules have distinct effects on income inequality. One study examines the effects of institutional change on age-based labor market inequalities in Europe, focusing on wage-setting institutions and the adult male population. According to the study, there is evidence that unemployment protection and temporary-work regulation affect the dynamics of age-based inequality, with positive employment effects for all individuals depending on the strength of unions. Even though the European Union enjoys a favorable economic context with prospects for growth and development, it is also very fragile.[138]

However, several studies have indicated that higher income inequality corresponds with lower income mobility. In other words, income brackets tend to be increasingly "sticky" as income inequality increases. This is described by a concept called the Great Gatsby curve.[95][139] In the words of journalist Timothy Noah, "you can't really experience ever-growing income inequality without experiencing a decline in Horatio Alger-style upward mobility because (to use a frequently-employed metaphor) it's harder to climb a ladder when the rungs are farther apart."[48]

The centrist Brookings Institution said in March 2013 that income inequality was increasing and becoming permanent, sharply reducing social mobility in the US.[140] A 2007 study by Kopczuk, Saez and Song found the top population in the United States "very stable" and that income mobility had "not mitigated the dramatic increase in annual earnings concentration since the 1970s."[139]

Economist Paul Krugman attacks conservatives for resorting to an "extraordinary series of attempts at statistical distortion". He argues that while in any given year some of the people with low incomes will be "workers on temporary layoff, small businessmen taking writeoffs, farmers hit by bad weather", the rise in their income in succeeding years is not the same 'mobility' as poor people rising to the middle class or middle income rising to wealth. It's the mobility of "the guy who works in the college bookstore and has a real job by his early thirties."

Studies by the Urban Institute and the US Treasury have both found that about half of the families who start in either the top or the bottom quintile of the income distribution are still there after a decade, and that only 3 to 6% rise from bottom to top or fall from top to bottom.[141]

On the issue of whether most Americans do not stay put in any one income bracket, Krugman quotes from the CBO's 2011 distribution-of-income study:

Household income measured over a multi-year period is more equally distributed than income measured over one year, although only modestly so. Given the fairly substantial movement of households across income groups over time, it might seem that income measured over a number of years should be significantly more equally distributed than income measured over one year. However, much of the movement of households involves changes in income that are large enough to push households into different income groups but not large enough to greatly affect the overall distribution of income. Multi-year income measures also show the same pattern of increasing inequality over time as is observed in annual measures.[30]

In other words, "many people who have incomes greater than $1 million one year fall out of the category the next year but that's typically because their income fell from, say, $1.05 million to $0.95 million, not because they went back to being middle class."[30][142]

Several studies have found that the ability of children from poor or middle-class families to rise to upper income, known as "upward relative intergenerational mobility", is lower in the US than in other developed countries,[143] and at least two economists have found lower mobility linked to income inequality.[48][144]

In their Great Gatsby curve,[144] White House Council of Economic Advisers Chairman Alan B. Krueger and labor economist Miles Corak show a negative correlation between inequality and social mobility. The curve plots "intergenerational income elasticity" (i.e., the likelihood that someone will inherit their parents' relative position on the income ladder) against inequality for a number of countries.[48][145]
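Intergenerational income elasticity is conventionally estimated as the slope of a regression of log child income on log parent income. A minimal sketch with invented data (illustrative values, not Corak's):

import math

# (parent income, child income) pairs; made-up numbers for illustration only.
pairs = [(30_000, 45_000), (50_000, 55_000), (80_000, 70_000), (120_000, 85_000)]
xs = [math.log(p) for p, _ in pairs]
ys = [math.log(c) for _, c in pairs]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
# Ordinary least squares slope: the intergenerational income elasticity.
beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(round(beta, 2))  # ~ 0.46 here; 0 would mean full mobility, 1 full inheritance of position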

Aside from the proverbial distant rungs, the connection between income inequality and low mobility can be explained by the lack of access for less affluent children to better (more expensive) schools and to the preparation crucial to finding high-paying jobs, and by the lack of health care that may lead to obesity and diabetes and limit education and employment.[143]

Krueger estimates that "the persistence in the advantages and disadvantages of income passed from parents to the children" will "rise by about a quarter for the next generation as a result of the rise in inequality that the U.S. has seen in the last 25 years."[48]

Greater income inequality can increase the poverty rate, as more income shifts away from lower income brackets to upper income brackets. Jared Bernstein wrote: "If less of the economy's market-generated growth (i.e., before taxes and transfers kick in) ends up in the lower reaches of the income scale, either there will be more poverty for any given level of GDP growth, or there will have to be a lot more transfers to offset inequality's poverty-inducing impact." The Economic Policy Institute estimated that greater income inequality would have added 5.5% to the poverty rate between 1979 and 2007, other factors equal. Income inequality was the largest driver of the change in the poverty rate, with economic growth, family structure, education and race being other important factors.[146][147] An estimated 16% of Americans lived in poverty in 2012, versus 26% in 1967.[148]

A rise in income disparities also weakens skills development among people with poor educational backgrounds, in terms of both the quantity and the quality of education attained; those with low levels of expertise may come to consider themselves unworthy of high positions and pay.[149]

Lisa Shalett, chief investment officer at Merrill Lynch Wealth Management, noted that "for the last two decades and especially in the current period, ... productivity soared ... [but] U.S. real average hourly earnings are essentially flat to down, with today's inflation-adjusted wage equating to about the same level as that attained by workers in 1970. ... So where have the benefits of technology-driven productivity cycle gone? Almost exclusively to corporations and their very top executives."[150] Beyond the technological side, the effect also stems from perceived unfairness and people's reduced trust in the state. A study by Kristal and Cohen showed that rising wage inequality reflects a contest between institutions and technology. Technological change, with the computerization of the workplace, appears to give the upper hand to high-skilled workers and is a primary cause of inequality in America: the highly qualified are consistently better positioned than those doing manual work, leading to displacement and an unequal distribution of resources.[151]

Economist Timothy Smeeding summed up the current trend:[152]

Americans have the highest income inequality in the rich world and over the past 20 to 30 years Americans have also experienced the greatest increase in income inequality among rich nations. The more detailed the data we can use to observe this change, the more skewed the change appears to be ... the majority of large gains are indeed at the top of the distribution.

According to Janet L. Yellen, chair of the Federal Reserve,

...from 1973 to 2005, real hourly wages of those in the 90th percentile, where most people have college or advanced degrees, rose by 30% or more... among this top 10 percent, the growth was heavily concentrated at the very tip of the top, that is, the top 1 percent. This includes the people who earn the very highest salaries in the U.S. economy, like sports and entertainment stars, investment bankers and venture capitalists, corporate attorneys, and CEOs. In contrast, at the 50th percentile and below, where many people have at most a high school diploma, real wages rose by only 5 to 10%.[77]

Economists Jared Bernstein and Paul Krugman have attacked the concentration of income as variously "unsustainable"[97] and "incompatible"[98] with real democracy. American political scientists Jacob S. Hacker and Paul Pierson quote a warning by the Greco-Roman historian Plutarch: "An imbalance between rich and poor is the oldest and most fatal ailment of all republics."[96] Some academic researchers have written that the US political system risks drifting towards a form of oligarchy, through the influence of corporations, the wealthy, and other special interest groups.[153][154]

Rising income inequality has been linked to the political polarization in Washington DC.[155] According to a 2013 study published in the Political Research Quarterly, elected officials tend to be more responsive to the upper income bracket and ignore lower income groups.[156]

Paul Krugman wrote in November 2014 that: "The basic story of political polarization over the past few decades is that, as a wealthy minority has pulled away economically from the rest of the country, it has pulled one major party along with it... Any policy that benefits lower- and middle-income Americans at the expense of the elite (like health reform, which guarantees insurance to all and pays for that guarantee in part with taxes on higher incomes) will face bitter Republican opposition." He used environmental protection as another example, which was not a partisan issue in the 1990s but has since become one.[157]

As income inequality has increased, the degree of House of Representatives polarization, measured by voting record, has also increased. Voting is dominated by, and responsive to, the rich, making equal income and resource distribution for the average population hard to achieve (Bonica et al., 2013); and as wealth and real income rise at the top, fewer of the influential turn to government insurance. The rich have also exerted increasing influence on the regulatory, legislative and electoral processes.[158] Professors McCarty, Poole and Rosenthal wrote in 2007 that polarization and income inequality fell in tandem from 1913 to 1957 and rose together dramatically from 1977 on. They show that Republicans have moved politically to the right, away from redistributive policies that would reduce income inequality. Polarization thus creates a feedback loop, worsening inequality.[159]

Several economists and political scientists have argued that economic inequality translates into political inequality, particularly in situations where politicians have financial incentives to respond to special interest groups and lobbyists. Researchers such as Larry Bartels of Vanderbilt University have shown that politicians are significantly more responsive to the political opinions of the wealthy, even when controlling for a range of variables including educational attainment and political knowledge.[161][162]

Historically, discussions of income inequality and capital vs. labor debates have sometimes included the language of class warfare, from President Theodore Roosevelt (referring to the leaders of big corporations as "malefactors of great wealth"), to President Franklin Roosevelt ("economic royalists...are unanimous in their hate for me--and I welcome their hatred"), to the more recent "1% versus the 99%" issue and the question of which political party better represents the interests of the middle class.[163]

Investor Warren Buffett said in 2006 that: "There's class warfare, all right, but it's my class, the rich class, that's making war, and we're winning." He advocated much higher taxes on the wealthiest Americans, who pay lower effective tax rates than many middle-class persons.[164]

Two journalists concerned about social separation in the US are Robert Frank and George Packer. Frank notes that: "Today's rich had formed their own virtual country ... [T]hey had built a self-contained world unto themselves, complete with their own health-care system (concierge doctors), travel network (NetJets, destination clubs), separate economy... The rich weren't just getting richer; they were becoming financial foreigners, creating their own country within a country, their own society within a society, and their economy within an economy."[165]

George Packer wrote that "Inequality hardens society into a class system ... Inequality divides us from one another in schools, in neighborhoods, at work, on airplanes, in hospitals, in what we eat, in the condition of our bodies, in what we think, in our children's futures, in how we die. Inequality makes it harder to imagine the lives of others."[99]

These class divisions affect politics as well. Through lobbying and campaign contributions, the rich have gained increased influence over the regulatory, legislative and electoral processes, giving them opportunities to amass further wealth.[166]

Loss of income by the middle class relative to the top-earning 1% and 0.1% is both a cause and effect of political change, according to journalist Hedrick Smith. In the decade starting around 2000, business groups employed 30 times as many Washington lobbyists as trade unions and 16 times as many lobbyists as labor, consumer, and public interest lobbyists combined.[167]

From 1998 through 2010 business interests and trade groups spent $28.6 billion on lobbying compared with $492 million for labor, nearly a 60-to-1 business advantage.[168]
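That ratio checks out directly:

business_lobbying, labor_lobbying = 28.6e9, 492e6
print(business_lobbying / labor_lobbying)  # ~ 58.1, the "nearly 60-to-1" advantage cited above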

The result, according to Smith, is a political landscape dominated in the 1990s and 2000s by business groups, specifically "political insiders": former members of Congress and government officials with an inside track working for "Wall Street banks, the oil, defense, and pharmaceutical industries; and business trade associations." In the decade or so prior to the Great Divergence, middle-class-dominated reformist grassroots efforts such as the civil rights movement, the environmental movement, the consumer movement, and the labor movement had considerable political impact.[167]

"We haven't achieved the minimalist state that libertarians advocate. What we've achieved is a state too constrained to provide the public goods investments in infrastructure, technology, and education that would make for a vibrant economy and too weak to engage in the redistribution that is needed to create a fair society. But we have a state that is still large enough and distorted enough that it can provide a bounty of gifts to the wealthy."

Economist Joseph Stiglitz argues that hyper-inequality may explain political questions such as why America's infrastructure (and other public investments) is deteriorating,[170] or the country's recent relative lack of reluctance to engage in military conflicts such as the 2003 invasion of Iraq. Top-earning families, wealthy enough to buy their own education, medical care, personal security, and parks, have little interest in helping pay for such things for the rest of society, and have the political influence to make sure they don't have to. So too, the lack of personal or family sacrifice involved for top earners in their country's military interventions (their children being few and far between in the relatively low-paying all-volunteer military) may mean more willingness by the influential wealthy to see their government wage war.[171]

Economist Branko Milanovic argued that globalization and the related competition with cheaper labor from Asia and immigrants have caused U.S. middle-class wages to stagnate, fueling the rise of populist political candidates such as Donald Trump.[172]

The relatively high rates of health and social problems (obesity, mental illness, homicides, teenage births, incarceration, child conflict, drug use) and lower rates of social goods (life expectancy, educational performance, trust among strangers, women's status, social mobility, even numbers of patents issued per capita) in the US compared to other developed countries may be related to its high income inequality. Using statistics from 23 developed countries and the 50 states of the US, British researchers Richard G. Wilkinson and Kate Pickett have found such a correlation, which remains after accounting for ethnicity,[173] national culture,[174] and occupational classes or education levels.[175] Their findings, based on UN Human Development Reports and other sources, locate the United States at the top of the list with regard to inequality and various social and health problems among developed countries.[176] The authors argue inequality creates psychosocial stress and status anxiety that lead to social ills.[177] A 2009 study conducted by researchers at Harvard University and published in the British Medical Journal attributes one in three deaths in the United States to high levels of inequality.[178] According to The Earth Institute, life satisfaction in the US has been declining over the last several decades, which has been attributed to soaring inequality, lack of social trust and loss of faith in government.[179]

A 2015 study by Princeton University researchers Angus Deaton and Anne Case claims that income inequality could be a driving factor in a marked increase in deaths among white males between the ages of 45 and 54 in the period 1999 to 2013.[180][181]

Paul Krugman argues that the much lamented long-term funding problems of Social Security and Medicare can be blamed in part on the growth in inequality as well as the usual culprits like longer life expectancies. The traditional source of funding for these social welfare programs, payroll taxes, is inadequate because it does not capture income from capital or income above the payroll tax cap, which make up a larger and larger share of national income as inequality increases.[182]

Upward redistribution of income is responsible for about 43% of the projected Social Security shortfall over the next 75 years.[183]

Disagreeing with this focus on the top-earning 1%, and urging attention to the economic and social pathologies of lower-income/lower education Americans, is conservative[184] journalist David Brooks. Whereas in the 1970s, high school and college graduates had "very similar family structures", today, high school grads are much less likely to get married and be active in their communities, and much more likely to smoke, be obese, get divorced, or have "a child out of wedlock."[185]

The zooming wealth of the top one percent is a problem, but it's not nearly as big a problem as the tens of millions of Americans who have dropped out of high school or college. It's not nearly as big a problem as the 40 percent of children who are born out of wedlock. It's not nearly as big a problem as the nation's stagnant human capital, its stagnant social mobility and the disorganized social fabric for the bottom 50 percent.[185][186]

Contradicting most of these arguments, classical liberals such as Friedrich Hayek have maintained that because individuals are diverse and different, state intervention to redistribute income is inevitably arbitrary and incompatible with the concept of general rules of law, and that "what is called 'social' or 'distributive' justice is indeed meaningless within a spontaneous order". Those who would use the state to redistribute "take freedom for granted and ignore the preconditions necessary for its survival."[187][188]

The growth of inequality has provoked a political protest movement, the Occupy movement, which started on Wall Street and spread to 600 communities across the United States in 2011. Its main political slogan, "We are the 99%", references its dissatisfaction with the concentration of income in the top 1%.

Rationalism versus Empiricism – dummies.com

The history of philosophy has seen many warring camps fighting battles over some major issue or other. One of the major battles historically has been over the foundations of all our knowledge. What is most basic in any human set of beliefs? What are our ultimate starting points for any world view? Where does human knowledge ultimately come from?

Empiricists have always claimed that sense experience is the ultimate starting point for all our knowledge. The senses, they maintain, give us all our raw data about the world, and without this raw material, there would be no knowledge at all. Perception starts a process, and from this process come all our beliefs. In its purest form, empiricism holds that sense experience alone gives birth to all our beliefs and all our knowledge. A classic example of an empiricist is the British philosopher John Locke (1632–1704).

It's easy to see how empiricism has been able to win over many converts. Think about it for a second. It's interestingly difficult to identify a single belief that you have that didn't come your way by means of some sense experience: sight, hearing, touch, smell, or taste. It's natural, then, to come to believe that the senses are the sole source and ultimate grounding of belief.

But not all philosophers have been convinced that the senses fly solo when it comes to producing belief. We seem to have some beliefs that cannot be read off sense experience, or proved from any perception that we might be able to have. Because of this, there historically has been a warring camp of philosophers who give a different answer to the question of where our beliefs ultimately do, or should, come from.

Rationalists have claimed that the ultimate starting point for all knowledge is not the senses but reason. They maintain that without prior categories and principles supplied by reason, we couldn't organize and interpret our sense experience in any way. We would be faced with just one huge, undifferentiated, kaleidoscopic whirl of sensation, signifying nothing. Rationalism in its purest form goes so far as to hold that all our rational beliefs, and the entirety of human knowledge, consists in first principles and innate concepts (concepts that we are just born having) that are somehow generated and certified by reason, along with anything logically deducible from these first principles.

How can reason supply any mental category or first principle at all? Some rationalists have claimed that we are born with several fundamental concepts or categories in our minds ready for use. These give us what the rationalists call innate knowledge. Examples might be certain categories of space, of time, and of cause and effect.

We naturally think in terms of cause and effect. And this helps organize our experience of the world. We think of ourselves as seeing some things cause other things to happen, but in terms of our raw sense experience, we just see certain things happen before other things, and remember having seen such before-and-after sequences at earlier times. For example, a rock hits a window, and then the window breaks. We dont see a third thing called causation. But we believe it has happened. The rock hitting the window caused it to break. But this is not experienced like the flight of the rock or the shattering of the glass. Experience does not seem to force the concept of causation on us. We just use it to interpret what we experience. Cause and effect are categories that could never be read out of our experience and must therefore be brought to that experience by our prior mental disposition to attribute such a connection. This is the rationalist perspective.

Rationalist philosophers have claimed that at the foundations of our knowledge are propositions that are self-evident, or self-evidently true. A self-evident proposition has the strange property of being such that, on merely understanding what it says, and without any further checking or special evidence of any kind, we can just intellectually see that it is true. Examples might be such propositions as the elementary truths of logic and arithmetic.

The claim is that, once these statements are understood, it takes no further sense experience whatsoever to see that they are true.

Descartes was a thinker who used skeptical doubt as a prelude to constructing a rationalist philosophy. He was convinced that all our beliefs that are founded on the experience of the external senses could be called into doubt, but that with certain self-evident beliefs, like I am thinking, there is no room for creating and sustaining a reasonable doubt. Descartes then tried to find enough other first principles utterly immune to rational doubt that he could provide an indubitable, rational basis for all other legitimate beliefs.

Most philosophers do not believe that Descartes succeeded. But it was worth a try. Rationalism has remained a seductive idea for individuals attracted to mathematics and to the beauties of unified theory, but it has never been made to work as a practical matter.

Logic: Rationalism vs. Empiricism – Theology

D. Rationalism vs. Empiricism

Theories of knowledge divide naturally, theoretically, and historically into the two rival schools of rationalism and empiricism. Neither rationalism nor empiricism disregards the primary tool of the other school entirely. The issue revolves around beliefs about necessary knowledge and empirical knowledge.

1. Rationalism

Rationalism believes that some ideas or concepts are independent of experience and that some truth is known by reason alone.

a. a priori

This is necessary knowledge, not given in nor dependent upon experience; it is necessarily true by definition. For instance, "black cats are black." This is an analytic statement and, broadly, a tautology; its denial would be self-contradictory.

2. Empiricism

Empiricism believes that no ideas or concepts are independent of experience and that truth must be established by reference to experience alone.

b. a posteriori

This is knowledge that comes after, or is dependent upon, experience. For instance, "Desks are brown" is a synthetic statement. Unlike the analytic statement "Black cats are black," the synthetic statement "Desks are brown" is not necessarily true (unless all desks are by definition brown), and to deny it would not be self-contradictory. We would probably refer the matter to experience.

Since knowledge depends primarily on synthetic statements -- statements that may be true or may be false -- their nature and status are crucial to theories of knowledge. The controversial issue is the possibility of synthetic necessary knowledge -- that is, the possibility of having genuine knowledge of the world without the need to rely on experience. Consider these statements:

1) The sum of the angles of a triangle is 180 degrees.

2) Parallel lines never meet.

3) A whole is the sum of all its parts.

Rationalism may hold these to be synthetic necessary statements, universally true, and genuine knowledge; i.e., they are not merely empty like the analytic or tautologous statements ("Black cats are black") and are not dependent on experience for their truth value.

Empiricism denies that these statements are synthetic and necessary. Strict empiricism asserts that all such statements only appear to be necessary or a priori. Actually, they derive from experience.

Logical empiricism admits that these statements are necessary, but only because they are not really synthetic statements but analytic statements, which are true by definition alone and do not give us genuine knowledge of the world.

GENUINE KNOWLEDGE

Rationalism includes in genuine knowledge synthetic necessary statements (or, if this term is rejected, then those analytic necessary statements that "reveal reality" in terms of universally necessary truth; e.g., "An entity is what it is and not something else.")

Empiricism limits genuine knowledge to empirical statements. Necessary statements are empty (that is, they tell us nothing of the world).

Logical empiricism admits as genuine knowledge only analytic necessary statements ("Black cats are black") or synthetic empirical statements ("Desks are brown"). But the analytic necessary statements, or laws of logic and mathematics, derive from arbitrary rules of usage, definitions, and the like, and therefore reveal nothing about reality. (This is the antimetaphysical point of view.)

Rationalism vs. Empiricism Essay – 797 Words – StudyMode

In philosophy, there are two main positions about the source of all knowledge. These positions are called rationalism and empiricism. Rationalists believe that all knowledge is "innate," or is there when one is born, and that learning comes from intuition. On the other hand, empiricists believe that all knowledge comes from direct sense experience. In this essay, I will further explain each position, its strengths and weaknesses, and how Kant discovered that there is an alternative to these positions. The thesis I defend in this essay is that knowledge can come from both positions.

According to Rationalists (such as Descartes), all knowledge must come from the mind. Rationalism is concerned with absolute truths that are universal (such as logic and mathematics), which is one of the strengths of this position. Its weakness lies in the fact that it is difficult to apply rationalism to particulars (which are everywhere in our daily life!) because it is of such an abstract nature.

According to Empiricists, such as John Locke, all knowledge comes from direct sense experience. Locke's concept of knowledge comes from his belief that the mind is a "blank slate," or tabula rasa, at birth, and our experiences are written upon the slate. Therefore, there are no innate ideas. The strength of the empiricist position is that it is best at explaining particulars, which we encounter on a daily basis. The weakness of this position is that one cannot have direct experiences of general concepts, since we only experience particulars.

Noticing that rationalism and empiricism have opposing strengths and weaknesses, Kant attempted to bring the best of both positions together. In doing so he came up with a whole new position, which I will soon explain.

Kant claimed that there are 3 types of knowledge. The first type of knowledge he called "a priori", which means prior to experience. This knowledge corresponds to rationalist thinking, in that it holds knowledge to be...

Donald Trump pledges to sign anti-LGBTQ First Amendment …

Donald Trump has been courting the LGBTQ vote throughout this presidential election, claiming he would be the better choice for the community than opponent Hillary Clinton and promising to protect us from terrorism in his Republican National Convention speech.

That argument gets harder to believe by the week, as he gives speeches at anti-LGBTQ events, sticks up for homophobic and transphobic legislation, and surrounds himself with bigoted politicians and advisers. Now we have a new offense to add to the list.

Trump has pledged to sign the First Amendment Defense Act (FADA) if it is passed by Congress. The bill was first introduced in the House on June 17, 2015, and would effectively legalize anti-LGBTQ discrimination across the board, including among employers, businesses, landlords, and healthcare providers, as long as they claim to be motivated by firmly held religious beliefs.

It would act to overturn the executive order signed in 2014 by President Obama prohibiting anti-LGBTQ discrimination among federal contractors.

The statement, added to Trump's website on Thursday under the title "Issues Of Importance To Catholics" and the subtitle "Religious Liberty," reads:

Religious liberty is enshrined in the First Amendment to the Constitution. It is our first liberty and provides the most important protection in that it protects our right of conscience. Activist judges and executive orders issued by Presidents who have no regard for the Constitution have put these protections in jeopardy. If I am elected president and Congress passes the First Amendment Defense Act, I will sign it to protect the deeply held religious beliefs of Catholics and the beliefs of Americans of all faiths. The Little Sisters of the Poor, or any religious order for that matter, will always have their religious liberty protected on my watch and will not have to face bullying from the government because of their religious beliefs.

FADA's text reads:

Prohibits the federal government from taking discriminatory action against a person on the basis that such person believes or acts in accordance with a religious belief or moral conviction that: (1) marriage is or should be recognized as the union of one man and one woman, or (2) sexual relations are properly reserved to such a marriage.

Defines discriminatory action as any federal government action to discriminate against a person with such beliefs or convictions, including a federal government action to:

Requires the federal government to consider to be accredited, licensed, or certified for purposes of federal law any person who would be accredited, licensed, or certified for such purposes but for a determination that the person believes or acts in accordance with such a religious belief or moral conviction.

Permits a person to assert an actual or threatened violation of this Act as a claim or defense in a judicial or administrative proceeding and to obtain compensatory damages or other appropriate relief against the federal government.

Authorizes the Attorney General to bring an action to enforce this Act against the Government Accountability Office or an establishment in the executive branch, other than the U.S. Postal Service or the Postal Regulatory Commission, that is not an executive department, military department, or government corporation.

Defines person as any person regardless of religious affiliation, including corporations and other entities regardless of for-profit or nonprofit status.

Visit link:

Donald Trump pledges to sign anti-LGBTQ First Amendment ...

Human – Wikipedia

Human[1]
Temporal range: 0.195–0 Ma (Middle Pleistocene – Recent)
An adult human male (left) and female (right) in Northern Thailand.
Scientific classification: Kingdom: Animalia; Phylum: Chordata; Clade: Synapsida; Class: Mammalia; Order: Primates; Suborder: Haplorhini; Family: Hominidae; Tribe: Hominini; Genus: Homo; Species: H. sapiens
Binomial name: Homo sapiens Linnaeus, 1758
Subspecies: Homo sapiens idaltu White et al., 2003; Homo sapiens sapiens

Modern humans (Homo sapiens, primarily ssp. Homo sapiens sapiens) are the only extant members of the Hominina clade (or human clade), a branch of the taxonomical tribe Hominini belonging to the family of great apes. They are characterized by erect posture and bipedal locomotion; manual dexterity and increased tool use, compared to other animals; and a general trend toward larger, more complex brains and societies.[3][4]

Early hominins, particularly the australopithecines (whose brains and anatomy are in many ways more similar to ancestral non-human apes), are less often referred to as "human" than hominins of the genus Homo.[5] Several of these hominins used fire, occupied much of Eurasia, and gave rise to anatomically modern Homo sapiens in Africa about 200,000 years ago.[6][7] They began to exhibit evidence of behavioral modernity around 50,000 years ago. In several waves of migration, anatomically modern humans ventured out of Africa and populated most of the world.[8]

The spread of humans and their large and increasing population has had a profound impact on large areas of the environment and millions of native species worldwide. Advantages that explain this evolutionary success include a relatively larger brain with a particularly well-developed neocortex, prefrontal cortex and temporal lobes, which enable high levels of abstract reasoning, language, problem solving, sociality, and culture through social learning. Humans use tools to a much higher degree than any other animal, are the only extant species known to build fires and cook their food, and are the only extant species to clothe themselves and create and use numerous other technologies and arts.

Humans are uniquely adept at utilizing systems of symbolic communication (such as language and art) for self-expression and the exchange of ideas, and for organizing themselves into purposeful groups. Humans create complex social structures composed of many cooperating and competing groups, from families and kinship networks to political states. Social interactions between humans have established an extremely wide variety of values,[9] social norms, and rituals, which together form the basis of human society. Curiosity and the human desire to understand and influence the environment and to explain and manipulate phenomena (or events) has provided the foundation for developing science, philosophy, mythology, religion, anthropology, and numerous other fields of knowledge.

Though most of human existence has been sustained by hunting and gathering in band societies,[10] increasing numbers of human societies began to practice sedentary agriculture approximately 10,000 years ago,[11] domesticating plants and animals, thus allowing for the growth of civilization. These human societies subsequently expanded in size, establishing various forms of government, religion, and culture around the world, unifying people within regions to form states and empires. The rapid advancement of scientific and medical understanding in the 19th and 20th centuries led to the development of fuel-driven technologies and increased lifespans, causing the human population to rise exponentially. By February 2016, the global human population had exceeded 7.3 billion.[12]

In common usage, the word "human" generally refers to the only extant species of the genus Homo: anatomically and behaviorally modern Homo sapiens.

In scientific terms, the meanings of "hominid" and "hominin" have changed in recent decades with advances in the discovery and study of the fossil ancestors of modern humans. The previously clear boundary between humans and apes has blurred, so that "hominid" is now acknowledged to encompass multiple species, while "hominin" covers Homo and its close relatives since the split from chimpanzees. There is also a distinction between anatomically modern humans and archaic Homo sapiens, the earliest fossil members of the species.

The English adjective human is a Middle English loanword from Old French humain, ultimately from Latin hūmānus, the adjective form of homō "man." The word's use as a noun (with a plural: humans) dates to the 16th century.[13] The native English term man can refer to the species generally (a synonym for humanity), and could formerly refer to specific individuals of either sex, though this latter use is now obsolete.[14]

The species binomial Homo sapiens was coined by Carl Linnaeus in his 18th-century work Systema Naturae.[15] The generic name Homo is a learned 18th-century derivation from Latin homō "man," ultimately "earthly being" (Old Latin hemō, a cognate of Old English guma "man," from PIE *dʰǵʰemon-, meaning "earth" or "ground").[16] The species name sapiens means "wise" or "sapient." Note that the Latin word homo refers to humans of either gender, and that sapiens is the singular form (there is no such word as sapien).[17]

The genus Homo evolved and diverged from other hominins in Africa, after the human clade split from the chimpanzee lineage of the hominid (great ape) branch of the primates. Modern humans, defined as the species Homo sapiens or specifically the single extant subspecies Homo sapiens sapiens, proceeded to colonize all the continents and larger islands, arriving in Eurasia 125,000–60,000 years ago,[18][19] Australia around 40,000 years ago, the Americas around 15,000 years ago, and remote islands such as Hawaii, Easter Island, Madagascar, and New Zealand between the years 300 and 1280.[20][21]

The closest living relatives of humans are chimpanzees (genus Pan) and gorillas (genus Gorilla).[22] With the sequencing of both the human and chimpanzee genomes, current estimates of similarity between human and chimpanzee DNA sequences range between 95% and 99%.[22][23][24] By using the technique called a molecular clock, which estimates the time required for a given number of divergent mutations to accumulate between two lineages, the approximate date for the split between lineages can be calculated. The gibbons (family Hylobatidae) and orangutans (genus Pongo) were the first groups to split from the line leading to the humans, then gorillas (genus Gorilla), followed by the chimpanzees (genus Pan). The splitting date between human and chimpanzee lineages is placed around 4–8 million years ago, during the late Miocene epoch.[25][26] During this split, chromosome 2 was formed from the fusion of two other chromosomes, leaving humans with only 23 pairs of chromosomes, compared to 24 for the other apes.[27][28]
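
The molecular-clock arithmetic is simple enough to sketch. Below is a minimal Python illustration, assuming round illustrative numbers (a 1.3% neutral divergence and a mutation rate of 10⁻⁹ per site per year; neither figure is taken from the article's sources):

```python
# Minimal molecular-clock sketch (illustrative numbers, not measured data).
# Mutations accumulate independently along BOTH lineages after a split,
# so the observed divergence D satisfies D = 2 * mu * T, giving T = D / (2 * mu).

divergence = 0.013  # assumed fraction of differing neutral sites (~1.3%)
mu = 1e-9           # assumed mutation rate per site per year

split_time_years = divergence / (2 * mu)
print(f"Estimated split: {split_time_years / 1e6:.1f} million years ago")
# With these round inputs the estimate is 6.5 million years,
# inside the 4-8 million year window cited above.
```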

There is little fossil evidence for the divergence of the gorilla, chimpanzee, and hominin lineages.[29][30] The earliest fossils that have been proposed as members of the hominin lineage are Sahelanthropus tchadensis, dating from 7 million years ago; Orrorin tugenensis, dating from 5.7 million years ago; and Ardipithecus kadabba, dating to 5.6 million years ago. Each of these species has been argued to be a bipedal ancestor of later hominins, but all such claims are contested. It is also possible that any one of the three is an ancestor of another branch of African apes, or is an ancestor shared between hominins and other African Hominoidea (apes). The question of the relation between these early fossil species and the hominin lineage is still to be resolved. From these early species, the australopithecines arose around 4 million years ago and diverged into robust (also called Paranthropus) and gracile branches,[31] possibly one of which (such as A. garhi, dating to 2.5 million years ago) is a direct ancestor of the genus Homo.[32]

The earliest members of the genus Homo are Homo habilis, which evolved around 2.8 million years ago.[33] Homo habilis has been considered the first species for which there is clear evidence of the use of stone tools. More recently, however, in 2015, stone tools perhaps predating Homo habilis were discovered in northwestern Kenya and dated to 3.3 million years old.[34] Nonetheless, the brains of Homo habilis were about the same size as that of a chimpanzee, and their main adaptation was bipedalism, suited to terrestrial living. During the next million years a process of encephalization began, and with the arrival of Homo erectus in the fossil record, cranial capacity had doubled. Homo erectus were the first of the hominina to leave Africa, and these species spread through Africa, Asia, and Europe between 1.3 and 1.8 million years ago. One population of H. erectus, also sometimes classified as a separate species Homo ergaster, stayed in Africa and evolved into Homo sapiens. It is believed that these species were the first to use fire and complex tools. The earliest transitional fossils between H. ergaster/erectus and archaic humans are from Africa, such as Homo rhodesiensis, but seemingly transitional forms are also found at Dmanisi, Georgia. These descendants of African H. erectus spread through Eurasia from ca. 500,000 years ago, evolving into H. antecessor, H. heidelbergensis and H. neanderthalensis. The earliest fossils of anatomically modern humans are from the Middle Paleolithic, about 200,000 years ago, such as the Omo remains of Ethiopia and the fossils from Herto, sometimes classified as Homo sapiens idaltu.[35] Later fossils of archaic Homo sapiens from Skhul in Israel and Southern Europe begin around 90,000 years ago.[36]

Human evolution is characterized by a number of morphological, developmental, physiological, and behavioral changes that have taken place since the split between the last common ancestor of humans and chimpanzees. The most significant of these adaptations are 1. bipedalism, 2. increased brain size, 3. lengthened ontogeny (gestation and infancy), 4. decreased sexual dimorphism (neoteny). The relationship between all these changes is the subject of ongoing debate.[37] Other significant morphological changes included the evolution of a power and precision grip, a change first occurring in H. erectus.[38]

Bipedalism is the basic adaptation of the hominin line, and it is considered the main cause behind a suite of skeletal changes shared by all bipedal hominins. The earliest bipedal hominin is considered to be either Sahelanthropus[39] or Orrorin, with Ardipithecus, a fully bipedal hominin,[40] coming somewhat later.[citation needed] The knuckle walkers, the gorilla and chimpanzee, diverged around the same time, and either Sahelanthropus or Orrorin may be humans' last shared ancestor with those animals.[citation needed] The early bipedals eventually evolved into the australopithecines and later the genus Homo.[citation needed] There are several theories of the adaptational value of bipedalism. It is possible that bipedalism was favored because it freed up the hands for reaching and carrying food, because it saved energy during locomotion, because it enabled long-distance running and hunting, or as a strategy for avoiding hyperthermia by reducing the surface exposed to direct sun.[citation needed]

The human species developed a much larger brain than that of other primates: typically 1,330 cm3 in modern humans, over twice the size of that of a chimpanzee or gorilla.[41] The pattern of encephalization started with Homo habilis, which at approximately 600 cm3 had a brain slightly larger than chimpanzees', continued with Homo erectus (800–1,100 cm3), and reached a maximum in Neanderthals, with an average size of 1,200–1,900 cm3, larger even than that of Homo sapiens (but less encephalized).[42] The pattern of human postnatal brain growth differs from that of other apes (heterochrony) and allows for extended periods of social learning and language acquisition in juvenile humans. However, the differences between the structure of human brains and those of other apes may be even more significant than differences in size.[43][44][45][46] The increase in volume over time has affected different areas within the brain unequally: the temporal lobes, which contain centers for language processing, have increased disproportionately, as has the prefrontal cortex, which has been related to complex decision making and moderating social behavior.[41] Encephalization has been tied to an increasing emphasis on meat in the diet,[47][48] or to the development of cooking,[49] and it has been proposed[50] that intelligence increased as a response to an increased necessity for solving social problems as human society became more complex.

The reduced degree of sexual dimorphism is primarily visible in the reduction of the male canine tooth relative to other ape species (except gibbons). Another important physiological change related to sexuality in humans was the evolution of hidden estrus. Humans are the only ape in which the female is fertile year round and in which no special signals of fertility are produced by the body (such as genital swelling during estrus). Nonetheless, humans retain a degree of sexual dimorphism in the distribution of body hair and subcutaneous fat, and in overall size, males being around 25% larger than females. These changes taken together have been interpreted as a result of an increased emphasis on pair bonding as a possible solution to the requirement for increased parental investment due to the prolonged infancy of offspring.[citation needed]

By the beginning of the Upper Paleolithic period (50,000 BP), full behavioral modernity, including language, music and other cultural universals had developed.[51][52] As modern humans spread out from Africa they encountered other hominids such as Homo neanderthalensis and the so-called Denisovans. The nature of interaction between early humans and these sister species has been a long-standing source of controversy, the question being whether humans replaced these earlier species or whether they were in fact similar enough to interbreed, in which case these earlier populations may have contributed genetic material to modern humans.[53] Recent studies of the human and Neanderthal genomes suggest gene flow between archaic Homo sapiens and Neanderthals and Denisovans.[54][55][56] In March 2016, studies were published that suggest that modern humans bred with hominins, including Denisovans and Neanderthals, on multiple occasions.[57]

This dispersal out of Africa is estimated to have begun about 70,000 years BP from Northeast Africa. Current evidence suggests that there was only one such dispersal and that it only involved a few hundred individuals. The vast majority of humans stayed in Africa and adapted to a diverse array of environments.[58] Modern humans subsequently spread globally, replacing earlier hominins (either through competition or hybridization). They inhabited Eurasia and Oceania by 40,000 years BP, and the Americas at least 14,500 years BP.[59][60]

Until about 10,000 years ago, humans lived as hunter-gatherers. They gradually gained domination over much of the natural environment. They generally lived in small nomadic groups known as band societies, often in caves. The advent of agriculture prompted the Neolithic Revolution, when access to food surplus led to the formation of permanent human settlements, the domestication of animals and the use of metal tools for the first time in history. Agriculture encouraged trade and cooperation, and led to complex society.[citation needed]

The early civilizations of Mesopotamia, Egypt, India, China, Maya, Greece, and Rome were some of the cradles of civilization.[61][62][63] The Late Middle Ages and the Early Modern Period saw the rise of revolutionary ideas and technologies. Over the next 500 years, exploration and European colonialism brought great parts of the world under European control, leading to later struggles for independence. The concept of the modern world as distinct from an ancient world is based on rapid progress, in a brief period of time, in many areas.[citation needed] Advances in all areas of human activity prompted new theories such as evolution and psychoanalysis, which changed humanity's views of itself.[citation needed] The Scientific Revolution, Technological Revolution, and Industrial Revolution, up until the 19th century, resulted in independent discoveries such as imaging technology; major innovations in transport, such as the airplane and automobile; and energy development, such as coal and electricity.[64] This correlates with population growth (especially in America)[65] and higher life expectancy: the world population increased several times over in the 19th and 20th centuries, and nearly 10% of the roughly 100 billion people who have ever lived were alive during the past century.[66]

With the advent of the Information Age at the end of the 20th century, modern humans live in a world that has become increasingly globalized and interconnected. As of 2010, almost 2 billion humans are able to communicate with each other via the Internet,[67] and 3.3 billion by mobile phone subscriptions.[68] Although interconnection between humans has encouraged the growth of science, art, discussion, and technology, it has also led to culture clashes and the development and use of weapons of mass destruction.[citation needed] Human civilization has led to environmental destruction and pollution, significantly contributing to the ongoing mass extinction of other forms of life called the Holocene extinction event,[69] which may be further accelerated by global warming in the future.[70]

Early human settlements were dependent on proximity to water and, depending on the lifestyle, other natural resources used for subsistence, such as populations of animal prey for hunting and arable land for growing crops and grazing livestock. But humans have a great capacity for altering their habitats by means of technology, through irrigation, urban planning, construction, transport, manufacturing goods, deforestation and desertification. Deliberate habitat alteration is often done with the goals of increasing material wealth, increasing thermal comfort, improving the amount of food available, improving aesthetics, or improving ease of access to resources or other human settlements. With the advent of large-scale trade and transport infrastructure, proximity to these resources has become unnecessary, and in many places, these factors are no longer a driving force behind the growth and decline of a population. Nonetheless, the manner in which a habitat is altered is often a major determinant in population change.[citation needed]

Technology has allowed humans to colonize all of the continents and adapt to virtually all climates. Within the last century, humans have explored Antarctica, the ocean depths, and outer space, although large-scale colonization of these environments is not yet feasible. With a population of over seven billion, humans are among the most numerous of the large mammals. Most humans (61%) live in Asia. The remainder live in the Americas (14%), Africa (14%), Europe (11%), and Oceania (0.5%).[71]

Human habitation within closed ecological systems in hostile environments, such as Antarctica and outer space, is expensive, typically limited in duration, and restricted to scientific, military, or industrial expeditions. Life in space has been very sporadic, with no more than thirteen humans in space at any given time.[72] Between 1969 and 1972, two humans at a time spent brief intervals on the Moon. As of January 2017, no other celestial body has been visited by humans, although there has been a continuous human presence in space since the launch of the initial crew to inhabit the International Space Station on October 31, 2000.[73] However, other celestial bodies have been visited by human-made objects.[74][75][76]

Since 1800, the human population has increased from one billion[77] to over seven billion.[78] In 2004, some 2.5 billion out of 6.3 billion people (39.7%) lived in urban areas. In February 2008, the U.N. estimated that half the world's population would live in urban areas by the end of the year.[79] Problems for humans living in cities include various forms of pollution and crime,[80] especially in inner city and suburban slums. Both overall population numbers and the proportion residing in cities are expected to increase significantly in the coming decades.[81]

Humans have had a dramatic effect on the environment. Humans are apex predators, being rarely preyed upon by other species.[82] Currently, through land development, combustion of fossil fuels, and pollution, humans are thought to be the main contributor to global climate change.[83] If this continues at its current rate it is predicted that climate change will wipe out half of all plant and animal species over the next century.[84][85]

Most aspects of human physiology are closely homologous to corresponding aspects of animal physiology. The human body consists of the legs, the torso, the arms, the neck, and the head. An adult human body consists of about 100 trillion (10¹⁴) cells. The most commonly defined body systems in humans are the nervous, the cardiovascular, the circulatory, the digestive, the endocrine, the immune, the integumentary, the lymphatic, the musculoskeletal, the reproductive, the respiratory, and the urinary system.[86][87]

Humans, like most of the other apes, lack external tails, have several blood type systems, have opposable thumbs, and are sexually dimorphic. The comparatively minor anatomical differences between humans and chimpanzees are a result of human bipedalism. One difference is that humans have a far faster and more accurate throw than other animals. Humans are also among the best long-distance runners in the animal kingdom, but slower over short distances.[88][89] Humans' thinner body hair and more productive sweat glands help avoid heat exhaustion while running for long distances.[90]

As a consequence of bipedalism, human females have narrower birth canals. The construction of the human pelvis differs from other primates, as do the toes. A trade-off for these advantages of the modern human pelvis is that childbirth is more difficult and dangerous than in most mammals, especially given the larger head size of human babies compared to other primates. This means that human babies must turn around as they pass through the birth canal, which other primates do not do, and it makes humans the only species in which females usually require help from their conspecifics (other members of their own species) to reduce the risks of birthing. As a partial evolutionary solution, human fetuses are born less developed and more vulnerable. Chimpanzee babies are cognitively more developed than human babies until the age of six months, when the rapid development of human brains surpasses chimpanzees'. Another difference between women and chimpanzee females is that women go through menopause and become infertile decades before the end of their lives. All species of non-human apes are capable of giving birth until death. Menopause probably developed because it provided an evolutionary advantage (more caring time) to young relatives.[89]

Apart from bipedalism, humans differ from chimpanzees mostly in smelling, hearing, digesting proteins, brain size, and the capacity for language. Humans' brains are about three times bigger than chimpanzees'. More importantly, the brain-to-body ratio is much higher in humans than in chimpanzees, and humans have a significantly more developed cerebral cortex, with a larger number of neurons. The mental abilities of humans are remarkable compared to other apes. Humans' capacity for speech is unique among primates. Humans are able to create new and complex ideas, and to develop technology, which is unprecedented among other organisms on Earth.[89]

It is estimated that the worldwide average height for an adult human male is about 172 cm (5 ft 7½ in),[citation needed] while the worldwide average height for adult human females is about 158 cm (5 ft 2 in).[citation needed] Shrinkage of stature may begin in middle age in some individuals, but tends to be typical in the extremely aged.[91] Through history, human populations have universally become taller, probably as a consequence of better nutrition, healthcare, and living conditions.[92] The average mass of an adult human is 54–64 kg (120–140 lb) for females and 76–83 kg (168–183 lb) for males.[93] Like many other conditions, body weight and body type are influenced by both genetic susceptibility and environment, and vary greatly among individuals. (see obesity)[94][95]

Although humans appear hairless compared to other primates, with notable hair growth occurring chiefly on the top of the head, underarms and pubic area, the average human has more hair follicles on his or her body than the average chimpanzee. The main distinction is that human hairs are shorter, finer, and less heavily pigmented than the average chimpanzee's, thus making them harder to see.[96] Humans have about 2 million sweat glands spread over their entire bodies, many more than chimpanzees, whose sweat glands are scarce and are mainly located on the palm of the hand and on the soles of the feet.[97]

The dental formula of humans is 2.1.2.3 / 2.1.2.3 (upper and lower). Humans have proportionately shorter palates and much smaller teeth than other primates. They are the only primates to have short, relatively flush canine teeth. Humans have characteristically crowded teeth, with gaps from lost teeth usually closing up quickly in young individuals. Humans are gradually losing their wisdom teeth, with some individuals having them congenitally absent.[98]

Like all mammals, humans are a diploid eukaryotic species. Each somatic cell has two sets of 23 chromosomes, each set received from one parent; gametes have only one set of chromosomes, which is a mixture of the two parental sets. Among the 23 pairs of chromosomes there are 22 pairs of autosomes and one pair of sex chromosomes. Like other mammals, humans have an XY sex-determination system, so that females have the sex chromosomes XX and males have XY.[99]

One human genome was sequenced in full in 2003, and currently efforts are being made to achieve a sample of the genetic diversity of the species (see International HapMap Project). By present estimates, humans have approximately 22,000 genes.[100] The variation in human DNA is very small compared to other species, possibly suggesting a population bottleneck during the Late Pleistocene (around 100,000 years ago), in which the human population was reduced to a small number of breeding pairs.[101][102] Nucleotide diversity is based on single mutations called single nucleotide polymorphisms (SNPs). The nucleotide diversity between humans is about 0.1%, i.e., 1 difference per 1,000 base pairs.[103][104] A difference of 1 in 1,000 nucleotides between two humans chosen at random amounts to about 3 million nucleotide differences, since the human genome has about 3 billion nucleotides. Most of these SNPs are neutral, but some (about 3 to 5%) are functional and influence phenotypic differences between humans through alleles.[citation needed]
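
The 3-million figure is just the quoted rates multiplied through; a quick sketch in Python (using only the round numbers stated above; the 4% used for the functional count is an assumed midpoint of the 3–5% range) makes the arithmetic explicit:

```python
# Back-of-envelope check of the pairwise-difference figure quoted above.
genome_size = 3_000_000_000    # ~3 billion base pairs in the human genome
nucleotide_diversity = 0.001   # ~0.1%: about 1 difference per 1,000 base pairs

pairwise_differences = genome_size * nucleotide_diversity
print(f"Expected differences between two random humans: {pairwise_differences:,.0f}")
# -> 3,000,000, matching the ~3 million differences in the text.

# Taking the midpoint of the 3-5% functional-SNP range (an assumption):
functional_estimate = pairwise_differences * 0.04
print(f"Rough count of functional differences: {functional_estimate:,.0f}")
```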

By comparing the parts of the genome that are not under natural selection and which therefore accumulate mutations at a fairly steady rate, it is possible to reconstruct a genetic tree incorporating the entire human species since the last shared ancestor. Each time a certain mutation (SNP) appears in an individual and is passed on to his or her descendants, a haplogroup is formed including all of the descendants of the individual who will also carry that mutation. By comparing mitochondrial DNA, which is inherited only from the mother, geneticists have concluded that the last female common ancestor whose genetic marker is found in all modern humans, the so-called mitochondrial Eve, must have lived around 90,000 to 200,000 years ago.[105][106][107]

Human accelerated regions, first described in August 2006,[108][109] are a set of 49 segments of the human genome that are conserved throughout vertebrate evolution but are strikingly different in humans. They are named according to their degree of difference between humans and their nearest animal relative (chimpanzees) (HAR1 showing the largest degree of human-chimpanzee differences). Found by scanning through genomic databases of multiple species, some of these highly mutated areas may contribute to human-specific traits.[citation needed]

The forces of natural selection have continued to operate on human populations, with evidence that certain regions of the genome display directional selection in the past 15,000 years.[110]

As with other mammals, human reproduction takes place as internal fertilization by sexual intercourse. During this process, the male inserts his erect penis into the female's vagina and ejaculates semen, which contains sperm. The sperm travels through the vagina and cervix into the uterus or Fallopian tubes for fertilization of the ovum. Upon fertilization and implantation, gestation then occurs within the female's uterus.

The zygote divides inside the female's uterus to become an embryo, which over a period of 38 weeks (9 months) of gestation becomes a fetus. After this span of time, the fully grown fetus is birthed from the woman's body and breathes independently as an infant for the first time. At this point, most modern cultures recognize the baby as a person entitled to the full protection of the law, though some jurisdictions extend various levels of personhood earlier to human fetuses while they remain in the uterus.

Compared with other species, human childbirth is dangerous. Painful labors lasting 24 hours or more are not uncommon and sometimes lead to the death of the mother, the child or both.[111] This is because of both the relatively large fetal head circumference and the mother's relatively narrow pelvis.[112][113] The chances of a successful labor increased significantly during the 20th century in wealthier countries with the advent of new medical technologies. In contrast, pregnancy and natural childbirth remain hazardous ordeals in developing regions of the world, with maternal death rates approximately 100 times greater than in developed countries.[114]

In developed countries, infants are typically 3–4 kg (6–9 pounds) in weight and 50–60 cm (20–24 inches) in height at birth.[115][not in citation given] However, low birth weight is common in developing countries, and contributes to the high levels of infant mortality in these regions.[116] Helpless at birth, humans continue to grow for some years, typically reaching sexual maturity at 12 to 15 years of age. Females continue to develop physically until around the age of 18, whereas male development continues until around age 21. The human life span can be split into a number of stages: infancy, childhood, adolescence, young adulthood, adulthood and old age. The lengths of these stages, however, have varied across cultures and time periods. Compared to other primates, humans experience an unusually rapid growth spurt during adolescence, where the body grows 25% in size. Chimpanzees, for example, grow only 14%, with no pronounced spurt.[117] The presence of the growth spurt is probably necessary to keep children physically small until they are psychologically mature. Humans are one of the few species in which females undergo menopause. It has been proposed that menopause increases a woman's overall reproductive success by allowing her to invest more time and resources in her existing offspring, and in turn their children (the grandmother hypothesis), rather than by continuing to bear children into old age.[118][119]

For various reasons, including biological/genetic causes,[120] women live on average about four years longer than men; as of 2013 the global average life expectancy at birth of a girl is estimated at 70.2 years, compared to 66.1 for a boy.[121] There are significant geographical variations in human life expectancy, mostly correlated with economic development; for example, life expectancy at birth in Hong Kong is 84.8 years for girls and 78.9 for boys, while in Swaziland, primarily because of AIDS, it is 31.3 years for both sexes.[122] The developed world is generally aging, with the median age around 40 years. In the developing world the median age is between 15 and 20 years. While one in five Europeans is 60 years of age or older, only one in twenty Africans is 60 years of age or older.[123] The number of centenarians (humans of age 100 years or older) in the world was estimated by the United Nations at 210,000 in 2002.[124] At least one person, Jeanne Calment, is known to have reached the age of 122 years;[125] higher ages have been claimed but they are not well substantiated.

Humans are omnivorous, capable of consuming a wide variety of plant and animal material.[126][127] Varying with available food sources in regions of habitation, and also varying with cultural and religious norms, human groups have adopted a range of diets, from purely vegetarian to primarily carnivorous. In some cases, dietary restrictions in humans can lead to deficiency diseases; however, stable human groups have adapted to many dietary patterns through both genetic specialization and cultural conventions to use nutritionally balanced food sources.[128] The human diet is prominently reflected in human culture, and has led to the development of food science.

Until the development of agriculture approximately 10,000 years ago, Homo sapiens employed a hunter-gatherer method as their sole means of food collection. This involved combining stationary food sources (such as fruits, grains, tubers, and mushrooms, insect larvae and aquatic mollusks) with wild game, which must be hunted and killed in order to be consumed.[129] It has been proposed that humans have used fire to prepare and cook food since the time of Homo erectus.[130] Around ten thousand years ago, humans developed agriculture,[131] which substantially altered their diet. This change in diet may also have altered human biology; with the spread of dairy farming providing a new and rich source of food, leading to the evolution of the ability to digest lactose in some adults.[132][133] Agriculture led to increased populations, the development of cities, and because of increased population density, the wider spread of infectious diseases. The types of food consumed, and the way in which they are prepared, have varied widely by time, location, and culture.

In general, humans can survive for two to eight weeks without food, depending on stored body fat. Survival without water is usually limited to three or four days. About 36 million humans die every year from causes directly or indirectly related to starvation.[134] Childhood malnutrition is also common and contributes to the global burden of disease.[135] However, global food distribution is not even, and obesity among some human populations has increased rapidly, leading to health complications and increased mortality in some developed and a few developing countries. Worldwide, over one billion people are obese,[136] while in the United States 35% of people are obese, leading to this being described as an "obesity epidemic."[137] Obesity is caused by consuming more calories than are expended, so excessive weight gain is usually caused by an energy-dense diet.[136]

No two humans, not even monozygotic twins, are genetically identical. Genes and environment influence human biological variation, from visible characteristics to physiology to disease susceptibility to mental abilities. The exact influence of genes and environment on certain traits is not well understood.[138][139]

Most current genetic and archaeological evidence supports a recent single origin of modern humans in East Africa,[140] with first migrations placed at 60,000 years ago. Compared to the great apes, human gene sequences, even among African populations, are remarkably homogeneous.[141] On average, genetic similarity between any two humans is 99.9%.[142][143] There is about 2–3 times more genetic diversity within the wild chimpanzee population than in the entire human gene pool.[144][145][146]

The human body's ability to adapt to different environmental stresses is remarkable, allowing humans to acclimatize to a wide variety of temperatures, humidity, and altitudes. As a result, humans are a cosmopolitan species found in almost all regions of the world, including tropical rainforests, arid desert, extremely cold arctic regions, and heavily polluted cities. Most other species are confined to a few geographical areas by their limited adaptability.[147]

There is biological variation in the human species, with traits such as blood type, cranial features, eye color, hair color and type, height and build, and skin color varying across the globe. Human body types vary substantially. The typical height of an adult human is between 1.4 m and 1.9 m (4 ft 7 in and 6 ft 3 in), although this varies significantly depending, among other things, on sex and ethnic origin.[148][149] Body size is partly determined by genes and is also significantly influenced by environmental factors such as diet, exercise, and sleep patterns, especially as an influence in childhood. Adult height for each sex in a particular ethnic group approximately follows a normal distribution. Those aspects of genetic variation that give clues to human evolutionary history, or are relevant to medical research, have received particular attention. For example, the genes that allow adult humans to digest lactose are present in high frequencies in populations that have long histories of cattle domestication, suggesting that natural selection favored that gene in populations that depend on cow milk. Some hereditary diseases such as sickle cell anemia are frequent in populations where malaria has been endemic throughout history; it is believed that the same gene gives increased resistance to malaria among those who are unaffected carriers of the gene. Similarly, populations that have long inhabited specific climates, such as arctic or tropical regions or high altitudes, tend to have developed specific phenotypes that are beneficial for conserving energy in those environments: short stature and stocky build in cold regions, tall and lanky in hot regions, and high lung capacities at high altitudes. Similarly, skin color varies clinally, with darker skin around the equator (where the added protection from the sun's ultraviolet radiation is thought to give an evolutionary advantage) and lighter skin tones closer to the poles.[150][151][152][153]

The hue of human skin and hair is determined by the presence of pigments called melanins. Human skin color can range from darkest brown to lightest peach, or even nearly white or colorless in cases of albinism.[146] Human hair ranges in color from white to red to blond to brown to black, which is most frequent.[154] Hair color depends on the amount of melanin (an effective sun-blocking pigment) in the skin and hair, with melanin concentrations in hair fading with increased age, leading to grey or even white hair. Most researchers believe that skin darkening is an adaptation that evolved as protection against ultraviolet solar radiation, which also helps preserve folate, a nutrient destroyed by ultraviolet radiation. Light skin pigmentation, in turn, protects against depletion of vitamin D, which requires sunlight to make.[155] Skin pigmentation of contemporary humans is clinally distributed across the planet, and in general correlates with the level of ultraviolet radiation in a particular geographic area. Human skin also has a capacity to darken (tan) in response to exposure to ultraviolet radiation.[156][157][158]

Within the human species, the greatest degree of genetic variation exists between males and females. While the nucleotide genetic variation of individuals of the same sex across global populations is no greater than 0.1%, the genetic difference between males and females is between 1% and 2%. Although different in nature,[clarification needed] this approaches the genetic differentiation between men and male chimpanzees or women and female chimpanzees. The genetic difference between sexes contributes to anatomical, hormonal, neural, and physiological differences between men and women, although the exact degree and nature of social and environmental influences on the sexes are not completely understood. Males on average are 15% heavier and 15 cm taller than females. There are differences between body types, body organs and systems, hormonal levels, sensory systems, and muscle mass between the sexes. On average, there is a difference of about 40–50% in upper body strength and 20–30% in lower body strength between men and women. Women generally have a higher body fat percentage than men. Women have lighter skin than men of the same population; this has been explained by a higher need for vitamin D (which is synthesized by sunlight) in females during pregnancy and lactation. As there are chromosomal differences between females and males, some X and Y chromosome related conditions and disorders only affect either men or women. Other conditional differences between males and females are not related to sex chromosomes. Even after allowing for body weight and volume, the male voice is usually an octave deeper than the female voice. Women have a longer life span in almost every population around the world.[160][161][162][163][164][165][166][167][168]

Males typically have larger tracheae and branching bronchi, with about 30% greater lung volume per unit body mass. They have larger hearts, a 10% higher red blood cell count, and higher hemoglobin, hence greater oxygen-carrying capacity. They also have higher levels of circulating clotting factors (vitamin K, prothrombin and platelets). These differences lead to faster healing of wounds and higher peripheral pain tolerance.[169] Females typically have more white blood cells (stored and circulating), more granulocytes, and more B and T lymphocytes. Additionally, they produce more antibodies at a faster rate than males. Hence they develop fewer infectious diseases, and those they do develop run shorter courses.[169] Ethologists argue that females, interacting with other females and multiple offspring in social groups, have experienced such traits as a selective advantage.[170][171][172][173][174] According to Daly and Wilson, "The sexes differ more in human beings than in monogamous mammals, but much less than in extremely polygamous mammals."[175] But given that sexual dimorphism in the closest relatives of humans is much greater than among humans, the human clade must be considered to be characterized by decreasing sexual dimorphism, probably due to less competitive mating patterns. One proposed explanation is that human sexuality has developed more in common with that of its close relative the bonobo, which exhibits similar sexual dimorphism, is polygynandrous, and uses recreational sex to reinforce social bonds and reduce aggression.[176]

Humans of the same sex are 99.9% genetically identical. There is extremely little variation between human geographical populations, and most of the variation that does occur is at the personal level within local areas, and not between populations.[146][177][178] Of the 0.1% of human genetic differentiation, 85% exists within any randomly chosen local population, be they Italians, Koreans, or Kurds. Two randomly chosen Koreans may be genetically as different as a Korean and an Italian. Any ethnic group contains 85% of the human genetic diversity of the world. Genetic data shows that no matter how population groups are defined, two people from the same population group are about as different from each other as two people from any two different population groups.[146][179][180][181]

Current genetic research has demonstrated that humans on the African continent are the most genetically diverse: there is more human genetic diversity in Africa than anywhere else on Earth.[182] The genetic structure of Africans was traced to 14 ancestral population clusters. Human genetic diversity decreases in native populations with migratory distance from Africa, and this is thought to be the result of bottlenecks during human migration.[183][184] Humans have lived in Africa for the longest time, which has allowed a higher diversity of genetic mutations to accumulate in these populations. Only part of Africa's population migrated out of the continent, bringing just part of the original African genetic variety with them. African populations harbor genetic alleles that are not found in other parts of the world. All the common alleles found in populations outside of Africa are found on the African continent.[146]

The geographical distribution of human variation is complex and constantly shifts through time, reflecting humanity's complicated evolutionary history. Most human biological variation is clinally distributed and blends gradually from one area to the next. Groups of people around the world have different frequencies of polymorphic genes. Furthermore, different traits are non-concordant: each has its own clinal distribution. Adaptability varies both from person to person and from population to population. The most efficient adaptive responses are found in geographical populations where the environmental stimuli are the strongest (e.g., Tibetans are highly adapted to high altitudes). This clinal geographic genetic variation is further complicated by the migration and mixing between human populations that has been occurring since prehistoric times.[146][185][186][187][188][189]

Artificial Intelligence: What It Is and How It Really Works

Which is Which?

It all started out as science fiction: machines that can talk, machines that can think, machines that can feel. Although that last bit may be impossible to address without sparking an entire world of debate about the existence of consciousness, scientists have certainly been making strides with the first two.

Over the years, we have been hearing a lot about artificial intelligence, machine learning, and deep learning. But how do we differentiate between these three rather abstruse terms, and how are they related to one another?

Artificial intelligence (AI) is the general field that covers everything that has anything to do with imbuing machines with intelligence, with the goal of emulating a human being's unique reasoning faculties. Machine learning is a category within the larger field of artificial intelligence that is concerned with conferring upon machines the ability to learn. This is achieved by using algorithms that discover patterns and generate insights from the data they are exposed to, for application to future decision-making and predictions, a process that sidesteps the need to be programmed specifically for every single possible action.
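
To make that distinction concrete, here is a minimal sketch of the machine-learning idea: rather than hand-coding a rule for every case, an algorithm infers the rule from labeled examples. The scikit-learn classifier and the toy weather data below are illustrative assumptions, not anything from the original article.

```python
# A minimal sketch: instead of hand-coding rules, let an algorithm
# infer them from labeled examples. The data here is invented.
from sklearn.tree import DecisionTreeClassifier

# Toy training data: [hours_of_daylight, temperature_celsius] -> season label
X = [[8, 2], [9, 5], [14, 22], [15, 25], [12, 14], [11, 10]]
y = ["winter", "winter", "summer", "summer", "spring", "spring"]

model = DecisionTreeClassifier()
model.fit(X, y)                    # the model discovers the pattern itself

print(model.predict([[13, 20]]))   # likely "summer"; no season rule was ever coded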

Deep learning, on the other hand, is a subset of machine learning: it's the most advanced AI field, the one that brings AI closest to the goal of enabling machines to learn and think as much like humans as possible.

In short, deep learning is a subset of machine learning, and machine learning falls within artificial intelligence. Picture the three as nested circles: artificial intelligence on the outside, machine learning within it, and deep learning at the core.

Here's a little bit of historical background to better illustrate the differences between the three, and how each discovery and advance has paved the way for the next:

Philosophers attempted to make sense of human thinking in the context of a system, an idea that resulted in the coinage of the term "artificial intelligence" in 1956. And it's still believed that philosophy has an important role to play in the advancement of artificial intelligence to this day. Oxford University physicist David Deutsch wrote in an article that he believes philosophy still holds the key to achieving artificial general intelligence (AGI), the level of machine intelligence comparable to that of the human brain, despite the fact that no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality.

Advancements in AI have given rise to debates about whether it poses a threat to humanity, whether physically or economically (for the latter, universal basic income has been proposed and is currently being tested in certain countries).

Machine learning is just one approach to realizing artificial intelligence, one that ultimately eliminates (or greatly reduces) the need to hand-code the software with a list of possibilities and the ways the machine intelligence ought to react to each of them. From 1949 until the late 1960s, American electrical engineer Arthur Samuel worked hard on evolving artificial intelligence from merely recognizing patterns to learning from experience, making him a pioneer of the field. He used the game of checkers for his research while working at IBM, and this subsequently influenced the programming of early IBM computers.

Current systems are becoming more and more sophisticated, finding their way into complex medical applications.

Examples include analyzing large genome sets in an effort to prevent diseases, diagnosing depression based on speech patterns, and identifying people with suicidal tendencies.

As we delve into higher and even more sophisticated levels of machine learning, deep learning comes into play. Deep learning requires a complex architecture that mimics the human brain's neural networks in order to make sense of patterns, even with noise, missing details, and other sources of confusion. While the possibilities of deep learning are vast, so are its requirements: you need big data and tremendous computing power.
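
As a rough illustration of the layered architecture this refers to, here is a minimal sketch of a tiny neural network in PyTorch. The layer sizes and random inputs are invented for demonstration; real deep-learning models are vastly larger.

```python
# A minimal sketch of the layered "neural network" structure deep
# learning relies on, using PyTorch. Sizes and data are illustrative only.
import torch
import torch.nn as nn

# Stacking learnable layers is what makes the model "deep".
model = nn.Sequential(
    nn.Linear(4, 16),   # input layer: 4 features in, 16 hidden units out
    nn.ReLU(),          # non-linearity lets the net model complex patterns
    nn.Linear(16, 2),   # output layer: scores for 2 classes
)

x = torch.randn(8, 4)   # a batch of 8 random example inputs
scores = model(x)       # forward pass: shape (8, 2)
print(scores.shape)
```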

It means not having to laboriously program a prospective AI with that elusive quality of intelligence, however defined. Instead, all the potential for future intelligence and reasoning powers is latent in the program itself, much like an infant's inchoate but infinitely flexible mind.

Algorithm-Driven Design: How Artificial Intelligence Is …

I've been following the idea of algorithm-driven design for several years now and have collected some practical examples. The tools of the approach can help us to construct a UI, prepare assets and content, and personalize the user experience. The information, though, has always been scarce and hasn't been systematic.

However, in 2016, the technological foundations of these tools became easily accessible, and the design community got interested in algorithms, neural networks and artificial intelligence (AI). Now is the time to rethink the modern role of the designer.

One of the most impressive promises of algorithm-driven design was made by the infamous CMS The Grid. It chooses templates and content-presentation styles, and it retouches and crops photos, all by itself. Moreover, the system runs A/B tests to choose the most suitable pattern. However, the product is still in private beta, so we can judge it only by its publications and ads.

The Designer News community found real-world examples of websites created with The Grid, and the reaction was mixed: people criticized the design and code quality. Many skeptics opened a champagne bottle that day.

The idea of fully replacing a designer with an algorithm sounds futuristic, but it misses the point. Product designers help to translate a raw product idea into a well-thought-out user interface, with solid interaction principles, a sound information architecture, and a coherent visual style, while helping a company to achieve its business goals and strengthen its brand.

Designers make a lot of big and small decisions, many of which can hardly be described by clear processes. Moreover, incoming requirements are not 100% clear and consistent, so designers help product managers resolve these collisions, making for a better product. It's about much more than choosing a suitable template and filling it with content.

However, if we talk about creative collaboration, where designers work in tandem with algorithms to solve product tasks, we see a lot of good examples and clear potential. It's especially interesting how algorithms can improve our day-to-day work on websites and mobile apps.

Designers have learned to juggle many tools and skills to near perfection, and as a result a new term has emerged: product designer. Product designers are proactive members of a product team; they understand how user research works, they can do interaction design and information architecture, they can create a visual style, enliven it with motion design, and make simple changes in the code for it. These people are invaluable to any product team.

However, balancing so many skills is hard; you can't dedicate enough time to every aspect of product work. Of course, the recent boom in new design tools has shortened the time we need to create deliverables and has expanded our capabilities. However, it's still not enough. There is still too much routine, and new responsibilities eat up all of the time we've saved. We need to automate and simplify our work processes even more. I see three key directions for this:

I'll show you some examples and propose a new approach for this future work process.

Publishing tools such as Medium, Readymag and Squarespace have already simplified the author's work: countless high-quality templates give authors a pretty design without their having to pay for a designer. There is an opportunity to make these templates smarter, so that the barrier to entry gets even lower.

For example, while The Grid is still in beta, the hugely successful website builder Wix has started including algorithm-driven features. The company announced Advanced Design Intelligence, which looks similar to The Grid's semi-automated way of enabling non-professionals to create a website. Wix teaches the algorithm by feeding it many examples of high-quality modern websites. Moreover, it tries to make style suggestions relevant to the client's industry. It's not easy for non-professionals to choose a suitable template, and products like Wix and The Grid can serve as a design expert.

Surely, as in the case of The Grid, excluding designers from the creative process leads to clichéd and mediocre results (even if it improves overall quality). However, if we consider this process more like paired design with a computer, then we can offload many routine tasks; for example, designers could create a moodboard on Dribbble or Pinterest, and an algorithm could quickly apply those styles to mockups and propose a suitable template. Designers would become art directors to their new apprentices: computers.

Of course, we can't create a revolutionary product in this way, but we could free up time to create one. Moreover, many everyday tasks are utilitarian and don't require a revolution. If a company is mature enough and has a design system, then algorithms could make it more powerful.

For example, the designer and developer could define logic that considers content, context and user data; then, a platform would compile a design using principles and patterns. This would allow us to fine-tune the tiniest details for specific usage scenarios, without drawing and coding dozens of screen states by hand. Florian Schulz shows how you can use the idea of interpolation to create many states of components, as sketched below.
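
As a rough sketch of that interpolation idea (the component properties and their extreme values below are invented, not Schulz's actual examples), intermediate states can be generated numerically instead of being drawn one by one:

```python
# A minimal sketch: define two extreme states of a UI component and
# interpolate every in-between state. The property names are hypothetical.
def lerp(a, b, t):
    """Linear interpolation between values a and b at position t in [0, 1]."""
    return a + (b - a) * t

collapsed = {"height": 48, "padding": 8, "font_size": 14}    # state A
expanded  = {"height": 240, "padding": 24, "font_size": 18}  # state B

# Generate five component states between the two extremes.
for step in range(5):
    t = step / 4
    state = {k: lerp(collapsed[k], expanded[k], t) for k in collapsed}
    print(f"t={t:.2f}: {state}")
```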

My interest in algorithm-driven design sprang up around 2012, when my design team at Mail.Ru Group needed an automated magazine layout. Existing content had a poor semantic structure, and updating it by hand was too expensive. How could we get modern designs, especially when the editors weren't designers?

Well, a special script would parse an article. Then, depending on the article's content (the number of paragraphs and the words in each, the number of photos and their formats, the presence of inserts with quotes and tables, etc.), the script would choose the most suitable pattern to present each part of the article. The script also tried to mix patterns, so that the final design had variety. It would save the editors the time spent reworking old content, and the designer would just have to add new presentation modules. Flipboard launched a very similar model a few years ago.
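
The selection step of such a script can be imagined as a handful of rules over the parsed article's features. The pattern names and thresholds below are hypothetical, not the actual Mail.Ru logic:

```python
# A minimal sketch of a layout-picking script: inspect an article's
# structure and pick a presentation pattern by simple rules.
def choose_pattern(paragraphs, photos, has_quotes):
    if photos >= 3 and paragraphs <= 4:
        return "photo-essay"              # image-heavy, light on text
    if has_quotes:
        return "feature-with-pullquotes"  # showcase the quotes
    if paragraphs > 10:
        return "long-read"
    return "standard-column"

article = {"paragraphs": 12, "photos": 1, "has_quotes": False}
print(choose_pattern(article["paragraphs"], article["photos"],
                     article["has_quotes"]))   # -> "long-read"
```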

Vox Media made a home page generator using similar ideas. The algorithm finds every possible layout that is valid, combining different examples from a pattern library. Next, each layout is examined and scored based on certain traits. Finally, the generator selects the best layout, basically the one with the highest score. It's more efficient than picking the best links by hand, as proven by recommendation engines such as Relap.io.
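
In outline, that generate-and-score loop might look like the following. The pattern library, scoring traits and weights are invented for illustration; Vox Media's actual scorer is far richer:

```python
# A minimal sketch of generate-and-score: enumerate candidate layouts
# from a pattern library, score each, and keep the winner.
import itertools

patterns = ["hero", "grid", "list", "carousel"]

def score(layout):
    # Hypothetical scoring: reward variety, penalize repeated patterns.
    variety = len(set(layout))
    repeats = len(layout) - variety
    return variety * 2 - repeats * 3

candidates = itertools.product(patterns, repeat=3)  # all 3-slot layouts
best = max(candidates, key=score)
print(best, score(best))   # the highest-scoring combination wins
```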

Creating cookie-cutter graphic assets in many variations is one of the most boring parts of a designer's work. It takes so much time and is demotivating, when designers could be spending this time on more valuable product work.

Algorithms could take on simple tasks such as color matching. For example, Yandex.Launcher uses an algorithm to automatically set up colors for app cards, based on app icons. Other variables could be set automatically too, such as changing text color according to the background color, highlighting eyes in a photo to emphasize emotion, and implementing parametric typography.
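
A toy version of that icon-to-card color matching could simply average the icon's pixels. This is a rough sketch using the Pillow imaging library; Yandex.Launcher's real algorithm is surely more nuanced:

```python
# A minimal sketch of algorithmic color matching: derive a card
# background color from an app icon by averaging its pixels.
from PIL import Image

def dominant_color(icon_path):
    img = Image.open(icon_path).convert("RGB").resize((32, 32))
    pixels = list(img.getdata())
    n = len(pixels)
    r = sum(p[0] for p in pixels) // n
    g = sum(p[1] for p in pixels) // n
    b = sum(p[2] for p in pixels) // n
    return (r, g, b)

# card_background = dominant_color("app_icon.png")  # hypothetical file
```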

Algorithms can create an entire composition. Yandex.Market uses a promotional image generator for e-commerce product lists (described in Russian). A marketer fills in a simple form with a title and an image, and then the generator proposes an endless number of variations, all of which conform to design guidelines. Netflix went even further: its script crops movie characters for posters, then applies a stylized and localized movie title, then runs automatic experiments on a subset of users. Real magic! Engadget has nurtured a robot apprentice to write simple news articles about new gadgets. Whew!
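
The skeleton of such a guideline-bound generator is straightforward: fixed brand constraints, randomized free variables. The palette, sizes and function below are hypothetical and use the Pillow library purely for illustration:

```python
# A minimal sketch of a guideline-constrained promo-image generator:
# combine a supplied title with randomized but rule-bound layout choices.
import random
from PIL import Image, ImageDraw

BRAND_COLORS = [(230, 57, 70), (29, 53, 87), (69, 123, 157)]  # guideline palette

def make_banner(title, out_path):
    bg = random.choice(BRAND_COLORS)          # vary within the guidelines
    banner = Image.new("RGB", (600, 200), bg)
    draw = ImageDraw.Draw(banner)
    draw.text((24, 80), title, fill=(255, 255, 255))
    banner.save(out_path)

# Each call yields a new on-brand variation for the marketer to pick from.
make_banner("Spring Sale", "banner_1.png")
```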

Truly dark magic happens in neural networks. A fresh example, the Prisma app, stylizes photos to look like works of famous artists. Artisto can process video in a similar way (even streaming video).

However, all of this is still at an early stage. Sure, you could download an app on your phone and get a result in a couple of seconds, rather than struggle with some library on GitHub (as we had to last year); but it's still impossible to upload your own reference style and get a good result without teaching a neural network. However, when that finally happens, will it make illustrators obsolete? I doubt it will for artists with a solid and unique style. But it will lower the barrier to entry when you need decent illustrations for an article or website but don't need a unique approach. No more boring stock photos!

For a really unique style, it might help to have a quick stylized sketch based on a question like, "What if we did an illustration of a building in our unified style?" For example, the Pixar artists on the animated movie Ratatouille tried applying several different styles to the movie's scenes and characters; what if a neural network made these sketches? We could also create storyboards and describe scenarios with comics (photos can easily be converted to sketches). The list can get very long.

Finally, there is live identity, too. Animation has become hugely popular in branding recently, but some companies are going even further. For example, Wolff Olins presented a live identity for the Brazilian telecom Oi, which reacts to sound. You just can't create crazy stuff like this without some creative collaboration with algorithms.

One way to get a clear and well-developed strategy is to personalize a product for a narrow audience segment or even for specific users. We see it every day in Facebook news feeds, Google search results, Netflix and Spotify recommendations, and many other products. Besides relieving users of the burden of filtering information, this makes their connection to the brand more emotional, because the product seems to care so much about them.

However, the key question here concerns the role of the designer in these solutions. We rarely have the skills to create algorithms like these; engineers and big data analysts are the ones to do it. Giles Colborne of CX Partners sees a great example in Spotify's Discover Weekly feature: the only element of classic UX design here is the track list, whereas the distinctive work is done by a recommendation system that fills this design template with valuable music.
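
At its simplest, "the recommendation system fills the template" can be pictured as ranking items against a user's taste vector and passing the top results to the track-list UI. The feature vectors below are invented; Spotify's actual pipeline is of course far more elaborate:

```python
# A minimal sketch: score items against a user's taste vector and hand
# the top ones to the track-list template.
import numpy as np

tracks = {
    "Track A": np.array([0.9, 0.1, 0.3]),  # e.g. energy, acousticness, tempo
    "Track B": np.array([0.2, 0.8, 0.5]),
    "Track C": np.array([0.8, 0.2, 0.4]),
}
user_taste = np.array([0.85, 0.15, 0.35])  # averaged from listening history

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

ranked = sorted(tracks, key=lambda t: cosine(tracks[t], user_taste), reverse=True)
print(ranked[:2])   # the design template just renders this list
```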

Colborne offers advice to designers about how to continue being useful in this new era and how to use various data sources to build and teach algorithms. It's important to learn how to work with big data and to cluster it into actionable insights. For example, Airbnb learned how to answer the question, "What will the booked price of a listing be on any given day in the future?", so that its hosts could set competitive prices. There are also endless stories about Netflix's recommendation engine.

A relatively new term, anticipatory design, takes a broader view of UX personalization and the anticipation of user wishes. We already have these types of things on our phones: Google Now automatically proposes a way home from work using location history data; Siri proposes similar ideas. However, the key factor here is trust. To execute anticipatory experiences, people have to give large companies permission to gather personal usage data in the background.

I already mentioned some examples of automatic testing of design variations used by Netflix, Vox Media and The Grid. This is one more way to personalize UX that could be put onto the shoulders of algorithms. Liam Spradlin describes the interesting concept of mutative design; it's a well-thought-out model of adaptive interfaces that considers many variables to fit particular users.

I've covered several examples of algorithm-driven design in practice. What tools do modern designers need for this? If we look back to the middle of the last century, computers were envisioned as a way to extend human capabilities. Roelof Pieters and Samim Winiger have analyzed computing history and the idea of the augmentation of human ability in detail. They see three levels of maturity for design tools:

Algorithm-driven design should be something like an exoskeleton for product designers, increasing the number and depth of decisions we can get through. How might designers and computers collaborate?

The working process of digital product designers could potentially look like this:

These tasks are of two types: the analysis of implicitly expressed information and already working solutions, and the synthesis of requirements and solutions for them. Which tools and working methods do we need for each of them?

Analysis of implicitly expressed information about users, which can be studied with qualitative research, is hard to automate. However, exploring the usage patterns of an existing product's users is a suitable task. We could extract behavioral patterns and audience segments, and then optimize the UX for them. It's already happening in ad targeting, where algorithms can cluster a user using implicit and explicit behavior patterns (within either a particular product or an ad network).
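
As a toy version of that segmentation step, one could cluster users on a couple of behavioral features with k-means. The feature columns below (sessions per week, average session minutes) are invented for illustration:

```python
# A minimal sketch of clustering users into audience segments by
# behavior, using scikit-learn's k-means.
from sklearn.cluster import KMeans
import numpy as np

usage = np.array([
    [1, 3], [2, 4], [1, 2],       # light users
    [14, 25], [12, 30], [15, 22]  # heavy users
])

segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(usage)
print(segments)   # e.g. [0 0 0 1 1 1]: two behavioral clusters discovered
```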

To train algorithms to optimize interfaces and content for these user clusters, designers should look into machine learning. Jon Bruner gives a good example: "A genetic algorithm starts with a fundamental description of the desired outcome, say, an airline's timetable that is optimized for fuel savings and passenger convenience. It adds in the various constraints: the number of planes the airline owns, the airports it operates in, and the number of seats on each plane. It loads what you might think of as independent variables: details on thousands of flights from an existing timetable, or perhaps randomly generated dummy information. Over thousands, millions or billions of iterations, the timetable gradually improves to become more efficient and more convenient. The algorithm also gains an understanding of how each element of the timetable, the take-off time of Flight 37 from O'Hare, for instance, affects the dependent variables of fuel efficiency and passenger convenience."

In this scenario, humans curate the algorithm and can add or remove limitations and variables. The results can be tested and refined with experiments on real users. With a constant feedback loop, the algorithm improves the UX, too. Although the complexity of this work suggests that analysts will be doing it, designers should be aware of the basic principles of machine learning; O'Reilly recently published a great mini-book on the topic. A bare-bones version of the genetic loop Bruner describes is sketched below.
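
In this sketch, a "timetable" is reduced to eight departure hours and the fitness function is invented; the select-and-mutate loop is the genetic-algorithm core:

```python
# A minimal sketch of a genetic algorithm: random candidates are scored,
# the fittest survive, and mutated copies refill the population.
import random

def fitness(timetable):
    # Hypothetical objective: prefer spread-out departures (passenger
    # convenience) while penalizing very early flights (crew cost).
    spread = len(set(timetable))
    early = sum(1 for h in timetable if h < 6)
    return spread - 2 * early

def mutate(timetable):
    t = timetable[:]
    t[random.randrange(len(t))] = random.randrange(24)
    return t

population = [[random.randrange(24) for _ in range(8)] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]     # mutation

print(sorted(population[0]), fitness(population[0]))  # best timetable found
```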

Two years ago, a tool for industrial designers named Autodesk Dreamcatcher made a lot of noise and prompted several publications from UX gurus. It's based on the idea of generative design, which has been used in performance, industrial design, fashion and architecture for many years now. Many of you know Zaha Hadid Architects; its office calls this approach parametric design.

Logojoy is a product meant to replace freelancers for simple logo design. You choose favorite styles, pick a color, and voila: Logojoy generates endless ideas. You can refine a particular logo, see an example of a corporate style based on it, and order a branding package with business cards, envelopes, etc. It's a perfect example of an algorithm-driven design tool in the real world! Dawson Whitfield, the founder, has described the machine learning principles behind it.

However, generative design is not yet established in digital product design, because it doesn't help to solve utilitarian tasks. Of course, the work of architects and industrial designers has enough limitations and specificities of its own, but user interfaces aren't static: their usage patterns, content and features change over time, often many times. However, if we consider the overall generative process, in which a designer defines rules that an algorithm uses to create the final object, there's a lot of inspiration in it.

It's not yet known how we can filter a huge number of generated concepts in digital product design, where usage scenarios are so varied. If algorithms could also help to filter generated objects, our job would be even more productive and creative. Then again, as product designers we already use generative design every day: in brainstorming sessions where we propose dozens of ideas, and when we iterate on screen mockups and prototypes. Why can't we offload a part of these activities to algorithms?

The experimental tool Rene by Jon Gold, who worked at The Grid, is an example of this approach in action. Gold taught a computer to make meaningful typographic decisions. He thinks this isn't far from how human designers are taught, so he broke the learning process into several steps:

His idea is similar to what Roelof and Samim say: Tools should be creative partners for designers, not just dumb executants.

Gold's experimental tool Rene is built on these principles. He also talks about imperative and declarative approaches to programming and says that modern design tools should choose the latter, focusing on what we want to calculate, not how. Jon uses vivid formulas to show how this applies to design and has already made a couple of low-level demos. You can try out the tool for yourself. It's a very early concept, but enough to give you the idea.

While Jon jokingly calls this approach brute-force design and multiplicative design, he emphasizes the importance of a professional being in control. Notably, he left The Grid team earlier this year.

Unfortunately, there are no tools for web and mobile product design that could help with analysis and synthesis on the same level as Autodesk Dreamcatcher does. However, The Grid and Wix could be considered more or less mass-level, straightforward solutions. Adobe is constantly adding features that could be considered intelligent: the latest release of Photoshop has a content-aware feature that intelligently fills in the gaps when you use the cropping tool to rotate an image or expand the canvas beyond the image's original size.

There is another experiment by Adobe and the University of Toronto: DesignScape automatically refines a design layout for you. It can also propose an entirely new composition.

You should definitely follow Adobe in its developments, because the company announced a smart platform named Sensei at the MAX 2016 conference. Sensei builds on Adobe's deep expertise in AI and machine learning, and it will be the foundation for future algorithm-driven design features in Adobe's consumer and enterprise products. In its announcement, the company refers to things such as semantic image segmentation (labeling each region in an image by type, for example, building or sky), font recognition (i.e., recognizing a font from a creative asset and recommending similar fonts, even from handwriting), and intelligent audience segmentation.

However, as John McCarthy, the late computer scientist who coined the term "artificial intelligence," famously said, "As soon as it works, no one calls it AI anymore." What was once cutting-edge AI is now considered standard behavior for computers. Here are a couple of experimental ideas and tools that could become part of the digital product designer's day-to-day toolkit:

But these are rare and patchy glimpses of the future. Right now, it's more about individual companies building custom solutions for their own tasks. One of the best approaches is to integrate these algorithms into a company's design system. The goals are similar: to automate a significant number of tasks in support of the product line; to achieve and sustain a unified design; to simplify launches; and to support current products more easily.

Modern design systems started as front-end style guidelines, but that's just a first step (integrating design into the code used by developers). Developers are still creating pages by hand. The next step is half-automatic page creation and testing using predefined rules.

Platform Thinking by Yury Vetrov (source)

Should your company follow this approach?

If we look in the near term, the value of this approach is more or less clear:

Altogether, this frees the designer from the routines of both development support and the creative process, while core decisions are still made by the designer. A neat side effect is that we will understand our own work better, because we will be analyzing it in an attempt to automate parts of it. That will make us more productive and better able to explain the essence of our work to non-designers. As a result, the overall design culture within a company will grow.

However, these benefits are not all easy to realize, and each has its limitations:

There are also ethical questions: is design produced by an algorithm valuable and distinct? Who is the author of the design? Wouldn't generative results be limited by a local maximum? Oliver Roeder says that computer art isn't any more provocative than paint art or piano art: the algorithmic software is written by humans, after all, using theories thought up by humans, using a computer built by humans, using specifications written by humans, using materials gathered by humans, in a company staffed by humans, using tools built by humans, and so on. Computer art is human art: a subset, rather than a distinction. The revolution is already happening, so why don't we lead it?

This is a story of a beautiful future, but we should remember the limits of algorithms: they're built on rules defined by humans, even if the rules are now being supercharged with machine learning. The power of the designer is that they can make and break rules; a year from now, we might define "beautiful" as something totally different. Our industry has both high- and low-skilled designers, and it will be easy for algorithms to replace the latter. However, those who can follow and break rules when necessary will find magical new tools and possibilities.

Moreover, digital products are getting more and more complex: we need to support more platforms, tweak usage scenarios for more user segments, and test more hypotheses. As Frog's Harry West says, human-centered design has expanded from the design of objects (industrial design) to the design of experiences (encompassing interaction design, visual design and the design of spaces). The next step will be the design of system behavior: the design of the algorithms that determine the behavior of automated or intelligent systems. Rather than hiring more and more designers, offload routine tasks to a computer. Let it play with the fonts.

8 Secrets to Achieving Financial Independence

If you make $1 million a year from a job, you could lose that job any day. If you make the same $1 million from owning hotels or businesses, no one can take that from you. Having a high income alone does not mean financial independence.

Most people believe the key to wealth is a high-paying job. Yes, it's easier to amass assets if you have more money coming in each month, but the true secret to increasing your net worth is to spend less than you make. It is a cliché, but it is the fundamental, absolute, non-negotiable reality of money. To escape this trap, you need to understand that income is not wealth.

What is wealth? My personal definition: wealth is the part of your net worth (assets minus liabilities) that generates capital gains, income, and dividends without your labor. If you are a doctor or lawyer, you need to put in long hours after years of specialty training and higher education to get a paycheck. On the other hand, if you have a portfolio of private businesses, car washes, parking garages, stocks, bonds, mutual funds, real estate, patents, trademarks, and other cash generators, you could sit by the pool. The real value, of course, is that you could maintain your lifestyle even if you were disabled or unable to continue working at your primary occupation. Better yet, unlike a salaried employee, wealth can't fire you; you have to squander it. It's far easier to lose a job than to wipe out a well-constructed portfolio.

The level of your wealth should be measured by the length of time you could maintain your standard of living without an additional paycheck. In other words, if you had to stop working right now, how long could you keep up your purchasing pattern for cars, clothing, music lessons, college tuition, video games, etc.? The average person isn't educated in this truth, which is why the more they earn, the more they are left wondering why financial independence and security continue to elude them, always seemingly just out of grasp.
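
That yardstick reduces to simple arithmetic: months of runway equal liquid assets divided by net monthly spending. A minimal sketch, with invented figures:

```python
# A minimal sketch of the "how long could you last" measure:
# months of runway = liquid assets / net monthly drawdown.
def months_of_runway(liquid_assets, monthly_expenses, passive_income=0.0):
    burn = monthly_expenses - passive_income   # net monthly drawdown
    if burn <= 0:
        return float("inf")   # passive income covers expenses: independent
    return liquid_assets / burn

print(months_of_runway(60_000, 4_000))          # 15.0 months
print(months_of_runway(60_000, 4_000, 4_500))   # inf: financially independent
```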
