John McAfee Says He’s No Longer Pitching ICOs "Due To SEC …

Anyone who understands the difference between a Free Republic with markets and the UCC Criminal Fraud UNITED STATES, CORP. INC. is de facto considered a Threat to their chicanery.

Lyn Ulbricht, mother of Ross Ulbricht, joins us today to discuss the arrest, conviction and unconscionable sentence of double life plus 40 years handed down to her son in the Silk Road case. We discuss the case against Ross and the exculpatory information that was withheld from the jury (and sometimes even the defence) during his trial. We also talk about the loss of his appeal in the Second Circuit and where the FreeRoss.org campaign goes from here.

https://www.corbettreport.com/interview-1285-lyn-ulbricht-updates-us-on

For as long as the Populace continues to CONSENT to the Board of Directors aka "CONgress" & its CEO aka "President" within their 10 square mile Criminal Fraud DC District of Criminals, the raping, murder & pillaging will continue. And the costume-wearing, jack-booted thugs will continue to enforce the Fraud.

You, per Black's Law Dictionary, CONSENT to it by birth, silence, signature, etc.

They're all involved in an elaborate scheme, based on contract law & Criminal deceit, to defraud The American People, who CONSENT (Black's Law Dictionary) & become accessories to the deceit & Criminal Fraud by contracting with the Criminal State.

We are "Governed" Indoctrinated into a Political, Educational, Religious & Economic UNITED STATES, CORP based on contract law which is based on Criminal Fraud, deceit & illusion.

The Private Corp UNITED STATES, CORP uses the cover of being a functional Government when in reality it is not, much like the Criminal Federal Reserve uses the "Federal" in its name as cover to give the illusion that it is a branch of the US Government when it is not.

Through bankruptcies, Criminal Contract Fraud & deceit the Charlatans have incrementally incorporated the US as well as your souls (birth cert) which are securitized via the Criminal Federal Reserve through to the IMF.

They're functioning off a corporate version of THE CONSTITUTION. It's the reason why The Global Criminal Oligarch Cabal Bankster Intelligence Crime Syndicate continues to lie, cheat, deceive, rape & pillage with impunity.

The only power they have over you is your CONSENT (Black's Law Dictionary). Pay no Taxes. Practice Peaceful Non-Participation & Non-Compliance, & stop being an accessory to their Criminal systems based on Criminal Fraud, Debt Bondage & Enslavement.

Vote with Your Dollars. Seek Alternative, Decentralized Systems outside the Control of the Borg. They're out there. Peer to Peer. Seek them.

It's because in times of trouble, the circle of trust contracts.

Smaller and smaller down to localities, and families. Local.

Trust in the federal government is collapsing/contracting along with other large institutions.

Smaller circle of trust = decentralization.

Long Agorism.

AGORISM: The ideology which asserts that the Libertarian philosophical position occurs in the real world in practice as Counter-Economics (see below).

AGORIST: Conscious practitioner of Counter-Economics; older terms include Left Libertarian and New Libertarian.

COUNTER-ECONOMICS: The study and/or practice of all human action which is forbidden by the State, including violation of or non-compliance with regulations; sale and delivery of controlled or forbidden substances; ignoring of all borders and internal state boundaries, customs, tariffs, duties and taxes; evasion of taxes, tributes, levies and assizes; non-compliance with personal regulation; and so on.

James Corbett: The Most Dangerous Philosophy: What the Oligarchs Do Not Want You To Know.

https://www.corbettreport.com/the-most-dangerous-philosophy-what-the-oli...

Read the rest here:

John McAfee Says He's No Longer Pitching ICOs "Due To SEC ...

John McAfee Fled to Belize, But He Couldn't Escape Himself

On November 12, 2012, Belizean police announced that they were seeking John McAfee for questioning in connection with the murder of his neighbor. Six months earlier, I began an in-depth investigation into McAfee's life. This is the chronicle of that investigation.

Twelve weeks before the murder, John McAfee flicks open the cylinder of his Smith & Wesson revolver and empties the bullets, letting them clatter onto the table between us. A few tumble to the floor. McAfee is 66, lean and fit, with veins bulging out of his forearms. His hair is bleached blond in patches, like a cheetah, and tattoos wrap around his arms and shoulders.

More than 25 years ago, he formed McAfee Associates, a maker of antivirus software that went on to become immensely popular and was acquired by Intel in 2010 for $7.68 billion. Now he's holed up in a bungalow on his island estate, about 15 miles off the coast of mainland Belize. The shades are drawn so I can see only a sliver of the white sand beach and turquoise water outside. The table is piled with boxes of ammunition, fake IDs bearing his photo, Frontiersman bear deterrent, and a single blue baby pacifier.


McAfee picks a bullet off the floor and fixes me with a wide-eyed, manic intensity. "This is a bullet, right?" he says in the congenial Southern accent that has stuck with him since his boyhood in Virginia.

"Let's put the gun back," I tell him. I'd come here to try to understand why the government of Belize was accusing him of assembling a private army and entering the drug trade. It seemed implausible that a wildly successful tech entrepreneur would disappear into the Central American jungle and become a narco-trafficker. Now I'm not so sure.

But he explains that the accusations are a fabrication. "Maybe what happened didn't actually happen," he says, staring hard at me. "Can I do a demonstration?"

He loads the bullet into the gleaming silver revolver, spins the cylinder.

"This scares you, right?" he says. Then he puts the gun to his head.

My heart rate kicks up; it takes me a second to respond. "Yeah, I'm scared," I admit. "We don't have to do this."

"I know we don't," he says, the muzzle pressed against his temple. And then he pulls the trigger. Nothing happens. He pulls it three more times in rapid succession. There are only five chambers.

"Reholster the gun," I demand.

He keeps his eyes fixed on me and pulls the trigger a fifth time. Still nothing. With the gun still to his head, he starts pulling the trigger incessantly. "I can do this all day long," he says to the sound of the hammer clicking. "I can do this a thousand times. Ten thousand times. Nothing will ever happen. Why? Because you have missed something. You are operating on an assumption about reality that is wrong."

It's the same thing, he argues, with the government's accusations. They were a smoke screen, an attempt to distort reality, but there's one thing everybody agrees on: The trouble really got rolling in the humid predawn murk of April 30, 2012.

It was a Monday, about 4:50 am. A television flickered in the guard station of McAfee's newly built, 2.5-acre jungle outpost on the Belizean mainland. At the far end of the property, a muddy river flowed slowly past. Crocodiles lurked on the opposite bank, and howler monkeys screeched. In the guard station, a drunk night watchman gaped at Blond Ambition, a Madonna concert DVD.

The guard heard the trucks first. Then boots hitting the ground and the gate rattling as the lock was snapped with bolt cutters. He stood up and looked outside. Dozens of men in green camouflage were streaming into the compound. Many were members of Belize's Gang Suppression Unit, an elite force trained in part by the FBI and armed with Taurus MT-9 submachine guns. Formed in 2010, their mission was to dismantle criminal organizations.

The guard observed the scene silently for a moment and then sat back down. After all, the Madonna concert wasn't over yet. Outside, flashlight beams streaked across the property. "This is the police," a voice blared over a bullhorn. "Everyone out!"

Deep in the compound, McAfee burst out of a thatched-roof bungalow that stood on stilts 20 feet off the ground. He was naked and held a revolver. Things had changed since his days as a high-flying software tycoon. By 2009 he had sold almost everything he owned (estates in Hawaii, Colorado, New Mexico, and Texas, as well as his 10-passenger plane) and moved into the jungle. He announced that he was searching for natural antibiotics in the rain forest and constructed a mysterious laboratory on his property. Now his jungle stronghold was under attack. The commandos were converging on him. There were 31 of them; he was outgunned and outmanned.

McAfee walked back inside to the 17-year-old in his bed. She was sitting up, naked, her long frizzy hair falling around her shoulders and framing the stars tattooed on her chest. She was terrified.

As the GSU stormed up the stairs, he put on some shorts, laid down his gun, and walked out with his hands up. The commandos collided with McAfee at the top of the stairs, slammed him against the wall, and handcuffed him.

"You're being detained on suspicion of producing methamphetamine," one of the cops said.

McAfee twisted to look at his accuser. "That's a startling hypothesis, sir," he responded. "Because I haven't sold drugs since 1983."


Nineteen eighty-three was a pivotal year for McAfee. He was 38 and director of engineering at Omex, a company that built information storage systems in Santa Clara, California. He was also selling cocaine to his subordinates and snorting massive amounts himself. When he got too high to focus, he'd take a quaalude. If he started to fall asleep at his desk, he'd snort some more coke to wake up. McAfee had trouble making it through the day and spent his afternoons drinking scotch to even out the tumult in his head.

He'd been a mess for a long time. He grew up in Roanoke, Virginia, where his father was a road surveyor and his mother a bank teller. His father, McAfee recalls, was a heavy drinker and "a very unhappy man" who McAfee says beat him and his mother severely. When McAfee was 15, his father shot himself. "Every day I wake up with him," McAfee says. "Every relationship I have, he's by my side; every mistrust, he is the negotiator of that mistrust. So my life is fucked."

McAfee started drinking heavily his first year at Roanoke College and supported himself by selling magazine subscriptions door-to-door. He would knock and announce that the lucky resident had won an absolutely free subscription; all they had to do was pay a small shipping and handling fee. "So, in fact, I am explaining to them why it's not free and why they are going to pay for it. But the ruse worked," McAfee recalls. He learned that confidence was all that mattered. He smiled, fixed them with his penetrating blue-eyed gaze, and hit them with a nonstop stream of patter. "I made a fortune," he says.

He spent his money on booze but managed to graduate and start a PhD in mathematics at Northeast Louisiana State College in 1968. He got kicked out for sleeping with one of his undergraduate students (whom he later married) and ended up coding old-school punch-card programs for Univac in Bristol, Tennessee. That didn't last long, either. He was arrested for buying marijuana, and though his lawyer got him off without a conviction, he was summarily fired.

Still, he had learned enough to gin up an impressive, totally fake résumé and used it to get a job at Missouri Pacific Railroad in St. Louis. It was 1969 and the company was attempting to use an IBM computer to schedule trains. After six months, McAfee's system began to churn out optimized train-routing patterns. Unfortunately, he had also discovered LSD. He would drop acid in the morning, go to work, and route trains all day. One morning he decided to experiment with another psychedelic called DMT. He did a line, felt nothing, and decided to snort a whole bag of the orangish powder. "Within an hour my mind was shattered," McAfee says.

People asked him questions, but he didn't understand what they were saying. The computer was spitting out train schedules to the moon; he couldn't make sense of it. He ended up behind a garbage can in downtown St. Louis, hearing voices and desperately hoping that nobody would look at him. He never went back to Missouri Pacific. Part of him believes he's still on that trip, that everything since has been one giant hallucination and that one day he'll snap out of it and find himself back on his couch in St. Louis, listening to Pink Floyd's Dark Side of the Moon.

From then on he felt like he was always one step away from a total breakdown, which finally came at Omex in 1983. He was snorting lines of coke off his desk most mornings, polishing off a bottle of scotch every day, and living in constant fear that he would run out of drugs. His wife had left him, he'd given away his dog, and in the wake of what he calls a mutual agreement, he left Omex. He ended up shuttered in his house, with no friends, doing drugs alone for days on end and wondering whether he should kill himself just as his father had. "My life was total hell," he says.

Finally he went to a therapist, who suggested he go to Alcoholics Anonymous. He attended a meeting and started sobbing. Someone gave him a hug and told him he wasn't alone.

"That's when life really began for me," he says.

He says he's been sober ever since.

When the Madonna concert ended, McAfee's drunken guard finally emerged from his station and strolled over to find out what was going on. The police quickly surrounded him. They knew who he was: Austin "Tino" Allen had been convicted 28 times for crimes ranging from robbery to assault, and he had spent most of his life in and out of prison.

The police lined everybody up against a rock wall as the sun rose. A low, heavy heat filled the jungle. Everybody began to sweat when the police fanned out to search the property. As an officer headed toward an outlying building, one of McAfee's dogs cut him off, growled, and, according to police, went in for an attack. The cop immediately shot the dog through the rib cage.

"What the fuck!" McAfee screamed. "That's my dog."

The police ignored him. They left the dead dog in the dirt while they rummaged through the compound. They found shotguns, pistols, a huge cache of ammunition, and hundreds of bottles of chemicals they couldn't identify. McAfee and the others were left in the sun for hours. (GSU commander Marco Vidal claims they were under the shade of a large tree.) By the time the police announced that they were taking several of them to jail, McAfee says his face was turning pink with sunburn. He and Allen were loaded into the back of a pickup. The truck tore off, heading southeast toward Belize City at 80 miles per hour.

McAfee tried to stay calm, but he had to admit that this was a bad situation. He had walked away from a luxurious life (mansions on multiple continents, sports cars, a private plane) only to end up in the back of a pickup cuffed to a notoriously violent man. Allen pulled McAfee close so he could be heard over the roar of the wind. McAfee tensed. "Boss, I just want to say that it's an honor to be here with you," Allen shouted. "You must be a really important person for them to send all these men to get you."

In 1986 two brothers in Pakistan coded the first known computer virus aimed at PCs. They weren't trying to destroy anything; it was simple curiosity. They wanted to see how far their creation would travel, so they included their names, addresses, and telephone numbers in the code of the virus. They named it Brain after their computer services shop in Lahore.

Within a year the phone at the shop was ringing: Brain had infected computers around the world. At the time, McAfee had been sober for four years and gotten a security clearance to work on a classified voice-recognition program at Lockheed in Sunnyvale, California. But then he came across an article in the San Jose Mercury News about the spread of the Pakistani Brain virus in the US.

He found the idea terrifying. Nobody knew for sure at the time why these intrusions were occurring. It reminded him of his childhood, when his father would hit him for no reason. "I didn't know why he did it," McAfee says. "I just knew a beating could happen any time." As a boy, he wasn't able to fight back. Now, faced with a new form of attack that was hard to rationalize, he decided to do something.

He started McAfee Associates out of his 700-square-foot home in Santa Clara. His business plan: Create an antivirus program and give it away on electronic bulletin boards. McAfee didn't expect users to pay. His real aim was to get them to think the software was so necessary that they would install it on their computers at work. They did. Within five years, half of the Fortune 100 companies were running it, and they felt compelled to pay a license fee. By 1990, McAfee was making $5 million a year with very little overhead or investment.

His success was due in part to his ability to spread his own paranoia, the fear that there was always somebody about to attack. Soon after launching his company, he bought a 27-foot Winnebago, loaded it with computers, and announced that he had formed the first "antivirus paramedic unit." When he got a call from someone experiencing computer problems in the San Jose area, he drove to the site and searched for "virus residue." Like a good door-to-door salesman, there was a kernel of truth to his pitch, but he amplified and embellished the facts to sell his product. The RV therefore was not just an RV; it was "the first specially customized unit to wage effective, on-the-spot counterattacks in the virus war."

It was great publicity, executed with drama and sly wit. By the end of 1988, he was on The MacNeil/Lehrer NewsHour telling the country that viruses were causing so much damage, some companies were "near collapse from financial loss." He underscored the danger with his 1989 book, Computer Viruses, Worms, Data Diddlers, Killer Programs, and Other Threats to Your System. "The reality is so alarming that it would be very difficult to exaggerate," he wrote. "Even if no new viruses are ever created, there are already enough circulating to cause a growing problem as they reproduce. A major disaster seems inevitable."

In 1992 McAfee told almost every major news network and newspaper that the recently discovered Michelangelo virus was a huge threat; he believed it could destroy as many as 5 million computers around the world. Sales of his software spiked, but in the end only tens of thousands of infections were reported. Though McAfee was roundly criticized for his proclamation, the criticism worked in his favor, as he explained in an email in 2000 to a computer-security blogger: "My business increased tenfold in the two months following the stories and six months later our revenues were 50 times greater and we had captured the lion's share of the anti-virus market."

This ability to infect others with his own paranoia made McAfee a wealthy man. In October 1992 his company debuted on Nasdaq, and his shares were suddenly worth $80 million.

The jail cell was about 10 feet by 10 feet. The concrete floor was bare and cold, the smell of urine overpowering. A plastic milk container in the corner had been hacked open and was serving as a toilet. The detention center was located in the Queen Street police station, but everybody in Belize City called it the Pisshouse. In the shadows of his cell, McAfee could see the other inmates staring at him.

No charges had been filed yet, though the police had confiscated what they said were two unlicensed firearms on McAfee's property; they still couldn't identify the chemicals they had found. McAfee said he had licenses for all his firearms and explained that the chemicals were part of his antibiotic research. The police weren't buying it.

McAfee pulled 20 Belizean dollars out of his shoe and passed it through the bars to a guard. "You got a cigarette?" he asked.

McAfee hadn't smoked for 10 years, but this seemed like a good time to start again. The guard handed him a book of matches and a pack of Benson & Hedges. McAfee lit one and took a deep drag. He was supposed to be living out a peaceful retirement in a tropical paradise. Now he was standing in jail, holding up his pants with one hand because the police had confiscated his belt. "Use this," Allen said, offering him a dirty plastic bag.

McAfee looked confused. "You tie your pants," Allen explained.

McAfee fed the bag through two of his belt loops, cinched it tight, and tied a knot. It worked.

"Welcome to the Pisshouse," Allen said, smiling.

McAfee lived in Silicon Valley for nearly 20 years. Outwardly he seemed to lead a traditional life with his second wife, Judy. He was a seasoned businessman whom startups turned to for advice. Stanford Graduate School of Business wrote two case studies highlighting his strategies. He was regularly invited to lecture at the school, and he was awarded an honorary doctorate from his alma mater, Roanoke College. In 2000 he started a yoga institute near his 10,000-square-foot mansion in the Colorado Rockies and wrote four books about spirituality. Even after his marriage fell apart in 2002, he was a respectable citizen who donated computers to schools and took out newspaper ads discouraging drug use.

But as he neared retirement age in the late 2000s, he started to feel like he was deluding himself. His properties, cars, and planes had become a burden, and he realized that he didn't want the traditional rich man's life anymore. Maintaining so many possessions was a constant distraction; it was time, he felt, to try to live more rustically. "John has always been searching for something," says Jennifer Irwin, McAfee's girlfriend at the time. She remembers him telling her once that he was trying to reach "the expansive horizon."

He was also hurting financially. The economic collapse in 2008 hit him hard, and he couldn't afford to maintain his lifestyle. By 2009 he'd auctioned off almost everything he owned, including more than 1,000 acres of land in Hawaii and the private airport he'd built in New Mexico. He was trying in part to deter people from suing him on the assumption that he had deep pockets. He was already facing a suit from a man who had tripped on his property in New Mexico. Another suit alleged that he was responsible for the death of someone who crashed during a lesson at a flight school McAfee had founded. He figured that if he were out of the country, he'd be less of a target. And he knew that, should he lose a case, it would be harder for the plaintiffs to collect money if he lived overseas.

In early 2008 McAfee started searching for property in the Caribbean. His criteria were pretty basic: He was looking for an English-speaking country near the US with beautiful beaches. He quickly came across a villa on Ambergris Caye in Belize. In the early '90s he had visited the nation of 189,000 people and loved it. (Today the population is around 356,000.) He looked at the property on Google Earth, decided it was perfect, and bought it. The first time he saw it in person was in April 2008, when he moved in.

Soon after his arrival, McAfee began to explore the country. He was particularly fascinated by stories of a majestic Mayan city in the jungle and hired a guide to go see it. Boating up a river that snaked into the northern jungle, they stopped at a makeshift dock that jutted from the dense vegetation. McAfee jumped ashore, pushed through the vines, and caught sight of a towering, crumbling temple. Trees had grown up through the ancient buildings, encasing them in roots. Giant stone faces glared out through the foliage, mouths agape. As the men walked up the steps of the temple, the guide described how the Mayans sacrificed their prisoners, sending torrents of blood down the very stairs he and McAfee were now climbing.

McAfee was spellbound. "Belize is so raw and so clear and so in-your-face. There's an opportunity to see something about human nature that you can't really see in a politer society, because the purpose of society is to mask ourselves from each other," McAfee says. The jungle, in other words, would give him the chance to find out exactly who he was, and that opportunity was irresistible.

So in February 2010 he bought two and a half acres of swampy land along the New River, 10 miles upriver from the Mayan ruins. Over the next year, he spent more than a million dollars filling in the swamp and constructing an array of thatched-roofed bungalows. While his girlfriend, Irwin, stayed on Ambergris Caye, McAfee outfitted the place like Kublai Khan's sumptuous house of pleasure. He imported ancient Tibetan art and shipped in a baby grand piano even though he had never taken lessons. There was no Internet. At night, when the construction stopped, there was just the sound of the river flowing quietly past. He sat at the piano and played exuberant odes of his own creation. "It was magical," he says.

He didn't like the idea of getting old, though, so he injected testosterone into his buttocks every other week. He felt that it gave him youthful energy and kept him lean. Plus, he wasn't looking for a quiet retirement. He started a cigar manufacturing business, a coffee distribution company, and a water taxi service that connected parts of Ambergris Caye. He continued to build more bungalows on his property even though he had no pressing need for them.

In 2010 McAfee visited a beachfront resort for lunch and met Allison Adonizio, a 31-year-old microbiologist who was on vacation. In the resort's dining room, Adonizio explained that she was doing postgrad research at Harvard on how plants combat bacteria. She was particularly interested in plant compounds that appeared to prevent bacteria from causing infections by interfering with the way the microbes communicated. Eventually, Adonizio explained, the work might also lead to an entire new class of antibiotics.

McAfee was thrilled by the idea. He had fought off digital contagions, and now he could fight organic ones. It was perfect.

He immediately proposed they start a business to commercialize her research. Within minutes McAfee was talking in rapid-fire bursts about how this would transform the pharmaceutical industry and the entire world. They would save millions of lives and reinvent whole industries. Adonizio was astounded. "He offered me my dream job," she says. "My own lab, assistants. It was incredible."

Adonizio said yes on the spot, quit her research position in Boston, sold the house she had just bought, and moved to Belize. McAfee soon built a laboratory on his property and stocked it with tens of thousands of dollars' worth of equipment. Adonizio went to work trying to isolate new plant compounds that might be effective medicines, while McAfee touted the business to the international press.

But the methodical pace of Adonizio's scientific research couldn't keep up with McAfee's enthusiasm, and his attention seemed to wander. He began spending more time in Orange Walk, a town of about 13,000 people that was 5 miles from his compound. McAfee described it in an email to friends as "the asshole of the world: dirty, hot, gray, dilapidated." He liked to walk the town's poorly paved streets and take pictures of the residents. "I gravitate to the world's outcasts," he explained in another email. "Prostitutes, thieves, the handicapped ... For some reason I have always been fascinated by these subcultures."

Though he says he never drank alcohol, he became a regular at a saloon called Lover's Bar. The proprietor, McAfee wrote to his friends, was partial to "shatteringly bad Mexican karaoke music to which voices beyond description add a disharmony that reaches diabolic proportions." McAfee quickly noticed that the place doubled as a whorehouse, servicing, as he put it, "cane field workers, street vendors, fishermen, farmers, anyone who has managed to save up $15 for a good time."

This was the real world he was looking for, in all its horror. The bar girls were given one Belize dollar for every beer a patron bought them. To increase their earnings, some of the women would chug beers, vomit in the restroom, and return to chug more. One reported drinking 50 beers in one day. "Ninety-nine percent of people would run because they'd fear for their safety or sanity," McAfee says. "I couldn't do that. I couldn't walk away."

McAfee started spending most mornings at Lover's. After six months, he sent out another update to his friends: "My fragile connection with the world of polite society has, without a doubt, been severed," he wrote. "My attire would rank me among the worst-dressed Tijuana panhandlers. My hygiene is no better. Yesterday, for the first time, I urinated in public, in broad daylight."

McAfee knew he had entered a dangerous world. "I have no illusions," he noted in another dispatch. "We are tainted by everything we touch."

Evaristo "Paz" Novelo, the obese Belizean proprietor of Lover's, liked to sit at a corner table and squint at his customers through perpetually puffy eyes. He admits to a long history of operating brothels and prides himself on his ability to figure out exactly what will please his patrons. Early on, he asked whether McAfee was looking for a woman. When McAfee said no, Novelo asked whether he wanted a boy. McAfee declined again. Then Novelo showed up at McAfee's compound with a 16-year-old girl named Amy Emshwiller.

Emshwiller had a brassy toughness that belied her girlishness. In a matter-of-fact tone, she told McAfee that she had been abused as a child and said that her mother had forced her to sleep with dozens of men for money. "I don't fall in love," she told him. "That's not my job." She carried a gun, wore aviator sunglasses, and had on a low-cut shirt that framed her ample cleavage.

McAfee felt a swirl of emotions: lust, compassion, pity. "I am the male version of Amy," he says. "I resonated with her story because I lived it."

Emshwiller, however, felt nothing for him. "I know how to control men," she says. "I told him my story because I wanted him to feel sorry for me, and it worked." All Emshwiller saw was an easy mark. "A millionaire in freaking Belize, where people work all day just to make a dime?" she says. "Who wouldn't want to rob him?"

McAfee soon realized that Emshwiller was dangerous and unstable, but that was part of her attractiveness. "She can pretend sanity better than any woman I have ever known," he says. "And she can be alluring, she can be very beautiful, she can be butchlike. She's a chameleon." Within a month they were sleeping together, and McAfee started building a new bungalow on his property for her.

Visiting from Ambergris Caye, McAfee's girlfriend, Jennifer Irwin, was flabbergasted. She asked him to tell the girl to leave, and when McAfee refused, Irwin left the country. McAfee hardly blames her. "What I basically did was can a solid 12-year relationship for a stark-raving madwoman," he says. "But I honestly fell in love."

One night Emshwiller decided to make her move. She slipped out of bed and pulled McAfee's Smith & Wesson out of a holster hanging from an ancient Tibetan gong in his bedroom. Her plan, if it could be called that, was to kill him and make off with as much cash as she could scrounge up. She crept to the foot of the bed, aimed, and started to pull the trigger. But at the last moment she closed her eyes, and the bullet went wide, ripping through a pillow. "I guess I didn't want to kill the bastard," she admits.

McAfee leaped out of bed and grabbed the gun before she could fire again. She ran to the bathroom, locked herself in, and asked if he was going to shoot her. He couldn't hear out of his left ear and was trying to get his bearings. Finally he told her he was going to take away her phone and TV for a month. She was furious.

>"I basically canned a solid 12-year relationship for a stark-raving madwoman," McAfee says. "But I fell in love."

"But I didn't even kill you!" she shouted.

McAfee decided it was better for Emshwiller to have her own place about a mile down the road in the village of Carmelita. So in early 2011 he built her a house in the village. Many of the homes are made of stripped tree trunks and topped with sheets of corrugated iron; 10 percent have no electricity. The village has a handful of dirt roads populated with colonies of biting ants and a grassy soccer field surrounded by palm trees and stray dogs. The town's biggest source of income: sand from a pit by the river that locals sell to construction companies.

Emshwiller, who had grown up in the area, warned McAfee that the village was not what it appeared to be. She told him that the tiny, impoverished town of 1,600 was in fact a major shipment site for drugs moving overland into Mexico, 35 miles to the north. As Emshwiller described it, this village in McAfee's backyard was crawling with narco-traffickers.

It was a revelation perfectly tailored to feed into McAfee's latent paranoia. "I was massively disturbed," he says. "I fell in love with the river, but then I discovered the horrors of Carmelita."

He asked Emshwiller what he should do. "She wanted me to shoot all the men in the town," McAfee says. It occurred to him that she might be using him to exact revenge on people who had wronged her, so he asked the denizens of Lover's for more information. They told him stories of killings, torture, and gang wars in the area. For McAfee, the town began to take on mythic proportions. "Carmelita was literally the Wild West," he says. "I didn't realize that 2 miles away was the most corrupt village on the planet."

He decided to go on the offensive. After all, he was a smart Silicon Valley entrepreneur who had launched a multibillion-dollar company. Even though he had lost a lot of money in the financial crisis, he was still wealthy. Maybe he couldn't maintain multiple estates around the world, but surely he could clean up one village.

He started by solving some obvious problems. Carmelita had no police station, so McAfee bought a small cement house and hired workers to install floor-to-ceiling iron bars. Then he told the national cops responsible for the area to start arresting people. The police protested that they were ill-equipped for the job, so McAfee furnished them with imported M16s, boots, pepper spray, stun guns, and batons. Eventually he started paying officers to patrol during their off-hours. The police, in essence, became McAfee's private army, and he began issuing orders. "What I'd like you to do is go into Carmelita and start getting information for me," he told the officers on his payroll. "Who's dealing drugs, and where are the drugs coming from?"

When a 22-year-old villager nicknamed Burger fired a gun outside Emshwiller's house in November 2011, McAfee decided he couldn't rely on others to get the work done; he needed to take action himself. An eyewitness told him that Burger had shot at a motorcycle; it looked like a drug deal gone bad. Burger's sister said that he was firing at stray dogs that attacked him. Either way, McAfee was incensed. He drove his gray Dodge pickup to the family's wooden shack near the river and strode into the muddy yard with Emshwiller as his backup (she was carrying a matte-black air rifle with a large scope). Burger wasn't there, but his mother, sister, and brother-in-law were. "I'm giving you a last chance here," McAfee said, holding his Smith & Wesson. "Your brother will be a dead man if he doesn't turn in that gun. It doesn't matter where he goes."

"It was like he thought he was in a movie," says Amelia Allen, the shooter's sister. But she wasn't going to argue with McAfee. Her mother pulled the gun out of a bush and handed it to him.

Soon, McAfee was everywhere. He pulled over a suspicious car on the road only to discover that it was filled with elderly people and children. He offered a new flatscreen TV to a small-time marijuana peddler on the condition that the man stop dealing (the guy accepted, though the TV soon broke). "It was like John Wayne came to town," says Elvis Reynolds, former chair of the village council.

When I visited the village, Reynolds and others admitted that there were fights and petty theft but insisted that Carmelita was simply an impoverished little village, not a major transit point for international narco-traffickers, as McAfee alleges. The village leaders, for their part, were dumbfounded. Many were unfamiliar with antivirus software and had never heard of John McAfee. "I thought he would come by, introduce himself, and explain what he was doing here, but he never did," says Feliciano Salam, a soft-spoken resident who has served on the village council for two years. "He just showed up and started telling us what to do."

The fact that he was running a laboratory on his property only added to the mystery. Adonizio was continuing to research botanical compounds, but McAfee didn't want to tell the locals anything about it. In part he was worried about corporate espionage. He had seen white men in suits standing beside their cars on the heavily trafficked toll bridge near his property and was sure they were spies. "Do you realize that Glaxo, Bayer, every single drug company in the world sent people out there?" McAfee says. "I was working on a project that had some paradigm-shifting impact on the drug world. It would be insanity to talk about it."

McAfee became convinced that he was being watched at all hours. Across the river, he saw people lurking in the forest and would surveil them with binoculars. When Emshwiller visited, she never noticed anybody but repeatedly told McAfee to be careful. She heard rumors that gang members were out to "jack" him: rob and kill him. On one occasion, she recorded a village councilman discussing how to dispatch McAfee with a grenade. McAfee was wowed by her street smarts ("She is brilliant beyond description," he says) and relished the fact that she had come full circle and was now defending him. "He got himself into a very entangled, dysfunctional situation," says Katrina Ancona, the wife of McAfee's partner in the water taxi business. "We kept telling him to get out."

Adonizio was also worried about McAfee's behavior. He had initially told her that the area was perfectly safe, but now she was surrounded by armed men. When she went to talk to McAfee in his bungalow, she noticed garbage bags filled with cash and blister packs of pharmaceuticals, including Viagra. She lived just outside of Carmelita and had never had any problems. If there was any danger, she felt that it was coming from McAfee. "He turned into a very scary person," she says. She wasn't comfortable living there anymore and left the country.

George Lovell, CEO of the Ministry of National Security, was also concerned that McAfee was buying guns and hiring guards. "When I see people doing this, my question is, what are you trying to protect?" Lovell says. Marco Vidal, head of the Gang Suppression Unit, concurred. "We got information to suggest that there may have been a meth laboratory at his location," he wrote in an email. "Given the intelligence on McAfee, there was no scope for making efforts to resolve the matter." He proposed a raid, and his superiors approved it.

When members of the GSU swept into McAfee's compound on April 30, 2012, they found no meth. They found no illegal drugs of any kind. They did confiscate 10 weapons and 320 rounds of ammunition. Three of McAfee's security guards were operating without a security guard license, and charges were filed against them. McAfee was accused of possessing an unlicensed firearm and spent a night in the Queen Street jail, aka the Pisshouse.

But the next morning, the charges were dropped and McAfee was released. He was convinced, however, that his war on drugs had made him some powerful enemies.

He had reason to worry. According to Vidal, McAfee was still a "person of interest," primarily because the authorities still couldn't explain what he was up to. "The GSU makes no apologies for deeming a person in control of a laboratory, with no approval for manufacturing any substance, having gang connections and heavily armed security guards, as a person of interest," Vidal wrote.

Vidal's suspicions may not have been far off. Two years after moving to Belize, McAfee began posting dozens of queries on Bluelight.ru, a drug discussion forum. He explained that he had started to experiment with MDPV, a psychoactive stimulant found in bath salts, a class of designer drugs that have effects similar to amphetamines and cocaine. "When I first started doing this I accidently got a few drops on my fingers while handling a used flask and didn't sleep for four days," McAfee posted. "I had visual and auditory hallucinations and the worst paranoia of my life."

McAfee indicated, though, that the heightened sexuality justified the drug's risks and claimed to have produced 50 pounds of MDPV in 2010. "I have distributed over 3,000 doses exclusively in this country," he wrote. But neither Emshwiller, Adonizio, nor anyone else I spoke with observed him making the stuff. So how could he have produced 50 pounds without anyone noticing?

McAfee has a simple explanation: The whole thing was an elaborate prank aimed at tricking drug users into trying a notoriously noxious drug. "It was the most tongue-in-cheek thing in the fucking world," he says, and denies ever taking the substance. "If I'm gonna do drugs, I'm gonna do something that I know is good," he says. "I'm gonna grab some mushrooms, number one, and maybe get some really fine cocaine."

See original here:

John McAfee Fled to Belize, But He Couldn't Escape Himself

Clerk of Court – Home

Welcome to the official website of the Ascension Parish Clerk of Court!

Our office is pleased to provide an online resource to help residents and visitors obtain information and conduct business. We hope you find this website useful and we welcome your suggestions and comments for improvement.

Our mission is to provide excellence in service, preservation and management of records and provide access to legal documents filed in our office. Exceptional customer service is very important to us and we strive to provide this service with professionalism, proficiency and courtesy to our customers.

We have offices located in both Donaldsonville and Gonzales for your convenience. Our main office is located at the Courthouse in Donaldsonville and our satellite office in Gonzales is located across the street from the Courthouse. Our Minute Clerk and Criminal Department is located in the Courthouse complex in Gonzales.

The duties and responsibilities of the Clerk of Court are established in the Louisiana Constitution which include but are not limited to: Clerk of the District Court, Clerk of the Parish Court, Ex-Officio Recorder of Deeds, Mortgages, and other legal instruments, Treasurer for the Court system, Chief Elections Officer for the Parish and Custodian of Voting Machines, Ex-Officio Member of the Jury Commission, and Ex-Officio Member of the Board of Supervisors of Elections.

We strive to continue to keep pace with the latest technology and are proud to offer an online record search for Mortgage, Conveyance, Maps, Criminal, Traffic and Civil records. You can subscribe to our Ascension Clerk of Court Electronic Search System (ACCESS) through a monthly subscription or day rate. Each office is equipped with public access computers that will enable you to access general information for records filed in our office.

Fees collected for recordings, certified copies and all services rendered in connection with civil and criminal proceedings are established by statute. All salaries and expenses of the office are paid out of the fees that are collected, which makes the Clerk of Court's Office entirely self-supporting.

See the rest here:

Clerk of Court - Home

Ron Paul Exposes The Human Suffering Of "Cultural Marxism"

Ron Paul KNOWS the score: Madlad Ron Paul Tweets A. Wyatt Mann Cartoon Blasting Jew Cultural Marxism

You make sense, as does this gentleman: Richard B, June 30, 2018 at 8:53 pm: Just because the other person is bad doesn't make you good. But, is the other person bad? Maybe, maybe not. Let's talk about it and find out, using all of the available intellectual and social resources at our disposal while exercising our rights to free speech and assembly in the process. Oh, wait, we can't do that. And who's responsible, the Puritans? They're long gone.

In any event, when a group is condemned as morally bankrupt and another set up as morally superior, one has to consider the source. Judaism passing itself off as morally superior is laughable. It's literally a religion of projection. It's guilty of every accusation it hurls at Whites, i.e., Ethno-Supremacy; it's bad to say Master Race but OK to say Chosen People (and did the Germans really say they were the Master Race, or is that itself a projection?). It's exclusive, not inclusive; it's racist and bullying (Palestine! Hello!).

The irony is that you can't imagine the Jewish people feeling guilt not because they're innocent, but because of their Myth of Innocence, which actually says, in effect, "We never do anything wrong. Things are done to us." When you live like that you become incapable of learning, change, and growth. Essential qualities today for any people with the pretension of wanting to be global leaders (or, to put it more baldly, to rule the world).

Which is why, though they're certainly good at shame, blame, denial, projection, infiltration and subversion, they're no damn good at social management. Just look at the places they rule over. The sun never sets on their dysfunction. It's literally everywhere. And they're calling US morally bankrupt?!

It's this amazing lack of self-awareness (another important quality needed to lead anything today, let alone an entire civilization) and incredible inability to admit when they're wrong, about anything, that keeps them from seeing those moments when in fact they are wrong, i.e., Whites are capable of feeling guilt exactly because they HAVE a conscience, which Jews and many other non-White groups simply lack, as if they all have a morality chip missing in their DNA. In short, having a conscience that the other groups in general and Jews in particular lack makes Whites morally superior, not inferior.

And if Jews aren't morally superior but insist on saying they are, in spite of all the evidence to the contrary, then what are they? Well, aside from being inauthentic and dishonest, it makes them Psychopaths, of course!

Any one individual or group walking around acting as if they're morally superior when they're actually criminal psychopaths is DANGEROUS! And this is why so many times throughout history they've had to be stopped. And it's happening again now ON A GLOBAL SCALE!

No one's saying Whites are perfect. Kevin's article makes it clear just how imperfect they are, though there's certainly lots of other evidence. Evidence we know about because Whites Don't Hide Their Imperfections! They actually make it a matter of public discourse so they can learn, change, and grow! No, we're not saying Whites are perfect. But we are saying that Jews are dangerous, and not just to Whites. But Whites have to think about themselves. The West is a towering human achievement and Whites have nothing to apologize for. Only a people poisoned by envy and a lust for power would suggest otherwise. And such a people can't be morally superior to anyone.

Sure, Whites have made mistakes and, given the accidents of history, it's understandable we've made them. But we are the ONLY people in the history of the world who believe in and practice the idea of a Developing Conscience. Whereas the many peoples judging us aren't even aware that that's possible or, to the extent they're aware, desirable.

Again, we have nothing to apologize for and everything to be proud of. We're going to see more and more Whites express their justifiable irritation, and not just irritation, at constantly being judged by a people who we all know will never judge themselves, thereby revealing their hypocrisy and moral inferiority. When this happens, their worst nightmare is going to turn into a daymare as they realize that the usual accusations don't work anymore.

Visit link:

Ron Paul Exposes The Human Suffering Of "Cultural Marxism"

Ron Paul disavows racist newsletters under his name – CBS News


Ron Paul reiterated Tuesday that he did not write a series of newsletters that appeared under his name in the 1980s and 1990s that included controversial comments about African-Americans, including a claim that "[o]rder was only restored in L.A. when it came time for the blacks to pick up their welfare checks."

Asked by CBS News and National Journal if the newsletters are fair game on Tuesday in New Hampshire, Paul responded, "I don't know whether fair is the right word."

"I mean, it's politics," he continued. "Nobody talked about it for 20 years until they found out that the message of liberty was making progress. And everybody knows I didn't write them, and it's not my sentiment, so it's sort of politics as usual."

Writing in The New Republic in 2008, reporter James Kirchick revealed some particularly incendiary passages from the monthly newsletters, which carried names like "Ron Paul's Freedom Report" and the "Ron Paul Political Report." Many of the newsletters, which were mostly written in the first person and usually didn't otherwise carry a byline, were reportedly being held in collections of extreme-right political literature.

The newsletters included a criticism of Ronald Reagan for legislation creating a federal holiday in honor of Martin Luther King Jr., who is described as a "world-class philanderer who beat up his paramours" and "seduced underage girls and boys."

"We can thank him for our annual Hate Whitey Day," one newsletter said of Reagan, according to Kirchick. The newsletters also claimed that AIDS sufferers "enjoy the attention and pity that comes with being sick," expressed support for and offered advice to the "local militias now training to defend liberty" shortly before the Oklahoma City bombing, and questioned whether the 1993 World Trade Center bombing "was a setup by the Israeli Mossad."

Kirchick revisited the newsletters in the Weekly Standard on Tuesday, writing that "Paul's lucrative and decades-long promotion of bigotry and conspiracy theories, for which he has yet to account fully, and his continuing espousal of extremist views...should make him unwelcome at any respectable forum."

Kirchick tied the newsletters to Paul's willingness to appear on the radio program of conspiracy theorist Alex Jones, who has reportedly accused the government of encouraging "homosexuality with chemicals so that people don't have children." He noted that Paul seemed open to Jones' suggestion that the military's NORTHCOM combatant command is "taking over" the nation.

Paul denied his involvement with the newsletters back in 2008, saying the controversial comments "are not mine and do not represent what I believe or have ever believed."

"When I was out of Congress and practicing medicine full-time, a newsletter was published under my name that I did not edit. Several writers contributed to the product," he said. "For over a decade, I have publicly taken moral responsibility for not paying closer attention to what went out under my name."

In 2008, the libertarian magazine Reason (citing libertarian activists, some close to Paul) reported that Paul's chief ghostwriter for the newsletters was one Llewellyn Rockwell, Jr., who was Paul's congressional chief of staff from 1978 to 1982 and a longtime Paul confidant and adviser. (Rockwell denies this.) Paul and his wife were officers of Ron Paul & Associates, the now-defunct company that published the newsletters and reportedly earned nearly one million dollars in one year, according to a 1993 tax document. Paul, his family and Rockwell were listed as four of the company's 11 employees.

Paul's campaign chairman, Jesse Benton, told Hotsheet Tuesday that "We take Ron at [his] word that he did not write" the newsletters.

"So have his constituents in Texas," Benton continued. "We do so because everything he has worked and stood for forty years stands anathema to racism. We know that Dr. Paul stands for Liberty for all Americans."

Asked if the issue was fair game, Benton responded, "He has answered questions about these newsletters for 20 years, but it is reasonable that he answer them again now."

"We are confident that Americans will look to his vast, consistent and principled record, his life [as] a doctor, faithful husband and family man, and accept his answer," he added.

Excerpt from:

Ron Paul disavows racist newsletters under his name - CBS News

Trance Retreat

PRICING

Early bird discount

We are offering a 5% early bird discount for anyone who confirms their booking and makes their payment in full by June 30, 2018.

What is included in the cost?

- Catered Breakfast, Lunch & Dinner (and snacks) served daily
- Complimentary non-alcoholic beverages and select alcoholic beverages
- Private estate accommodation for 8 days (7 nights)
- Daily room cleaning
- Return Transportation from fixed pick-up point in Bali, Indonesia
- Use of Studio Monitors (shared) on-site
- Guestlist & Transportation for any external Trance Retreat events hosted in Indonesia during the week

Optional group excursions and outings may be offered at a separate cost.

Is there a deposit required?

A 500 non-refundable deposit will secure your booking spot (pending selection) with the remainder of the balance due by August 31, 2018.
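To make the payment arithmetic above concrete, here is a minimal sketch of the schedule, assuming a hypothetical package price. The page never states the total price or the currency of the 500 deposit; only the 5% early bird discount, the 500 deposit, and the due dates come from the text, so treat BASE_PRICE as a placeholder.

```python
# Hypothetical payment-schedule calculator for the retreat pricing above.
# BASE_PRICE is an assumed placeholder; the page does not state the package
# price or currency. The 5% discount and 500 deposit come from the text.

BASE_PRICE = 2000.0          # hypothetical package price (currency units)
EARLY_BIRD_DISCOUNT = 0.05   # 5% off if paid in full by June 30, 2018
DEPOSIT = 500.0              # non-refundable deposit; balance due by Aug 31, 2018


def total_cost(early_bird: bool) -> float:
    """Total package cost, with the early bird discount applied if eligible."""
    discount = EARLY_BIRD_DISCOUNT if early_bird else 0.0
    return BASE_PRICE * (1.0 - discount)


def remaining_balance(early_bird: bool) -> float:
    """Amount still owed after the deposit secures the booking spot."""
    return total_cost(early_bird) - DEPOSIT


if __name__ == "__main__":
    for early in (True, False):
        label = "early bird" if early else "standard"
        print(f"{label}: total {total_cost(early):.2f}, "
              f"deposit {DEPOSIT:.2f}, balance {remaining_balance(early):.2f}")
```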

Arrival and Departure

We will set a central, tourist-friendly point in Bali as the initial pick-up at the start of the retreat (approx. noon on Sept 16) and also as the return drop-off point (approx. 3 PM on Sept 23).

Full Retreat Details

A detailed preparation guide and schedule of instructional events and side events will be emailed out to participants no later than 1 month prior to the start of the Retreat.

Originally posted here:

Trance Retreat

Fact Check: Has Trump declared bankruptcy four or six …

"You've taken business bankruptcies six times."

Hillary Clinton

"On occasion, four times, we used certain laws that are there."

Donald Trump

THE FACT CHECKER | Clinton is correct.

Trump's companies have filed for Chapter 11 bankruptcy protection, which means a company can remain in business while wiping away many of its debts. The bankruptcy court ultimately approves a corporate budget and a plan to repay remaining debts; often shareholders lose much of their equity.

Trump's Taj Mahal opened in April 1990 in Atlantic City but, six months later, defaulted on interest payments to bondholders as his finances went into a tailspin, The Washington Post's Robert O'Harrow found. In July 1991, Trump's Taj Mahal filed for bankruptcy. He could not keep up with debts on two other Atlantic City casinos, and those two properties declared bankruptcy in 1992. A fourth property, the Plaza Hotel in New York, declared bankruptcy in 1992 after amassing debt.

PolitiFact uncovered two more bankruptcies filed after 1992, totaling six. Trump Hotels & Casino Resorts filed for bankruptcy again in 2004, after accruing about $1.8 billion in debt. Trump Entertainment Resorts also declared bankruptcy in 2009, after being hit hard during the 2008 recession.

Why the discrepancy? Perhaps this will give us an idea: Trump told Washington Post reporters that he counted the first three bankruptcies as just one. Collapse the three 1991-92 casino filings into a single event and the tally drops to four: the combined casino bankruptcy, the Plaza Hotel, the 2004 filing, and the 2009 filing.

Continue reading here:

Fact Check: Has Trump declared bankruptcy four or six ...

Libertarianism | Internet Encyclopedia of Philosophy

What it means to be a "libertarian" in a political sense is a contentious issue, especially among libertarians themselves. There is no single theory that can be safely identified as the libertarian theory, and probably no single principle or set of principles on which all libertarians can agree. Nevertheless, there is a certain family resemblance among libertarian theories that can serve as a framework for analysis. Although there is much disagreement about the details, libertarians are generally united by a rough agreement on a cluster of normative principles, empirical generalizations, and policy recommendations. Libertarians are committed to the belief that individuals, and not states or groups of any other kind, are both ontologically and normatively primary; that individuals have rights against certain kinds of forcible interference on the part of others; that liberty, understood as non-interference, is the only thing that can be legitimately demanded of others as a matter of legal or political right; that robust property rights and the economic liberty that follows from their consistent recognition are of central importance in respecting individual liberty; that social order is not at odds with but develops out of individual liberty; that the only proper use of coercion is defensive or to rectify an error; that governments are bound by essentially the same moral principles as individuals; and that most existing and historical governments have acted improperly insofar as they have utilized coercion for plunder, aggression, redistribution, and other purposes beyond the protection of individual liberty.

In terms of political recommendations, libertarians believe that most, if not all, of the activities currently undertaken by states should be either abandoned or transferred into private hands. The most well-known version of this conclusion finds expression in the so-called "minimal state" theories of Robert Nozick, Ayn Rand, and others (Nozick 1974; Rand 1963a, 1963b), which hold that states may legitimately provide police, courts, and a military, but nothing more. Any further activity on the part of the state (regulating or prohibiting the sale or use of drugs, conscripting individuals for military service, providing taxpayer-funded support to the poor, or even building public roads) is itself rights-violating and hence illegitimate.

Libertarian advocates of a strictly minimal state are to be distinguished from two closely related groups, who favor a smaller or greater role for government, and who may or may not also label themselves "libertarian." On one hand are so-called anarcho-capitalists who believe that even the minimal state is too large, and that a proper respect for individual rights requires the abolition of government altogether and the provision of protective services by private markets. On the other hand are those who generally identify themselves as classical liberals. Members of this group tend to share libertarians' confidence in free markets and skepticism over government power, but are more willing to allow greater room for coercive activity on the part of the state so as to allow, say, state provision of public goods or even limited tax-funded welfare transfers.

As this article will use the term, libertarianism is a theory about the proper role of government that can be, and has been, supported on a number of different metaphysical, epistemological, and moral grounds. Some libertarians are theists who believe that the doctrine follows from a God-made natural law. Others are atheists who believe it can be supported on purely secular grounds. Some libertarians are rationalists who deduce libertarian conclusions from axiomatic first principles. Others derive their libertarianism from empirical generalizations or a reliance on evolved tradition. And when it comes to comprehensive moral theories, libertarians represent an almost exhaustive array of positions. Some are egoists who believe that individuals have no natural duties to aid their fellow human beings, while others adhere to moral doctrines that hold that the better-off have significant duties to improve the lot of the worse-off. Some libertarians are deontologists, while others are consequentialists, contractarians, or virtue-theorists. Understanding libertarianism as a narrow, limited thesis about the proper moral standing, and proper zone of activity, of the state (and not a comprehensive ethical or metaphysical doctrine) is crucial to making sense of this otherwise baffling diversity of broader philosophic positions.

This article will focus primarily on libertarianism as a philosophic doctrine. This means that, rather than giving close scrutiny to the important empirical claims made both in support and criticism of libertarianism, it will focus instead on the metaphysical, epistemological, and especially moral claims made by the discussants. Those interested in discussions of the non-philosophical aspects of libertarianism can find some recommendations in the reference list below.

Furthermore, this article will focus almost exclusively on libertarian arguments regarding just two philosophical subjects: distributive justice and political authority. There is a danger that this narrow focus will be misleading, since it ignores a number of interesting and important arguments that libertarians have made on subjects ranging from free speech to self-defense, to the proper social treatment of the mentally ill. More generally, it ignores the ways in which libertarianism is a doctrine of social or civil liberty, and not just one of economic liberty. For a variety of reasons, however, the philosophic literature on libertarianism has mostly ignored these other aspects of the theory, and so this article, as a summary of that literature, will generally reflect that trend.

Probably the most well-known and influential version of libertarianism, at least among academic philosophers, is that based upon a theory of natural rights. Natural rights theories vary, but are united by a common belief that individuals have certain moral rights simply by virtue of their status as human beings, that these rights exist prior to and logically independent of the existence of government, and that these rights constrain the ways in which it is morally permissible for both other individuals and governments to treat individuals.

Although one can find some earlier traces of this doctrine among, for instance, the English Levellers or the Spanish School of Salamanca, John Locke's political thought is generally recognized as the most important historical influence on contemporary natural rights versions of libertarianism. The most important elements of Locke's theory in this respect, set out in his Second Treatise, are his beliefs about the law of nature, and his doctrine of property rights in external goods.

Locke's idea of the law of nature draws on a distinction between law and government that has been profoundly influential on the development of libertarian thought. According to Locke, even if no government existed over men, the state of nature would nevertheless not be a state of "license." In other words, men would still be governed by law, albeit one that does not originate from any political source (c.f. Hayek 1973, ch. 4). This law, which Locke calls the "law of nature," holds that "being all equal and independent, no one ought to harm another in his life, liberty, or possessions" (Locke 1952, para. 6). This law of nature serves as a normative standard to govern human conduct, rather than as a description of behavioral regularities in the world (as are other laws of nature like, for instance, the law of gravity). Nevertheless, it is a normative standard that Locke believes is discoverable by human reason, and that binds us all equally as rational agents.

Locke's belief in a prohibition on harming others stems from his more basic belief that each individual "has a property in his own person" (Locke 1952, para. 27). In other words, individuals are self-owners. Throughout this essay we will refer to this principle, which has been enormously influential on later libertarians, as the "self-ownership principle." Though controversial, it has generally been taken to mean that each individual possesses over her own body all those rights of exclusive use that we normally associate with property in external goods. But if this were all that individuals owned, their liberties and ability to sustain themselves would obviously be extremely limited. For almost anything we want to do (eating, walking, even breathing, or speaking in order to ask another's permission) involves the use of external goods such as land, trees, or air. From this, Locke concludes, we must have some way of acquiring property in those external goods, else they will be of no use to anyone. But since we own ourselves, Locke argues, we therefore also own our labor. And by "mixing" our labor with external goods, we can come to own those external goods too. This allows individuals to make private use of the world that God has given to them in common. There is a limit, however, to this ability to appropriate external goods for private use, which Locke captures in his famous "proviso" that holds that a legitimate act of appropriation must leave "enough, and as good... in common for others" (Locke 1952, para. 27). Still, even with this limit, the combination of time, inheritance, and differential abilities, motivation, and luck will lead to possibly substantial inequalities in wealth between persons, and Locke acknowledges this as an acceptable consequence of his doctrine (Locke 1952, para. 50).

By far the single most important influence on the perception of libertarianism among contemporary academic philosophers was Robert Nozick in his book, Anarchy, State, and Utopia (1974). This book is an explanation and exploration of libertarian rights that attempts to show how a minimal, and no more than a minimal, state can arise via an "invisible hand" process out of a state of nature without violating the rights of individuals; to challenge the highly influential claims of John Rawls that purport to show that a more-than-minimal state was justified and required to achieve distributive justice; and to show that a regime of libertarian rights could establish a "framework for utopia" wherein different individuals would be free to seek out and create mediating institutions to help them achieve their own distinctive visions of the good life.

The details of Nozick's arguments can be found at Robert Nozick. Here, we will just briefly point out a few elements of particular importance in understanding Nozick's place in contemporary libertarian thought: his focus on the "negative" aspects of liberty and rights, his Kantian defense of rights, his historical theory of entitlement, and his acceptance of a modified Lockean proviso on property acquisition. A discussion of his argument for the minimal state can be found in the section on anarcho-capitalism below.

First, Nozick, like almost all natural rights libertarians, stresses negative liberties and rights above positive liberties and rights. The distinction between positive and negative liberty, made famous by Isaiah Berlin (Berlin 1990), is often thought of as a distinction between "freedom to" and "freedom from." One has positive liberty when one has the opportunity and ability to do what one wishes (or, perhaps, what one "rationally" wishes or "ought" to wish). One has negative liberty, on the other hand, when there is an absence of external interferences to one's doing what one wishes; specifically, when there is an absence of external interferences by other people. A person who is too sick to gather food has his negative liberty intact (no one is stopping him from gathering food) but not his positive liberty, as he is unable to gather food even though he wants to do so. Nozick and most libertarians see the proper role of the state as protecting negative liberty, not as promoting positive liberty, and so toward this end Nozick focuses on negative rights as opposed to positive rights. Negative rights are claims against others to refrain from certain kinds of actions against you. Positive rights are claims against others to perform some sort of positive action. Rights against assault, for instance, are negative rights, since they simply require others not to assault you. Welfare rights, on the other hand, are positive rights insofar as they require others to provide you with money or services. By enforcing negative rights, the state protects our negative liberty. It is an empirical question whether enforcing merely negative rights or, as more left-liberal philosophers would promote, enforcing a mix of both negative and positive rights would better promote positive liberty.

Second, while Nozick agrees with the broadly Lockean picture of the content and government-independence of natural law and natural rights, his remarks in defense of those rights draw their inspiration more from Immanuel Kant than from Locke. Nozick does not provide a full-blown argument to justify libertarian rights against other non-libertarian rights theories, a point for which he has been widely criticized, most famously by Thomas Nagel (Nagel 1975). But what he does say in their defense suggests that he sees libertarian rights as an entailment of the other-regarding element in Kant's second formulation of the categorical imperative: that we treat the humanity in ourselves and others as an end in itself, and never merely as a means. According to Nozick, both utilitarianism and theories that uphold positive rights sanction the involuntary sacrifice of one individual's interests for the sake of others. Only libertarian rights, which for Nozick take the form of absolute side-constraints against force and fraud, show proper respect for the separateness of persons by barring such sacrifice altogether, and allowing each individual the liberty to pursue his or her own goals without interference.

Third, it is important to note that Nozick's libertarianism evaluates the justice of states of affairs, such as distributions of property, in terms of the history or process by which that state of affairs arose, and not by the extent to which it satisfies what he calls a patterned or end-state principle of justice. Distributions of property are just, according to Nozick, if they arose from previously just distributions by just procedures. Discerning the justice of current distributions thus requires that we establish a theory of justice in transfer (to tell us which procedures constitute legitimate means of transferring ownership between persons) and a theory of justice in acquisition (to tell us how individuals might come to own external goods that were previously owned by no one). And while Nozick does not fully develop either of these theories, his skeletal position is nevertheless significant, for it implies that it is only the proper historical pedigree that makes a distribution just, and only deviations from the proper pedigree that render a distribution unjust. An implication of this position is that one cannot discern from time-slice statistical data alone (such as the claim that the top fifth of the income distribution in the United States controls more than 80 percent of the nation's wealth) that a distribution is unjust. Rather, the justice of a distribution depends on how it came about: by force or by trade? By differing degrees of hard work and luck? Or by fraud and theft? Libertarianism's historical focus thus sets the doctrine against outcome-egalitarian views that hold that only equal distributions are just, utilitarian views that hold that distributions are just to the extent they maximize utility, and prioritarian views that hold that distributions are just to the extent they benefit the worse-off. Justice in distribution is a matter of respecting people's rights, not of achieving a certain outcome.

The final distinctive element of Nozick's view is his acceptance of a modified version of the Lockean proviso as part of his theory of justice in acquisition. Nozick reads Locke's claim that legitimate acts of appropriation must leave enough and as good for others as a claim that such appropriations must not worsen the situation of others (Nozick 1974, 175, 178). On the face of it, this seems like a small change from Locke's original statement, but Nozick believes it allows for much greater freedom for free exchange and capitalism (Nozick 1974, 182). Nozick reaches this conclusion on the basis of certain empirical beliefs about the beneficial effects of private property:

it increases the social product by putting means of production in the hands of those who can use them most efficiently (profitably); experimentation is encouraged, because with separate persons controlling resources, there is no one person or small group whom someone with a new idea must convince to try it out; private property enables people to decide on the pattern and type of risks they wish to bear, leading to specialized types of risk bearing; private property protects future persons by leading some to hold back resources from current consumption for future markets; it provides alternative sources of employment for unpopular persons who don't have to convince any one person or small group to hire them, and so on. (Nozick 1974, 177)

If these assumptions are correct, then persons might not be made worse off by acts of original appropriation even if those acts fail to leave enough and as good for others to appropriate. Private property and the capitalist markets to which it gives rise generate an abundance of wealth, and latecomers to the appropriation game (like people today) are in a much better position as a result. As David Schmidtz puts the point:

Original appropriation diminishes the stock of what can be originally appropriated, at least in the case of land, but that is not the same thing as diminishing the stock of what can be owned. On the contrary, in taking control of resources and thereby removing those particular resources from the stock of goods that can be acquired by original appropriation, people typically generate massive increases in the stock of goods that can be acquired by trade. The lesson is that appropriation is typically not a zero-sum game. It normally is a positive-sum game. (Schmidtz and Goodin 1998, 30)

Relative to their level of well-being in a world where nothing is privately held, then, individuals are generally not made worse off by acts of private appropriation. Thus, Nozick concludes, the Lockean proviso will "not provide a significant opportunity for future state action" in the form of redistribution or regulation of private property (Nozick 1974, 182).

Nozick's libertarian theory has been subject to criticism on a number of grounds. Here we will focus on two primary categories of criticism of Lockean/Nozickian natural rights libertarianism: namely, criticisms with respect to the principle of self-ownership, and criticisms of the derivation of private property rights from self-ownership.

Criticisms of the self-ownership principle generally take one of two forms. Some arguments attempt to sever the connection between the principle of self-ownership and the more fundamental moral principles that are thought to justify it. Nozick's suggestion that self-ownership is warranted by the Kantian principle that no one should be treated as a mere means, for instance, is criticized by G.A. Cohen on the grounds that policies that violate self-ownership by forcing the well-off to support the less advantaged do not necessarily treat the well-off merely as means (Cohen 1995, 239-241). We can satisfy Kant's imperative against treating others as mere means without thereby committing ourselves to full self-ownership, Cohen argues, and we have good reason to do so insofar as the principle of self-ownership has other, implausible, consequences. The same general pattern of argument holds against more intuitive defenses of the self-ownership principle. Nozick's concern (Nozick 1977, 206), elaborated by Cohen (Cohen 1995, 70), that theories that deny self-ownership might license the forcible transfer of eyes from the sight-endowed to the blind, for instance, or Murray Rothbard's claim that the only alternatives to self-ownership are slavery or communism (Rothbard 1973, 29), have been met with the response that a denial of the permissibility of slavery, communism, and eye-transplants can be made (and usually better made) on grounds other than self-ownership.

Other criticisms of self-ownership focus on the counterintuitive or otherwise objectionable implications of self-ownership. Cohen, for instance, argues that recognizing rights to full self-ownership allows individuals' lives to be objectionably governed by brute luck in the distribution of natural assets, since the self that people own is largely a product of their luck in receiving a good or bad genetic endowment, and being raised in a good or bad environment (Cohen 1995, 229). Richard Arneson, on the other hand, has argued that self-ownership conflicts with Pareto-Optimality (Arneson 1991). His concern is that since self-ownership is construed by libertarians as an absolute right, it follows that it cannot be violated even in small ways and even when great benefit would accrue from doing so. Thus, to modify David Hume, absolute rights of self-ownership seem to prevent us from scratching the finger of another even to prevent the destruction of the whole world. And although the real objection here seems to be to the absoluteness of self-ownership rights, rather than to self-ownership rights as such, it remains unclear whether strict libertarianism can be preserved if rights of self-ownership are given a less than absolute status.

Even if individuals have absolute rights to full self-ownership, it can still be questioned whether there is a legitimate way of moving from ownership of the self to ownership of external goods.

Left-libertarians, such as Hillel Steiner, Peter Vallentyne, and Michael Otsuka, grant the self-ownership principle but deny that it can yield full private property rights in external goods, especially land (Steiner 1994; Vallentyne 2000; Otsuka 2003). Natural resources, such theorists hold, belong to everyone in some equal way, and private appropriation of them amounts to theft. Rather than returning all such goods to the state of nature, however, most left-libertarians suggest that those who claim ownership of such resources be subjected to a tax to compensate others for the loss of their rights of use. Since the tax is on the value of the external resource and not on individuals' natural talents or efforts, it is thought that this line of argument can provide a justification for a kind of egalitarian redistribution that is compatible with full individual self-ownership.

While left-libertarians doubt that self-ownership can yield full private property rights in external goods, others are doubtful that the concept is determinate enough to yield any theory of justified property ownership at all. Locke's metaphor of labor-mixing, for instance, is intuitively appealing, but notoriously difficult to work out in detail (Waldron 1983). First, it is not clear why mixing one's labor with something generates any rights at all. As Nozick himself asks, "why isn't mixing what I own with what I don't own a way of losing what I own rather than a way of gaining what I don't?" (Nozick 1974, 174-175). Second, it is not clear what the scope of the rights generated by labor-mixing is. Again, Nozick playfully suggests (but does not answer) this question when he asks whether a person who builds a fence around virgin land thereby comes to own the enclosed land, or simply the fence, or just the land immediately under it. But the point is more worrisome than Nozick acknowledges. For as critics such as Barbara Fried have pointed out, following Hohfeld, property ownership is not a single right but a bundle of rights, and it is far from clear which "sticks" from this bundle individuals should come to control by virtue of their self-ownership (Fried 2004). Does one's ownership right over a plot of land entail the right to store radioactive waste on it? To dam the river that runs through it? To shine a very bright light from it in the middle of the night (Friedman 1989, 168)? Problems such as these must, of course, be resolved by any political theory, not just libertarianism. The problem is that the concept of self-ownership seems to offer little, if any, help in doing so.

While Nozickian libertarianism finds its inspiration in Locke and Kant, there is another species of libertarianism that draws its influence from David Hume, Adam Smith, and John Stuart Mill. This variety of libertarianism holds its political principles to be grounded not in self-ownership or the natural rights of humanity, but in the beneficial consequences that libertarian rights and institutions produce, relative to possible and realistic alternatives. To the extent that such theorists hold that consequences, and only consequences, are relevant in the justification of libertarianism, they can properly be labeled a form of consequentialism. Some of these consequentialist forms of libertarianism are utilitarian. But consequentialism is not identical to utilitarianism, and this section will explore both traditional quantitative utilitarian defenses of libertarianism, and other forms more difficult to classify.

Philosophically, the approach that seeks to justify political institutions by demonstrating their tendency to maximize utility has its clearest origins in the thought of Jeremy Bentham, himself a legal reformer as well as moral theorist. But, while Bentham was no advocate of unfettered laissez-faire, his approach has been enormously influential among economists, especially the Austrian and Chicago Schools of Economics, many of whom have utilized utilitarian analysis in support of libertarian political conclusions. Some influential economists have been self-consciously libertarian, the most notable being Ludwig von Mises, Friedrich Hayek, James Buchanan, and Milton Friedman (the latter three are Nobel laureates). Richard Epstein, more legal theorist than economist, nevertheless utilizes utilitarian argument with an economic analysis of law to defend his version of classical liberalism. His work in Principles for a Free Society (1998) and Skepticism and Freedom (2003) is probably the most philosophical of contemporary utilitarian defenses of libertarianism. Buchanan's work is generally described as contractarian, though it certainly draws heavily on utilitarian analysis. It too is highly philosophical.

Utilitarian defenses of libertarianism generally consist of two prongs: utilitarian arguments in support of private property and free exchange, and utilitarian arguments against government policies that exceed the bounds of the minimal state. Utilitarian defenses of private property and free exchange are too diverse to thoroughly canvass in a single article. For the purposes of this article, however, the focus will be on two main arguments that have been especially influential: the so-called "Tragedy of the Commons" argument for private property and the "Invisible Hand" argument for free exchange.

The Tragedy of the Commons argument notes that under certain conditions, when property is commonly owned or, equivalently, owned by no one, it will be inefficiently used and quickly depleted. In his original description of the problem of the commons, Garrett Hardin asks us to imagine a pasture open to all, on which various herders graze their cattle (Hardin 1968). Each additional animal that a herder is able to graze means greater profit for that herder, who captures the entire benefit for himself or herself. Of course, each additional animal on the pasture imposes a cost as well, in terms of crowding and the diminished carrying capacity of the land, but importantly this cost of additional grazing, unlike the benefit, is dispersed among all herders. Since each herder thus receives the full benefit of each additional animal but bears only a fraction of the dispersed cost, it benefits him or her to graze more and more animals on the land. But since this same logic applies equally well to all herders, we can expect them all to act this way, with the result that the carrying capacity of the field will quickly be exceeded.
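The incentive structure can be put in numbers with a minimal Python sketch. All figures here (the per-animal benefit, the total cost, the number of herders) are invented for illustration; only the logic of a fully captured benefit and a dispersed cost comes from the argument above.

benefit_per_animal = 10.0      # captured entirely by the herder who grazes
total_cost_per_animal = 15.0   # crowding and degradation, spread over everyone
num_herders = 10

# Each herder bears only his or her fraction of the dispersed cost.
private_cost = total_cost_per_animal / num_herders        # 1.5
net_to_herder = benefit_per_animal - private_cost         # +8.5: graze anyway
net_to_society = benefit_per_animal - total_cost_per_animal  # -5.0 overall

print(net_to_herder, net_to_society)
# Individually rational (+8.5 to the herder), collectively ruinous (-5.0):
# every herder keeps adding animals, and the commons is depleted.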

The tragedy of the Tragedy of the Commons is especially apparent if we model it as a Prisoner's Dilemma, wherein each party has the option to graze additional animals or not to graze. (See Figure 1, below, where A and B represent the two herders, "graze" and "don't graze" their possible options, and the four boxes the possible outcomes of their joint action. Within the boxes, the numbers represent the utility each herder receives from the outcome, with A's payoff listed on the left and B's on the right.) As the discussion above suggests, the best outcome for each individual herder is to graze an additional animal while the other herder does not: here the herder reaps all the benefit and bears only a fraction of the cost. The worst outcome for each individual herder, conversely, is to refrain from grazing an additional animal while the other herder indulges; in this situation, the herder bears costs but receives no benefit. The relationship between the other two possible outcomes is important. Both herders would be better off if neither grazed an additional animal, compared to the outcome in which both do graze an additional animal. The long-term benefits of operating within the carrying capacity of the land, we can assume, outweigh the short-term gains to be had from mutual overgrazing. By the logic of the Prisoner's Dilemma, however, rational self-interested herders will not choose mutual restraint over mutual exploitation of the resource. This is because, so long as the costs of over-grazing are partially externalized onto other users of the resource, it is in each herder's interest to overgraze regardless of what the other party does. In the language of game theory, overgrazing dominates restraint. As a result, not only is the resource consumed, but both parties are made worse off individually than they could have been. Mutual overgrazing creates a situation that not only yields a lower total utility than mutual restraint (2 vs. 6), but that is Pareto-inferior to mutual restraint: at least one party (indeed, both!) would have been made better off by mutual restraint without anyone having been made worse off.

                         B
                  Don't Graze     Graze
  A  Don't Graze      3, 3         0, 5
     Graze            5, 0         1, 1

Figure 1. The Tragedy of the Commons as Prisoner's Dilemma
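For readers who want to check the game-theoretic claims mechanically, here is a short Python sketch of Figure 1. The payoff numbers are those of the figure; the variable and function names are illustrative choices of ours, not part of the article.

# payoffs[(A's move, B's move)] = (A's utility, B's utility), as in Figure 1
payoffs = {
    ("dont", "dont"):  (3, 3),
    ("dont", "graze"): (0, 5),
    ("graze", "dont"): (5, 0),
    ("graze", "graze"): (1, 1),
}

def a_payoff(a, b):
    return payoffs[(a, b)][0]

def b_payoff(a, b):
    return payoffs[(a, b)][1]

# "Graze" strictly dominates "Don't Graze": it pays each herder more
# no matter what the other herder chooses.
assert all(a_payoff("graze", b) > a_payoff("dont", b) for b in ("dont", "graze"))
assert all(b_payoff(a, "graze") > b_payoff(a, "dont") for a in ("dont", "graze"))

# Yet the dominant-strategy outcome is Pareto-inferior to mutual restraint:
print(sum(payoffs[("graze", "graze")]))  # 2: total utility under mutual grazing
print(sum(payoffs[("dont", "dont")]))    # 6: total utility under mutual restraint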

The classic solution to the Tragedy of the Commons is private property. Recall that the tragedy arises because individual herders do not have to bear the full costs of their actions. Because the land is common to all, the costs of overgrazing are partially externalized onto other users of the resource. But private property changes this. If, instead of being commonly owned by all, the field were divided into smaller pieces of private property, then herders would have the power to exclude others from using their own property. One would only be able to graze cattle on one's own field, or on others' fields on terms specified by their owners, and this means that the costs of overgrazing (in terms of diminished usability of the land or diminished resale value because of that diminished usability) would be borne by the overgrazer alone. Private property forces individuals to internalize the cost of their actions, and this in turn provides individuals with an incentive to use the resource wisely.

The lesson is that by creating and respecting private property rights in external resources, governments can provide individuals with an incentive to use those resources in an efficient way, without the need for complicated government regulation and oversight of those resources. Libertarians have used this basic insight to argue for everything from privatization of roads (Klein and Fielding 1992) to private property as a solution to various environmental problems (Anderson and Leal 1991).

Libertarians believe that individuals and groups should be free to trade just about anything they wish with whomever they wish, with little to no governmental restriction. They therefore oppose laws that prohibit certain types of exchanges (such as prohibitions on prostitution and sale of illegal drugs, minimum wage laws that effectively prohibit low-wage labor agreements, and so on) as well as laws that burden exchanges by imposing high transaction costs (such as import tariffs).

The reason utilitarian libertarians support free exchange is that, they argue, it tends to allocate resources into the hands of those who value them most, and in so doing to increase the total amount of utility in society. The first step in seeing this is to understand that even if trade is a zero-sum game in terms of the objects that are traded (nothing is created or destroyed, just moved about), it is a positive-sum game in terms of utility. This is because individuals differ in terms of the subjective utility they assign to goods. A person planning to move from Chicago to San Diego might assign a relatively low utility value to her large, heavy furniture. It's difficult and costly to move, and might not match the style of the new home anyway. But to someone else who has just moved into an empty apartment in Chicago, that furniture might have a very high utility value indeed. If the first person values the furniture at $200 (or its equivalent in terms of utility) and the second person values it at $500, both will gain if they exchange for a price anywhere between those two values. Each will have given up something they value less in exchange for something they value more, and net utility will have increased as a result.
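The arithmetic of the furniture example can be made explicit in a few lines of Python. The $200 and $500 valuations are the article's; the $350 price is an assumption, chosen simply because it lies strictly between the two values.

seller_value = 200   # the mover's valuation of the furniture
buyer_value = 500    # the new tenant's valuation
price = 350          # any price strictly between the two values works

seller_gain = price - seller_value   # 150: she prefers the cash to the couch
buyer_gain = buyer_value - price     # 150: he prefers the couch to the cash

assert seller_gain > 0 and buyer_gain > 0
print(seller_gain + buyer_gain)  # 300: net utility created by the exchange,
                                 # even though no physical good was created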

As Friedrich Hayek has noted, much of the information about the relative utility values assigned to different goods is transmitted to different actors in the market via the price system (Hayek 1980). An increase in a resource's price signals that demand for that resource has increased relative to supply. Consumers can respond to this price increase by continuing to use the resource at the now-higher price, switching to a substitute good, or discontinuing use of that sort of resource altogether. Each individual's decision is both affected by the price of the relevant resources, and affects the price insofar as it adds to or subtracts from aggregate supply and demand. Thus, though they generally do not know it, each person's decision is a response to the decisions of millions of other consumers and producers of the resource, each of whom bases her decision on her own specialized, local knowledge about that resource. And although all they are trying to do is maximize their own utility, each individual will be led to act in a way that leads the resource toward its highest-valued use. Those who derive the most utility from the good will outbid others for its use, and others will be led to look for cheaper substitutes.
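As a rough illustration of how prices carry this information, consider the toy model below: no participant knows the whole demand or supply schedule, yet repeatedly nudging the price toward whichever side is short moves it to a level at which quantity demanded and quantity supplied roughly coincide. The linear demand and supply functions are invented for illustration and are not part of Hayek's account.

def demand(price):   # hypothetical: buyers want less as the price rises
    return max(0.0, 100 - 2 * price)

def supply(price):   # hypothetical: sellers offer more as the price rises
    return 5 * price

price = 10.0
for _ in range(200):
    excess_demand = demand(price) - supply(price)
    price += 0.01 * excess_demand   # a shortage nudges price up, a glut down

print(round(price, 2))   # ~14.29, near the market-clearing price (100/7)
print(round(demand(price), 1), round(supply(price), 1))  # roughly equal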

On this account, one deeply influenced by the Austrian School of Economics, the market is a constantly churning process of competition, discovery, and innovation. Market prices represent aggregates of information and so generally represent an advance over what any one individual could hope to know on his own, but the individual decisions out of which market prices arise are themselves based on imperfect information. There are always opportunities that nobody has discovered, and the passage of time, the changing of people's preferences, and the development of new technological possibilities ensures that this ignorance will never be fully overcome. The market is thus never in a state of competitive equilibrium, and it will always "fail" by the test of perfect efficiency. But it is precisely today's market failures that provide the opportunities for tomorrow's entrepreneurs to profit by new innovation (Kirzner 1996). Competition is a process, not a goal to be reached, and it is a process driven by the particular decisions of individuals who are mostly unaware of the overall and long-term tendencies of their decisions taken as a whole. Even if no market actor cares about increasing the aggregate level of utility in society, he will be, as Adam Smith wrote, "led by an invisible hand to promote an end which was no part of his intention" (Smith 1981). The dispersed knowledge of millions of market actors will be taken into account in producing a distribution that comes as close as practically possible to that which would be selected by a benign, omniscient, and omnipotent despot. In reality, however, all that government is required to do in order to achieve this effect is to define and enforce clear property rights and to allow the price system to freely adjust in response to changing conditions.

The above two arguments, if successful, demonstrate that free markets and private property generate good utilitarian outcomes. But even if this is true, it remains possible that selective government intervention in the economy could produce outcomes that are even better. Governments might use taxation and coercion for the provision of public goods, or to prevent other sorts of market failures like monopolies. Or governments might engage in redistributive taxation on the grounds that given the diminishing marginal utility of wealth, doing so will provide higher levels of overall utility. In order to maintain their opposition to government intervention, then, libertarians must produce arguments to show that such policies will not produce greater utility than a policy of laissez-faire. Producing such arguments is something of a cottage industry among libertarian economists, and so we cannot hope to provide a complete summary here. Two main categories of argument, however, have been especially influential. We can call them incentive-based arguments and public choice arguments.

Incentive arguments proceed by claiming that government policies designed to promote utility actually produce incentives for individuals to act in ways that run contrary to the promotion of utility. Examples of incentive arguments include arguments that (a) government-provided (welfare) benefits dissuade individuals from taking responsibility for their own economic well-being (Murray 1984), (b) mandatory minimum wage laws generate unemployment among low-skilled workers (Friedman 1962, 180-181), (c) legal prohibition of drugs creates a black market with inflated prices, low quality control, and violence (Thornton 1991), and (d) higher taxes lead people to work and/or invest less, and hence lead to lower economic growth.

Public choice arguments, on the other hand, are often employed by libertarians to undermine the assumption that government will use its powers to promote the public interest in the way its proponents claim it will. Public choice as a field is based on the assumption that the model of rational self-interest typically employed by economists to predict the behavior of market agents can also be used to predict the behavior of government agents. Rather than trying to maximize profit, however, government agents are thought to be aiming at re-election (in the case of elected officials) or maintenance or expansion of budget and influence (in the case of bureaucrats). From this basic analytical model, public choice theorists have argued that (a) the fact that the costs of many policies are widely dispersed among taxpayers, while their benefits are often concentrated in the hands of a few beneficiaries, means that even grossly inefficient policies will be enacted and, once enacted, very difficult to remove, (b) politicians and bureaucrats will engage in "rent-seeking" behavior by exploiting the powers of their office for personal gain rather than public good, and (c) certain public goods will be over-supplied by political processes, while others will be under-supplied, since government agents lack both knowledge and incentives necessary to provide such goods at efficient levels (Mitchell and Simmons 1994). These problems are held to be endemic to political processes, and not easily subject to legislative or constitutional correction. Hence, many conclude that the only way to minimize the problems of political power is to minimize the scope of political power itself by subjecting as few areas of life as possible to political regulation.

The quantitative utilitarians are often both rationalist and radical in their approach to social reform. For them, the maximization of utility serves as an axiomatic first principle, from which policy conclusions can be straightforwardly deduced once empirical (or quasi-empirical) assessments of causal relationships in the world have been made. From Jeremy Bentham to Peter Singer, quantitative utilitarians have advocated dramatic changes in social institutions, all justified in the name of reason and the morality it gives rise to.

There is, however, another strain of consequentialism that is less confident in the ability of human reason to radically reform social institutions for the better. For these consequentialists, social institutions are the product of an evolutionary process that is itself the product of the decisions of millions of discrete individuals. Each of these individuals in turn possesses knowledge that, though insignificant by itself, in the aggregate represents more than any single social reformer could ever hope to match. Humility, not radicalism, is counseled by this variety of consequentialism.

Though it has its affinities with conservative doctrines such as those of Edmund Burke, Michael Oakeshott, and Russell Kirk, this strain of consequentialism had its greatest influence on libertarianism through the work of Friedrich Hayek. Hayek, however, takes pains to distance himself from conservative ideology, noting that his respect for tradition is not grounded in a fetish for the status quo or an opposition to change as such, but in deeper, distinctively liberal principles (Hayek 1960). For Hayek, tradition is valuable because, and only to the extent that, it evolves in a peaceful, decentralized way. Social norms that are chosen by free individuals and survive competition from competing norms without being maintained by coercion are, for that reason, worthy of respect even if we are not consciously aware of all the reasons that the institution has survived. Somewhat paradoxically then, Hayek believes that we can rationally support institutions even when we lack substantive justifying reasons for supporting them. The reason this can be rational is that even when we lack substantive justifying reasons, we nevertheless have justifying reasons in a procedural sensethe fact that the institution is the result of an evolutionary procedure of a certain sort gives us reason to believe that there are substantive justifying reasons for it, even if we do not know what they are (Gaus 2006).

For Hayek, the procedures that lend justifying force to institutions are, essentially, ones that leave individuals free to act as they wish so long as they do not act aggressively toward others. For Hayek, however, this principle is not a moral axiom but rather follows from his beliefs regarding the limits and uses of knowledge in society. A crucial piece of Hayek's argument regarding the price system (see above) is his claim that each individual possesses a unique set of knowledge about his or her local circumstances, special interests, desires, abilities, and so forth. The price system, if allowed to function freely without artificial floors or ceilings, will reflect this knowledge and transmit it to other interested individuals, thus allowing society to make effective use of dispersed knowledge. But Hayek's defense of the price system is only one application of a more general point. The fact that knowledge of all sorts exists in dispersed form among many individuals is a fundamental fact about human existence. And since this knowledge is constantly changing in response to changing circumstances, and cannot therefore be collected and acted upon by any central authority, the only way to make use of this knowledge effectively is to allow individuals the freedom to act on it themselves. This means that government must disallow individuals from coercing one another, and must itself refrain from coercing them. The social order that such voluntary actions produce is one that, given the complexity of social and economic systems and the radical limitations on our ability to acquire knowledge about their particular details (Gaus 2007), cannot be imposed by fiat, but must evolve spontaneously in a bottom-up manner. Hayek, like Mill before him (Mill 1989), thus celebrates the fact that a free society allows individuals to engage in "experiments in living" and therefore, as Nozick argued in the neglected third part of his Anarchy, State, and Utopia, can serve as a "utopia of utopias" where individuals are at liberty to organize their own conception of the good life with others who voluntarily choose to share their vision (Hayek 1960).

Hayek's ideas about the relationship between knowledge, freedom, and a constitutional order were first developed at length in The Constitution of Liberty, later developed in his series Law, Legislation and Liberty, and given their last, and most accessible (though not necessarily most reliable (Caldwell 2005)) statement in The Fatal Conceit: The Errors of Socialism (1988). Since then, the most extensive integration of these ideas into a libertarian framework is in Randy Barnett's The Structure of Liberty, wherein Barnett argues that a "polycentric constitutional order" (see below regarding anarcho-capitalism) is best suited to solve not only the Hayekian problem of the use of knowledge in society, but also what he calls the problems of "interest" and "power" (Barnett 1998). More recently, Hayekian insights have been put to use by contemporary philosophers Chandran Kukathas (1989; 2006) and Gerald Gaus (2006; 2007).

Consequentialist defenses of libertarianism are, of course, varieties of consequentialist moral argument, and are susceptible therefore to the same kinds of criticisms leveled against consequentialist moral arguments in general. Beyond these standard criticisms, moreover, consequentialist defenses of libertarianism are subject to four special difficulties.

First, consequentialist arguments seem unlikely to lead one to full-fledged libertarianism, as opposed to more moderate forms of classical liberalism. Intuitively, it seems implausible that simple protection of individual negative liberties would do a better job than any alternative institutional arrangement at maximizing utility or peace and prosperity or whatever. And this intuitive doubt is buttressed by economic analyses showing that unregulated capitalist markets suffer from production of negative externalities, from monopoly power, and from undersupply of certain public goods, all of which cry out for some form of government protection (Buchanan 1985). Even granting libertarian claims that (a) these problems are vastly overstated, (b) they are often caused by previous failures of government to adequately respect or enforce private property rights, and (c) government's ability to correct them is not as great as one might think, it is nevertheless implausible to suppose, a priori, that it will never be the case that government can do a better job than the market by interfering with strict libertarian rights.

Second, consequentialist defenses of libertarianism are subject to objections when a great deal of benefit can be had at a very low cost. So-called cases of "easy rescue," for instance, challenge the wisdom of adhering to absolute prohibitions on coercive conduct. After all, if the majority of the world's population lives in dire poverty and suffers from easily preventable diseases and deaths, couldn't utility be increased by increasing taxes slightly on wealthy Americans and using that surplus to provide basic medical aid to those in desperate need? The prevalence of such cases is an empirical question, but their possibility points (at least) to a "fragility" in the consequentialist case for libertarian prohibitions on redistributive taxation.

Third, the consequentialist theories at the root of these libertarian arguments are often seriously under-theorized. For instance, Randy Barnett bases his defense of libertarian natural rights on the claim that they promote the end of "happiness, peace and prosperity" (Barnett 1998). But this leaves a host of difficult questions unaddressed. The meaning of each of these terms, for instance, has been subject to intense philosophical debate. Which sense of happiness, then, does libertarianism promote? What happens when these ends conflict, when we have to choose, say, between peace and prosperity? And in what sense do libertarian rights "promote" these ends? Are they supposed to maximize happiness in the aggregate? Or to maximize each person's happiness? Or to maximize the weighted sum of happiness, peace, and prosperity? Richard Epstein is on more familiar and hence, perhaps, firmer ground when he says that his version of classical liberalism is meant to maximize utility, but even here the claim that utility maximization is the proper end of political action is asserted without argument. The lesson is that while consequentialist political arguments might seem less abstract and philosophical (in the pejorative sense) than deontological arguments, consequentialism is still, nevertheless, a moral theory, and it needs to be clearly articulated and defended like any other moral theory. Possibly because consequentialist defenses of libertarianism have been put forward mainly by non-philosophers, this challenge has yet to be met.

A fourth and related point has to do with issues surrounding the distribution of wealth, happiness, opportunities, and other goods allegedly promoted by libertarian rights. In part, this is a worry common to all maximizing versions of consequentialism, but it is of special relevance in this context given the close relation between economic systems and distributional issues. The worry is that morality, or justice, requires more than simply producing an abundance of wealth, happiness, or whatever. It requires that each person gets a fair share, whether that is defined as an equal share, a share sufficient for living a good life, or something else. Intuitively fair distributions are simply not something that libertarian institutions can guarantee, devoid as they are of any means for redistributing these goods from the well-off to the less well-off. Furthermore, once it is granted that libertarianism is likely to produce unequal distributions of wealth, the Hayekian argument for relying on the free price system to allocate goods no longer holds as strongly as it appeared to. For we cannot simply assume that a free price system will lead to goods being allocated to their most valued use if some people have an abundance of wealth and others very little at all. A free market of self-interested persons will not distribute bread to the starving man, no matter how much utility he would derive from it, if he cannot pay for it. And a wealthy person, such as Bill Gates, will still always be able to outbid a poor person for season tickets to the Mariners, even if the poor person values the tickets much more highly than he does, since the marginal value of the dollars he spends on the tickets is much lower to him than the marginal value of the poor person's dollars. Both by an external standard of fairness and by an internal standard of utility-maximization, then, unregulated free markets seem to fall short.
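The point about marginal dollar values can be given a concrete, if stylized, form. The Python sketch below assumes logarithmic utility of wealth, a standard textbook stand-in for diminishing marginal utility rather than anything asserted by the article, and the wealth levels and utility values are invented. Under those assumptions, the wealthy bidder's maximum willingness to pay exceeds the poor bidder's even though the tickets matter far more, in utility terms, to the latter.

import math

def max_bid(wealth, utility_from_good):
    # The largest price p with log(wealth - p) + utility_from_good >= log(wealth),
    # i.e. the most one can pay before the good stops being worth the money.
    # Solving gives p = wealth * (1 - exp(-utility_from_good)).
    return wealth * (1 - math.exp(-utility_from_good))

rich_bid = max_bid(wealth=1_000_000, utility_from_good=0.001)  # barely cares
poor_bid = max_bid(wealth=2_000, utility_from_good=0.5)        # cares intensely

print(round(rich_bid, 2))  # ~999.50
print(round(poor_bid, 2))  # ~786.94: the rich bidder still wins the tickets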

Anarcho-capitalists claim that no state is morally justified (hence their anarchism), and that the traditional functions of the state ought to be provided by voluntary production and trade instead (hence their capitalism). This position poses a serious challenge to both moderate classical liberals and more radical minimal state libertarians, though, as we shall see, the stability of the latter position is especially threatened by the anarchist challenge.

Anarcho-capitalism can be defended on either consequentialist or deontological grounds, though usually a mix of both arguments is proffered. On the consequentialist side, it is argued that police protection, court systems, and even law itself can be provided voluntarily for a price like any other market good (Friedman 1989; Rothbard 1978; Barnett 1998; Hasnas 2003; Hasnas 2007). And not only is it possible for markets to provide these traditionally state-supplied goods, it is actually more desirable for them to do so given that competitive pressures in this market, as in others, will produce an array of goods that is of higher general quality and that is diverse enough to satisfy individuals' differing preferences (Friedman 1989; Barnett 1998). Deontologically, anarcho-capitalists argue that the minimal state necessarily violates individual rights insofar as it (1) claims a monopoly on the legitimate use of force and thereby prohibits other individuals from exercising force in accordance with their natural rights, and (2) funds its protective services with coercively obtained tax revenue that it sometimes (3) uses redistributively to pay for protection for those who are unable to pay for themselves (Rothbard 1978; Childs 1994).

Robert Nozick was one of the first academic philosophers to take the anarchist challenge seriously. In the first part of his Anarchy, State, and Utopia he argued that the minimal state can evolve out of an anarcho-capitalist society through an invisible hand process that does not violate anyone's rights. Competitive pressures and violent conflict, he argued, will provide incentives for competing defensive agencies to merge or collude so that, effectively, monopolies will emerge over certain geographical areas (Nozick 1974). Since these monopolies are merely de facto, however, the dominant protection agency does not yet constitute a state. For that to occur, the "dominant protection agency" must claim that it would be morally illegitimate for other protection agencies to operate, and make some reasonably effective attempt to prohibit them from doing so. Nozick's argument that it would be legitimate for the dominant protection agency to do so is one of the most controversial aspects of his argument. Essentially, he argues that individuals have rights not to be subject to the risk of rights-violation, and that the dominant protection agency may legitimately prohibit the protective activities of its competitors on grounds that their procedures involve the imposition of risk. In claiming and enforcing this monopoly, the dominant protection agency becomes what Nozick calls the "ultraminimal state": ultraminimal because it does not provide protective services for all persons within its geographical territory, but only for those who pay for them. The transition from the ultraminimal state to the minimal one occurs when the dominant protection agency (now state) provides protective services to all individuals within its territory, and Nozick argues that the state is morally obligated to do this in order to provide compensation to the individuals who have been disadvantaged by its seizure of monopoly power.

Nozick's arguments against the anarchist have been challenged on a number of grounds. First, the justification for the state it provides is entirely hypothetical: the most he attempts to claim is that a state could arise legitimately from the state of nature, not that any actual state has (Rothbard 1977). But if hypotheticals were all that mattered, then an equally compelling story could be told of how the minimal state could devolve back into merely one competitive agency among others by a process that violates no one's rights (Childs 1977), thus leaving us at a justificatory stalemate. Second, it is questionable whether prohibiting activities that run the risk of violating rights, but do not actually violate any, is compatible with fundamental liberal principles (Rothbard 1977). Finally, even if the general principle of prohibition with compensation is legitimate, it is nevertheless doubtful that the proper way to compensate the anarchist who has been harmed by the state's claim of monopoly is to provide him with precisely what he does not want: state police and military services (Childs 1977).

Until decisively rebutted, then, the anarchist position remains a serious challenge for libertarians, especially of the minimal state variety. This is true regardless of whether their libertarianism is defended on consequentialist or natural rights grounds. For the consequentialist libertarian, the challenge is to explain why law and protective services are the only goods that require state provision in order to maximize utility (or whatever the maximandum may be). If, for instance, the consequentialist justification for the state provision of law is that law is a public good, then the question is: Why should other public goods not also be provided? The claim that only police, courts, and military fit the bill appears to be more an a priori article of faith than a consequence of empirical analysis. This consideration might explain why so many consequentialist libertarians are in fact classical liberals who are willing to grant legitimacy to a larger than minimal state (Friedman 1962; Hayek 1960; Epstein 2003). For deontological libertarians, on the other hand, the challenge is to show why the state is justified in (a) prohibiting individuals from exercising or purchasing protective activities on their own and (b) financing protective services through coercive and redistributive taxation. If this sort of prohibition, and this sort of coercion and redistribution is justified, why not others? Once the bright line of non-aggression has been crossed, it is difficult to find a compelling substitute.

This is not to say that anarcho-capitalists do not face challenges of their own. First, many have pointed out that there is a paucity of empirical evidence to support the claim that anarcho-capitalism could function in a modern post-industrial society. Pointing to quasi-examples from medieval Iceland (Friedman 1979) does little to alleviate this concern (Epstein 2003). Second, even if a plausible case could be made for the market provision of law and private defense, the market provision of national defense, which fits the characteristics of a public good almost perfectly, remains a far more difficult challenge (Friedman 1989). Finally, when it comes to rights and anarchy, one philosopher's modus ponens is another's modus tollens. If respect for robust rights of self-ownership and property in external goods, as libertarians understand them, entails anarcho-capitalism, why not then reject these rights rather than embrace anarcho-capitalism? Rothbard, Nozick, and other natural rights libertarians are notoriously lacking in foundational arguments to support their strong belief in these rights. In the absence of strong countervailing reasons to accept these rights and the libertarian interpretation of them, the fact that they lead to what might seem to be absurd conclusions could be a decisive reason to reject them.

This entry has focused on the main approaches to libertarianism popular among academic philosophers. But it has not been exhaustive. There are other philosophical defenses of libertarianism that space prevents us from exploring in detail but that deserve mention nevertheless. These include defenses of libertarianism that proceed from teleological and contractual considerations.

One increasingly influential approach takes as its normative foundation a virtue-centered ethical theory. Such theories hold that libertarian political institutions are justified by the way they allow individuals to develop as virtuous agents. Ayn Rand was perhaps the earliest modern proponent of such a theory, and while her writings were largely ignored by academics, the core idea has since been picked up and developed with greater sophistication by philosophers like Tara Smith, Douglas Rasmussen, and Douglas Den Uyl (Rasmussen and Den Uyl 1991; 2005).

Teleological versions of libertarianism are in some significant respects similar to consequentialist versions, insofar as they hold that political institutions are to be judged in light of their tendency to yield a certain sort of outcome. But the consequentialism at work here is markedly different from the aggregative and impartial consequentialism of act-utilitarianism. Political institutions are to be judged based on the extent to which they allow individuals to flourish, but flourishing is a value that is agent-relative (and not agent-neutral as is happiness for the utilitarian), and also one that can only be achieved by the self-directed activity of each individual agent (and not something that can be distributed among individuals by the state). It is thus not the job of political institutions to promote flourishing by means of activist policies, but merely to make room for it by enforcing the core set of libertarian rights.

These claims lead to challenges for the teleological libertarian, however. If human flourishing is good, it must be so in an agent-neutral or in an agent-relative sense. If it is good in an agent-neutral sense, then it is unclear why we do not share positive duties to promote the flourishing of others, alongside merely negative duties to refrain from hindering their pursuit of their own flourishing.

Teleological libertarians generally argue that flourishing is something that cannot be provided for one by others, since it is essentially a matter of exercising one's own practical reason in the pursuit of a good life. But surely others can provide for us some of the means for our exercise of practical reason: from basics such as food and shelter to more complex goods such as education and perhaps even the social bases of self-respect. If, on the other hand, human flourishing is a good in merely an agent-relative sense, then it is unclear why others' flourishing imposes any duties on us at all, positive or negative. If duties to respect the negative rights of others are not grounded in the agent-neutral value of others' flourishing, then presumably they must be grounded in our own flourishing, but (a) making the wrongness of harming others depend on its negative effect on us seems to make that wrongness too contingent on situational facts; surely there are some cases in which violating the rights of others can benefit us, even in the long-term holistic sense required by eudaimonistic accounts. And (b) the fact that wronging others will hurt us seems to be the wrong kind of explanation for why rights-violating acts are wrong. It seems to get matters backwards: rights-violating actions are wrong because of their effects on the person whose rights are violated, not because they detract from the rights-violator's virtue.

Another moral framework that has become increasingly popular among philosophers since Rawls's Theory of Justice (1971) is contractarianism. As a moral theory, contractarianism is the idea that moral principles are justified if and only if they are the product of a certain kind of agreement among persons. Among libertarians, this idea has been developed by Jan Narveson in his book, The Libertarian Idea (1988), which attempts to show that rational individuals would agree to a government that took individual negative liberty as the only relevant consideration in setting policy. And, while not self-described as a contractarian, Loren Lomasky's work in Persons, Rights, and the Moral Community (1987) has many affinities with this approach, as it attempts to defend libertarianism as a kind of policy of mutual-advantage between persons.

Most of the libertarian theories we have surveyed in this article have a common structure: foundational philosophical commitments are set out, theories are built upon them, and practical conclusions are derived from those theories. This approach has the advantage of thoroughness: one's ultimate political conclusions are undergirded by a weighty philosophical system to which any challengers can be directed. The downside of this approach is that anyone who disagrees with one's philosophic foundations will not be much persuaded by one's conclusions drawn from them, and philosophers are not generally known for their widespread agreement on foundational issues.

As a result, much of the most interesting work in contemporary libertarian theory skips systematic theory-building altogether and heads straight to the analysis of concrete problems. Often this analysis proceeds by accepting some set of values as given (often the values embraced by those who are not sympathetic to libertarianism as a political theory) and showing that libertarian political institutions will better realize those values than competing institutional frameworks. Daniel Shapiro's recent work on welfare states (Shapiro 2007), for instance, is a good example of this trend, arguing that contemporary welfare states are unjustifiable from a variety of popular theoretical approaches. Loren Lomasky (2005) has written a humorous but important piece arguing that Rawls's foundational principles are better suited to defending Nozickian libertarianism than even Nozick's foundational principles are. And David Schmidtz (Schmidtz and Goodin 1998) has argued that market institutions are supported on grounds of individual responsibility that any moral framework ought to take seriously. While such approaches lack the theoretical completeness that philosophers naturally crave, they nevertheless have the virtue of addressing crucially important social issues in a way that dispenses with the need for complete agreement on comprehensive moral theories.

A theoretical justification of this approach can be found in John Rawls's notion of an overlapping consensus, as developed in his work Political Liberalism (1993). Rawls's idea is that decisions about which political institutions and principles to adopt ought to be based on those aspects of morality on which all reasonable theories converge, rather than any one particular foundational moral theory, because there is reasonable and apparently intractable disagreement about foundational moral issues. Extending this overlapping consensus approach to libertarianism, then, entails viewing libertarianism as a political theory that is compatible with a variety of foundational metaphysical, epistemological, and ethical views. Individuals need not settle their reasonable disagreements regarding moral issues in order to agree upon a framework for political association; and libertarianism, with its robust toleration of individual differences, seems well-suited to serve as the principle for such a framework (Barnett 2004).

Matt Zwolinski
University of San Diego
U.S.A.
Email: mzwolinski@sandiego.edu


What Is Libertarianism? – YouTube

The Supreme Court ruled that President Obama's recess appointments to fill openings in the National Labor Relations Board were unconstitutional. Was he abusing his power? This made us wonder: what sort of powers does the president actually have?

Learn More:

Obama Recess Appointments Illegal, Supreme Court Finds
http://www.usnews.com/news/articles/2...
Justices say presidents can only make recess appointments when the Senate says it's in recess.

Presidential Powers
http://nationalparalegal.edu/conlawcr...
Find out what powers the president actually has.

Supreme Court Says Obama's NLRB Recess Appointments Were Unconstitutional
http://www.businessinsider.com/obama-...
The Supreme Court ruled on Thursday that President Barack Obama's recess appointments to fill slots on the National Labor Relations Board in 2012 were unconstitutional.

NowThis World is dedicated to bringing you topical explainers about the world around you. Each week we'll be exploring current stories in international news by examining the facts, providing historical context, and outlining the key players involved. We'll also highlight powerful countries, ideologies, influential leaders, and ongoing global conflicts that are shaping the landscape of the international community today.



Libertarianism – Wikiquote

A 'popular libertarian' might ... feel all that needs to be done to bring the world to justice is to institute the minimal state now, starting as it were from present holdings. On this view, then, libertarianism starts tomorrow, and we take the present possession of property for granted. There is, of course, something very problematic about this attitude. Part of the libertarian position involves treating property rights as natural rights, and so as being as important as anything can be. On the libertarian view, the fact that an injustice is old, and, perhaps, difficult to prove, does not make it any less of an injustice. ... We should try to work out what would have happened had the injustice not taken place. If the present state of affairs does not correspond to this hypothetical description, then it should be made to correspond. ~ Jonathan Wolff

Libertarianism is a political philosophy which advocates the maximization of individual liberty in thought and action and the minimization or even elimination of the powers of the state. Though libertarians embrace or dispute many viewpoints upon a broad range of economic strategies, ranging from laissez-faire capitalists such as those who dominate in the US Libertarian Party to libertarian socialists, the political policies they advocate tend toward those of a minimal state (minarchism), or forms of anarchism, and an insistence on the need to maintain the integrity of individual rights and responsibilities.


U.S. Department of Defense Established A Center To Better Integrate AI

The U.S. military's AI center will help the nation's armed forces develop and implement the latest in artificial intelligence

ALL EYES ON AI. The U.S. Department of Defense (DoD) is going all-in on AI. The department, which oversees everything pertaining to the U.S.’s national security and armed forces, has been tossing around the idea of establishing a center focused on artificial intelligence (AI) since October 2016. On June 27, the idea became a reality when Deputy Defense Secretary Patrick Shanahan issued a memo officially establishing the Joint Artificial Intelligence Center (JAIC).

The JAIC will serve as the military’s AI center, housing the DoD’s 600 or so AI projects. According to a request the DoD submitted to Congress in June, the center will cost an estimated $1.7 billion over the next six years.

“Deputy Secretary of Defense Patrick M. Shanahan directed the DoD Chief Information Officer to standup the Joint Artificial Intelligence Center (JAIC) in order to enable teams across DOD to swiftly deliver new AI-enabled capabilities and effectively experiment with new operating concepts in support of DOD’s military missions and business functions,” Department of Defense spokeswoman Heather Babb told Futurism.

AT THE JAIC. In his memo, Shanahan notes that advances in AI will likely change the nature of warfare and that the military needs a new approach to AI that will allow it to rapidly integrate any advances into its operations and “way of fighting.” He believes the military’s AI center could help in those efforts by focusing on four areas of need:

  • Helping the military execute its National Mission Initiatives (NMIs). These are large-scale AI projects designed to address groups of urgent, related challenges.
  • Creating a DoD-wide foundation for the execution of AI. This would mean finding a way to make any AI-related tools, data, technologies, experts, and processes available to the entire DoD quickly and efficiently.
  • Improving collaboration on AI projects both within the DoD and with outside parties, such as U.S. allies, private companies, and academics.
  • Working with the Office of the Secretary of Defense (OSD) to determine how to govern and standardize AI development and delivery.

CROSSING THE LINE. Last week, many of the biggest names in AI research from the private sector and academia took a stand against autonomous weapons, machines that use AI to decide whether or not to attempt to kill a person. Signatories of the pledge vowed to never work on any such projects; one even called autonomous weapons “as disgusting and destabilizing as bioweapons.”

By establishing an AI center, the U.S. government makes its stance clear: Not only does it see AI as an inevitable part of the future of war, it wants to be the best at implementing it. As Shanahan wrote in an email to DoD employees, “Plenty of people talk about the threat from AI; we want to be the threat.”

READ MORE: Pentagon’s Joint AI Center Is ‘Established,’ but There’s Much More to Figure Out [FedScoop]

More on autonomous weapons: Top AI Experts Vow They Won’t Help Create Lethal Autonomous Weapons

Editor’s note 7/23/18 at 3:15 PM: This piece was updated to include statements from Deputy Defense Secretary Patrick Shanahan and DoD spokesperson Heather Babb.


MIT Researchers Create an Aerosol Spray Loaded With Nanobots

MIT researchers have created nanobots that can travel via an aerosol spray, potentially opening up a new field in robotics.

AEROSOLS FOR GOOD. You may have sworn off aerosol sprays in the '90s when everyone was talking about the hole in the ozone layer, but a team of researchers from MIT has found a use for aerosols that could be good for both the environment and our health. This spray contains nanobots: tiny sensors with the potential to do everything from detecting dangerous leaks in pipelines to diagnosing health issues. They published their research in Nature Nanotechnology on Monday.

NANO-SCALE SENSORS. Each sensor in the aerosol spray contains two parts. The first is a colloid, an extremely tiny insoluble particle or molecule. Colloids are so small, in fact, they can remain suspended in a liquid or the air indefinitely — the force of particles colliding around them is stronger than the force of gravity attempting to pull them down.

The second part of the sensor is a complex circuit containing a chemical detector built from a two-dimensional material, such as graphene. When this detector encounters a certain chemical in its environment, its ability to conduct electricity improves. The circuit also contains a photodiode, a device that can convert ambient light into electric current. This provides all the electricity needed to power the circuit’s data collection and memory.
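To make the two-part architecture concrete, here is a toy software model of the sensor logic just described. This is a sketch of my own, not MIT's design: the class name, the conductance values, and the tenfold signal change are all illustrative assumptions rather than figures from the paper.

```python
# Toy model (illustrative only) of the article's sensor logic: a photodiode
# gates power, a chemical detector's conductance rises on exposure to the
# target chemical, and each reading is retained in on-board memory.

class AerosolSensor:
    BASELINE_CONDUCTANCE = 1.0  # arbitrary units, an assumption for the demo

    def __init__(self, target_chemical="ammonia"):
        self.target = target_chemical
        self.memory = []  # readings retained for later readout

    def step(self, ambient_light, chemicals_present):
        if ambient_light <= 0:
            return  # no light, no power: the photodiode drives the circuit
        conductance = self.BASELINE_CONDUCTANCE
        if self.target in chemicals_present:
            conductance *= 10  # exposure improves conduction (the signal)
        self.memory.append(conductance)

# Simulate a pass through a pipe, then "read back" the memory at the far end.
sensor = AerosolSensor()
for chemicals in [set(), {"ammonia"}, set()]:
    sensor.step(ambient_light=1.0, chemicals_present=chemicals)
print(any(c > AerosolSensor.BASELINE_CONDUCTANCE for c in sensor.memory))  # True
```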

The researchers grafted their circuits onto colloids, thereby giving them the colloid’s ability to travel in unique environments. Once combined, the researchers aerosolized the nanobots (converted them into a sprayable form). This delivery method wouldn’t be possible without the addition of the colloid. “[The circuits] can’t exist without a substrate,” said the study’s lead author Michael Strano in a news release. “We need to graft them to the particles to give them mechanical rigidity and to make them large enough to get entrained in the flow.”

TWO TYPES OF PIPELINES. The MIT team sees a number of potential diagnostic uses for their sprayable, microscopic sensors, demonstrating a couple in their study. As one example, they designed their sensors to detect the toxic chemical ammonia, then tested their ability to do so within a sealed section of pipe. They sprayed the sensors into one side of a pipe, then gathered them at the other end using a piece of cheesecloth. When they examined the sensors, they could tell the sensors had come into contact with ammonia based on the information stored in the sensors' memory.

In the real world, this could save inspectors from having to manually look at an entire length of pipe from the outside. Instead, they could simply let the aerosol travel the length of the pipeline, then look for any data in its memory that might signal a problem, such as an encounter with a chemical that should not be in the pipeline.

As the MIT team noted in the news release, eventually, this same technology could help diagnose problems in the human body, for example, by traveling along our digestive tract, gathering data, and relaying it to medical experts. “We see this paper as the introduction of a new field [in robotics],” said Strano.

READ MORE: Cell-Sized Robots Can Sense Their Environment [MIT News]

More on nanobots: Kurzweil: By 2030, Nanobots Will Flow Throughout Our Bodies


Leaders Who Pledged Not To Build Autonomous Killing Machines Are Ignoring The Real Problem

That major pledge against building autonomous killing machines is a great start, but it has some glaring holes in what it covers.

Last week, many of the major players in the artificial intelligence world signed a pledge to never build or endorse artificial intelligence systems that could run an autonomous weapon. The signatories included Google DeepMind's cofounders, OpenAI founder Elon Musk, and a whole slew of prominent artificial intelligence researchers and industry leaders.

The pledge, put forth by AI researcher Max Tegmark’s Future of Life Institute, argues that any system that can target and kill people without human oversight is inherently immoral, and condemns any future AI arms race that may occur. By signing the pledge, these AI bigwigs join the governments of 26 nations including China, Pakistan, and the State of Palestine, all of which also condemned and banned lethal autonomous weapons.

So if you want to build a fighter drone that doesn’t need any human oversight before killing, you’ll have to do it somewhere other than these nations, and with partners other than those who signed the agreement.

Yes, banning killer robots is likely a good move for our collective future — children in nations ravaged by drone warfare have already started to fear the sky — but there’s a pretty glaring hole in what this pledge actually does.

Namely: there are more subtle and insidious ways to leverage AI against a nation’s enemies than strapping a machine gun to a robot’s arm, Terminator-style.

The pledge totally ignores the fact that cybersecurity means more than protecting yourself from an army of killer robots. As Mariarosaria Taddeo of the Oxford Internet Institute told Business Insider, AI could be used in international conflicts in more subtle but impactful ways. Artificial intelligence algorithms could prove effective at hacking or hijacking networks that are crucial for national security.

Already, as Taddeo mentioned, the UK National Health Service was held hostage by the North Korea-linked WannaCry virus and a Russian cyberattack took control of European and North American power grids. With sophisticated, autonomous algorithms at the helm, these cyberattacks could become more frequent and more devastating. And yet, because these autonomous weapons don’t go “pew pew pew,” the recent AI pledge doesn’t mention (or pertain to) them at all.

Of course, that doesn't make the pledge meaningless. Not by a long shot. But just as important as the high-profile people and companies that agreed to not make autonomous killing machines are the names missing from the agreement. Perhaps the most notable is the U.S. Department of Defense, which recently established its Joint Artificial Intelligence Center (JAIC) for the express purpose of getting ahead in any forthcoming AI arms race.

“Deputy Secretary of Defense Patrick M. Shanahan directed the DOD Chief Information Officer to standup the Joint Artificial Intelligence Center (JAIC) in order to enable teams across DOD to swiftly deliver new AI-enabled capabilities and effectively experiment with new operating concepts in support of DOD’s military missions and business functions,” Heather Babb, Department of Defense spokesperson, told Futurism.

“Plenty of people talk about the threat from AI; we want to be the threat,” Deputy Defense Secretary Patrick Shanahan wrote in a recent email to DoD employees, a DoD spokesperson confirmed to Futurism.

The JAIC sees artificial intelligence as a crucial tool for the future of warfare. Given the U.S.’s hawkish stance on algorithmic warfare, it’s unclear if a well-intentioned, incomplete pledge can possibly hold up.

More on pledges against militarized AI: Google: JK, We’re Going To Keep Working With The Military After All


Tesla Is Reportedly Asking Suppliers to Refund Payments so It Can Appear Profitable

Tesla's refund request to suppliers is raising eyebrows in the financial world.

RETROACTIVE NEGOTIATION. Tesla seems to have a weird understanding of the old adage “You have to spend money to make money.” In order to look like it’s making money, the company is asking for refunds on the money it’s already spent — even though the people paid delivered on their part of the deal.

On Sunday, The Wall Street Journal reported that it had obtained a memo Tesla sent to one of its suppliers last week. In the memo, Tesla requested a refund on a “meaningful amount” of the money it had paid the supplier since 2016. The author of the memo, one of Tesla’s global supply managers, wrote that the money was “essential” to Tesla’s ability to continue operating and asked that the supplier view the refund as an “investment” that would allow Tesla and the supplier to continue to grow their relationship.

Though the memo claimed that all suppliers were receiving such refund requests, at least some contacted by The WSJ knew nothing about it.

HOW BIZARRE. A Tesla spokesperson doesn’t seem to think Tesla’s refund request is all that noteworthy, telling The WSJ it’s a standard practice. Many of those outside the company, however, think it’s downright bizarre. “I have never heard of that,” finance expert Ron Harbour told Bloomberg. “Suppliers have been asked for reductions, but going back for them in arrears reeks of desperation.”

It’s also a pretty self-centered move, according to manufacturing consultant Dennis Virag. “It’s simply ludicrous, and it just shows that Tesla is desperate right now,” he told The WSJ. “They’re worried about their profitability, but they don’t care about their suppliers’ profitability.”

TESLA’S WOES. Tesla’s current financial woes center on its Model 3, with frequent production issues repeatedly pushing back deliveries of the vehicle. The company currently carries more than $10 billion in debt and has been beset by one controversy after another throughout 2018. Just last month, shareholders even held a vote to decide whether or not to let CEO Elon Musk retain his position as chairman (they ultimately decided to let him stay on in that role).

If the plan behind Tesla’s refund request was to increase faith in the company as it continues to navigate the troubled waters of Model 3 production, it appears to be backfiring; Tesla’s stock dropped by 4 percent Monday morning, even though the first reviews of the Model 3 have started rolling out and have been largely positive (including from the WSJ).

On August 1, Musk will update shareholders on Tesla’s Q2 financial results, so he has just about a week to get the bad taste of Tesla’s refund request out of shareholders’ mouths. If he can’t, it’s not hard to imagine his role as chairman once again in jeopardy.

READ MORE: Tesla Asks Suppliers for Cash Back to Help Turn a Profit [The Wall Street Journal]

More on Model 3 production: In an Effort to Speed up Production, Tesla Is Assembling Model 3s in a Giant Tent


Judge Kavanaugh on the Fourth Amendment – SCOTUSblog

Orin S. Kerr is the Frances R. and John J. Duggan Distinguished Professor of Law at the University of Southern California Gould School of Law.

Judge Brett Kavanaugh's views of the Fourth Amendment have drawn significant interest following his recent nomination to the Supreme Court. This post takes a close look at Kavanaugh's key Fourth Amendment opinions. It does so with an eye to guessing how he might rule in search and seizure cases if he is confirmed to the Supreme Court. The Supreme Court has a large Fourth Amendment docket. How might a Justice Kavanaugh approach those cases?

My analysis is tentative for two reasons. The first is probably obvious. Circuit judges are supposed to follow Supreme Court and circuit precedent, while Supreme Court justices have much more room to roam. Given that, translation is hard. You never know how much of a circuit judge's rulings simply reflect a lower court judge's commitment to stare decisis.

A second reason for caution is that Kavanaugh's Fourth Amendment record is modest. The U.S. Court of Appeals for the District of Columbia Circuit doesn't get many search and seizure cases. A Westlaw search revealed around 35 cases in the subject area in which Kavanaugh sat on the panel or considered a rehearing petition en banc. Most of those were unanimous and pretty easy. I found only five Fourth Amendment decisions, and one recent speech, that I think might reveal something significant about his approach.

With those two important caveats, here's my overall sense of things. In tough Fourth Amendment cases that divide the Supreme Court, a Justice Kavanaugh would likely be on the government's side. He is wary of novel theories that would expand Fourth Amendment protection. And he often sees the Fourth Amendment's requirement of reasonableness as giving the government significant latitude. If we had to associate Kavanaugh with a familiar justice, the limited evidence suggests that his approach in Fourth Amendment cases is probably somewhere in the ballpark of Justice Anthony Kennedy or Chief Justice William Rehnquist. I'll now run through the five key cases, and Kavanaugh's recent speech, to explain why I think that's the case.

1. The balancing cases: Askew and Vilsack

The first two cases to consider involve balancing of government and privacy interests. In both cases, the majority held that the government practice violated the Fourth Amendment. Kavanaugh dissented, largely on the ground that he would have balanced the interests differently and therefore would have ruled for the government. In a close case that requires balancing of interests, the cases suggest, Kavanaugh is more likely to approach the case from the government's perspective than from the individual's perspective.

The first case is United States v. Askew, a stop-and-frisk case. The police stopped the suspect based on suspicion that he had just committed an armed robbery. After an initial frisk for weapons came up empty, an officer unzipped the suspect's outer jacket to see if his clothing matched eyewitness descriptions of what the robber was wearing. It turned out the initial frisk had been poorly done: Unzipping the jacket revealed a gun in Askew's waist pouch. Remarkably, the D.C. Circuit went en banc and divided sharply over whether the outer-jacket unzipping was allowed. As I joked at the time, the D.C. Circuit's 85 pages of serious constitutional analysis, spread over three opinions, was the latest in zipper jurisprudence.

Askew is factually messy and a bit hard to summarize, but the most significant legal issue was whether the Fourth Amendment permits the police to move a suspect's clothing to facilitate an eyewitness identification during a stop that is otherwise valid under the Supreme Court's 1968 decision in Terry v. Ohio. There was no obvious answer from Supreme Court caselaw. The en banc D.C. Circuit did not reach a majority view on the issue, although five of its 11 judges (Judges Harry Edwards, Judith Rogers, David Tatel, Janice Brown and Thomas Griffith) argued that identification searches were not permitted. Kavanaugh wrote a 32-page dissent, joined by then-Chief Judge David Sentelle and Judges Karen Henderson and Raymond Randolph, that argued that the unzipping to help identification should be allowed. In his view, the reasonableness framework that applies to Terry stops generally also permits reasonable identification procedures.

The most interesting passage in the dissent is probably Kavanaugh's policy argument. Prohibiting the police during Terry stops from conducting identification procedures that constitute searches, he argued, would lead to absurd and dangerous results. For example, imagine that the police detained a suspect in a rape case and the victim claimed that the suspect had a distinctive tattoo on his forearm. If the police detained the suspect on reasonable suspicion of having committed the crime, Kavanaugh argued, the police should be allowed to pull up the suspect's sleeve to see if he has the tattoo the victim claims. Not allowing limited moving of clothing to identify suspects would hamstring the police and prevent them from performing reasonable identification procedures that could solve serious crimes and protect the community from violent criminals at large.

You can see a similar focus on public safety in National Federation of Federal Employees v. Vilsack, a case about whether the Fourth Amendment permitted random drug testing for Forest Service Job Corps Center employees. The employees ran a residential job corps program at public schools for at-risk students aged 16 to 24. Under the Supreme Court's caselaw, resolving the constitutionality of the program required weighing the non-law-enforcement public-safety interest advanced by the drug testing against the degree of privacy invasion it caused. Rogers, joined by Judge Douglas Ginsburg, held that the program violated the Fourth Amendment under this test because it was a solution in search of a problem. There was insufficient evidence that a drug problem existed among the staff to justify testing, they reasoned. In addition, testing every employee was too broad because different employees served in different capacities.

Kavanaugh dissented. In his view, the drug-testing program was clearly reasonable. Indeed, he wrote, it would seem negligent not to test the employees for drugs. Many of the at-risk students had a history of drug problems. To maintain discipline, Kavanaugh argued, it was important that employees who ran the program were drug-free themselves and were not potential sources of illegal drugs for the students. As a result, the government had a strong and indeed compelling interest in maintaining a drug-free workforce at these specialized residential schools for at-risk youth. On the flip side, the privacy invasion was modest. The testing only required providing a urine sample, and it only revealed the presence of certain illegal drugs.

2. The flagging-for-SCOTUS cases: Wesby and Maynard

The next two cases show Kavanaugh writing on the Fourth Amendment in dissents from denial of rehearing en banc. In both cases, the original panel reached a surprising holding that the government had violated the Fourth Amendment. In both cases, Kavanaugh dissented from the full circuit's refusal to review the outlier panel opinion. And in both cases, the Supreme Court subsequently granted certiorari and handed down a majority opinion that largely echoed Kavanaugh's reasoning. I think of these as the "flagging for SCOTUS" cases because it's possible that Kavanaugh's dissents were written to flag the cases for the justices. And whether or not Kavanaugh intended it, his dissents appear to have done just that.

The first case along these lines is Wesby v. District of Columbia, which involved trespass arrests at a loud party held in a vacant house. When the police arrived and the people in the house had trouble identifying whose house it was, the police arrested everyone for trespass. The group sued the officers under the Fourth Amendment. In an opinion by Judge Cornelia Pillard, the D.C. Circuit somewhat remarkably held that the arrests violated the Fourth Amendment and that qualified immunity did not apply. Kavanaugh penned a dissent from denial of rehearing en banc that was joined by Henderson, Brown and Griffith.

Although Kavanaugh's dissent mentioned the Fourth Amendment merits in passing, it focused primarily on qualified immunity. In Kavanaugh's view, qualified immunity plainly barred the suit. Both the facts and the law created lots of room for a reasonable officer to believe the arrests were based on probable cause. To be sure, he added, "I do not dismiss the irritation and anguish, as well as the reputational and economic harm, that can come from being arrested. Police officers should never lightly take that step, and the courts should not hesitate to impose liability when officers act unreasonably in light of clearly established law. But that is not what happened here, not by a long shot." The Supreme Court granted cert and reversed unanimously, ruling that probable cause existed (a view held by seven justices) and holding that in any event qualified immunity applied, much as Kavanaugh had argued (a position taken by all nine justices).

A roughly similar dynamic occurred with Kavanaugh's dissent from denial of rehearing in United States v. Maynard, later reviewed by the Supreme Court under the name United States v. Jones. Investigators placed a GPS device on the suspect's car and tracked its location for 28 days. In an astonishing opinion for the D.C. Circuit, Ginsburg created the mosaic theory, by which the monitoring was not a search at first but over time became a search because the government collected a search-like amount of information. The en banc D.C. Circuit denied the petition for rehearing 5-4. Kavanaugh joined Sentelle's dissent from denial of rehearing, which argued that the panel opinion was inconsistent with Supreme Court and other circuits' precedents and deserved en banc review.

The most interesting part of Kavanaugh's approach to Maynard is that he wrote a brief separate dissent that flagged an alternative ground for ruling that a search occurred. Maybe it was the installation of the GPS that was a search, Kavanaugh suggested, rather than its use. Fourth Amendment caselaw before Katz v. United States had held that physical intrusion onto property was a search. If that caselaw was still valid ("I see no indication that it is not," Kavanaugh added), then installing the GPS device could be a search because it was an unauthorized physical encroachment onto the property of the suspect's car. "I do not yet know whether I agree with that conclusion," Kavanaugh wrote, "but it is an important and close question deserving en banc review." When the government petitioned for certiorari, the lawyers for the defense added Kavanaugh's theory as a second question presented in their brief in opposition.

The Supreme Court took up Kavanaugh's suggestion. The justices granted certiorari under the name United States v. Jones on the Fourth Amendment implications of both installing the GPS device and its use. The majority opinion by Justice Antonin Scalia essentially adopted Kavanaugh's approach. Installing a GPS was deemed a search because the installation trespassed onto the car. Jones sharply changed Fourth Amendment blackletter law by recognizing two different ways of establishing a search: the Katz test and the pre-Katz trespass test that Kavanaugh had proposed. To be sure, Kavanaugh's view didn't come from nowhere. There had been something of a split on the question, and I agreed at the time that this should be the big question. But Kavanaugh was the one who best articulated the theory and teed it up for the justices.

3. The Section 215 opinion in Klayman

The last Kavanaugh opinion to consider is the one that has drawn the most attention. In Klayman v. Obama, Judge Richard Leon had ruled for the district court that the National Security Agency's Section 215 call-records program violated the Fourth Amendment. Under the program, the NSA was getting the numbers dialed (but not the contents) for millions of Americans' phone calls. Leon ruled that the program was unconstitutional but then stayed any remedy while the appeal was pending. The D.C. Circuit sent the case back to the district court on procedural grounds. With the Section 215 program about to expire, Leon quickly handed down a new decision that the program was unlawful and refused to grant a stay. The next day, the D.C. Circuit issued an administrative stay; plaintiff Larry Klayman then sought an emergency petition for rehearing en banc, which the full court denied.

Kavanaugh filed a two-page solo concurrence in the denial of rehearing. In his view, the Section 215 program was entirely consistent with the Fourth Amendment. That was true for two reasons. First, the Supreme Court had held that collecting telephony metadata was not a search in Smith v. Maryland. Smith settled the Section 215 question, in Kavanaugh's view: that precedent remains binding on lower courts in our hierarchical system of absolute vertical stare decisis. Second, even if a future court adopted a different view of what is a search, the Section 215 program was still reasonable under the balancing of interests of the special needs exception (see the discussion of Vilsack above). "[T]elephony metadata serves a critically important special need: preventing terrorist attacks on the United States," Kavanaugh wrote, citing the 2004 9/11 Commission Report. "[T]hat critical national security need outweighs the impact on privacy occasioned by this program."

What to make of Kavanaugh's Klayman concurrence? On one hand, his view that the program satisfied the Fourth Amendment under Smith was doctrinally correct, in my view, at least before Carpenter v. United States last month. It's surprising that Kavanaugh didn't develop the Smith argument more. He gave the whole point only two sentences. But the argument was sound, and it matched what several district courts had said at that point (one example being the U.S. District Court for the Southern District of California in 2013 in United States v. Moalin).

On the other hand, I'm less persuaded by Kavanaugh's argument that Section 215 would fit the special-needs exception if call-records collection is a search. I would think the question is how much the program actually advances the interest in preventing terrorist attacks, not just the importance of its goal in the abstract. But note the echo of Kavanaugh's Vilsack dissent. In both cases, Kavanaugh applied the special-needs exception in ways that construed the government interests as very weighty and the privacy interests as comparatively light.

4. Like Rehnquist, or perhaps like Kennedy?

A final data point for Kavanaugh's Fourth Amendment views is his recent speech on Chief Justice William Rehnquist, whom Kavanaugh celebrates as his first judicial hero. As a law student, "[i]n class after class," Kavanaugh found that he stood with Rehnquist. Kavanaugh is quick to say that he doesn't agree with every Rehnquist opinion. But in the course of a rather glowing overview of Rehnquist's impact as a justice, one that Kavanaugh describes as a labor of love to deliver, Kavanaugh describes how Rehnquist led the charge in rebalancing Fourth Amendment law after the Warren Court's criminal-procedure revolution had expanded the rights of criminal defendants.

Kavanaugh mentions three areas in particular. First, Rehnquist wrote opinions making the probable cause standard more flexible and commonsensical. Second, Rehnquist wrote decisions expanding the category of special needs searches, which is a particularly interesting reference in light of Kavanaugh's separate opinions in Vilsack and Klayman. Finally, Rehnquist opposed the exclusionary rule as a judge-created rule that was beyond the four corners of the Fourth Amendment's text and imposed tremendous costs on society. Although Rehnquist did not succeed in having the exclusionary rule overturned, he dramatically changed the law of the exclusionary rule over time through the good-faith exception and other doctrines.

One takeaway from Kavanaugh's speech is that his Fourth Amendment views probably aren't too far from Rehnquist's. Rehnquist was a pretty reliable voice for law enforcement interests in Fourth Amendment cases. The affinity may be revealing.

With that said, it's also worth noting that Rehnquist's views in Fourth Amendment cases weren't too far from those of Kennedy, the justice for whom Kavanaugh clerked and whose place Kavanaugh has been nominated to fill. Like Rehnquist, Kennedy tended to take a law-enforcement-oriented view in Fourth Amendment cases. You might say that Kennedy's views of the Fourth Amendment were Rehnquist-like but without the broader agenda of rebalancing the rules after the Warren Court.

If so, perhaps Kavanaugh's views are better described as Kennedy-esque than Rehnquist-like. Like Kennedy, Kavanaugh seems to take government interests very seriously. At the same time, Kavanaugh's opinions don't seem to reflect a broader agenda. Recall Kavanaugh's separate Maynard dissent in particular. Although Kavanaugh was unpersuaded by the panel opinion's novel theory, he wrote separately to provide an alternative basis for concluding that the GPS installation was a search.


Recommended Citation: Orin Kerr, Judge Kavanaugh on the Fourth Amendment, SCOTUSblog (Jul. 20, 2018, 6:16 PM), http://www.scotusblog.com/2018/07/judge-kavanaugh-on-the-fourth-amendment/


A New Backdoor Around the Fourth Amendment: The CLOUD Act …

There's a new proposed backdoor to our data, which would bypass our Fourth Amendment protections to communications privacy. It is built into a dangerous bill called the CLOUD Act, which would allow police at home and abroad to seize cross-border data without following the privacy rules where the data is stored.

This backdoor is an insidious method for accessing our emails, our chat logs, our online videos and photos, and our private moments shared online between one another. This backdoor would deny us meaningful judicial review and the privacy protections embedded in our Constitution.

This new backdoor for cross-border data mirrors another backdoor under Section 702 of the FISA Amendments Act, an invasive NSA surveillance authority for foreign intelligence gathering. That law, recently reauthorized and expanded by Congress for another six years, gives U.S. intelligence agencies, including the NSA, FBI, and CIA, the ability to search, read, and share our private electronic messages without first obtaining a warrant.

The new backdoor in the CLOUD Act operates much in the same way. U.S. police could obtain Americans' data, and use it against them, without complying with the Fourth Amendment.

For this reason, and many more, EFF strongly opposes the CLOUD Act.

The CLOUD Act (S. 2383 and H.R. 4943) has two major components. First, it empowers U.S. law enforcement to grab data stored anywhere in the world, without following foreign data privacy rules. Second, it empowers the president to unilaterally enter executive agreements with any nation on earth, even known human rights abusers. Under such executive agreements, foreign law enforcement officials could grab data stored in the United States, directly from U.S. companies, without following U.S. privacy rules like the Fourth Amendment, so long as the foreign police are not targeting a U.S. person or a person in the United States.

That latter component is where the CLOUD Act's backdoor lives.

When foreign police use their power under CLOUD Act executive agreements to collect a foreign target's data from a U.S. company, they might also collect data belonging to a non-target U.S. person who happens to be communicating with the foreign target. Within the numerous, combined foreign investigations allowed under the CLOUD Act, it is highly likely that related seizures will include American communications, including email, online chat, video calls, and internet voice calls.

Under the CLOUD Act's rules for these data demands from foreign police to U.S. service providers, this collection of Americans' data can happen without any prior, individualized review by a foreign or American judge. Also, it can happen without the foreign police needing to prove the high level of suspicion required by the U.S. Fourth Amendment: probable cause.

Once the foreign police have collected Americans' data, they often will be able to hand it over to U.S. law enforcement, which can use it to investigate Americans, and ultimately to bring criminal charges against them in the United States.

According to the bill, foreign police can share the content of a U.S. person's communications with U.S. authorities so long as it relates to "significant harm, or the threat thereof," to the United States or United States persons. This nebulous standard is vague and overbroad. Also, the bill's hypotheticals indicate far-ranging data sharing by foreign police with U.S. authorities. From national security to violent crime, from organized crime to financial fraud, the CLOUD Act permits it all to be shared, and likely far more.

Moreover, the CLOUD Act allows the foreign police who collect Americans' communications to freely use that content against Americans, and to freely share it with additional nations.

To review: The CLOUD Act allows the president to enter an executive agreement with a foreign nation known for human rights abuses. Using its CLOUD Act powers, police from that nation inevitably will collect Americans' communications. They can share the content of those communications with the U.S. government under the flawed "significant harm" test. The U.S. government can use that content against these Americans. A judge need not approve the data collection before it is carried out. At no point need probable cause be shown. At no point need a search warrant be obtained.

This is wrong. Much like the infamous backdoor search loophole connected to broad, unconstitutional NSA surveillance under Section 702, the backdoor proposed in the CLOUD Act violates our Fourth Amendment right to privacy by granting unconstitutional access to our private lives online.

Also, when foreign police using their CLOUD Act powers inevitably capture metadata about Americans, they can freely share it with the U.S. government, without even showing "significant harm." Communications content is the words in an email or online chat, the recordings of an internet voice call, or the moving images and accompanying audio of a video call online. Communications metadata is the pieces of information that relate to a message, including when it was sent, who sent it, who received it, its duration, and where the sender was located when sending it. Metadata is enormously powerful information and should be treated with the same protection as content.

To be clear: the CLOUD Act fails to provide any limits on foreign police sharing Americans metadata with U.S. police.

The CLOUD Act would be a dangerous overreach into our data. It seeks to streamline cross-border police investigations, but it tears away critical privacy protections to attain that goal. This is not a fair trade. It is a new backdoor search loophole around the Fourth Amendment.

Tell your representative today to reject the CLOUD Act.



History of artificial intelligence – Wikipedia

The history of Artificial Intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen; as Pamela McCorduck writes, AI began with "an ancient wish to forge the gods."

The seeds of modern AI were planted by classical philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

The field of AI research was founded at a workshop held on the campus of Dartmouth College during the summer of 1956. Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation and they were given millions of dollars to make this vision come true.

Eventually it became obvious that they had grossly underestimated the difficulty of the project due to computer hardware limitations. In 1973, in response to the criticism of James Lighthill and ongoing pressure from Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an "AI winter". Seven years later, a visionary initiative by the Japanese government inspired governments and industry to provide AI with billions of dollars, but by the late 80s investors became disillusioned by the absence of the needed computer power (hardware) and withdrew funding again.

Investment and interest in AI boomed in the first decades of the 21st century, when machine learning was successfully applied to many problems in academia and industry due to the presence of powerful computer hardware. As in previous "AI summers", some observers (such as Ray Kurzweil) predicted the imminent arrival of artificial general intelligence: a machine with intellectual capabilities that exceed the abilities of human beings.

The dream of artificial intelligence was first expressed in Indian philosophies like that of Charvaka, a tradition dating back to 1500 BCE, with surviving documents from around 600 BCE. McCorduck (2004) writes "artificial intelligence in one form or another is an idea that has pervaded intellectual history, a dream in urgent need of being realized," expressed in humanity's myths, legends, stories, speculation and clockwork automatons.

Mechanical men and artificial beings appear in Greek myths, such as the golden robots of Hephaestus and Pygmalion's Galatea.[4] In the Middle Ages, there were rumors of secret mystical or alchemical means of placing mind into matter, such as Jābir ibn Hayyān's Takwin, Paracelsus' homunculus and Rabbi Judah Loew's Golem.[5] By the 19th century, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots), and in speculation, such as Samuel Butler's "Darwin among the Machines." AI has continued to be an important element of science fiction into the present.

Realistic humanoid automatons were built by craftsmen from every civilization, including Yan Shi,[8] Hero of Alexandria,[9] Al-Jazari, Pierre Jaquet-Droz, and Wolfgang von Kempelen.[11] The oldest known automatons were the sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion. Hermes Trismegistus wrote that "by discovering the true nature of the gods, man has been able to reproduce it."[12][13]

Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical, or "formal," reasoning has a long history. Chinese, Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwārizmī (who developed algebra and gave his name to "algorithm") and European scholastic philosophers such as William of Ockham and Duns Scotus.[14]

Majorcan philosopher Ramon Llull (1232–1315) developed several logical machines devoted to the production of knowledge by logical means;[15] Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such ways as to produce all the possible knowledge.[16] Llull's work had a great influence on Gottfried Leibniz, who redeveloped his ideas.[17]

In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry.[18] Hobbes famously wrote in Leviathan: "reason is nothing but reckoning".[19] Leibniz envisioned a universal language of reasoning (his characteristica universalis) which would reduce argumentation to calculation, so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, to sit down to their slates, and to say to each other (with a friend as witness, if they liked): Let us calculate."[20] These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.

In the 20th century, the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole's The Laws of Thought and Frege's Begriffsschrift. Building on Frege's system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the Principia Mathematica, in 1913. Inspired by Russell's success, David Hilbert challenged mathematicians of the 1920s and 30s to answer this fundamental question: "can all of mathematical reasoning be formalized?"[14] His question was answered by Gödel's incompleteness proof, Turing's machine and Church's lambda calculus.[14][21]

Their answer was surprising in two ways. First, they proved that there were, in fact, limits to what mathematical logic could accomplish. But second (and more important for AI) their work suggested that, within these limits, any form of mathematical reasoning could be mechanized. The Church–Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine: a simple theoretical construct that captured the essence of abstract symbol manipulation. This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines.[14][23]
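The idea that a symbol-shuffling device can capture computation is easy to demonstrate in modern code. The following is a minimal sketch of my own (not drawn from any of the sources above) of a Turing-style machine: a rule table, a tape of 0s and 1s, and a head that reads, writes, and moves. The example rule table simply inverts a binary string; the function and rule names are illustrative.

```python
# A minimal Turing machine simulator, as a toy illustration of mechanized
# symbol manipulation. rules maps (state, symbol) -> (new_symbol, move, state).

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# A one-state machine that flips each bit, then halts at the first blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(flip, "10110"))  # prints: 01001
```

Everything a real computer does can, in principle, be reduced to a (much larger) rule table of this kind, which is the sense in which the Turing machine captured the essence of computation.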

Calculating machines were built in antiquity and improved throughout history by many mathematicians, including (once again) philosopher Gottfried Leibniz. In the early 19th century, Charles Babbage designed a programmable computer (the Analytical Engine), although it was never built. Ada Lovelace speculated that the machine "might compose elaborate and scientific pieces of music of any degree of complexity or extent".[24] (She is often credited as the first programmer because of a set of notes she wrote that completely detail a method for calculating Bernoulli numbers with the Engine.)
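Lovelace's famous notes laid out, step by step, how the Analytical Engine could compute Bernoulli numbers. As a hedged modern illustration, the same numbers can be produced in a few lines using the standard recurrence sum over C(m+1, k) * B_k; this is the textbook recurrence, not a transcription of her actual table of Engine operations.

```python
# Bernoulli numbers via the recurrence sum_{k=0}^{m} C(m+1, k) * B_k = 0,
# solved for B_m. Exact rational arithmetic keeps the results precise.

from fractions import Fraction
from math import comb

def bernoulli(n):
    B = [Fraction(1)]  # B_0 = 1
    for m in range(1, n + 1):
        B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))
    return B

print([str(b) for b in bernoulli(8)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```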

The first modern computers were the massive machines of the Second World War, such as Konrad Zuse's Z3, the code-breaking Colossus, and ENIAC. The latter two of these machines were based on the theoretical foundation laid by Alan Turing[25] and developed by John von Neumann.[26]

In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. The field of artificial intelligence research was founded as an academic discipline in 1956.

The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 1930s, 1940s, and early 1950s. Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener's cybernetics described control and stability in electrical networks. Claude Shannon's information theory described digital signals (i.e., all-or-nothing signals). Alan Turing's theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an electronic brain.[27]

Examples of work in this vein include robots such as W. Grey Walter's turtles and the Johns Hopkins Beast. These machines did not use computers, digital electronics or symbolic reasoning; they were controlled entirely by analog circuitry.[28]

Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions. They were the first to describe what later researchers would call a neural network.[29] One of the students inspired by Pitts and McCulloch was a young Marvin Minsky, then a 24-year-old graduate student. In 1951 (with Dean Edmonds) he built the first neural net machine, the SNARC.[30] Minsky was to become one of the most important leaders and innovators in AI for the next 50 years.

In 1950 Alan Turing published a landmark paper in which he speculated about the possibility of creating machines that think.[31] He noted that "thinking" is difficult to define and devised his famous Turing Test. If a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was "thinking". This simplified version of the problem allowed Turing to argue convincingly that a "thinking machine" was at least plausible and the paper answered all the most common objections to the proposition.[32] The Turing Test was the first serious proposal in the philosophy of artificial intelligence.

In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess.[33] Arthur Samuel's checkers program, developed in the middle 50s and early 60s, eventually achieved sufficient skill to challenge a respectable amateur.[34] Game AI would continue to be used as a measure of progress in AI throughout its history.

When access to digital computers became possible in the middle fifties, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines.[35]

In 1955, Allen Newell and (future Nobel Laureate) Herbert A. Simon created the "Logic Theorist" (with help from J. C. Shaw). The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead's Principia Mathematica, and find new and more elegant proofs for some.[36] Simon said that they had "solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind."[37] (This was an early statement of the philosophical position John Searle would later call "Strong AI": that machines can contain minds just as human bodies do.)[38]

The Dartmouth Conference of 1956[39] was organized by Marvin Minsky, John McCarthy and two senior scientists: Claude Shannon and Nathan Rochester of IBM. The proposal for the conference included this assertion: "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it".[40] The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would create important programs during the first decades of AI research.[41] At the conference Newell and Simon debuted the "Logic Theorist" and McCarthy persuaded the attendees to accept "Artificial Intelligence" as the name of the field.[42] The 1956 Dartmouth conference was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI.[43]

The years after the Dartmouth conference were an era of discovery, of sprinting across new ground. The programs that were developed during this time were, to most people, simply "astonishing":[44] computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such "intelligent" behavior by machines was possible at all.[45] Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years.[46] Government agencies like DARPA poured money into the new field.[47]

There were many successful programs and new directions in the late 50s and 1960s. Among the most influential were these:

Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called "reasoning as search".[48]

The principal difficulty was that, for many problems, the number of possible paths through the "maze" was simply astronomical (a situation known as a "combinatorial explosion"). Researchers would reduce the search space by using heuristics or "rules of thumb" that would eliminate those paths that were unlikely to lead to a solution.[49]
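
As a concrete illustration (ours, not period code), the sketch below does "reasoning as search" over a toy puzzle: depth-first search that backtracks at dead ends, with an optional heuristic that orders moves by a rule of thumb. The puzzle (reach a target number from 1 by doubling or adding one) is hypothetical.

    # "Reasoning as search": try a move, recurse, backtrack on failure.

    def search(state, goal, moves, heuristic=None, visited=None):
        """Return a path of states from `state` to `goal`, or None."""
        visited = visited if visited is not None else set()
        if state == goal:
            return [state]
        visited.add(state)
        candidates = [s for s in moves(state) if s not in visited]
        if heuristic:  # rule of thumb: try the most promising moves first
            candidates.sort(key=lambda s: heuristic(s, goal))
        for nxt in candidates:
            path = search(nxt, goal, moves, heuristic, visited)
            if path:                     # success: extend the path
                return [state] + path
        return None                      # dead end: backtrack

    # Toy problem: reach 13 from 1, where a "move" doubles or adds one.
    moves = lambda n: [n * 2, n + 1] if n < 100 else []
    print(search(1, 13, moves, heuristic=lambda s, g: abs(g - s)))

Without the heuristic the same code still succeeds, but it wanders through far more of the "maze"; on problems whose branching grows exponentially, that difference is exactly the combinatorial explosion described above.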

Newell and Simon tried to capture a general version of this algorithm in a program called the "General Problem Solver".[50] Other "searching" programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter's Geometry Theorem Prover (1958) and SAINT, written by Minsky's student James Slagle (1961).[51] Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at Stanford to control the behavior of their robot Shakey.[52]

An important goal of AI research is to allow computers to communicate in natural languages like English. An early success was Daniel Bobrow's program STUDENT, which could solve high school algebra word problems.[53]

A semantic net represents concepts (e.g. "house", "door") as nodes and relations among concepts (e.g. "has-a") as links between the nodes. The first AI program to use a semantic net was written by Ross Quillian[54] and the most successful (and controversial) version was Roger Schank's Conceptual dependency theory.[55]
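
A semantic net is easy to render in a few lines. The sketch below is our illustration (the concepts and relations are hypothetical, and this is not Quillian's program): nodes and labelled links live in a dictionary, and a lookup walks "is-a" links so that properties of a general concept apply to more specific ones.

    # A toy semantic net: (concept, relation) -> set of related concepts.
    semantic_net = {
        ("house", "has-a"): {"door", "roof"},
        ("cottage", "is-a"): {"house"},
        ("door", "is-a"): {"barrier"},
    }

    def related(concept, relation, net):
        """Everything reachable via `relation`, inheriting along is-a links."""
        results = set(net.get((concept, relation), set()))
        for parent in net.get((concept, "is-a"), set()):
            results |= related(parent, relation, net)
        return results

    print(related("cottage", "has-a", semantic_net))  # -> {'door', 'roof'}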

Joseph Weizenbaum's ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program. But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response or repeated back what was said to her, rephrasing her response with a few grammar rules. ELIZA was the first chatterbot.[56]
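
The mechanism behind ELIZA was strikingly simple. The sketch below shows the general pattern-matching idea in miniature; it is our reconstruction of the technique, not Weizenbaum's actual script, and the rules and reflections are hypothetical.

    # ELIZA-style chatterbot: keyword patterns, pronoun reflection,
    # and a canned fallback when nothing matches.
    import re

    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
    RULES = [
        (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
    ]

    def reflect(fragment):
        """Swap first-person words for second-person ones."""
        return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

    def respond(sentence):
        for pattern, template in RULES:
            match = pattern.search(sentence)
            if match:
                return template.format(reflect(match.group(1)))
        return "Please go on."  # canned response

    print(respond("I am worried about my exams"))
    # -> How long have you been worried about your exams?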

In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on a "blocks world," which consists of colored blocks of various shapes and sizes arrayed on a flat surface.[57]

This paradigm led to innovative work in machine vision by Gerald Sussman (who led the team), Adolfo Guzman, David Waltz (who invented "constraint propagation"), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. The crowning achievement of the micro-world program was Terry Winograd's SHRDLU. It could communicate in ordinary English sentences, plan operations and execute them.[58]

The first generation of AI researchers made these predictions about their work:

1958, H. A. Simon and Allen Newell: "within ten years a digital computer will be the world's chess champion" and "within ten years a digital computer will discover and prove an important new mathematical theorem."[59]
1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do."[60]
1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."[61]
1970, Marvin Minsky (in Life Magazine): "In from three to eight years we will have a machine with the general intelligence of an average human being."[62]

In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (later known as DARPA). The money was used to fund project MAC which subsumed the "AI Group" founded by Minsky and McCarthy five years earlier. DARPA continued to provide three million dollars a year until the 70s.[63] DARPA made similar grants to Newell and Simon's program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963).[64] Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965.[65] These four institutions would continue to be the main centers of AI research (and funding) in academia for many years.[66]

The money was proffered with few strings attached: J. C. R. Licklider, then the director of ARPA, believed that his organization should "fund people, not projects!" and allowed researchers to pursue whatever directions might interest them.[67] This created a freewheeling atmosphere at MIT that gave birth to the hacker culture,[68] but this "hands off" approach would not last.

In Japan, Waseda University initiated the WABOT project in 1967, and in 1972 completed the WABOT-1, the world's first full-scale intelligent humanoid robot,[69][70] or android. Its limb control system allowed it to walk with the lower limbs, and to grip and transport objects with hands, using tactile sensors. Its vision system allowed it to measure distances and directions to objects using external receptors, artificial eyes and ears. And its conversation system allowed it to communicate with a person in Japanese, with an artificial mouth.[71][72][73]

In the 1970s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared.[74] At the same time, the field of connectionism (or neural nets) was shut down almost completely for 10 years by Marvin Minsky's devastating criticism of perceptrons.[75] Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas.[76]

In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, "toys".[77] AI researchers had begun to run into several fundamental limits that could not be overcome in the 1970s. Although some of these limits would be conquered in later decades, others still stymie the field to this day.[78]

The agencies which funded AI research (such as the British government, DARPA and NRC) became frustrated with the lack of progress and eventually cut off almost all funding for undirected research into AI. The pattern began as early as 1966 when the ALPAC report appeared criticizing machine translation efforts. After spending 20 million dollars, the NRC ended all support.[86] In 1973, the Lighthill report on the state of AI research in England criticized the utter failure of AI to achieve its "grandiose objectives" and led to the dismantling of AI research in that country.[87] (The report specifically mentioned the combinatorial explosion problem as a reason for AI's failings.)[88] DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of three million dollars.[89] By 1974, funding for AI projects was hard to find.

Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues. "Many researchers were caught up in a web of increasing exaggeration."[90] However, there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund "mission-oriented direct research, rather than basic undirected research". Funding for the creative, freewheeling exploration that had gone on in the 60s would not come from DARPA. Instead, the money was directed at specific projects with clear objectives, such as autonomous tanks and battle management systems.[91]

Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel's incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could.[92] Hubert Dreyfus ridiculed the broken promises of the 1960s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little "symbol processing" and a great deal of embodied, instinctive, unconscious "know how".[93][94] John Searle's Chinese Room argument, presented in 1980, attempted to show that a program could not be said to "understand" the symbols that it uses (a quality called "intentionality"). If the symbols have no meaning for the machine, Searle argued, then the machine can not be described as "thinking".[95]

These critiques were not taken seriously by AI researchers, often because they seemed so far off the point. Problems like intractability and commonsense knowledge seemed much more immediate and serious. It was unclear what difference "know how" or "intentionality" made to an actual computer program. Minsky said of Dreyfus and Searle "they misunderstand, and should be ignored."[96] Dreyfus, who taught at MIT, was given a cold shoulder: he later said that AI researchers "dared not be seen having lunch with me."[97] Joseph Weizenbaum, the author of ELIZA, felt his colleagues' treatment of Dreyfus was unprofessional and childish. Although he was an outspoken critic of Dreyfus' positions, he "deliberately made it plain that theirs was not the way to treat a human being."[98]

Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote a "computer program which can conduct psychotherapeutic dialogue" based on ELIZA.[99] Weizenbaum was disturbed that Colby saw a mindless program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published Computer Power and Human Reason which argued that the misuse of artificial intelligence has the potential to devalue human life.[100]

A perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he was optimistic about their power, predicting that "the perceptron may eventually be able to learn, make decisions, and translate languages." An active research program into the paradigm was carried out throughout the 1960s but came to a sudden halt with the publication of Minsky and Papert's 1969 book Perceptrons. It suggested that there were severe limitations to what perceptrons could do and that Frank Rosenblatt's predictions had been grossly exaggerated. The effect of the book was devastating: virtually no research at all was done in connectionism for 10 years. Eventually, a new generation of researchers would revive the field and thereafter it would become a vital and useful part of artificial intelligence. Rosenblatt would not live to see this, as he died in a boating accident shortly after the book was published.[75]
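
The perceptron itself is only a few lines of code. The sketch below is an illustrative reconstruction of Rosenblatt's idea (not his implementation): weighted inputs compared against a threshold, trained by nudging the weights toward each target. It learns linearly separable functions such as AND; as Minsky and Papert stressed, no single perceptron can learn XOR.

    # A single perceptron with the classic error-correction learning rule.

    def train_perceptron(samples, epochs=20, lr=0.1):
        w, b = [0.0, 0.0], 0.0
        for _ in range(epochs):
            for (x1, x2), target in samples:
                out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
                err = target - out           # 0 when the output is correct
                w[0] += lr * err * x1        # nudge weights toward the target
                w[1] += lr * err * x2
                b += lr * err
        return w, b

    AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b = train_perceptron(AND)
    for (x1, x2), _ in AND:
        print(x1, x2, "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)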

Logic was introduced into AI research as early as 1958, by John McCarthy in his Advice Taker proposal.[101] In 1963, J. Alan Robinson discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 1960s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[102] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to the collaboration with French researchers Alain Colmerauer and Philippe Roussel, who created the successful logic programming language Prolog.[103] Prolog uses a subset of logic (Horn clauses, closely related to "rules" and "production rules") that permits tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum's expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[104]
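
Horn clauses make rule-based deduction tractable because each rule simply says "if every condition in the body holds, the head holds." The sketch below illustrates that idea with forward chaining over propositional facts; it is our toy example in Python, not Prolog (which works backward from a query), and the rules are hypothetical.

    # Forward chaining over Horn clauses: fire rules until nothing new appears.
    rules = [
        ({"parent", "male"}, "father"),      # parent AND male -> father
        ({"father"}, "ancestor"),            # father -> ancestor
    ]

    def forward_chain(facts, rules):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                if body <= facts and head not in facts:
                    facts.add(head)          # the rule fires
                    changed = True
        return facts

    print(forward_chain({"parent", "male"}, rules))
    # -> {'parent', 'male', 'father', 'ancestor'}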

Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[105] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems, not machines that think as people do.[106]

Among the critics of McCarthy's approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like "story understanding" and "object recognition" that required a machine to think like a person. In order to use ordinary concepts like "chair" or "restaurant" they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that "using precise language to describe essentially imprecise concepts doesn't make them any more precise."[107] Schank described their "anti-logic" approaches as "scruffy", as opposed to the "neat" paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[108]

In 1975, in a seminal paper, Minsky noted that many of his fellow "scruffy" researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be "logical", but these structured sets of assumptions are part of the context of everything we say and think. He called these structures "frames". Schank used a version of frames he called "scripts" to successfully answer questions about short stories in English.[109] Many years later object-oriented programming would adopt the essential idea of "inheritance" from AI research on frames.
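
The inheritance idea is easy to see in code. The sketch below is our illustration of a frame system (hypothetical slots, not Minsky's notation): a frame carries default assumptions, and a more specific frame inherits them while overriding the ones that do not apply.

    # Frames with default slots and inheritance along a parent link.
    class Frame:
        def __init__(self, name, parent=None, **slots):
            self.name, self.parent, self.slots = name, parent, slots

        def get(self, slot):
            """Look up a slot, falling back to the parent frame's default."""
            if slot in self.slots:
                return self.slots[slot]
            return self.parent.get(slot) if self.parent else None

    bird = Frame("bird", flies=True, eats="worms")
    penguin = Frame("penguin", parent=bird, flies=False)  # override a default

    print(penguin.get("flies"))  # -> False (overridden)
    print(penguin.get("eats"))   # -> worms (inherited default)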

In the 1980s a form of AI program called "expert systems" was adopted by corporations around the world and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.

An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach.[110]

Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point.[111]
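
The architecture is simple enough to sketch. The toy below is our illustration of the expert-system style (not MYCIN or any real shell): IF-THEN rules over a narrow domain, evaluated by backward chaining from a goal, with base facts asked of the user the way early diagnostic systems asked their questions. The rules shown are hypothetical.

    # A toy rule-based expert system with backward chaining.
    RULES = {
        "infection": ["fever", "high_white_cell_count"],
        "prescribe_antibiotics": ["infection", "bacterial"],
    }

    def prove(goal, known):
        """Establish `goal` from the rules, asking the user for base facts."""
        if goal in known:
            return known[goal]
        if goal in RULES:                    # a rule concludes this goal
            known[goal] = all(prove(cond, known) for cond in RULES[goal])
        else:                                # a base fact: ask the user
            known[goal] = input(f"Is '{goal}' true? (y/n) ").startswith("y")
        return known[goal]

    if prove("prescribe_antibiotics", {}):
        print("Recommendation: prescribe antibiotics.")
    else:
        print("No recommendation.")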

In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986.[112] Corporations around the world began to develop and deploy expert systems and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments. An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.[113]

The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 70s. "AI researchers were beginning to suspect – reluctantly, for it violated the scientific canon of parsimony – that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways,"[114] writes Pamela McCorduck. "[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay".[115] Knowledge based systems and knowledge engineering became a major focus of AI research in the 1980s.[116]

The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started and led the project, argued that there is no shortcut: the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand. The project was not expected to be completed for many decades.[117]

Chess playing programs HiTech and Deep Thought defeated chess masters in 1989. Both were developed by Carnegie Mellon University; Deep Thought development paved the way for Deep Blue.[118]

In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings.[119] Much to the chagrin of scruffies, they chose Prolog as the primary computer language for the project.[120]

Other countries responded with new programs of their own. The UK began the £350 million Alvey project. A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or "MCC") to fund large scale projects in AI and information technology.[121][122] DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988.[123]

In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a "Hopfield net") could learn and process information in a completely new way. Around the same time, David Rumelhart popularized a new method for training neural networks called "backpropagation" (discovered years earlier by Paul Werbos). These two discoveries revived the field of connectionism which had been largely abandoned since 1970.[122][124]
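
The sketch below shows the Hopfield idea in miniature (our illustration, with a made-up six-unit pattern): Hebbian weights store a pattern of +1/-1 units, and repeated threshold updates pull a corrupted input back to the stored memory.

    # A tiny Hopfield network: store one pattern, recall it from a noisy copy.

    def train(patterns):
        n = len(patterns[0])
        w = [[0.0] * n for _ in range(n)]
        for p in patterns:                   # Hebbian rule: w[i][j] += p_i * p_j
            for i in range(n):
                for j in range(n):
                    if i != j:
                        w[i][j] += p[i] * p[j]
        return w

    def recall(w, state, steps=5):
        for _ in range(steps):               # synchronous threshold updates
            state = [1 if sum(w[i][j] * state[j] for j in range(len(state))) >= 0
                     else -1 for i in range(len(state))]
        return state

    stored = [1, -1, 1, -1, 1, -1]
    noisy = [1, -1, -1, -1, 1, -1]           # one unit flipped
    print(recall(train([stored]), noisy))    # -> [1, -1, 1, -1, 1, -1]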

The new field was unified and inspired by the appearance of Parallel Distributed Processing in 1986, a two-volume collection of papers edited by Rumelhart and psychologist James McClelland. Neural networks would become commercially successful in the 1990s, when they began to be used as the engines driving programs like optical character recognition and speech recognition.[122][125]

The business community's fascination with AI rose and fell in the 1980s in the classic pattern of an economic bubble. The collapse was in the perception of AI by government agencies and investors; the field continued to make advances despite the criticism. Rodney Brooks and Hans Moravec, researchers from the related field of robotics, argued for an entirely new approach to artificial intelligence.

The term "AI winter" was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow.[126] Their fears were well founded: in the late 1980s and early 1990s, AI suffered a series of financial setbacks.

The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight.[127]

Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts.[128]

In the late 1980s, the Strategic Computing Initiative cut funding to AI "deeply and brutally." New leadership at DARPA had decided that AI was not "the next wave" and directed funds towards projects that seemed more likely to produce immediate results.[129]

By 1991, the impressive list of goals penned in 1981 for Japan's Fifth Generation Project had not been met. Indeed, some of them, like "carry on a casual conversation" had not been met by 2010.[130] As with other AI projects, expectations had run much higher than what was actually possible.[130]

In the late 1980s, several researchers advocated a completely new approach to artificial intelligence, based on robotics.[131] They believed that, to show real intelligence, a machine needs to have a body: it needs to perceive, move, survive and deal with the world. They argued that these sensorimotor skills are essential to higher level skills like commonsense reasoning and that abstract reasoning was actually the least interesting or important human skill (see Moravec's paradox). They advocated building intelligence "from the bottom up."[132]

The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 1970s from a successful background in theoretical neuroscience to lead the group studying vision. He rejected all symbolic approaches (both McCarthy's logic and Minsky's frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. (Marr's work would be cut short by leukemia in 1980.)[133]

In a 1990 paper, "Elephants Don't Play Chess,"[134] robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since "the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough."[135] In the 1980s and 1990s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.[136]

The field of AI, now more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine. Inside the field there was little agreement on the reasons for AI's failure to fulfill the dream of human level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of "artificial intelligence".[137] AI was both more cautious and more successful than it had ever been.

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[138] The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second. The event was broadcast live over the internet and received over 74 million hits.[139]

In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[140] Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while avoiding traffic hazards and adhering to all traffic laws.[141] In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[142]

These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous power of computers today.[143] In fact, Deep Blue's computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951.[144] This dramatic increase is measured by Moore's law, which predicts that the speed and memory capacity of computers doubles every two years. The fundamental problem of "raw computer power" was slowly being overcome.

A new paradigm called "intelligent agents" became widely accepted during the 1990s.[145] Although earlier researchers had proposed modular "divide and conquer" approaches to AI,[146] the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell, Leslie P. Kaelbling, and others brought concepts from decision theory and economics into the study of AI.[147] When the economist's definition of a rational agent was married to computer science's definition of an object or module, the intelligent agent paradigm was complete.

An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. By this definition, simple programs that solve specific problems are "intelligent agents", as are human beings and organizations of human beings, such as firms. The intelligent agent paradigm defines AI research as "the study of intelligent agents". This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence.[148]
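
A minimal sketch makes the abstraction concrete (our toy example, not from the text): the agent is just a mapping from percepts to actions, and the environment loop closes the circle.

    # An intelligent agent in miniature: a thermostat that perceives a
    # temperature and acts to keep it near a target.

    class ThermostatAgent:
        def __init__(self, target=20.0):
            self.target = target

        def act(self, percept):
            if percept < self.target - 1:
                return "heat_on"
            if percept > self.target + 1:
                return "heat_off"
            return "do_nothing"

    temperature, agent = 15.0, ThermostatAgent()
    effects = {"heat_on": 2.0, "heat_off": -2.0, "do_nothing": 0.0}
    for _ in range(5):                       # the agent-environment loop
        action = agent.act(temperature)      # perceive, then act
        temperature += effects[action]
        print(f"{temperature:.1f} after {action}")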

The paradigm gave researchers license to study isolated problems and find solutions that were both verifiable and useful. It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory. It was hoped that a complete agent architecture (like Newell's SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.[147][149]

AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past.[150] There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, economics or operations research. The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable; AI had become a more rigorous "scientific" discipline. Russell & Norvig (2003) describe this as nothing less than a "revolution" and "the victory of the neats".[151][152]

Judea Pearl's highly influential 1988 book[153] brought probability and decision theory into AI. Among the many new tools in use were Bayesian networks, hidden Markov models, information theory, stochastic modeling and classical optimization. Precise mathematical descriptions were also developed for "computational intelligence" paradigms like neural networks and evolutionary algorithms.[151]
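
The flavor of this probabilistic turn fits in a few lines. The sketch below applies Bayes' rule to a hypothetical diagnostic test (the numbers are ours, chosen only for illustration): a prior belief is updated into a posterior once evidence arrives, which is the elementary computation that Bayesian networks organize at scale.

    # Bayes' rule: P(disease | positive test) from a prior and test accuracy.
    p_disease = 0.01                # prior: 1% prevalence (hypothetical)
    p_pos_given_disease = 0.90      # sensitivity
    p_pos_given_healthy = 0.05      # false-positive rate

    p_pos = (p_pos_given_disease * p_disease
             + p_pos_given_healthy * (1 - p_disease))
    posterior = p_pos_given_disease * p_disease / p_pos

    print(f"P(disease | positive test) = {posterior:.3f}")  # ~0.154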

Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems[154] and their solutions proved to be useful throughout the technology industry,[155] such as data mining, industrial robotics, logistics,[156] speech recognition,[157] banking software,[158] medical diagnosis[158] and Google's search engine.[159]

The field of AI received little or no credit for these successes in the 1990s and early 2000s. Many of AI's greatest innovations have been reduced to the status of just another item in the tool chest of computer science.[160] Nick Bostrom explains "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[161]

Many researchers in AI in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In part, this may be because they considered their field to be fundamentally different from AI, but also because the new names helped to procure funding. In the commercial world at least, the failed promises of the AI Winter continued to haunt AI research into the 2000s, as the New York Times reported in 2005: "Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."[162][163][164]

In 1968, Arthur C. Clarke and Stanley Kubrick had imagined that by the year 2001, a machine would exist with an intelligence that matched or exceeded the capability of human beings. The character they created, HAL 9000, was based on a belief shared by many leading AI researchers that such a machine would exist by the year 2001.[165]

In 2001, AI founder Marvin Minsky asked "So the question is why didn't we get HAL in 2001?"[166] Minsky believed that the answer is that the central problems, like commonsense reasoning, were being neglected, while most researchers pursued things like commercial applications of neural nets or genetic algorithms. John McCarthy, on the other hand, still blamed the qualification problem.[167] For Ray Kurzweil, the issue is computer power and, using Moore's Law, he predicted that machines with human-level intelligence will appear by 2029.[168] Jeff Hawkins argued that neural net research ignores the essential properties of the human cortex, preferring simple models that have been successful at solving simple problems.[169] There were many other explanations and for each there was a corresponding research program underway.

In the first decades of the 21st century, access to large amounts of data (known as "big data"), faster computers and advanced machine learning techniques were successfully applied to many problems throughout the economy. In fact, McKinsey Global Institute estimated in their famous paper "Big data: The next frontier for innovation, competition, and productivity" that "by 2009, nearly all sectors in the US economy had at least an average of 200 terabytes of stored data".

By 2016, the market for AI-related products, hardware, and software reached more than 8 billion dollars, and the New York Times reported that interest in AI had reached a "frenzy".[170] The applications of big data began to reach into other fields as well, such as training models in ecology[171] and for various applications in economics.[172] Advances in deep learning (particularly deep convolutional neural networks and recurrent neural networks) drove progress and research in image and video processing, text analysis, and even speech recognition.[173]

Deep learning is a branch of machine learning that models high level abstractions in data by using a deep graph with many processing layers.[173] According to the universal approximation theorem, depth is not necessary for a neural network to be able to approximate arbitrary continuous functions. Even so, there are many problems that are common to shallow networks (such as overfitting) that deep networks help avoid.[174] As such, deep neural networks are able to realistically generate much more complex models than their shallow counterparts.

However, deep learning has problems of its own. A common problem for recurrent neural networks is the vanishing gradient problem, in which the gradients passed between layers gradually shrink and effectively disappear as they are rounded off to zero. Many methods have been developed to address this problem, such as long short-term memory (LSTM) units.
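
The arithmetic behind the vanishing gradient is easy to demonstrate (a minimal sketch with a made-up per-step factor): backpropagating through many time steps multiplies the gradient by a factor at each step, and any factor below 1 drives it exponentially toward zero, so early steps receive almost no learning signal.

    # Vanishing gradients: repeated multiplication by a factor below 1.
    gradient, per_step_factor = 1.0, 0.5     # 0.5 is an illustrative value

    for step in range(1, 31):
        gradient *= per_step_factor          # one backprop step through time
        if step % 10 == 0:
            print(f"after {step} steps: gradient = {gradient:.2e}")
    # after 10 steps ~1e-3, after 30 steps ~1e-9: effectively zero signal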

State-of-the-art deep neural network architectures can sometimes even rival human accuracy in fields like computer vision, specifically on things like the MNIST database, and traffic sign recognition.[175]

Question-answering systems such as IBM's Watson can easily beat humans at answering general trivia questions, and recent developments in deep learning have produced astounding results in competing with humans, in games like Go and Doom (which, being a first-person shooter, has sparked some controversy).[176][177][178][179]

Big data refers to collections of data that cannot be captured, managed, and processed by conventional software tools within an acceptable time frame. Such masses of data demand new processing models in order to deliver stronger decision-making, insight, and process-optimization capabilities. In their book on the big data era, Viktor Mayer-Schönberger and Kenneth Cukier argue that big data means that instead of random analysis (sample surveys), all of the data is used for analysis. IBM proposed the 5V characteristics of big data: Volume, Velocity, Variety,[180] Value[181] and Veracity.[182] The strategic significance of big data technology is not to master huge amounts of data but to specialize in the data that is meaningful. In other words, if big data is likened to an industry, the key to profitability in that industry is to increase the processing capability of the data and realize its added value through processing.

Artificial intelligence is a branch of computer science that attempts to understand the essence of intelligence and to produce a new kind of intelligent machine that responds in a manner similar to human intelligence. Research in this area includes robotics, speech recognition, image recognition, natural language processing and expert systems. Since the birth of artificial intelligence, the theory and technology have grown steadily more mature, and the fields of application have kept expanding. It is conceivable that the technological products brought by artificial intelligence in the future will be "containers" of human wisdom. Artificial intelligence can simulate the information processes of human consciousness and thinking. Artificial intelligence is not human intelligence, but it can think like a human and may eventually exceed human intelligence. Artificial general intelligence is also referred to as "strong AI",[183] "full AI"[184] or as the ability of a machine to perform "general intelligent action".[3] Academic sources reserve "strong AI" to refer to machines capable of experiencing consciousness.


Continue reading here:

History of artificial intelligence - Wikipedia