
Fourth Amendment | Signal 108

Posted: August 17, 2015 at 1:45 pm

The article below is reproduced from The Federal Law Enforcement Informer, August 2015 issue. The Informer is published by the Department of Homeland Security, Federal Law Enforcement Training Center (FLETC), Office of Chief Counsel, Legal Training Division. The entire document, which contains case notes on notable federal cases, can be found here.

REASONABLENESS AND POST-RILEY SMARTPHONE SEARCHES

Robert Duncan, Esq.

Attorney Advisor and Senior Instructor

Office of Chief Counsel

Federal Law Enforcement Training Centers

Artesia, New Mexico

Reasonableness as Touchstone

The Fourth Amendment protects "[t]he right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures"1 and in so doing, put "the courts of the United States and Federal officials, in the exercise of their power and authority, under limitations and restraints [and] forever secure[d] the people, their persons, houses, papers, and effects, against all unreasonable searches and seizures under the guise of law."2 With the remainder of the Fourth Amendment prohibiting the issuance of warrants without probable cause, "supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized,"3 officers may view the law governing search and seizure as largely evidentiary or procedural, but the underlying command of the Fourth Amendment is always that searches and seizures be reasonable.4

The Supreme Court has clearly defined searches and seizures. A search occurs when "an expectation of privacy that society is prepared to consider reasonable is infringed," while a "seizure of property occurs when there is some meaningful interference with an individual's possessory interests in that property."5 The Supreme Court has held that "the touchstone of the Fourth Amendment is reasonableness,"6 but there is no talisman that determines in all cases those privacy expectations that society is prepared to accept as reasonable.7

Determining Reasonableness

Determining whether a search is reasonable under the Fourth Amendment usually involves looking to the traditional protections against unreasonable searches and seizures afforded by the common law at the time of the [Fourth Amendment's] framing8 or by assessing, "on the one hand, the degree to which it intrudes upon an individual's privacy and, on the other, the degree to which it is needed for the promotion of legitimate governmental interests."9

As neither a warrant nor probable cause is an indispensable component of reasonableness,10 the Supreme Court has determined that "[w]here a search is undertaken by law enforcement officials to discover evidence of criminal wrongdoing . . . reasonableness generally requires the obtaining of a judicial warrant."11 In the absence of a warrant, drawn by a neutral and detached magistrate "instead of being judged by the officer engaged in the often competitive enterprise of ferreting out crime,"12 a search is reasonable only if it falls within a specific exception to the warrant requirement,13 even if the warrantless search violates a person's reasonable expectation of privacy.14

The Supreme Court recognizes few "specifically established and well-delineated exceptions"15 to the warrant requirement. Those exceptions include the plain view doctrine,16 which allows an officer to seize evidence and contraband found in plain view during a lawful observation without a warrant;17 the Terry stop and Terry frisk, which grant authority "to permit a reasonable search for weapons for the protection of the police officer, where he has reason to believe that he is dealing with an armed and dangerous individual";18 certain limited searches incident to lawful arrest;19 and searches involving exigent circumstances.20

A party alleging an unconstitutional search must establish both a subjective and an objective expectation of privacy.21 The Supreme Court has held the subjective component requires that a person exhibit an actual expectation of privacy, while the objective component requires that the privacy expectation be one that society is prepared to recognize as reasonable.22

A smartphone user's expectation of privacy is viewed objectively and must be justifiable under the circumstances.23 With the advent of social media and smartphones, people can post a photo or video from their phones, allowing them to share their lives instantly.24 Until 2014, one could make a colorable argument that it is unreasonable to have an expectation of privacy when one records and instantly shares life events on a smartphone; if there is no violation of a person's reasonable expectation of privacy by police or government agents, then there is no Fourth Amendment search.25 Despite the prevalence of sharing, users also routinely use passwords, thumbprint scans, or other mechanisms to prevent unwanted viewing of the device's contents. Using these features demonstrates an intention to keep a device's contents private; the remaining question is whether the privacy expectation created by using a password is one that society is prepared to recognize as reasonable.

In early 2014, the Pew Research Center conducted a study that found more than 90 percent of Americans now own or regularly use a cellphone, and 58 percent have a more sophisticated smartphone.26 Even though society may share some data with others, society accepts that privacy expectations are reasonable for data stored on a smartphone itself and protected by passwords. In a digital age, all of our papers and effects [are no longer] stored solely in "satchels, briefcases, cabinets, and folders" [but] rather "stored digitally on hard drives, flash drives, memory cards, and discs."27 Even the Supreme Court, an institution that does not enjoy a tech-savvy reputation, has agreed that papers and effects have given way to smartphones and selfies.28

Riley v. California

The Supreme Court extended reasonable expectations of privacy to smartphone data in Riley v. California, 134 S. Ct. 2473, 2485, 189 L. Ed. 2d 430 (2014). Riley, which involved two separate arrests and searches of smartphones by police officers, demonstrates the inverse relationship between smartphone technology and the reasonableness of smartphone searches. In one of the cases, officers attempted to search a phone as part of a Terry frisk.

As to the Terry frisk exception, the Court held that "digital data stored on a cell phone cannot itself be used as a weapon to harm an arresting officer or to effectuate the arrestee's escape," thus significantly limiting the use of this exception for reasonable searches of smartphones.29 The Court also noted that smartphones "place vast quantities of personal information literally in the hands of individuals [and a] search of the information on a cell phone bears little resemblance to the type of brief physical search" considered in previous cases involving searches incident to lawful arrest.30

As to one of the remaining exceptions, exigent circumstances encompass a broad array of factors considered by the courts: the gravity or violent nature of the offense with which the suspect is to be charged; a reasonable belief that the suspect is armed; probable cause to believe the suspect committed the crime; strong reason to believe the suspect is in the premises being entered; the likelihood that a delay could cause the escape of the suspect or the destruction of essential Fourth Amendment evidence; and the safety of the officers or the public jeopardized by delay.31

The destruction of evidence factor was often cited in court cases from the mid-1990s through the late 2000s:

"On a cell phone, the telephone numbers stored in the memory can be erased as a result of incoming phone calls and the deletion of text messages could be as soon as midnight the next day. . . . [O]nce the cell phone powers down evidence can be lost. [A popular cell phone, the Motorola Razer] has an option called message clean up that wipes away text messages between 1 and 99 days. There is no way to determine by looking at the Razer cell phone's screen if the message clean-up option has been activated. If the one-day message clean up is chosen, any messages stored on the Razer cell phone will be deleted at midnight on the following day it is received.

Accordingly, this Court finds that exigent circumstances existed and the text messages retrieved from the Razer cell phones are admissible."32

As smartphone technology has developed, however, the Supreme Court views exigent circumstances with increasing skepticism. In 2014, "the technology used in the most basic of phones was unheard of ten years ago"33 and "the current top-selling smart phone has a standard capacity of 16 gigabytes (and is available with up to 64 gigabytes). Sixteen gigabytes translates to millions of pages of text, thousands of pictures, or hundreds of videos."34

Advances in technology also mean that officers can prevent destruction of data by disconnecting a phone from the network: "First, law enforcement officers can turn the phone off or remove its battery. Second, if they are concerned about encryption or other potential problems, they can leave a phone powered on and place it in a [Faraday] enclosure that isolates the phone from radio waves."35 With these precautions in place, "there is no longer any risk that the arrestee himself will be able to delete incriminating data from the phone."36

Seek Warrant, Avoid Suppression of Evidence

With the Supreme Court's holding in Riley, trial courts will likely suppress smartphone evidence obtained without a search warrant or without factual information establishing that an exception to the warrant requirement existed at the time of the search. Fortunately, officers can find model search warrant templates at the nearest Regional Computer Forensics Laboratory (RCFL) site and seek assistance from the Federal Bureau of Investigation (FBI). While other avenues exist for cell phone investigations, the RCFL and FBI are especially good resources because almost every FBI Field Office or Resident Agency has a Cell Phone Investigative Kiosk (CPIK) available for use.

According to the FBI, the CPIK allows users to "extract data from a cell phone, put it into a report, and burn the report to a CD or DVD in as little as 30 minutes."37 Full-size kiosks are physically located in nearly all FBI Field Offices and RCFLs; portable kiosks are available at many FBI Resident Agencies. Drafting a search warrant and using the CPIK may help ensure that valuable information obtained from a smartphone is admissible and helps win convictions in a criminal case post-Riley.

1. U.S. CONST. AMEND. IV. 2. Mapp v. Ohio, 367 U.S. 643, 647, 81 S. Ct. 1684, 1687, 6 L. Ed. 2d 1081 (1961), citing Weeks v. United States, 232 U.S. 383, 391, 34 S. Ct. 341, 344, 58 L. Ed. 652 (1914). 3. U.S. CONST. AMEND. IV. 4. New Jersey v. T.L.O., 469 U.S. 325, 337, 105 S. Ct. 733, 740, 83 L. Ed. 2d 720 (1985). 5. United States v. Jacobsen, 466 U.S. 109, 113, 104 S. Ct. 1652, 1656, 80 L. Ed. 2d 85 (1984). 6. See United States v. Knights, 534 U.S. 112, 112-13, 122 S. Ct. 587, 588, 151 L. Ed. 2d 497 (2001). 7. O'Connor v. Ortega, 480 U.S. 709, 715, 107 S. Ct. 1492, 1496, 94 L. Ed. 2d 714 (1987). 8. California v. Hodari D., 499 U.S. 621, 624, 111 S. Ct. 1547, 1549-50, 113 L. Ed. 2d 690 (1991); see, e.g., United States v. Watson, 423 U.S. 411, 418-420, 96 S. Ct. 820, 825-26, 46 L. Ed. 2d 598 (1976); Carroll v. United States, 267 U.S. 132, 149, 45 S. Ct. 280, 283-84, 69 L. Ed. 543 (1925). 9. Wyoming v. Houghton, 526 U.S. 295, 300, 119 S. Ct. 1297, 1300, 143 L. Ed. 2d 408 (1999).

10. Nat'l Treasury Employees Union v. Von Raab, 489 U.S. 656, 665, 109 S. Ct. 1384, 1390, 103 L. Ed. 2d 685 (1989). 11. Vernonia School Dist. 47J v. Acton, 515 U.S. 646, 653, 115 S. Ct. 2386, 132 L. Ed. 2d 564 (1995). 12. Johnson v. United States, 333 U.S. 10, 14, 68 S. Ct. 367, 92 L. Ed. 436 (1948). 13. See Kentucky v. King, 563 U.S. ___, ___, 131 S. Ct. 1849, 1856-1857, 179 L. Ed. 2d 865 (2011). 14. See Illinois v. Rodriguez, 497 U.S. 177, 185, 110 S. Ct. 2793, 2799, 111 L. Ed. 2d 148 (1990). 15. Katz v. United States, 389 U.S. 347, 357, 88 S. Ct. 507, 514, 19 L. Ed. 2d 576 (1967). 16. Smartphones usually have an automatic lock or passcode which prevents casual observation by law enforcement officers, making this exception of limited use in the field.

17. See Horton v. California, 496 U.S. 128, 128, 110 S. Ct. 2301, 2303, 110 L. Ed. 2d 112 (1990). 18. See Terry v. Ohio, 392 U.S. 1, 27, 88 S. Ct. 1868, 1883, 20 L. Ed. 2d 889 (1968).



Chris Christie, Rand Paul and the Fourth Amendment | Fox News

Posted: at 1:45 pm

The dust-up between New Jersey Gov. Chris Christie and Kentucky Sen. Rand Paul over presidential fidelity to the Constitution -- particularly the Fourth Amendment -- was the most illuminating two minutes of the Republican debate last week.

It is a well-regarded historical truism that the Fourth Amendment was written by victims of government snooping, the 1770s version. The Framers wrote it to assure that the new federal government could never do to Americans what the king had done to the colonists.

What did the king do? He dispatched British agents and soldiers into the colonists' homes and businesses ostensibly looking for proof of payment of the king's taxes and armed with general warrants issued by a secret court in London.

A general warrant did not name the person or place that was the target of the warrant, nor did it require the government to show any suspicion or evidence in order to obtain it. The government merely told the secret court it needed the warrant -- the standard was governmental need -- and the court issued it. General warrants authorized the bearer to search wherever he wished and to seize whatever he found.

The Fourth Amendment requires the government to present to a judge evidence of wrongdoing on the part of a specific target of the warrant, and it requires that the warrant specifically describe the place to be searched or the person or thing to be seized. The whole purpose of the Fourth Amendment is to protect the right to be left alone -- privacy -- by preventing general warrants.

The evidence of wrongdoing that the government must present in order to persuade a judge to sign a warrant must constitute probable cause. Probable cause is a level of evidence sufficient to induce a neutral judge to conclude that it is more likely than not that the government will find what it is looking for in the place it wants to search, and that what it is looking for will be evidence of criminal behavior.

But the government has given itself the power to cut constitutional corners. The Foreign Intelligence Surveillance Act, the Patriot Act and the Freedom Act totally disregard the Fourth Amendment by dispensing with the probable cause requirement and substituting instead -- incredibly -- the old British governmental need standard.

Hence, under any of the above federal laws, none of which is constitutional, the NSA can read whatever emails, listen to whatever phone calls in real time, and capture whatever text messages, monthly bank statements, credit card bills, legal or medical records it wishes merely by telling a secret court in Washington, D.C., that it needs them.

And the government gets this data by area codes or zip codes, or by telecom or computer server customer lists, not by naming a person or place about whom or which it is suspicious.

These federal acts not only violate the Fourth Amendment, they not only bring back a system the Founders and the Framers hated, rejected and fought a war to be rid of, they not only are contrary to the letter and spirit of the Constitution, but they produce information overload by getting all the data they can about everyone. Stated differently, under the present search-them-all regime, the bad guys can get through because the feds have more data than they can analyze, thus diluting their ability to focus on the bad guys.

Among the current presidential candidates, only Paul has expressed an understanding of this and has advocated for fidelity to the Constitution. He wants the government to follow the Fourth Amendment it has sworn to uphold. He is not against all spying, just against spying on all of us. He wants the feds to get a warrant based on probable cause before spying on anyone, because that's what the Constitution requires. The remaining presidential candidates -- the Republicans and Hillary Clinton -- prefer the unconstitutional governmental need standard, as does President Obama.

But Christie advocated an approach more radical than the president's when he argued with Paul during the debate last week. He actually said that in order to acquire probable cause, the feds need to listen to everyone's phone calls and read everyone's emails first. He effectively argued that the feds need to break into a house first to see what evidence they can find there so as to present that evidence to a judge and get a search warrant to enter the house.

Such a circuitous argument would have made Joe Stalin happy, but it flunks American Criminal Procedure 101. It is the job of law enforcement to acquire probable cause without violating the Fourth Amendment. The whole purpose of the probable cause standard is to force the government to focus on people it suspects of wrongdoing and leave the rest of us alone. Christie wants the feds to use a fish net. Paul argues that the Constitution requires the feds to use a fish hook.

Christie rejects the plain meaning of the Constitution, as well as the arguments of the Framers, and he ignores the lessons of history. The idea that the government must break the law in order to enforce it or violate the Constitution in order to preserve it is the stuff of tyrannies, not free people.

Andrew P. Napolitano, a former judge of the Superior Court of New Jersey, is the senior judicial analyst at Fox News Channel.


Mars Colony: Challenger Trainer, Cheats for PC

Posted: at 1:44 pm


How the Bitcoin protocol actually works | DDI

Posted: August 16, 2015 at 9:43 am

Many thousands of articles have been written purporting to explain Bitcoin, the online, peer-to-peer currency. Most of those articles give a hand-wavy account of the underlying cryptographic protocol, omitting many details. Even those articles which delve deeper often gloss over crucial points. My aim in this post is to explain the major ideas behind the Bitcoin protocol in a clear, easily comprehensible way. We'll start from first principles, build up to a broad theoretical understanding of how the protocol works, and then dig down into the nitty-gritty, examining the raw data in a Bitcoin transaction.

Understanding the protocol in this detailed way is hard work. It is tempting instead to take Bitcoin as given, and to engage in speculation about how to get rich with Bitcoin, whether Bitcoin is a bubble, whether Bitcoin might one day mean the end of taxation, and so on. That's fun, but severely limits your understanding. Understanding the details of the Bitcoin protocol opens up otherwise inaccessible vistas. In particular, it's the basis for understanding Bitcoin's built-in scripting language, which makes it possible to use Bitcoin to create new types of financial instruments, such as smart contracts. New financial instruments can, in turn, be used to create new markets and to enable new forms of collective human behaviour. Talk about fun!

I'll describe Bitcoin scripting and concepts such as smart contracts in future posts. This post concentrates on explaining the nuts-and-bolts of the Bitcoin protocol. To understand the post, you need to be comfortable with public key cryptography, and with the closely related idea of digital signatures. I'll also assume you're familiar with cryptographic hashing. None of this is especially difficult. The basic ideas can be taught in freshman university mathematics or computer science classes. The ideas are beautiful, so if you're not familiar with them, I recommend taking a few hours to get familiar.

It may seem surprising that Bitcoin's basis is cryptography. Isn't Bitcoin a currency, not a way of sending secret messages? In fact, the problems Bitcoin needs to solve are largely about securing transactions: making sure people can't steal from one another, or impersonate one another, and so on. In the world of atoms we achieve security with devices such as locks, safes, signatures, and bank vaults. In the world of bits we achieve this kind of security with cryptography. And that's why Bitcoin is at heart a cryptographic protocol.

My strategy in the post is to build Bitcoin up in stages. I'll begin by explaining a very simple digital currency, based on ideas that are almost obvious. We'll call that currency Infocoin, to distinguish it from Bitcoin. Of course, our first version of Infocoin will have many deficiencies, and so we'll go through several iterations of Infocoin, with each iteration introducing just one or two simple new ideas. After several such iterations, we'll arrive at the full Bitcoin protocol. We will have reinvented Bitcoin!

This strategy is slower than if I explained the entire Bitcoin protocol in one shot. But while you can understand the mechanics of Bitcoin through such a one-shot explanation, it would be difficult to understand why Bitcoin is designed the way it is. The advantage of the slower iterative explanation is that it gives us a much sharper understanding of each element of Bitcoin.

Finally, I should mention that I'm a relative newcomer to Bitcoin. I've been following it loosely since 2011 (and cryptocurrencies since the late 1990s), but only got seriously into the details of the Bitcoin protocol earlier this year. So I'd certainly appreciate corrections of any misapprehensions on my part. Also in the post I've included a number of "problems for the author": notes to myself about questions that came up during the writing. You may find these interesting, but you can also skip them entirely without losing track of the main text.

So how can we design a digital currency?

On the face of it, a digital currency sounds impossible. Suppose some person, let's call her Alice, has some digital money which she wants to spend. If Alice can use a string of bits as money, how can we prevent her from using the same bit string over and over, thus minting an infinite supply of money? Or, if we can somehow solve that problem, how can we prevent someone else forging such a string of bits, and using that to steal from Alice?

These are just two of the many problems that must be overcome in order to use information as money.

As a first version of Infocoin, let's find a way that Alice can use a string of bits as a (very primitive and incomplete) form of money, in a way that gives her at least some protection against forgery. Suppose Alice wants to give another person, Bob, an infocoin. To do this, Alice writes down the message "I, Alice, am giving Bob one infocoin." She then digitally signs the message using a private cryptographic key, and announces the signed string of bits to the entire world.

(By the way, I'm using capitalized "Infocoin" to refer to the protocol and general concept, and lowercase "infocoin" to refer to specific denominations of the currency. A similar usage is common, though not universal, in the Bitcoin world.)

This isn't terribly impressive as a prototype digital currency! But it does have some virtues. Anyone in the world (including Bob) can use Alice's public key to verify that Alice really was the person who signed the message "I, Alice, am giving Bob one infocoin." No one else could have created that bit string, and so Alice can't turn around and say "No, I didn't mean to give Bob an infocoin." So the protocol establishes that Alice truly intends to give Bob one infocoin. The same fact, that no one else could compose such a signed message, also gives Alice some limited protection from forgery. Of course, after Alice has published her message it's possible for other people to duplicate the message, so in that sense forgery is possible. But it's not possible from scratch. These two properties, establishment of intent on Alice's part and the limited protection from forgery, are genuinely notable features of this protocol.

I haven't (quite) said exactly what digital money is in this protocol. To make this explicit: it's just the message itself, i.e., the string of bits representing the digitally signed message "I, Alice, am giving Bob one infocoin." Later protocols will be similar, in that all our forms of digital money will be just more and more elaborate messages [1].
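To make the sign-and-verify step concrete, here is a toy sketch in Python. It uses textbook RSA with deliberately tiny parameters so the arithmetic is visible; this is purely illustrative (real systems use large keys, and Bitcoin itself uses ECDSA rather than RSA):

```python
import hashlib

# Toy RSA parameters. These tiny primes are for illustration only;
# real systems use 2048-bit keys, and Bitcoin uses ECDSA, not RSA.
p, q = 61, 53
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

def sign(message: str) -> int:
    """Alice signs the hash of her message with the private key d."""
    digest = int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: str, signature: int) -> bool:
    """Anyone can check the signature using only the public key (n, e)."""
    digest = int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % n
    return pow(signature, e, n) == digest

msg = "I, Alice, am giving Bob one infocoin."
sig = sign(msg)
print(verify(msg, sig))        # True: the message really came from Alice
print(verify(msg + "!", sig))  # False: tampering invalidates the signature
```

Here the "digital money" is exactly the pair (msg, sig): anyone can copy it, but no one without Alice's private exponent could have produced it from scratch.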

A problem with the first version of Infocoin is that Alice could keep sending Bob the same signed message over and over. Suppose Bob receives ten copies of the signed message "I, Alice, am giving Bob one infocoin." Does that mean Alice sent Bob ten different infocoins? Was her message accidentally duplicated? Perhaps she was trying to trick Bob into believing that she had given him ten different infocoins, when the message only proves to the world that she intends to transfer one infocoin.

What we'd like is a way of making infocoins unique. They need a label or serial number. Alice would sign the message "I, Alice, am giving Bob one infocoin, with serial number 8740348." Then, later, Alice could sign the message "I, Alice, am giving Bob one infocoin, with serial number 8770431," and Bob (and everyone else) would know that a different infocoin was being transferred.

To make this scheme work we need a trusted source of serial numbers for the infocoins. One way to create such a source is to introduce a bank. This bank would provide serial numbers for infocoins, keep track of who has which infocoins, and verify that transactions really are legitimate.

In more detail, let's suppose Alice goes into the bank, and says "I want to withdraw one infocoin from my account." The bank reduces her account balance by one infocoin, and assigns her a new, never-before-used serial number, let's say 1234567. Then, when Alice wants to transfer her infocoin to Bob, she signs the message "I, Alice, am giving Bob one infocoin, with serial number 1234567." But Bob doesn't just accept the infocoin. Instead, he contacts the bank, and verifies that: (a) the infocoin with that serial number belongs to Alice; and (b) Alice hasn't already spent the infocoin. If both those things are true, then Bob tells the bank he wants to accept the infocoin, and the bank updates their records to show that the infocoin with that serial number is now in Bob's possession, and no longer belongs to Alice.
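The bank's bookkeeping described above can be sketched in a few lines of Python. The class and method names are illustrative, not part of any real protocol:

```python
# A minimal model of the trusted bank: it issues fresh serial numbers,
# tracks who owns each infocoin, and rejects attempts to spend a coin
# the sender no longer owns.
class Bank:
    def __init__(self):
        self.next_serial = 1234567
        self.owner = {}                 # serial number -> current owner

    def withdraw(self, account: str) -> int:
        """Assign a new, never-before-used serial number to `account`."""
        serial = self.next_serial
        self.next_serial += 1
        self.owner[serial] = account
        return serial

    def transfer(self, serial: int, sender: str, recipient: str) -> bool:
        """Bob's two checks: the coin belongs to the sender, and is unspent."""
        if self.owner.get(serial) != sender:
            return False                # unknown serial, or already spent
        self.owner[serial] = recipient
        return True

bank = Bank()
coin = bank.withdraw("Alice")
print(bank.transfer(coin, "Alice", "Bob"))      # True: legitimate transfer
print(bank.transfer(coin, "Alice", "Charlie"))  # False: double spend refused
```

Note that the double spend is refused only because a single trusted party sees every transaction; that central record is exactly what the next step eliminates.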

This last solution looks pretty promising. However, it turns out that we can do something much more ambitious. We can eliminate the bank entirely from the protocol. This changes the nature of the currency considerably. It means that there is no longer any single organization in charge of the currency. And when you think about the enormous power a central bank has, control over the money supply, that's a pretty huge change.

The idea is to make it so everyone (collectively) is the bank. In particular, we'll assume that everyone using Infocoin keeps a complete record of which infocoins belong to which person. You can think of this as a shared public ledger showing all Infocoin transactions. We'll call this ledger the block chain, since that's what the complete record will be called in Bitcoin, once we get to it.

Now, suppose Alice wants to transfer an infocoin to Bob. She signs the message "I, Alice, am giving Bob one infocoin, with serial number 1234567," and gives the signed message to Bob. Bob can use his copy of the block chain to check that, indeed, the infocoin is Alice's to give. If that checks out then he broadcasts both Alice's message and his acceptance of the transaction to the entire network, and everyone updates their copy of the block chain.

We still have the "where do serial numbers come from" problem, but that turns out to be pretty easy to solve, and so I will defer it to later, in the discussion of Bitcoin. A more challenging problem is that this protocol allows Alice to cheat by double spending her infocoin. She sends the signed message "I, Alice, am giving Bob one infocoin, with serial number 1234567" to Bob, and the message "I, Alice, am giving Charlie one infocoin, with [the same] serial number 1234567" to Charlie. Both Bob and Charlie use their copy of the block chain to verify that the infocoin is Alice's to spend. Provided they do this verification at nearly the same time (before they've had a chance to hear from one another), both will find that, yes, the block chain shows the coin belongs to Alice. And so they will both accept the transaction, and also broadcast their acceptance of the transaction. Now there's a problem. How should other people update their block chains? There may be no easy way to achieve a consistent shared ledger of transactions. And even if everyone can agree on a consistent way to update their block chains, there is still the problem that either Bob or Charlie will be cheated.
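The race is easy to model: Bob and Charlie each consult only their own, not-yet-synchronized copy of the ledger, so both checks pass. A minimal illustration in Python (representing a ledger as a dict of serial number to owner is an assumption made here for clarity):

```python
# Alice's double spend: Bob and Charlie each check their own copy of
# the block chain before either acceptance has propagated.
bob_chain = {1234567: "Alice"}
charlie_chain = {1234567: "Alice"}     # identical copy, not yet updated

# Both sanity checks pass, so both accept, and one of them gets cheated.
bob_ok = bob_chain.get(1234567) == "Alice"
charlie_ok = charlie_chain.get(1234567) == "Alice"
print(bob_ok, charlie_ok)  # True True
```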

At first glance double spending seems difficult for Alice to pull off. After all, if Alice sends the message first to Bob, then Bob can verify the message, and tell everyone else in the network (including Charlie) to update their block chain. Once that has happened, Charlie would no longer be fooled by Alice. So there is most likely only a brief period of time in which Alice can double spend. However, it's obviously undesirable to have any such period of time. Worse, there are techniques Alice could use to make that period longer. She could, for example, use network traffic analysis to find times when Bob and Charlie are likely to have a lot of latency in communication. Or perhaps she could do something to deliberately disrupt their communications. If she can slow communication even a little, that makes her task of double spending much easier.

How can we address the problem of double spending? The obvious solution is that when Alice sends Bob an infocoin, Bob shouldn't try to verify the transaction alone. Rather, he should broadcast the possible transaction to the entire network of Infocoin users, and ask them to help determine whether the transaction is legitimate. If they collectively decide that the transaction is okay, then Bob can accept the infocoin, and everyone will update their block chain. This type of protocol can help prevent double spending, since if Alice tries to spend her infocoin with both Bob and Charlie, other people on the network will notice, and network users will tell both Bob and Charlie that there is a problem with the transaction, and the transaction shouldn't go through.

In more detail, let's suppose Alice wants to give Bob an infocoin. As before, she signs the message "I, Alice, am giving Bob one infocoin, with serial number 1234567", and gives the signed message to Bob. Also as before, Bob does a sanity check, using his copy of the block chain to check that, indeed, the coin currently belongs to Alice. But at that point the protocol is modified. Bob doesn't just go ahead and accept the transaction. Instead, he broadcasts Alice's message to the entire network. Other members of the network check to see whether Alice owns that infocoin. If so, they broadcast the message "Yes, Alice owns infocoin 1234567, it can now be transferred to Bob." Once enough people have broadcast that message, everyone updates their block chain to show that infocoin 1234567 now belongs to Bob, and the transaction is complete.

This protocol has many imprecise elements at present. For instance, what does it mean to say "once enough people have broadcast that message"? What exactly does "enough" mean here? It can't mean everyone in the network, since we don't a priori know who is on the Infocoin network. For the same reason, it can't mean some fixed fraction of users in the network. We won't try to make these ideas precise right now. Instead, in the next section I'll point out a serious problem with the approach as described. Fixing that problem will at the same time have the pleasant side effect of making the ideas above much more precise.

Suppose Alice wants to double spend in the network-based protocol I just described. She could do this by taking over the Infocoin network. Let's suppose she uses an automated system to set up a large number of separate identities, let's say a billion, on the Infocoin network. As before, she tries to double spend the same infocoin with both Bob and Charlie. But when Bob and Charlie ask the network to validate their respective transactions, Alice's sock puppet identities swamp the network, announcing to Bob that they've validated his transaction, and to Charlie that they've validated his transaction, possibly fooling one or both into accepting the transaction.

There's a clever way of avoiding this problem, using an idea known as proof-of-work. The idea is counterintuitive and involves a combination of two ideas: (1) to (artificially) make it computationally costly for network users to validate transactions; and (2) to reward them for trying to help validate transactions. The reward is used so that people on the network will try to help validate transactions, even though that's now been made a computationally costly process. The benefit of making it costly to validate transactions is that validation can no longer be influenced by the number of network identities someone controls, but only by the total computational power they can bring to bear on validation. As we'll see, with some clever design we can make it so a cheater would need enormous computational resources to cheat, making it impractical.

That's the gist of proof-of-work. But to really understand proof-of-work, we need to go through the details.

Suppose Alice broadcasts to the network the news that "I, Alice, am giving Bob one infocoin, with serial number 1234567."

As other people on the network hear that message, each adds it to a queue of pending transactions that they've been told about, but which haven't yet been approved by the network. For instance, another network user named David might have the following queue of pending transactions:

"I, Tom, am giving Sue one infocoin, with serial number 1201174."

"I, Sydney, am giving Cynthia one infocoin, with serial number 1295618."

"I, Alice, am giving Bob one infocoin, with serial number 1234567."

David checks his copy of the block chain, and can see that each transaction is valid. He would like to help out by broadcasting news of that validity to the entire network.

However, before doing that, as part of the validation protocol David is required to solve a hard computational puzzle: the proof-of-work. Without the solution to that puzzle, the rest of the network won't accept his validation of the transaction.

What puzzle does David need to solve? To explain that, let h be a fixed hash function known by everyone in the network; it's built into the protocol. Bitcoin uses the well-known SHA-256 hash function, but any cryptographically secure hash function will do. Let's give David's queue of pending transactions a label, l, just so it's got a name we can refer to. Suppose David appends a number x (called the nonce) to l and hashes the combination h(l + x). For example, if we use "Hello, world!" as l (obviously this is not a list of transactions, just a string used for illustrative purposes) and the nonce x = 0, then we hash the string "Hello, world!0" (the output, like all hash outputs below, is in hexadecimal).

The puzzle David has to solve, the proof-of-work, is to find a nonce x such that when we append x to l and hash the combination h(l + x), the output hash begins with a long run of zeroes. The puzzle can be made more or less difficult by varying the number of zeroes required to solve the puzzle. A relatively simple proof-of-work puzzle might require just three or four zeroes at the start of the hash, while a more difficult proof-of-work puzzle might require a much longer run of zeroes, say 15 consecutive zeroes. In either case, the above attempt to find a suitable nonce, with x = 0, is a failure, since the output doesn't begin with any zeroes at all. Trying x = 1 doesn't work either.

We can keep trying different values for the nonce, x. Eventually we find a value of x for which the output hash begins with four zeroes.

This nonce gives us a string of four zeroes at the beginning of the output of the hash. This will be enough to solve a simple proof-of-work puzzle, but not enough to solve a more difficult proof-of-work puzzle.
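This trial-and-error search is easy to express in code. Here's a minimal sketch in Python, using SHA-256 as the hash function; the "Hello, world!" string and the four-zero difficulty echo the illustration above, and are not anything from the real Bitcoin protocol:

```python
import hashlib

def proof_of_work(transactions: str, difficulty: int) -> int:
    """Find a nonce x such that sha256(transactions + x) begins with
    `difficulty` leading zeroes (in hexadecimal)."""
    target = "0" * difficulty
    x = 0
    while True:
        digest = hashlib.sha256((transactions + str(x)).encode()).hexdigest()
        if digest.startswith(target):
            return x
        x += 1

x = proof_of_work("Hello, world!", 4)
digest = hashlib.sha256(("Hello, world!" + str(x)).encode()).hexdigest()
print(x, digest)  # the winning nonce, and a hash beginning "0000"
```

Raising `difficulty` by one multiplies the expected number of trials by 16; that is exactly the knob for making the puzzle harder or easier.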

What makes this puzzle hard to solve is the fact that the output from a cryptographic hash function behaves like a random number: change the input even a tiny bit and the output from the hash function changes completely, in a way that's hard to predict. So if we want the output hash value to begin with 10 zeroes, say, then David will need, on average, to try roughly 16^10 (about 10^12) different values for the nonce before he finds a suitable one. That's a pretty challenging task, requiring lots of computational power.

Obviously, its possible to make this puzzle more or less difficult to solve by requiring more or fewer zeroes in the output from the hash function. In fact, the Bitcoin protocol gets quite a fine level of control over the difficulty of the puzzle, by using a slight variation on the proof-of-work puzzle described above. Instead of requiring leading zeroes, the Bitcoin proof-of-work puzzle requires the hash of a blocks header to be lower than or equal to a number known as the target. This target is automatically adjusted to ensure that a Bitcoin block takes, on average, about ten minutes to validate.

(In practice there is sizeable randomness in how long it takes to validate a block: sometimes a new block is validated in just a minute or two, other times it may take 20 minutes or even longer. It's straightforward to modify the Bitcoin protocol so that the time to validation is much more sharply peaked around ten minutes. Instead of solving a single puzzle, we can require that multiple puzzles be solved; with some careful design it is possible to considerably reduce the variance in the time to validate a block of transactions.)

Alright, let's suppose David is lucky and finds a suitable nonce, x. Celebration! (He'll be rewarded for finding the nonce, as described below.) He broadcasts the block of transactions he's approving to the network, together with the value for x. Other participants in the Infocoin network can verify that x is a valid solution to the proof-of-work puzzle. And they then update their block chains to include the new block of transactions.

For the proof-of-work idea to have any chance of succeeding, network users need an incentive to help validate transactions. Without such an incentive, they have no reason to expend valuable computational power, merely to help validate other people's transactions. And if network users are not willing to expend that power, then the whole system won't work. The solution to this problem is to reward people who help validate transactions. In particular, suppose we reward whoever successfully validates a block of transactions by crediting them with some infocoins. Provided the infocoin reward is large enough, that will give them an incentive to participate in validation.

In the Bitcoin protocol, this validation process is called mining. For each block of transactions validated, the successful miner receives a bitcoin reward. Initially, this was set to be a 50 bitcoin reward. But for every 210,000 validated blocks (roughly, once every four years) the reward halves. This has happened just once, to date, and so the current reward for mining a block is 25 bitcoins. This halving in the rate will continue every four years until the year 2140 CE. At that point, the reward for mining will drop below 10^-8 bitcoins per block. 10^-8 bitcoins is actually the minimal unit of Bitcoin, and is known as a satoshi. So in 2140 CE the total supply of bitcoins will cease to increase. However, that won't eliminate the incentive to help validate transactions. Bitcoin also makes it possible to set aside some currency in a transaction as a transaction fee, which goes to the miner who helps validate it. In the early days of Bitcoin transaction fees were mostly set to zero, but as Bitcoin has gained in popularity, transaction fees have gradually risen, and are now a substantial additional incentive on top of the 25 bitcoin reward for mining a block.
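The reward schedule just described is simple arithmetic, and can be sketched as follows. Amounts are in satoshis (10^-8 bitcoins); the function name is mine, not part of any Bitcoin library:

```python
def block_reward_satoshis(height: int) -> int:
    """Mining reward at a given block height, in satoshis.

    Starts at 50 bitcoins and halves every 210,000 blocks."""
    halvings = height // 210_000
    if halvings >= 64:
        return 0  # by this point the reward has rounded down to zero
    return (50 * 100_000_000) >> halvings  # integer halving

print(block_reward_satoshis(0))        # 5000000000 (50 bitcoins)
print(block_reward_satoshis(210_000))  # 2500000000 (25 bitcoins)
```

Summing 210,000 blocks at each reward level shows why the total supply approaches, but never exceeds, 21 million bitcoins.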

You can think of proof-of-work as a competition to approve transactions. Each entry in the competition costs a little bit of computing power. A miner's chance of winning the competition is (roughly, and with some caveats) equal to the proportion of the total computing power that they control. So, for instance, if a miner controls one percent of the computing power being used to validate Bitcoin transactions, then they have roughly a one percent chance of winning the competition. So provided a lot of computing power is being brought to bear on the competition, a dishonest miner is likely to have only a relatively small chance to corrupt the validation process, unless they expend a huge amount of computing resources.
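That proportionality is easy to check with a toy simulation: for each block, draw a winner with probability proportional to hash power. The shares and names below are made up for illustration:

```python
import random

def simulate_mining(shares, blocks, seed=0):
    """Count how many blocks each miner wins, with win probability
    proportional to that miner's share of total hash power."""
    rng = random.Random(seed)  # fixed seed, so this toy run is reproducible
    miners = list(shares)
    weights = [shares[m] for m in miners]
    wins = {m: 0 for m in miners}
    for _ in range(blocks):
        winner = rng.choices(miners, weights=weights)[0]
        wins[winner] += 1
    return wins

wins = simulate_mining({"small miner": 0.01, "rest of network": 0.99}, blocks=20_000)
print(wins["small miner"] / 20_000)  # close to the miner's 1% share
```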

Of course, while it's encouraging that a dishonest party has only a relatively small chance to corrupt the block chain, that's not enough to give us confidence in the currency. In particular, we haven't yet conclusively addressed the issue of double spending.

I'll analyse double spending shortly. Before doing that, I want to fill in an important detail in the description of Infocoin. We'd ideally like the Infocoin network to agree upon the order in which transactions have occurred. If we don't have such an ordering, then at any given moment it may not be clear who owns which infocoins. To help do this, we'll require that new blocks always include a pointer to the last block validated in the chain, in addition to the list of transactions in the block. (The pointer is actually just a hash of the previous block.) So typically the block chain is just a linear chain of blocks of transactions, one after the other, with later blocks each containing a pointer to the immediately prior block:
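The linking can be sketched directly: each block's pointer is the hash of its predecessor, so rewriting any earlier block changes every pointer after it. This is a simplification of the real block format, just to show why the pointer matters:

```python
import hashlib

def block_hash(prev_hash: str, transactions: str) -> str:
    """Hash a simplified block: the previous block's hash plus its transactions."""
    return hashlib.sha256((prev_hash + transactions).encode()).hexdigest()

# A tiny three-block chain: each hash commits to everything before it.
h0 = block_hash("0" * 64, "genesis transactions")
h1 = block_hash(h0, "I, Tom, am giving Sue one infocoin...")
h2 = block_hash(h1, "I, Alice, am giving Bob one infocoin...")

# Rewriting an old block changes its hash, so the pointer stored in the
# next block no longer matches: the tampering is detectable.
tampered = block_hash(h0, "I, Tom, am giving Eve one infocoin...")
print(tampered != h1)  # True
```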

Occasionally, a fork will appear in the block chain. This can happen, for instance, if by chance two miners happen to validate a block of transactions near-simultaneously: both broadcast their newly-validated block out to the network, and some people update their block chain one way, and others update their block chain the other way:

This causes exactly the problem we're trying to avoid: it's no longer clear in what order transactions have occurred, and it may not be clear who owns which infocoins. Fortunately, there's a simple idea that can be used to remove any forks. The rule is this: if a fork occurs, people on the network keep track of both forks. But at any given time, miners only work to extend whichever fork is longest in their copy of the block chain.

Suppose, for example, that we have a fork in which some miners receive block A first, and some miners receive block B first. Those miners who receive block A first will continue mining along that fork, while the others will mine along fork B. Let's suppose that the miners working on fork B are the next to successfully mine a block:

After they receive news that this has happened, the miners working on fork A will notice that fork B is now longer, and will switch to working on that fork. Presto, in short order work on fork A will cease, and everyone will be working on the same linear chain, and block A can be ignored. Of course, any still-pending transactions in A will still be pending in the queues of the miners working on fork B, and so all transactions will eventually be validated.

Likewise, it may be that the miners working on fork A are the first to extend their fork. In that case work on fork B will quickly cease, and again we have a single linear chain.

No matter what the outcome, this process ensures that the block chain has an agreed-upon time ordering of the blocks. In Bitcoin proper, a transaction is not considered confirmed until: (1) it is part of a block in the longest fork, and (2) at least 5 blocks follow it in the longest fork. In this case we say that the transaction has 6 confirmations. This gives the network time to come to agreement about the ordering of the blocks. We'll also use this strategy for Infocoin.
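The two rules just described (extend the longest fork; wait for 6 confirmations) can be sketched like this, using a toy list-of-blocks representation rather than Bitcoin's real data structures:

```python
def best_fork(forks):
    """The fork miners should extend: the longest one they know about."""
    return max(forks, key=len)

def confirmations(chain, block_index):
    """The block itself plus every block that follows it in the chain."""
    return len(chain) - block_index

fork_a = ["genesis", "A"]
fork_b = ["genesis", "B", "B+1"]

print(best_fork([fork_a, fork_b]))                # fork_b, since it is longer
print(confirmations(fork_b, block_index=1))       # 2: block B plus one successor
print(confirmations(fork_b, block_index=1) >= 6)  # False: not yet confirmed
```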

With the time-ordering now understood, let's return to think about what happens if a dishonest party tries to double spend. Suppose Alice tries to double spend with Bob and Charlie. One possible approach is for her to try to validate a block that includes both transactions. Assuming she has one percent of the computing power, she will occasionally get lucky and validate the block by solving the proof-of-work. Unfortunately for Alice, the double spending will be immediately spotted by other people in the Infocoin network and rejected, despite solving the proof-of-work problem. So that's not something we need to worry about.

A more serious problem occurs if she broadcasts two separate transactions in which she spends the same infocoin with Bob and Charlie, respectively. She might, for example, broadcast one transaction to a subset of the miners, and the other transaction to another set of miners, hoping to get both transactions validated in this way. Fortunately, in this case, as we've seen, the network will eventually confirm one of these transactions, but not both. So, for instance, Bob's transaction might ultimately be confirmed, in which case Bob can go ahead confidently. Meanwhile, Charlie will see that his transaction has not been confirmed, and so will decline Alice's offer. So this isn't a problem either. In fact, knowing that this will be the case, there is little reason for Alice to try this in the first place.

An important variant on double spending is if Alice = Bob, i.e., Alice tries to spend a coin with Charlie which she is also spending with herself (i.e., giving back to herself). This sounds like it ought to be easy to detect and deal with, but, of course, it's easy on a network to set up multiple identities associated with the same person or organization, so this possibility needs to be considered. In this case, Alice's strategy is to wait until Charlie accepts the infocoin, which happens after the transaction has been confirmed 6 times in the longest chain. She will then attempt to fork the chain before the transaction with Charlie, adding a block which includes a transaction in which she pays herself:

Unfortunately for Alice, it's now very difficult for her to catch up with the longer fork. Other miners won't want to help her out, since they'll be working on the longer fork. And unless Alice is able to solve the proof-of-work at least as fast as everyone else in the network combined (roughly, that means controlling more than fifty percent of the computing power), she will just keep falling further and further behind. Of course, she might get lucky. We can, for example, imagine a scenario in which Alice controls one percent of the computing power, but happens to get lucky and finds six extra blocks in a row, before the rest of the network has found any extra blocks. In this case, she might be able to get ahead, and get control of the block chain. But this particular event will occur with probability (1/100)^6 = 10^-12. A more general analysis along these lines shows that Alice's probability of ever catching up is infinitesimal, unless she is able to solve proof-of-work puzzles at a rate approaching all other miners combined.
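That more general analysis exists: the original Bitcoin paper models the race as a random walk and derives the attacker's probability of ever catching up from z blocks behind, given a fraction q of the total hash power. Here is that calculation transcribed into Python (the function name is mine):

```python
import math

def catch_up_probability(q: float, z: int) -> float:
    """Probability an attacker with fraction q of the hash power ever
    catches up from z blocks behind (formula from the Bitcoin paper)."""
    p = 1.0 - q
    lam = z * (q / p)  # expected attacker progress while the honest chain gains z
    prob = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam**k / math.factorial(k)
        prob -= poisson * (1.0 - (q / p) ** (z - k))
    return prob

print(catch_up_probability(0.01, 6))  # infinitesimal, as claimed above
print(catch_up_probability(0.30, 6))  # much less comfortable
```

At q = 0.5 the probability reaches 1: an attacker matching the rest of the network combined can always catch up eventually.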

Of course, this is not a rigorous security analysis showing that Alice cannot double spend. It's merely an informal plausibility argument. The original paper introducing Bitcoin did not, in fact, contain a rigorous security analysis, only informal arguments along the lines I've presented here. The security community is still analysing Bitcoin, and trying to understand possible vulnerabilities. You can see some of this research listed here, and I mention a few related problems in the Problems for the author below. At this point I think it's fair to say that the jury is still out on how secure Bitcoin is.

The proof-of-work and mining ideas give rise to many questions. How much reward is enough to persuade people to mine? How does the change in supply of infocoins affect the Infocoin economy? Will Infocoin mining end up concentrated in the hands of a few, or many? If it's just a few, doesn't that endanger the security of the system? Presumably transaction fees will eventually equilibrate; won't this introduce an unwanted source of friction, and make small transactions less desirable? These are all great questions, but beyond the scope of this post. I may come back to the questions (in the context of Bitcoin) in a future post. For now, we'll stick to our focus on understanding how the Bitcoin protocol works.

Let's move away from Infocoin, and describe the actual Bitcoin protocol. There are a few new ideas here, but with one exception (discussed below) they're mostly obvious modifications to Infocoin.

To use Bitcoin in practice, you first install a wallet program on your computer. To give you a sense of what that means, here's a screenshot of a wallet called MultiBit. You can see the Bitcoin balance on the left (0.06555555 Bitcoins, or about 70 dollars at the exchange rate on the day I took this screenshot) and on the right two recent transactions, which deposited those 0.06555555 Bitcoins:

Suppose you're a merchant who has set up an online store, and you've decided to allow people to pay using Bitcoin. What you do is tell your wallet program to generate a Bitcoin address. In response, it will generate a public / private key pair, and then hash the public key to form your Bitcoin address:
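The hashing step can be sketched as follows. Note this is a simplification: a real Bitcoin address is produced by hashing the public key with SHA-256 and then RIPEMD-160, and encoding the result (plus a version byte and checksum) in Base58; the sketch below uses a single SHA-256, and the public key bytes are made up:

```python
import hashlib

def toy_address(public_key: bytes) -> str:
    """Simplified 'Bitcoin address': a hash of the public key.

    Real Bitcoin uses RIPEMD160(SHA256(pubkey)) plus Base58Check encoding."""
    return hashlib.sha256(public_key).hexdigest()

fake_public_key = bytes.fromhex("04b2d0a1")  # made-up bytes, illustration only
print(toy_address(fake_public_key))
```

As the text says, even the public key itself would be safe to publish; the address is just a compact fingerprint of it.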

You then send your Bitcoin address to the person who wants to buy from you. You could do this in email, or even put the address up publicly on a webpage. This is safe, since the address is merely a hash of your public key, which can safely be known by the world anyway. (I'll return later to the question of why the Bitcoin address is a hash, and not just the public key.)

The person who is going to pay you then generates a transaction. Let's take a look at the data from an actual transaction transferring bitcoins. What's shown below is very nearly the raw data. It's changed in three ways: (1) the data has been deserialized; (2) line numbers have been added, for ease of reference; and (3) I've abbreviated various hashes and public keys, just putting in the first six hexadecimal digits of each, when in reality they are much longer. Here's the data:

Let's go through this, line by line.

Line 1 contains the hash of the remainder of the transaction, 7c4025..., expressed in hexadecimal. This is used as an identifier for the transaction.

Line 2 tells us that this is a transaction in version 1 of the Bitcoin protocol.

Lines 3 and 4 tell us that the transaction has one input and one output, respectively. I'll talk below about transactions with more inputs and outputs, and why that's useful.

Line 5 contains the value for lock_time, which can be used to control when a transaction is finalized. For most Bitcoin transactions being carried out today the lock_time is set to 0, which means the transaction is finalized immediately.

Line 6 tells us the size (in bytes) of the transaction. Note that it's not the monetary amount being transferred! That comes later.

Lines 7 through 11 define the input to the transaction. In particular, lines 8 through 10 tell us that the input is to be taken from the output of an earlier transaction, with the given hash, which is expressed in hexadecimal as 2007ae.... The n=0 tells us it's to be the first output from that transaction; we'll see soon how multiple outputs (and inputs) from a transaction work, so don't worry too much about this for now. Line 11 contains the signature of the person sending the money, 304502..., followed by a space, and then the corresponding public key, 04b2d.... Again, these are both in hexadecimal.

One thing to note about the input is that there's nothing explicitly specifying how many bitcoins from the previous transaction should be spent in this transaction. In fact, all the bitcoins from the n=0th output of the previous transaction are spent. So, for example, if the n=0th output of the earlier transaction was 2 bitcoins, then 2 bitcoins will be spent in this transaction. This seems like an inconvenient restriction, like trying to buy bread with a 20 dollar note, and not being able to break the note down. The solution, of course, is to have a mechanism for providing change. This can be done using transactions with multiple inputs and outputs, which we'll discuss in the next section.
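The change mechanism amounts to adding a second output that pays the leftover back to an address the spender controls. Here's a toy sketch; the address strings are made up, the 2-bitcoin input and 0.319 payment echo the examples in the text, and real transactions also leave a gap between inputs and outputs as the miner's fee:

```python
def spend_with_change(input_value, amount, payee_address, change_address, fee=0.0):
    """Spend an entire previous output, returning the excess as change."""
    change = input_value - amount - fee
    if change < 0:
        raise ValueError("input is too small to cover the payment")
    outputs = [(payee_address, amount)]
    if change > 0:
        outputs.append((change_address, change))  # pay the remainder to ourselves
    return outputs

# Pay 0.319 bitcoins out of a 2-bitcoin input; the rest comes back as change.
print(spend_with_change(2.0, 0.319, "a7db6f...", "my-change-address"))
```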

Lines 12 through 14 define the output from the transaction. In particular, line 13 tells us the value of the output, 0.319 bitcoins. Line 14 is somewhat complicated. The main thing to note is that the string a7db6f... is the Bitcoin address of the intended recipient of the funds (written in hexadecimal). In fact, line 14 is actually an expression in Bitcoin's scripting language. I'm not going to describe that language in detail in this post; the important thing to take away now is just that a7db6f... is the Bitcoin address.

You can now see, by the way, how Bitcoin addresses the question I swept under the rug in the last section: where do Bitcoin serial numbers come from? In fact, the role of the serial number is played by transaction hashes. In the transaction above, for example, the recipient is receiving 0.319 Bitcoins, which come out of the first output of an earlier transaction with hash 2007ae... (line 9). If you go and look in the block chain for that transaction, you'd see that its output comes from a still earlier transaction. And so on.

There are two clever things about using transaction hashes instead of serial numbers. First, in Bitcoin there aren't really any separate, persistent coins at all, just a long series of transactions in the block chain. It's a clever idea to realize that you don't need persistent coins, and can just get by with a ledger of transactions. Second, by operating in this way we remove the need for any central authority issuing serial numbers. Instead, the serial numbers can be self-generated, merely by hashing the transaction.
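The self-generated serial number is just the transaction's own hash. In Bitcoin the identifier is actually the SHA-256 hash of the SHA-256 hash of the serialized transaction, conventionally displayed in reverse byte order. A sketch, with made-up serialized bytes:

```python
import hashlib

def txid(serialized_tx: bytes) -> str:
    """Bitcoin-style transaction identifier: double SHA-256, byte-reversed."""
    digest = hashlib.sha256(hashlib.sha256(serialized_tx).digest()).digest()
    return digest[::-1].hex()

fake_tx = b"made-up serialized transaction bytes"  # illustration only
print(txid(fake_tx))  # a 64-hex-digit identifier, unique to these bytes
```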

In fact, it's possible to keep following the chain of transactions further back in history. Ultimately, this process must terminate. This can happen in one of two ways. The first possibility is that you'll arrive at the very first Bitcoin transaction, contained in the so-called Genesis block. This is a special transaction, having no inputs, but a 50 Bitcoin output. In other words, this transaction establishes an initial money supply. The Genesis block is treated separately by Bitcoin clients, and I won't get into the details here, although it's along similar lines to the transaction above. You can see the deserialized raw data here, and read about the Genesis block here.

The second possibility when you follow a chain of transactions back in time is that eventually you'll arrive at a so-called coinbase transaction. With the exception of the Genesis block, every block of transactions in the block chain starts with a special coinbase transaction. This is the transaction rewarding the miner who validated that block of transactions. It uses a similar but not identical format to the transaction above. I won't go through the format in detail, but if you want to see an example, see here. You can read a little more about coinbase transactions here.

Something I haven't been precise about above is what exactly is being signed by the digital signature in line 11. The obvious thing to do is for the payer to sign the whole transaction (apart from the transaction hash, which, of course, must be generated later). Currently, this is not what is done: some pieces of the transaction are omitted. This makes some pieces of the transaction malleable, i.e., they can be changed later. However, this malleability does not include the amounts being paid out, or the senders and recipients, which can't be changed later. I must admit I haven't dug down into the details here. I gather that this malleability is under discussion in the Bitcoin developer community, and there are efforts afoot to reduce or eliminate this malleability.

In the last section I described how a transaction with a single input and a single output works. In practice, it's often extremely convenient to create Bitcoin transactions with multiple inputs or multiple outputs. I'll talk below about why this can be useful. But first let's take a look at the data from an actual transaction:

Let's go through the data, line by line. It's very similar to the single-input-single-output transaction, so I'll do this pretty quickly.

Line 1 contains the hash of the remainder of the transaction. This is used as an identifier for the transaction.

Line 2 tells us that this is a transaction in version 1 of the Bitcoin protocol.

Lines 3 and 4 tell us that the transaction has three inputs and two outputs, respectively.

Line 5 contains the lock_time. As in the single-input-single-output case this is set to 0, which means the transaction is finalized immediately.

Line 6 tells us the size of the transaction in bytes.

Lines 7 through 19 define a list of the inputs to the transaction. Each corresponds to an output from a previous Bitcoin transaction.

The first input is defined in lines 8 through 11.

In particular, lines 8 through 10 tell us that the input is to be taken from the n=0th output from the transaction with hash 3beabc.... Line 11 contains the signature, followed by a space, and then the public key of the person sending the bitcoins.

Lines 12 through 15 define the second input, with a similar format to lines 8 through 11. And lines 16 through 19 define the third input.

Lines 20 through 24 define a list containing the two outputs from the transaction.

The first output is defined in lines 21 and 22. Line 21 tells us the value of the output, 0.01068000 bitcoins. As before, line 22 is an expression in Bitcoin's scripting language. The main thing to take away here is that the string e8c30622... is the Bitcoin address of the intended recipient of the funds.

The second output is defined in lines 23 and 24, with a similar format to the first output.

See the original post:
How the Bitcoin protocol actually works | DDI


Artificial Intelligence Robots Transhumanism Cyborgs 2015 …

Posted: August 15, 2015 at 5:42 pm

May 2015 breaking News End Times News Update Rise of Artificial Intelligence Robots Transhumanism Cyborgs Breaking News May 4 2015 PART2 http://www.cbsnews.com/news/carnegie-...

The Rise of Artificial Intelligence End Times news Update PART1 https://www.youtube.com/watch?v=qhBfl...

Breaking News New USA Warfare NO PILOT flying Fighter jet F16 Jet Flies Unmanned April 2015 https://www.youtube.com/watch?v=vNYqG...

Trans humanism Elon Musk Tesla CEO says artificial intelligence demons https://www.youtube.com/watch?v=tAO9-...

UFO's Aliens Cyborgs Trans humanism deception Demons Fallen Angels Postmodernism Emerging Emergent Church Mysticism ISLAM religion mythology Black Magic https://www.youtube.com/watch?v=aovOj...

View of Earth from Satellite orbiting the Earth April 2015 https://www.youtube.com/watch?v=jcrjo...

Solar Eclipse March 20 2015 & 3rd Blood Moon April 4 2015 Breaking News March 2015 https://www.youtube.com/watch?v=5E6dD...

What is going to happen in Israel and the world in 2015? https://www.youtube.com/watch?v=MtluJ...

Not if but when Armageddon Final Hour Last Days News Prophecy https://www.youtube.com/watch?v=rivbe...

Bible Prophecy wars leading to Armageddon last days https://www.youtube.com/watch?v=4jH2B...

October 8 2014 Breaking News 2nd lunar eclipse of four 4 Blood Moons https://www.youtube.com/watch?v=TO5Zm...

Eclipse Lunar April 4th 2015 Third (3rd) Blood Moon Tetrad Watch Live https://www.youtube.com/watch?v=gIUnM...

Breaking News April 4th 2015 Eclipse Full Lunar April 4th 2015 Third (3rd) Blood Moon Breaking News Tetrad Lunar Eclipse Third Blood Moon Bible Prophecy 3rd Blood Moon April 4th 2015 - Part2 https://www.youtube.com/watch?v=sFL_U...

Breaking News Blood moon Lunar Eclipse April 4th 2015 Bible Prophecy 4 Blood moons - Part1 https://www.youtube.com/watch?v=lwXlE...

Breaking news Solar Eclipse March 20 2015 & Sept 2015 Solar Eclipse March 20 2015 http://www.youtube.com/watch?v=ZroQb2...

Breaking news Bowe Bergdahl being Charged with Desertion Faces Life in Prison April 2015 https://www.youtube.com/watch?v=yv2dm...

Breaking News USA Woman Arrested on Suspicion of Trying to Join ISIS April 2015 https://www.youtube.com/watch?v=uku41...

Russia China USA Germany France UK Iran Nuclear Agreement NOW WHAT? https://www.youtube.com/watch?v=mKiUt...

Breaking News World Chaos the Hour is at hand brink of World War 3 April 2015 https://www.youtube.com/watch?v=BXBXb...

Breaking News North Korea Nuclear Threat says ready and willing Fire Missile Any Time April 2015 https://www.youtube.com/watch?v=m5u2L...

Breaking News USA deploying 3,000 troops & 750 tanks for 3 month drills in Baltics over Russian Aggression April 2015 https://www.youtube.com/watch?v=L7BWI...

April 2015 Terrorist group Al Shabaab massacre of Christians at university in Kenya Breaking news https://www.youtube.com/watch?v=-eOB1...

Russia Foreign Minister Sergey Lavrov Iran Nuclear deal reached on all key aspects Breaking News April 2015 https://www.youtube.com/watch?v=urqS9...

Obama Looking for Legacy in Iran Nuclear Deal as Iran Supreme leader&Iranians chant death to America https://www.youtube.com/watch?v=972-k...

Obama shuns Israel so USA John Boehner goes to Israel shows USA unwavering Support Breaking News https://www.youtube.com/watch?v=HmsbM...

Obama Foreign Policy Falures Yemen Ukraine Syria Iraq ETC. Breaking News April 2015 https://www.youtube.com/watch?v=I4NsD...

Breaking News Russia a Nuclear threat? Putin reminders Russia a nuclear power https://www.youtube.com/watch?v=WYRIw...

Surfing Video MXPX https://www.youtube.com/watch?v=_ZSoC...

Read more from the original source:
Artificial Intelligence Robots Transhumanism Cyborgs 2015 ...


Internet censorship in India – www.ketan.net

Posted: at 5:41 pm

INTERNET CENSORSHIP IN INDIA: IS IT NECESSARY AND DOES IT WORK?

SARAI-CSDS

Short Term Independent Fellowship for 2004.

Internet Censorship in India:

Is It Necessary and Does It Work?

Ketan Tanna

Mumbai

http://www.ketan.net

Mobile: 91-9821034500

Acknowledgments

I am grateful to my parents

Narottam (Bachubhai) Mulji Tanna and Kusum Tanna

as well as my friend Viraf Doctor

for their support and help.

Contents

1. Introduction: The curious case of http://www.hindunity.org and the role of the Mumbai police
2. Internet Censorship in India: Origins and blocking of Yahoo groups
3. Laws that govern Internet Censorship in India
4. Is Internet Censorship Necessary?
5. Does Internet Censorship work in India?
6. Internet Censorship: India vis-à-vis the world
7. Interviews
8. Conclusion

The rest is here:
Internet censorship in India - http://www.ketan.net

Posted in Censorship | Comments Off on Internet censorship in India – www.ketan.net

Ron Paul (finally) sends out a donor pitch for Rand – The …

Posted: at 5:41 pm

The headlines neatly tell the story. "Ron Paul's Passive-Aggressive Campaign Against Rand Paul." "Rand Paul Has a Daddy Issue." "Like Father, Like Son? Not Exactly." Sen. Rand Paul (R-Ky.) has endeavored so much to distinguish his "libertarian-ish" views from his father's "voluntarist" politics that any snark from the paterfamilias generates a story. He'll joke that he's still looking at who to endorse; it will be reported like Saturn devouring his offspring.

There will be no snark this weekend. As Rand Paul heads out of the country for a medical mission to Haiti, Ron Paul will make a print and e-mail pitch to donors. It is his first such email on Rand Paul's behalf since the April 7 start of his presidential bid.

"I know the media likes to play this little game where they pit us, or certain views, against each other," the elder Paul will write, according to excerpts provided by the younger Paul's campaign. "Don't fall for it. They're trying to manufacture story lines at liberty's expense. You've spent years seeing how the media treated me. They aren't my friends and they aren't yours."

In the e-mail, Ron Paul will say that the enemies of liberty "fear Rand more than any other candidate," and that "unlike other candidates, Rand isn't depending on Wall Street fat-cats and banksters who want more special treatment, bailouts and stimulus packages to bankroll his candidacy."

The "banksters" language is a mainstay of Ron Paul's own fundraising appeals, which roll out of his Campaign for Liberty as frequently as CDs used to roll out of Columbia House (R.I.P.). It can be read as a knock on, well, anyone else; the libertarian reader might think first of Sen. Ted Cruz (R-Tex.), whose fundraising has lapped Paul's with the help of hedge funds.

Cruz's campaign has already been trying to pull support from Paul, taking advantage of a polling slump that some libertarians blame -- ironically -- on the candidate's attempts to broaden his appeal. Ron Paul's letter addresses this directly.

"There is not one candidate who has run for president in my lifetime who can say they fully share my commitment to liberty, Austrian economics, small government, and following the Constitution, than my son, Rand Paul," writes Ron Paul.

Read the original:
Ron Paul (finally) sends out a donor pitch for Rand - The ...

Posted in Ron Paul | Comments Off on Ron Paul (finally) sends out a donor pitch for Rand – The …

Cryonics – RationalWiki

Posted: at 3:09 pm

Cryonics is the practice of freezing clinically dead people in liquid nitrogen with the hope of future reanimation. Presently-nonexistent sufficiently advanced nanotechnology or mind uploading are the favored methods envisioned for revival.

Scientists will admit that some sort of cryogenic preservation and revival does not provably violate known physics. But they stress that, in practical terms, freezing and reviving dead humans is so far off as to hardly be worth taking seriously; present cryonics practices are speculation at best, and quackery and pseudoscience at worst.

Nevertheless, cryonicists will accept considerable amounts of money right now for procedures based only on vague science fiction-level speculations, with no scientific evidence whatsoever that any of their present actions will help achieve their declared aims. They sincerely consider this an obviously sensible idea that one would have to be stupid not to sign up for.

Cryonics should not be confused with cryobiology (the study of living things and tissues at low temperatures), cryotherapy (the use of cold in medicine) or cryogenics (subjecting things to cold temperatures in general).

That is not dead which can eternal lie. And with strange aeons even death may die.

Cryonics enthusiasts will allow that a person is entirely dead when they reach "information-theoretic death," where the information that makes up their mind is beyond recovery.

The purpose of freezing the recently dead is to stop chemistry. This is intended to allow hypothetical future science and technology to recover the information in the frozen cells and repair them or otherwise reconstruct the person, or at least their mind. We have literally no idea how to do the revival now or how it might be done in the future, but cryonicists believe that scientific and technological progress will, if sustained for a sufficient time, advance to the point where the information can be recovered and the mind restarted, in a body (for those who see cryonics as a medical procedure) or a computer running an emulator (for the transhumanists).

Most of the problems with cryonics relate to the massive physical damage caused by the freezing process.

Robert Ettinger, a teacher of physics and mathematics, published The Prospect of Immortality in 1964. He then founded the Cryonics Institute and the related Immortalist Society. Ettinger was inspired by "The Jameson Satellite" by Neil R. Jones (Amazing Stories, July 1931).[1] Lots of science fiction fans and early transhumanists then seized upon the notion with tremendous enthusiasm.

Corpses were being frozen in liquid nitrogen by the early 1960s, though only for cosmetic preservation. The first person to be frozen with the aim of revival was James Bedford, frozen in early 1967. Bedford remains frozen (at Alcor) to this day.

New hope came with K. Eric Drexler's Engines of Creation, postulating nanobots as a mechanism for cell repair, in 1986. That Drexlerian nanobots are utterly impossible has not affected cryonics advocates' enthusiasm for them in the slightest, and they remain a standard proposed revival mechanism.[2]

A major advance in tissue preservation came in the late 1990s with vitrification, where chemicals are added to the tissue so as to allow it to freeze as a glass rather than as ice crystals. This all but eliminated ice crystal damage, at the cost of toxicity of the chemicals.

Upon his death in 2011, Ettinger himself was stored at the Cryonics Institute in Detroit, the 106th person to be stored there. In all, over 200 people have been "preserved" around the world as of 2011.[3] There are about 2000 living people presently signed up with Alcor or the Cryonics Institute; the cryonics subculture is very small for its cultural impact.

Whoo-hoo-hoo, look who knows so much. It just so happens that your friend here is only mostly dead. There's a big difference between mostly dead and all dead.

Cryonics for dead humans currently consists of a ritual that many find reminiscent of those performed by practitioners of the world's major religions:

As the Society for Cryobiology put it:

The Society does, however, take the position that cadaver freezing is not science. The knowledge necessary for the revival of whole mammals following freezing and for bringing the dead to life does not currently exist and can come only from conscientious and patient research in cryobiology, biology, chemistry, and medicine.

In the US, cryonics is legally considered an extremely elaborate form of burial,[4] and cannot be performed on someone who has not been declared medically dead. You are declared dead and your fellow cryonicists swoop in to preserve you as quickly as possible.

The body, or just the head, is given large doses of anti-clotting drugs, as well as being infused with cryoprotectant chemicals to allow vitrification. It is then frozen by being put into a bath of liquid nitrogen at -196°C. At this temperature chemical reactions all but stop.
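
As a rough illustration of why chemistry "all but stops" at liquid-nitrogen temperature, the Arrhenius equation gives the temperature dependence of reaction rates. This is a minimal sketch, not anything from the article itself; the 50 kJ/mol activation energy is an assumed, typical biochemical value.

```python
import math

# Arrhenius equation: k = A * exp(-Ea / (R * T)). Taking the ratio of
# rates at two temperatures cancels the prefactor A:
#   k(T_cold) / k(T_warm) = exp(-Ea/R * (1/T_cold - 1/T_warm))

R = 8.314          # gas constant, J/(mol*K)
EA = 50_000.0      # assumed activation energy, J/mol (illustrative only)

def rate_ratio(t_cold_k, t_warm_k, e_a=EA):
    """How much slower a reaction runs at t_cold_k than at t_warm_k."""
    return math.exp(-e_a / R * (1.0 / t_cold_k - 1.0 / t_warm_k))

# Liquid nitrogen (77 K) versus body temperature (310 K):
slowdown = rate_ratio(77.0, 310.0)
# slowdown is on the order of 1e-26 -- chemistry effectively stops.
```

Under this assumption, reactions at 77 K run roughly twenty-six orders of magnitude slower than at body temperature, which is the whole point of the liquid-nitrogen bath.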

Long-term memory is stored in physical form in the neural network as proteins accumulated at a chemical synapse to change the strength of the interconnection between neurons. So if you freeze the brain without crystals forming, the information may not be lost. As such. Hopefully. Though we have no idea if current cryonics techniques preserve the physical and chemical structure in sufficient detail to recover the information even in principle. Samples look good, though working scientists with a strong interest in preserving the information disagree.[5]

Recovering the information is another matter. We have not even the start of an idea how to get it back out again. No revival method is proposed beyond "one day we will be able to do anything!" Some advocates literally propose a magic-equivalent future artificial superintelligence that will make everything better as the universal slam-dunk counterargument to all doubts.[6]

Ben Best, CEO of the Cryonics Institute, supplies in Scientific Justification of Cryonics Practice[7] a list of cryobiology findings that suggest that cryonicists might not be completely wrong; however, this paper (contrary to the promise of its title) also contains a liberal admixture of "then a miracle occurs." His assertions as to what cited papers say also vary considerably from what the cited papers' abstracts state.

Alcor Corporation calls cryonics "a scientific approach to extending human life" and compares it to heart surgery.[8] This is a gross misrepresentation of the state of both the science and technology and verges on both pseudoscience and quackery. Alcor also has a tendency to use invented pseudomedical terminology in its suspension reports.[9][10]

Keeping the head or entire body at -196°C stops chemistry, but the freezing process itself causes massive physical damage to the cells. The following problems (many of which are acknowledged by cryonicists[11]) would all need to be solved to bring a frozen head or body back to life. Many would need breakthroughs not merely in engineering, but in scientific understanding itself, which we simply cannot predict.

This is the big problem. The two existing cryonics facilities are charities with large operational expenses run by obsessive enthusiasts. They are small and financially shaky.[20][21] In 1979, the Chatsworth facility (Cryonics Company of California, run by Robert Nelson) ran out of money and the frozen bodies thawed.[22][23] The cryonics movement as a whole was outraged and facility operators are much more careful these days. But it's an expensive business to operate as a charity.

The more general problem is that many cryonicists are libertarians and, unsurprisingly, have proven rather bad at putting together highly social nonprofits designed well enough to work in society on timescales of decades, let alone centuries. The movement has severe and obvious financial problems: the cash flows just aren't sustainable, and Alcor relies on occasional large donations from rich members to make up the deficit.[24][25]

Insurance companies are barely willing to consider cryonics. You will have to work rather hard to find someone to even sell you the policy. There are, however, cryonicist insurance agents who specialise in the area.[26]

Of the early frozen corpses, only James Bedford remains, due to tremendous effort on the part of his surviving relatives. Nothing was done to mitigate ice crystal damage, though, so his remains are likely just broken cell mush by now.

There are many medical issues connected with reanimation, but it is worth pointing out that a reanimated person faces numerous non-medical issues after returning to society. These might include:

All of these could cause the person great social, not to mention psychological, problems after revival. The person may also experience identity crisis or delusions of grandeur.

Cryonics, in various forms, has become a theme in science fiction,[27] either as a serious plot device (The Door into Summer, the Alien tetralogy) or a source of humor (Futurama, Sleeper). Its usual job is one-way time travel, the cryonics itself being handwaved (as you are allowed to do in science fiction, though not in reality) as a pretext for one of various Rip Van Winkle scenarios.

As a fictional concept, "cryogenics" generally refers to a not-yet-invented form of suspended animation rather than present-day cryonics: the only technical issue left to be resolved (if at all) in the far future is either aging or whatever killed you.

Timothy Leary, the famous LSD-dropper, was famously interested in the "one in a thousand" chance of revival and signed up with Alcor soon after it opened.[28] Eventually, though, the cryonicists themselves creeped him out so much[29] he opted for cremation.[30]

Walt Disney, who is cited in urban legend as having had his head or body frozen, died in December 1966, a few weeks before the first cryonic freezing process in early 1967.

Hall of Fame baseball player and all-time Red Sox great Ted Williams was frozen after he died in 2002. A nasty fight broke out between his oldest children, who had a will saying he wished to be cremated, and his youngest son John-Henry who produced an informal family agreement saying he was to be frozen. This resulted in a macabre family feud for much of the summer of 2002. Williams was eventually frozen.[31]

Cryonics is not considered a part of cryobiology, and cryobiologists consider cryonicists nuisances. The Society for Cryobiology banned cryonicists from membership in 1982, specifically those "misrepresenting the science of cryobiology, including any practice or application of freezing deceased persons in anticipation of their reanimation."[32] As they put it in an official statement:

The act of freezing a dead body and storing it indefinitely on the chance that some future generation may restore it to life is an act of faith, not science.

The Society's planned statement was actually considerably toned down (it originally called cryonics a "fraud") after threats of litigation from Mike Darwin of Alcor.[33]

It can be difficult to find scientific critics willing to bother detailing why they think what the cryonics industry does is silly.[34] Mostly, scientists consider that cryonicists are failing to acknowledge the hard, grinding work needed to advance the several sciences and technologies that are prerequisites for their goals.[17] Castles in the air are a completely acceptable, indeed standard, part of turning science fiction into practical technology, but you do have to go through the brick-by-brick slog of building the foundations underneath. Or, indeed, inventing the grains of sand each brick is made of. (Some cryonicists are cryobiologists and so are personally putting in the hard slog needed to get there.)

Cryonicists, like many technologists, also frequently show arrogant ignorance of fields not their own, not just sciences[35] but even directly-related medicine,[36][37] leaving people in those fields disinclined to take them seriously.

William T. Jarvis, president of the National Council Against Health Fraud, said, "Cryonics might be a suitable subject for scientific research, but marketing an unproven method to the public is quackery."[38] Mostly, doctors ignore cryonics and consider it a nice, but expensive, long shot.

Demographically, cryonics advocates tend to intersect strongly with transhumanists and singularitarians: almost all well-educated; mostly male, to the point where the phrase "hostile wife syndrome" is commonplace;[39] mostly atheist or agnostic, but with some being religious; and disproportionately involved in mathematics, computers, or physics.[40] Belief in cryonics is pretty much required on LessWrong to be accepted as "rational."[41]

Hardly any celebrities have signed up to be frozen in hopes of being brought back to life in the distant future.[42] (This may be a net win.)

Cryonicists are some of the smartest people you will ever meet and provide sterling evidence that humans are just monkeys with shiny toys, who mostly use intelligence to implement stupidity faster and better.

When arguing their case, cryonics advocates tend to conflate non-existent technologies that might someday be plausible with science-fiction-level speculation, and speak of "first, achieve the singularity" as if it were a minor detail that will just happen, rather than a huge amount of work by a huge number of people working out the many, many tiny details.

The proposals and speculations are so vague as to be pretty much unfalsifiable. Solid objection to a speculation is met with another speculation that may (but does not necessarily, or sometimes even probably) escape the problem. You will find many attempts to reverse the burden of proof and demand that you prove a given speculation isn't possible. Answering can involve trying to compress a degree in biology into a few paragraphs.[35] Most cryonicists' knowledge of biology appears severely deficient.

Cryonicists also tend to assert unsupported high probabilities for as-yet nonexistent technologies and as-yet nonexistent science.[43][44][45] Figures are derived on the basis of no evidence at all, concerning the behaviour of systems we've built nothing like and therefore have no empirical understanding of; they even assert probabilities of particular as-yet unrealised scientific breakthroughs occurring. (Saying "Bayesian!" is apparently sufficient support, with no further working being shown under any circumstances.) If someone gives a number or even says the word "probable," ask them to show their working.

One must also take care to make very precise queries, distinguishing between, "Is some sort of cryogenic suspension and revival not theoretically impossible with as yet unrealised future technologies?" and "Is there any evidence that what the cryonics industry is doing right now does any good at all?" Cryonics advocates who have been asked the second question tend to answer the first, at which point it is almost entirely impossible to pry a falsifiable claim out of them.

When you ask about a particularly tricky part and the answer is "but, nanobots!" take a drink. If it's "but, future nigh-magical artificial superintelligence!", down the bottle.

Cryonicists are almost all sincere, exceedingly smart, and capable people. However, they are also by and large absolute fanatics, and really believe that freezing your freshly-dead body is the best current hope of evading permanent death and that the $50,000 to $120,000 this costs is an obviously sensible investment in the distant future. There is little, if any, deliberate fraud going on.

Some cryonicists considered the Chatsworth facility going broke to be due to fraud, but there's little to suggest it wasn't just the owner being out of his depth.

In widely-reported allegations by their ex-COO, Alcor have been incredibly careless with the frozen heads in their care.[46] Alcor denies all allegations, tried to get his book blocked from publication[47] and threatened further legal action. However, considering what fanatics cryonics people are, the allegations are unlikely to be true, despite how widely they were reported.

Cryonics enthusiasts are fond of applying a variant of Pascal's wager to cryonics[48] and saying that being a Pascal's Wager variant doesn't make their argument fallacious.[44][45][49] Ralph Merkle gives us Merkle's Matrix:

The questionable aspect here is omitting the bit where "sign up" means "spend $120,000 of your children's inheritance for a spot in the freezer and a bunch of completely scientifically unjustified promises from shaky organizations run by strange people who are medical incompetents." It also assumes that living at some undetermined future date is sufficiently bonum in se that it is worth spending all that money that could be used to feed starving children now.

When you freeze a steak and bring it back to edible, I'll believe it.

The basic notion of freezing and reviving an animal, e.g. a human, is far from completely implausible.

Instead of freezing your brain ... how about plastinating it instead?[69]

The rest is here:

Cryonics - RationalWiki

Posted in Cryonics | Comments Off on Cryonics – RationalWiki

The Futurist: The Singularity

Posted: at 3:09 pm

The Search for Extra-Terrestrial Intelligence (SETI) seeks to answer one of the most basic questions of human identity - whether we are alone in the universe, or merely one civilization among many. It is perhaps the biggest question that any human can ponder.

The Drake Equation, created by astronomer Frank Drake in 1960, calculates the number of advanced extra-terrestrial civilizations in the Milky Way galaxy in existence at this time. Watch this 8-minute clip of Carl Sagan in 1980 walking the audience through the parameters of the Drake Equation. The Drake equation manages to educate people on the deductive steps needed to understand the basic probability of finding another civilization in the galaxy, but as the final result varies so greatly based on even slight adjustments to the parameters, it is hard to make a strong argument for or against the existence of extra-terrestrial intelligence via the Drake equation. The most speculative parameter is the last one, fL, which is an estimation of the total lifespan of an advanced civilization. Again, this video clip is from 1980, and thus only 42 years after the advent of radio astronomy in 1938. Another 29 years, or 70%, have since been added to the age of our radio-astronomy capabilities, and the prospect of nuclear annihilation of our civilization is far lower today than it was in 1980. No matter how ambitious or conservative a stance you take on the other parameters, the value of fL, in terms of our own civilization, continues to rise. This leads us to our first postulate:

The expected lifespan of an intelligent civilization is rising.
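
The Drake Equation discussed above is simply a product of seven factors, which makes its parameter sensitivity easy to demonstrate. A minimal sketch; every input value below is an illustrative assumption, not a figure from this article.

```python
# The Drake equation: N = R* x fp x ne x fl x fi x fc x L.
# All parameter values here are assumed for illustration; the point is
# that N swings wildly with small changes, especially to the
# civilization-lifespan term (the article's fL).

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifespan):
    """Expected number of detectable civilizations in the galaxy now."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifespan

# 7 stars formed per year, half with planets, 2 habitable planets each,
# a third developing life, 1% intelligent, 1% communicative, and a
# 10,000-year detectable lifespan:
n_now = drake(7, 0.5, 2, 0.33, 0.01, 0.01, 10_000)

# Stretching the lifespan to 1,000,000 years multiplies N by 100, which
# is why a rising expected civilization lifespan dominates the estimate.
n_long = drake(7, 0.5, 2, 0.33, 0.01, 0.01, 1_000_000)
```

With these assumed inputs the galaxy hosts only a couple of detectable civilizations; a hundredfold change in the lifespan term alone produces a hundredfold change in the answer.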

Carl Sagan himself believed that in such a vast cosmos, intelligent life would have to emerge in multiple locations, and the cosmos was thus 'brimming over' with intelligent life. On the other side are various explanations for why intelligent life will be rare. The Rare Earth Hypothesis argues that the combination of conditions that enabled life to emerge on Earth are extremely rare. The Fermi Paradox, originating back in 1950, questions the contradiction between the supposed high incidence of intelligent life and the continued lack of evidence of it. The Great Filter theory suggests that many intelligent civilizations self-destruct at some point, explaining their apparent scarcity. This leads to the conclusion that the easier it is for civilization to advance to our present stage, the bleaker our prospects for long-term survival, since the 'filter' that other civilizations collide with has yet to face us. A contrarian case can thus be made that the longer we go without detecting another civilization, the better.

But one dimension that is conspicuously absent from all of these theories is an accounting for the accelerating rate of change. I have previously provided evidence that telescopic power is also an accelerating technology. After the invention of the telescope by Galileo in 1609, major discoveries used to be several decades apart, but now are only separated by years. An extrapolation of various discoveries enabled me to crudely estimate that our observational power is currently rising at 26% per year, even though the first 300 years after the invention of the telescope only saw an improvement of 1% a year. At the time of the 1980 Cosmos television series, it was not remotely possible to confirm the existence of any extrasolar planet or to resolve any star aside from the sun into a disk. Yet, both were accomplished by the mid-1990s. As of May 2009, we have now confirmed a total of 347 extrasolar planets, with the rate of discovery rising quickly. While the first confirmation was not until 1995, we are now discovering new planets at a rate of 1 per week. With a number of new telescope programs being launched, this rate will rise further still. Furthermore, most of the planets we have found so far are large. Soon, we will be able to detect planets much smaller in size, including Earth-sized planets. This leads us to our second postulate:

Telescopic power is rising quickly, possibly at 26% a year.

This Jet Propulsion Laboratory chart of exoplanet discoveries through 2004 is very overdue for an update, but is still instructive. The x-axis is the distance of the planet from the star, and the y-axis is the mass of the planet. All blue, red, and yellow dots are exoplanets, while the larger circles with letters in them are our own local planets, with the 'E' being Earth. Most exoplanet discoveries up to that time were of Jupiter-sized planets that were closer to their stars than Jupiter is to the sun. The green zone, or 'life zone', is the area within which a planet is a candidate to support life within our current understanding of what life is. Even then, this chart does not capture the full possibilities for life, as a gas giant like Jupiter or Saturn, at the correct distance from a Sun-type star, might have rocky satellites that would thus also be in the life zone. In other words, if Saturn were as close to the Sun as Earth is, Titan would also be in the life zone, and thus the green area should extend vertically higher to capture the possibility of such large satellites of gas giants. The chart shows that telescopes commissioned in the near future will enable the detection of planets in the life zone. If this chart were updated, a few would already be recorded here. Some of the missions and telescopes that will soon be sending over a torrent of new discoveries are:

Kepler Mission: Launched in March 2009, the Kepler Mission will continuously monitor a field of 100,000 stars for the transit of planets in front of them. This method has a far higher chance of detecting Earth-sized planets than prior methods, and we will see many discovered by 2010-11.

COROT: This European mission was launched in December 2006, and uses a similar method as the Kepler Mission, but is not as powerful. COROT has discovered a handful of planets thus far.

New Worlds Mission: This 2013 mission will build a large sunflower-shaped occulter in space to block the light of nearby stars to aid the observation of extrasolar planets. A large number of planets close to their stars will become visible through this method.

Allen Telescope Array: Funded by Microsoft co-founder Paul Allen, the ATA will survey 1,000,000 stars for radio astronomy evidence of intelligent life. The ATA is sensitive enough to discover a large radio telescope such as the Arecibo Observatory up to a distance of 1000 light years. Many of the ATA components are electronics that decline in price in accordance with Moore's Law, which will subsequently lead to the development of the...

Square Kilometer Array: Far larger and more powerful than the Allen Telescope Array, the SKA will be in full operation by 2020, and will be the most sensitive radio telescope ever. The continual decline in the price of processing technology will enable the SKA to scour the sky thousands of times faster than existing radio telescopes.

These are merely the missions that are already under development or even in operation. Several others are in the conceptual phase, and could be launched within the next 15 years. So many methods of observation used at once, combined with the cost improvements of Moore's Law, leads us to our third postulate, which few would have agreed with at the time of 'Cosmos' in 1980:

Thousands of planets in the 'life zone' will be confirmed by 2025.

Now, we will revisit the under-discussed factor of accelerating change. Out of 4.5 billion years of Earth's existence, it has only hosted a civilization capable of radio astronomy for 71 years. But as our own technology is advancing on a multitude of fronts, through the accelerating rate of change and the Impact of Computing, each year the power of our telescopes increases and the signals of intelligence (radio and TV) emitted from Earth move out one more light year. Thus, the probability for us to detect someone, and for us to be detected by them, however small, is now rising quickly. Our civilization gained far more in both detectability and detection capability in the 30 years between 1980 and 2010, relative to the 30 years between 1610 and 1640, when Galileo was persecuted for his discoveries and support of heliocentrism, and certainly relative to the 30 years between 70,000,030 and 70,000,000 BC, when no advanced civilization existed on Earth and the dominant life form was Tyrannosaurus.

Nikolai Kardashev has devised a scale to measure the level of advancement that a technological civilization has achieved, based on their energy technology. This simple scale can be summarized as follows:

Type I : A civilization capable of harnessing all the energy available on their planet.

Type II : A civilization capable of harnessing all the energy available from their star.

Type III : A civilization capable of harnessing all the energy available in their galaxy.

The scale is logarithmic, and our civilization currently would receive a Kardashev score of 0.72. We could potentially achieve full Type I status by the mid-21st century due to a technological singularity. Some have estimated that our exponential growth could elevate us to Type II status by the late 22nd century.
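
The 0.72 score quoted above is consistent with the continuous logarithmic interpolation of the scale commonly attributed to Carl Sagan, K = (log10 P - 6) / 10 with P in watts. A sketch; the ~1.6e13 W figure for present-day humanity is our assumed round number, not a value from the article.

```python
import math

# Sagan's continuous interpolation of the Kardashev scale:
# K = (log10(P) - 6) / 10, where P is harnessed power in watts.

def kardashev_score(power_watts):
    return (math.log10(power_watts) - 6.0) / 10.0

earth_now = kardashev_score(1.6e13)   # ~0.72 for present-day humanity
type_one = kardashev_score(1e16)      # 1.0: all planetary power
type_two = kardashev_score(1e26)      # 2.0: all stellar power
```

Because the scale is logarithmic, moving from 0.72 to full Type I means harnessing roughly 600 times more power than we do today, not a mere 40% more.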

This has given rise to another faction in the speculative debate on extra-terrestrial intelligence, a view held by Ray Kurzweil, among others. The theory is that it takes such a short time (a few hundred years) for a civilization to go from the earliest mechanical technology to reach a technological singularity where artificial intelligence saturates surrounding matter, relative to the lifetime of the home planet (a few billion years), that we are the first civilization to come this far. Given the rate of advancement, a civilization would have to be just 100 years ahead of us to be so advanced that they would be easy to detect within 100 light years, despite 100 years being such a short fraction of a planet's life. In other words, where a 19th century Earth would be undetectable to us today, an Earth of the 22nd century would be extremely conspicuous to us from 100 light years away, emitting countless signals across a variety of mediums.

A Type I civilization within 100 light years would be readily detected by our instruments today. A Type II civilization within 1000 light years will be visible to the Allen or the Square Kilometer Array. A Type III would be the only type of civilization that we probably could not detect, as we might have already been within one all along. We do not have a way of knowing if the current structure of the Milky Way galaxy is artificially designed by a Type III civilization. Thus, the fourth and final postulate becomes :

A civilization slightly more advanced than us will soon be easy for us to detect.

The Carl Sagan view of plentiful advanced civilizations is the generally accepted wisdom, and a view that I held for a long time. On the other hand, the Kurzweil view is understood by very few, for even in the SETI community, not that many participants are truly acceleration-aware. The accelerating nature of progress, which existed long before humans even evolved, as shown in Carl Sagan's cosmic calendar concept, also from the 1980 'Cosmos' series, simply has to be considered as one of the most critical forces in any estimation of extra-terrestrial life. I have not yet migrated fully to the Kurzweil view, but let us list our four postulates out all at once:

The expected lifespan of an intelligent civilization is rising.

Telescopic power is rising quickly, possibly at 26% a year.

Thousands of planets in the 'life zone' will be confirmed by 2025.

A civilization slightly more advanced than us will soon be easy for us to detect.

As the Impact of Computing will ensure that computational power rises 16,000X between 2009 and 2030, and as our radio astronomy experience will be 92 years old by 2030, there are just too many forces increasing our probability of finding a civilization if one does indeed exist nearby. It is one thing to know of no extrasolar planets, or of any civilizations. It is quite another to know about thousands of planets, yet still not detect any civilizations after years of searching. This would greatly strengthen the case against the existence of such civilizations, and the case would grow stronger year by year. Thus, these four postulates in combination lead me to conclude that:
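The compound-growth arithmetic behind these figures is easy to check; a minimal sketch, assuming the 26%-per-year telescopic rate from the postulates above and reading the 16,000X computing claim as one doubling every 18 months (both rates are assumptions of this sketch, taken from the text's own figures):

```python
# Checking the growth figures quoted above over the 2009-2030 window.
# Assumptions: telescopic power compounds at 26% per year; computational
# power doubles every 18 months (one reading of the 16,000X claim).

years = 2030 - 2009  # 21 years

telescope_gain = 1.26 ** years        # 26% per year, compounded
computing_gain = 2 ** (years / 1.5)   # one doubling every 18 months

print(f"Telescopic power gain, 2009-2030:    ~{telescope_gain:,.0f}x")
print(f"Computational power gain, 2009-2030: ~{computing_gain:,.0f}x")
```

The 18-month doubling assumption reproduces the quoted 16,000X almost exactly (2^14 = 16,384), while the 26%-per-year telescopic rate compounds to roughly a 128-fold gain over the same window.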

Most of the 'realistic' science fiction regarding first contact with another extra-terrestrial civilization portrays that civilization being domiciled relatively nearby. In Carl Sagan's 'Contact', the civilization was from the Vega star system, just 26 light years away. In the film 'Star Trek: First Contact', humans come in contact with Vulcans in 2063, but the Vulcan homeworld is also just 16 light years from Earth. The possibility of any civilization this near to us would be effectively ruled out by 2030 if we do not find any favorable evidence. SETI should still be given the highest priority, of course, as the lack of a discovery is just as important as making a discovery of extra-terrestrial intelligence.

If we do detect evidence of an extra-terrestrial civilization, everything about life on Earth will change. Both 'Contact' and 'Star Trek: First Contact' depicted how an unprecedented wave of human unity swept across the globe upon evidence that humans were, after all, one intelligent species among many. In Star Trek, this led to what essentially became a techno-economic singularity for the human race. As shown in 'Contact', many of the world's religions were turned upside down upon this discovery, and had to revise their doctrines accordingly. Various new cults devoted to the worship of the new civilization formed almost immediately.

If, however, we are alone, then according to many Singularitarians, we will be the ones to determine the destiny of the cosmos. After a technological singularity in the mid-21st century that merges our biology with our technology, we would proceed to convert all matter into artificial intelligence, make use of all the elementary particles in our vicinity, and expand outward at speeds that eventually exceed the speed of light, ultimately saturating the entire universe with our intelligence in just a few centuries. That, however, is a topic for another day.

See more here:

The Futurist: The Singularity

Posted in The Singularity | Comments Off on The Futurist: The Singularity

Katherine Hayles, How We Became Posthuman, prologue

Posted: at 3:09 pm

"Too often the pressing implications of tomorrow's technologically enhanced human beings have been buried beneath an impenetrable haze of theory-babble and leather-clad posturing. Thankfully, N. Katherine Hayles's How We Became Posthuman provides a rigorous and historical framework for grappling with the cyborg, which Hayles replaces with the more all-purpose 'posthuman.' [Hayles] has written a deeply insightful and significant investigation of how cybernetics gradually reshaped the boundaries of the human." –Erik Davis, Village Voice

"Could it be possible someday for your mind, including your memories and your consciousness, to be downloaded into a computer? In her important new book, Hayles examines how it became possible in the late 20th century to formulate a question such as the one above, and she makes a case for why it's the wrong question to ask. [She] traces the evolution over the last half-century of a radical reconception of what it means to be human and, indeed, even of what it means to be alive, a reconception unleashed by the interplay of humans and intelligent machines." –Susan Duhig, Chicago Tribune Books

"This is an incisive meditation on a major, often misunderstood aspect of the avant-garde in science fiction: the machine/human interface in all its unsettling, technicolor glories. The author is well positioned to bring informed critical engines to bear on a subject that will increasingly permeate our media and our minds. I recommend it highly." –Gregory Benford, author of Timescape and Cosm

"At a time when fallout from the 'science wars' continues to cast a pall over the American intellectual landscape, Hayles is a rare and welcome voice. She is a literary theorist at the University of California at Los Angeles who also holds an advanced degree in chemistry. Bridging the chasm between C. P. Snow's 'two cultures' with effortless grace, she has been for the past decade a leading writer on the interplay between science and literature. The basis of this scrupulously researched work is a history of the cybernetic and informatic sciences, and the evolution of the concept of 'information' as something ontologically separate from any material substrate. Hayles traces the development of this vision through three distinct stages, beginning with the famous Macy conferences of the 1940s and 1950s (with participants such as Claude Shannon and Norbert Wiener), through the ideas of Humberto Maturana and Francisco Varela about 'autopoietic' self-organising systems, and on to more recent conceptions of virtual (or purely informatic) 'creatures,' 'agents' and human beings." –Margaret Wertheim, New Scientist

"Hayles's book continues to be widely praised and frequently cited. In academic discourse about the shift to the posthuman, it is likely to be influential for some time to come." –Barbara Warnick, Argumentation and Advocacy

Read an interview/dialogue with N. Katherine Hayles and Albert Borgmann, author of Holding On to Reality: The Nature of Information at the Turn of the Millennium.

An excerpt from How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics by N. Katherine Hayles

Prologue

You are alone in the room, except for two computer terminals flickering in the dim light. You use the terminals to communicate with two entities in another room, whom you cannot see. Relying solely on their responses to your questions, you must decide which is the man, which the woman. Or, in another version of the famous "imitation game" proposed by Alan Turing in his classic 1950 paper "Computing Machinery and Intelligence," you use the responses to decide which is the human, which the machine.1 One of the entities wants to help you guess correctly. His/her/its best strategy, Turing suggested, may be to answer your questions truthfully. The other entity wants to mislead you. He/she/it will try to reproduce through the words that appear on your terminal the characteristics of the other entity. Your job is to pose questions that can distinguish verbal performance from embodied reality. If you cannot tell the intelligent machine from the intelligent human, your failure proves, Turing argued, that machines can think.

Here, at the inaugural moment of the computer age, the erasure of embodiment is performed so that "intelligence" becomes a property of the formal manipulation of symbols rather than enaction in the human lifeworld. The Turing test was to set the agenda for artificial intelligence for the next three decades. In the push to achieve machines that can think, researchers performed again and again the erasure of embodiment at the heart of the Turing test. All that mattered was the formal generation and manipulation of informational patterns. Aiding this process was a definition of information, formalized by Claude Shannon and Norbert Wiener, that conceptualized information as an entity distinct from the substrates carrying it. From this formulation, it was a small step to think of information as a kind of bodiless fluid that could flow between different substrates without loss of meaning or form. Writing nearly four decades after Turing, Hans Moravec proposed that human identity is essentially an informational pattern rather than an embodied enaction. The proposition can be demonstrated, he suggested, by downloading human consciousness into a computer, and he imagined a scenario designed to show that this was in principle possible. The Moravec test, if I may call it that, is the logical successor to the Turing test. Whereas the Turing test was designed to show that machines can perform the thinking previously considered to be an exclusive capacity of the human mind, the Moravec test was designed to show that machines can become the repository of human consciousness, that machines can, for all practical purposes, become human beings. You are the cyborg, and the cyborg is you.

In the progression from Turing to Moravec, the part of the Turing test that historically has been foregrounded is the distinction between thinking human and thinking machine. Often forgotten is the first example Turing offered of distinguishing between a man and a woman. If your failure to distinguish correctly between human and machine proves that machines can think, what does it prove if you fail to distinguish woman from man? Why does gender appear in this primal scene of humans meeting their evolutionary successors, intelligent machines? What do gendered bodies have to do with the erasure of embodiment and the subsequent merging of machine and human intelligence in the figure of the cyborg?

In his thoughtful and perceptive intellectual biography of Turing, Andrew Hodges suggests that Turing's predilection was always to deal with the world as if it were a formal puzzle.2 To a remarkable extent, Hodges says, Turing was blind to the distinction between saying and doing. Turing fundamentally did not understand that "questions involving sex, society, politics or secrets would demonstrate how what it was possible for people to say might be limited not by puzzle-solving intelligence but by the restrictions on what might be done" (pp. 423-24). In a fine insight, Hodges suggests that "the discrete state machine, communicating by teleprinter alone, was like an ideal for [Turing's] own life, in which he would be left alone in a room of his own, to deal with the outside world solely by rational argument. It was the embodiment of a perfect J. S. Mill liberal, concentrating upon the free will and free speech of the individual" (p. 425). Turing's later embroilment with the police and court system over the question of his homosexuality played out, in a different key, the assumptions embodied in the Turing test. His conviction and the court-ordered hormone treatments for his homosexuality tragically demonstrated the importance of doing over saying in the coercive order of a homophobic society with the power to enforce its will upon the bodies of its citizens.

The perceptiveness of Hodges's biography notwithstanding, he gives a strange interpretation of Turing's inclusion of gender in the imitation game. Gender, according to Hodges, "was in fact a red herring, and one of the few passages of the paper that was not expressed with perfect lucidity. The whole point of this game was that a successful imitation of a woman's responses by a man would not prove anything. Gender depended on facts which were not reducible to sequences of symbols" (p. 415). In the paper itself, however, nowhere does Turing suggest that gender is meant as a counterexample; instead, he makes the two cases rhetorically parallel, indicating through symmetry, if nothing else, that the gender and the human/machine examples are meant to prove the same thing. Is this simply bad writing, as Hodges argues, an inability to express an intended opposition between the construction of gender and the construction of thought? Or, on the contrary, does the writing express a parallelism too explosive and subversive for Hodges to acknowledge?

If so, now we have two mysteries instead of one. Why does Turing include gender, and why does Hodges want to read this inclusion as indicating that, so far as gender is concerned, verbal performance cannot be equated with embodied reality? One way to frame these mysteries is to see them as attempts to transgress and reinforce the boundaries of the subject, respectively. By including gender, Turing implied that renegotiating the boundary between human and machine would involve more than transforming the question of "who can think" into "what can think." It would also necessarily bring into question other characteristics of the liberal subject, for it made the crucial move of distinguishing between the enacted body, present in the flesh on one side of the computer screen, and the represented body, produced through the verbal and semiotic markers constituting it in an electronic environment. This construction necessarily makes the subject into a cyborg, for the enacted and represented bodies are brought into conjunction through the technology that connects them. If you distinguish correctly which is the man and which the woman, you in effect reunite the enacted and the represented bodies into a single gender identity. The very existence of the test, however, implies that you may also make the wrong choice. Thus the test functions to create the possibility of a disjunction between the enacted and the represented bodies, regardless of which choice you make. What the Turing test "proves" is that the overlay between the enacted and the represented bodies is no longer a natural inevitability but a contingent production, mediated by a technology that has become so entwined with the production of identity that it can no longer meaningfully be separated from the human subject. To pose the question of "what can think" inevitably also changes, in a reverse feedback loop, the terms of "who can think."

On this view, Hodges's reading of the gender test as nonsignifying with respect to identity can be seen as an attempt to safeguard the boundaries of the subject from precisely this kind of transformation, to insist that the existence of thinking machines will not necessarily affect what being human means. That Hodges's reading is a misreading indicates he is willing to practice violence upon the text to wrench meaning away from the direction toward which the Turing test points, back to safer ground where embodiment secures the univocality of gender. I think he is wrong about embodiment's securing the univocality of gender and wrong about its securing human identity, but right about the importance of putting embodiment back into the picture. What embodiment secures is not the distinction between male and female or between humans who can think and machines which cannot. Rather, embodiment makes clear that thought is a much broader cognitive function depending for its specificities on the embodied form enacting it. This realization, with all its exfoliating implications, is so broad in its effects and so deep in its consequences that it is transforming the liberal subject, regarded as the model of the human since the Enlightenment, into the posthuman.

Think of the Turing test as a magic trick. Like all good magic tricks, the test relies on getting you to accept at an early stage assumptions that will determine how you interpret what you see later. The important intervention comes not when you try to determine which is the man, the woman, or the machine. Rather, the important intervention comes much earlier, when the test puts you into a cybernetic circuit that splices your will, desire, and perception into a distributed cognitive system in which represented bodies are joined with enacted bodies through mutating and flexible machine interfaces. As you gaze at the flickering signifiers scrolling down the computer screens, no matter what identifications you assign to the embodied entities that you cannot see, you have already become posthuman.

Footnotes:

1. Alan M. Turing, "Computing Machinery and Intelligence," Mind 59 (1950): 433-60.

2. Andrew Hodges, Alan Turing: The Enigma of Intelligence (London: Unwin, 1985), pp. 415-25. I am indebted to Carol Wald for her insights into the relation between gender and artificial intelligence, the subject of her dissertation, and to her other writings on this question. I also owe her thanks for pointing out to me that Andrew Hodges dismisses Turing's use of gender as a logical flaw in his analysis of the Turing test.

See the article here:

Katherine Hayles, How We Became Posthuman, prologue

Posted in Posthuman | Comments Off on Katherine Hayles, How We Became Posthuman, prologue