
DNA – Wikipedia


Deoxyribonucleic acid (DNA)[1] is a molecule that carries the genetic instructions used in the growth, development, functioning and reproduction of all known living organisms and many viruses. DNA and RNA are nucleic acids; alongside proteins, lipids and complex carbohydrates (polysaccharides), they are one of the four major types of macromolecules that are essential for all known forms of life. Most DNA molecules consist of two biopolymer strands coiled around each other to form a double helix.

The two DNA strands are termed polynucleotides since they are composed of simpler monomer units called nucleotides.[2][3] Each nucleotide is composed of one of four nitrogen-containing nucleobases (cytosine [C], guanine [G], adenine [A] or thymine [T]), a sugar called deoxyribose, and a phosphate group. The nucleotides are joined to one another in a chain by covalent bonds between the sugar of one nucleotide and the phosphate of the next, resulting in an alternating sugar-phosphate backbone. The nitrogenous bases of the two separate polynucleotide strands are bound together, according to base pairing rules (A with T, and C with G), with hydrogen bonds to make double-stranded DNA. The total amount of related DNA base pairs on Earth is estimated at 5.0 × 10^37, and weighs 50 billion tonnes.[4] In comparison, the total mass of the biosphere has been estimated to be as much as 4 trillion tons of carbon (TtC).[5]
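The pairing rule is simple enough to express as a lookup table; the following Python sketch, using an arbitrary example sequence, illustrates how the bases on one strand determine those on the other.

```python
# A minimal sketch of Watson-Crick base pairing: each base on one strand
# determines the base opposite it on the other strand (A-T, C-G).
PAIRS = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand: str) -> str:
    """Return the base-paired partner of each nucleotide, in the same order."""
    return "".join(PAIRS[base] for base in strand.upper())

if __name__ == "__main__":
    strand = "ATGCCGTA"          # arbitrary example sequence
    print(complement(strand))    # -> TACGGCAT
```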

DNA stores biological information. The DNA backbone is resistant to cleavage, and both strands of the double-stranded structure store the same biological information. This information is replicated as and when the two strands separate. A large part of DNA (more than 98% for humans) is non-coding, meaning that these sections do not serve as patterns for protein sequences.

The two strands of DNA run in opposite directions to each other and are thus antiparallel. Attached to each sugar is one of four types of nucleobases (informally, bases). It is the sequence of these four nucleobases along the backbone that encodes biological information. RNA strands are created using DNA strands as a template in a process called transcription. Under the genetic code, these RNA strands are translated to specify the sequence of amino acids within proteins in a process called translation.

Within eukaryotic cells, DNA is organized into long structures called chromosomes. During cell division these chromosomes are duplicated in the process of DNA replication, providing each cell its own complete set of chromosomes. Eukaryotic organisms (animals, plants, fungi, and protists) store most of their DNA inside the cell nucleus and some of their DNA in organelles, such as mitochondria or chloroplasts.[6] In contrast, prokaryotes (bacteria and archaea) store their DNA only in the cytoplasm. Within the eukaryotic chromosomes, chromatin proteins such as histones compact and organize DNA. These compact structures guide the interactions between DNA and other proteins, helping control which parts of the DNA are transcribed.

DNA was first isolated by Friedrich Miescher in 1869. Its molecular structure was identified by James Watson and Francis Crick in 1953, whose model-building efforts were guided by X-ray diffraction data acquired by Rosalind Franklin. DNA is used by researchers as a molecular tool to explore physical laws and theories, such as the ergodic theorem and the theory of elasticity. The unique material properties of DNA have made it an attractive molecule for material scientists and engineers interested in micro- and nano-fabrication. Among notable advances in this field are DNA origami and DNA-based hybrid materials.[7]

DNA is a long polymer made from repeating units called nucleotides.[8][9] Although its structure is not static,[10] the DNA of all species comprises two helical chains, each coiled round the same axis, and each with a pitch of 34 ångströms (3.4 nanometres) and a radius of 10 ångströms (1.0 nanometre).[11] According to another study, when measured in a particular solution, the DNA chain measured 22 to 26 ångströms wide (2.2 to 2.6 nanometres), and one nucleotide unit measured 3.3 ångströms (0.33 nm) long.[12] Although each individual repeating unit is very small, DNA polymers can be very large molecules containing millions of nucleotides. For instance, the DNA in the largest human chromosome, chromosome number 1, consists of approximately 220 million base pairs[13] and would be 85 mm long if straightened.

In living organisms DNA does not usually exist as a single molecule, but instead as a pair of molecules that are held tightly together.[14][15] These two long strands entwine like vines, in the shape of a double helix. The nucleotide contains both a segment of the backbone of the molecule (which holds the chain together) and a nucleobase (which interacts with the other DNA strand in the helix). A nucleobase linked to a sugar is called a nucleoside and a base linked to a sugar and one or more phosphate groups is called a nucleotide. A polymer comprising multiple linked nucleotides (as in DNA) is called a polynucleotide.[16]

The backbone of the DNA strand is made from alternating phosphate and sugar residues.[17] The sugar in DNA is 2-deoxyribose, which is a pentose (five-carbon) sugar. The sugars are joined together by phosphate groups that form phosphodiester bonds between the third and fifth carbon atoms of adjacent sugar rings. These asymmetric bonds mean a strand of DNA has a direction. In a double helix, the direction of the nucleotides in one strand is opposite to their direction in the other strand: the strands are antiparallel. The asymmetric ends of DNA strands are said to have a directionality of five prime (5′) and three prime (3′), with the 5′ end having a terminal phosphate group and the 3′ end a terminal hydroxyl group. One major difference between DNA and RNA is the sugar, with the 2-deoxyribose in DNA being replaced by the alternative pentose sugar ribose in RNA.[15]
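Because the strands are antiparallel, the sequence that pairs with a strand written 5′ to 3′ is its complement read in reverse order. The short Python sketch below (the input sequence is an arbitrary example) illustrates this reverse-complement operation.

```python
# Because the two strands are antiparallel, the partner of a strand read
# 5'->3' is its complement read in the reverse order ("reverse complement").
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand_5_to_3: str) -> str:
    """Return the opposite strand, also written 5' -> 3'."""
    return "".join(COMPLEMENT[b] for b in reversed(strand_5_to_3.upper()))

print(reverse_complement("ATGGATATCTT"))  # -> AAGATATCCAT
```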

The DNA double helix is stabilized primarily by two forces: hydrogen bonds between nucleotides and base-stacking interactions among aromatic nucleobases.[19] In the aqueous environment of the cell, the conjugated bonds of nucleotide bases align perpendicular to the axis of the DNA molecule, minimizing their interaction with the solvation shell. The four bases found in DNA are adenine (A), cytosine (C), guanine (G) and thymine (T). These four bases are attached to the sugar-phosphate to form the complete nucleotide, as in adenosine monophosphate. Adenine pairs with thymine and guanine pairs with cytosine, a pairing represented as A-T and G-C base pairs.[20][21]

The nucleobases are classified into two types: the purines, A and G, being fused five- and six-membered heterocyclic compounds, and the pyrimidines, the six-membered rings C and T.[15] A fifth pyrimidine nucleobase, uracil (U), usually takes the place of thymine in RNA and differs from thymine by lacking a methyl group on its ring. In addition to RNA and DNA, many artificial nucleic acid analogues have been created to study the properties of nucleic acids, or for use in biotechnology.[22]

Uracil is not usually found in DNA, occurring only as a breakdown product of cytosine. However, in several bacteriophages, Bacillus subtilis bacteriophages PBS1 and PBS2 and Yersinia bacteriophage piR1-37, thymine has been replaced by uracil.[23] Another phage - Staphylococcal phage S6 - has been identified with a genome where thymine has been replaced by uracil.[24]

Base J (beta-d-glucopyranosyloxymethyluracil), a modified form of uracil, is also found in several organisms: the flagellates Diplonema and Euglena, and all the kinetoplastid genera.[25] Biosynthesis of J occurs in two steps: in the first step a specific thymidine in DNA is converted into hydroxymethyldeoxyuridine; in the second HOMedU is glycosylated to form J.[26] Proteins that bind specifically to this base have been identified.[27][28][29] These proteins appear to be distant relatives of the Tet1 oncogene that is involved in the pathogenesis of acute myeloid leukemia.[30] J appears to act as a termination signal for RNA polymerase II.[31][32]

Twin helical strands form the DNA backbone. Another double helix may be found tracing the spaces, or grooves, between the strands. These voids are adjacent to the base pairs and may provide a binding site. As the strands are not symmetrically located with respect to each other, the grooves are unequally sized. One groove, the major groove, is 22 ångströms (2.2 nm) wide and the other, the minor groove, is 12 ångströms (1.2 nm) wide.[33] The width of the major groove means that the edges of the bases are more accessible in the major groove than in the minor groove. As a result, proteins such as transcription factors that can bind to specific sequences in double-stranded DNA usually make contact with the sides of the bases exposed in the major groove.[34] This situation varies in unusual conformations of DNA within the cell (see below), but the major and minor grooves are always named to reflect the differences in size that would be seen if the DNA is twisted back into the ordinary B form.

In a DNA double helix, each type of nucleobase on one strand bonds with just one type of nucleobase on the other strand. This is called complementary base pairing. Here, purines form hydrogen bonds to pyrimidines, with adenine bonding only to thymine in two hydrogen bonds, and cytosine bonding only to guanine in three hydrogen bonds. This arrangement of two nucleotides binding together across the double helix is called a base pair. As hydrogen bonds are not covalent, they can be broken and rejoined relatively easily. The two strands of DNA in a double helix can thus be pulled apart like a zipper, either by a mechanical force or high temperature.[35] As a result of this base pair complementarity, all the information in the double-stranded sequence of a DNA helix is duplicated on each strand, which is vital in DNA replication. This reversible and specific interaction between complementary base pairs is critical for all the functions of DNA in living organisms.[9]

The two types of base pairs form different numbers of hydrogen bonds, AT forming two hydrogen bonds, and GC forming three hydrogen bonds. DNA with high GC-content is more stable than DNA with low GC-content.

As noted above, most DNA molecules are actually two polymer strands, bound together in a helical fashion by noncovalent bonds; this double-stranded structure (dsDNA) is maintained largely by the intrastrand base stacking interactions, which are strongest for G,C stacks. The two strands can come apart, a process known as melting, to form two single-stranded DNA (ssDNA) molecules. Melting occurs at high temperature, low salt and high pH (low pH also melts DNA, but since DNA is unstable due to acid depurination, low pH is rarely used).

The stability of the dsDNA form depends not only on the GC-content (% G,C basepairs) but also on sequence (since stacking is sequence specific) and also length (longer molecules are more stable). The stability can be measured in various ways; a common way is the "melting temperature", which is the temperature at which 50% of the ds molecules are converted to ss molecules; melting temperature is dependent on ionic strength and the concentration of DNA. As a result, it is both the percentage of GC base pairs and the overall length of a DNA double helix that determines the strength of the association between the two strands of DNA. Long DNA helices with a high GC-content have stronger-interacting strands, while short helices with high AT content have weaker-interacting strands.[36] In biology, parts of the DNA double helix that need to separate easily, such as the TATAAT Pribnow box in some promoters, tend to have a high AT content, making the strands easier to pull apart.[37]

In the laboratory, the strength of this interaction can be measured by finding the temperature necessary to break the hydrogen bonds, their melting temperature (also called Tm value). When all the base pairs in a DNA double helix melt, the strands separate and exist in solution as two entirely independent molecules. These single-stranded DNA molecules (ssDNA) have no single common shape, but some conformations are more stable than others.[38]
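For short synthetic oligonucleotides, rough rules of thumb capture the dependence of melting temperature on base composition. The sketch below uses the simple Wallace rule (roughly 2 °C per A-T pair and 4 °C per G-C pair), which is only a crude approximation for short primers and ignores the ionic-strength and concentration effects mentioned above.

```python
# A crude estimate of melting temperature for short oligonucleotides
# using the Wallace rule: Tm ~ 2*(A+T) + 4*(G+C) degrees Celsius.
# This is only a rule of thumb for primers of roughly 14-20 bases;
# real Tm also depends on salt and DNA concentration.

def wallace_tm(seq: str) -> float:
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2.0 * at + 4.0 * gc

print(wallace_tm("TATAAT"))   # AT-rich Pribnow box: 12.0 (low Tm)
print(wallace_tm("GCGCGC"))   # GC-rich hexamer of the same length: 24.0
```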

A DNA sequence is called "sense" if its sequence is the same as that of a messenger RNA copy that is translated into protein.[39] The sequence on the opposite strand is called the "antisense" sequence. Both sense and antisense sequences can exist on different parts of the same strand of DNA (i.e. both strands can contain both sense and antisense sequences). In both prokaryotes and eukaryotes, antisense RNA sequences are produced, but the functions of these RNAs are not entirely clear.[40] One proposal is that antisense RNAs are involved in regulating gene expression through RNA-RNA base pairing.[41]

A few DNA sequences in prokaryotes and eukaryotes, and more in plasmids and viruses, blur the distinction between sense and antisense strands by having overlapping genes.[42] In these cases, some DNA sequences do double duty, encoding one protein when read along one strand, and a second protein when read in the opposite direction along the other strand. In bacteria, this overlap may be involved in the regulation of gene transcription,[43] while in viruses, overlapping genes increase the amount of information that can be encoded within the small viral genome.[44]

DNA can be twisted like a rope in a process called DNA supercoiling. With DNA in its "relaxed" state, a strand usually circles the axis of the double helix once every 10.4 base pairs, but if the DNA is twisted the strands become more tightly or more loosely wound.[45] If the DNA is twisted in the direction of the helix, this is positive supercoiling, and the bases are held more tightly together. If they are twisted in the opposite direction, this is negative supercoiling, and the bases come apart more easily. In nature, most DNA has slight negative supercoiling that is introduced by enzymes called topoisomerases.[46] These enzymes are also needed to relieve the twisting stresses introduced into DNA strands during processes such as transcription and DNA replication.[47]
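The degree of over- or under-winding is commonly summarized as a superhelical density, comparing the actual number of helical turns to the relaxed value of about one turn per 10.4 base pairs. The sketch below uses made-up numbers purely to illustrate the sign convention: fewer turns than relaxed gives negative supercoiling.

```python
# Sketch of the usual bookkeeping for supercoiling (illustrative numbers only).
# A relaxed double helix makes about one turn per 10.4 base pairs; the
# superhelical density sigma compares the actual number of turns (linking
# number Lk) to the relaxed value Lk0. Negative sigma = underwound DNA.

RELAXED_BP_PER_TURN = 10.4

def superhelical_density(length_bp: int, linking_number: float) -> float:
    lk0 = length_bp / RELAXED_BP_PER_TURN
    return (linking_number - lk0) / lk0

# Hypothetical 5,200 bp circular DNA: relaxed Lk0 = 500 turns.
print(superhelical_density(5200, 500))  # 0.0   (relaxed)
print(superhelical_density(5200, 470))  # -0.06 (negatively supercoiled)
```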

DNA exists in many possible conformations that include A-DNA, B-DNA, and Z-DNA forms, although only B-DNA and Z-DNA have been directly observed in functional organisms.[17] The conformation that DNA adopts depends on the hydration level, DNA sequence, the amount and direction of supercoiling, chemical modifications of the bases, the type and concentration of metal ions, and the presence of polyamines in solution.[48]

The first published reports of A-DNA X-ray diffraction patterns (and also those of B-DNA) used analyses based on Patterson transforms that provided only a limited amount of structural information for oriented fibers of DNA.[49][50] An alternative analysis was then proposed by Wilkins et al., in 1953, for the in vivo B-DNA X-ray diffraction-scattering patterns of highly hydrated DNA fibers in terms of squares of Bessel functions.[51] In the same journal, James Watson and Francis Crick presented their molecular modeling analysis of the DNA X-ray diffraction patterns to suggest that the structure was a double-helix.[11]

Although the B-DNA form is most common under the conditions found in cells,[52] it is not a well-defined conformation but a family of related DNA conformations[53] that occur at the high hydration levels present in living cells. Their corresponding X-ray diffraction and scattering patterns are characteristic of molecular paracrystals with a significant degree of disorder.[54][55]

Compared to B-DNA, the A-DNA form is a wider right-handed spiral, with a shallow, wide minor groove and a narrower, deeper major groove. The A form occurs under non-physiological conditions in partly dehydrated samples of DNA, while in the cell it may be produced in hybrid pairings of DNA and RNA strands, and in enzyme-DNA complexes.[56][57] Segments of DNA where the bases have been chemically modified by methylation may undergo a larger change in conformation and adopt the Z form. Here, the strands turn about the helical axis in a left-handed spiral, the opposite of the more common B form.[58] These unusual structures can be recognized by specific Z-DNA binding proteins and may be involved in the regulation of transcription.[59]

For many years exobiologists have proposed the existence of a shadow biosphere, a postulated microbial biosphere of Earth that uses radically different biochemical and molecular processes than currently known life. One of the proposals was the existence of lifeforms that use arsenic instead of phosphorus in DNA. A 2010 report suggested this possibility in the bacterium GFAJ-1,[60][61] though the research was disputed,[61][62] and evidence suggests the bacterium actively prevents the incorporation of arsenic into the DNA backbone and other biomolecules.[63]

At the ends of the linear chromosomes are specialized regions of DNA called telomeres. The main function of these regions is to allow the cell to replicate chromosome ends using the enzyme telomerase, as the enzymes that normally replicate DNA cannot copy the extreme 3′ ends of chromosomes.[64] These specialized chromosome caps also help protect the DNA ends, and stop the DNA repair systems in the cell from treating them as damage to be corrected.[65] In human cells, telomeres are usually lengths of single-stranded DNA containing several thousand repeats of a simple TTAGGG sequence.[66]

These guanine-rich sequences may stabilize chromosome ends by forming structures of stacked sets of four-base units, rather than the usual base pairs found in other DNA molecules. Here, four guanine bases form a flat plate and these flat four-base units then stack on top of each other, to form a stable G-quadruplex structure.[68] These structures are stabilized by hydrogen bonding between the edges of the bases and chelation of a metal ion in the centre of each four-base unit.[69] Other structures can also be formed, with the central set of four bases coming from either a single strand folded around the bases, or several different parallel strands, each contributing one base to the central structure.

In addition to these stacked structures, telomeres also form large loop structures called telomere loops, or T-loops. Here, the single-stranded DNA curls around in a long circle stabilized by telomere-binding proteins.[70] At the very end of the T-loop, the single-stranded telomere DNA is held onto a region of double-stranded DNA by the telomere strand disrupting the double-helical DNA and base pairing to one of the two strands. This triple-stranded structure is called a displacement loop or D-loop.[68]

In DNA, fraying occurs when non-complementary regions exist at the end of an otherwise complementary double-strand of DNA. However, branched DNA can occur if a third strand of DNA is introduced and contains adjoining regions able to hybridize with the frayed regions of the pre-existing double-strand. Although the simplest example of branched DNA involves only three strands of DNA, complexes involving additional strands and multiple branches are also possible.[71] Branched DNA can be used in nanotechnology to construct geometric shapes, see the section on uses in technology below.

The expression of genes is influenced by how the DNA is packaged in chromosomes, in a structure called chromatin. Base modifications can be involved in packaging, with regions that have low or no gene expression usually containing high levels of methylation of cytosine bases. DNA packaging and its influence on gene expression can also occur by covalent modifications of the histone protein core around which DNA is wrapped in the chromatin structure or else by remodeling carried out by chromatin remodeling complexes (see Chromatin remodeling). There is, further, crosstalk between DNA methylation and histone modification, so they can coordinately affect chromatin and gene expression.[72]

For example, cytosine methylation produces 5-methylcytosine, which is important for X-inactivation of chromosomes.[73] The average level of methylation varies between organisms: the worm Caenorhabditis elegans lacks cytosine methylation, while vertebrates have higher levels, with up to 1% of their DNA containing 5-methylcytosine.[74] Despite the importance of 5-methylcytosine, it can deaminate to leave a thymine base, so methylated cytosines are particularly prone to mutations.[75] Other base modifications include adenine methylation in bacteria, the presence of 5-hydroxymethylcytosine in the brain,[76] and the glycosylation of uracil to produce the "J-base" in kinetoplastids.[77][78]

DNA can be damaged by many sorts of mutagens, which change the DNA sequence. Mutagens include oxidizing agents, alkylating agents and also high-energy electromagnetic radiation such as ultraviolet light and X-rays. The type of DNA damage produced depends on the type of mutagen. For example, UV light can damage DNA by producing thymine dimers, which are cross-links between pyrimidine bases.[80] On the other hand, oxidants such as free radicals or hydrogen peroxide produce multiple forms of damage, including base modifications, particularly of guanosine, and double-strand breaks.[81] A typical human cell contains about 150,000 bases that have suffered oxidative damage.[82] Of these oxidative lesions, the most dangerous are double-strand breaks, as these are difficult to repair and can produce point mutations, insertions, deletions from the DNA sequence, and chromosomal translocations.[83] These mutations can cause cancer. Because of inherent limits in the DNA repair mechanisms, if humans lived long enough, they would all eventually develop cancer.[84][85] DNA damage also occurs naturally and frequently, owing to normal cellular processes that produce reactive oxygen species, the hydrolytic activities of cellular water, and similar sources. Although most of this damage is repaired, some may remain in any cell despite the action of repair processes. This remaining DNA damage accumulates with age in mammalian postmitotic tissues, and the accumulation appears to be an important underlying cause of aging.[86][87][88]

Many mutagens fit into the space between two adjacent base pairs; this is called intercalation. Most intercalators are aromatic and planar molecules; examples include ethidium bromide, acridines, daunomycin, and doxorubicin. For an intercalator to fit between base pairs, the bases must separate, distorting the DNA strands by unwinding of the double helix. This inhibits both transcription and DNA replication, causing toxicity and mutations.[89] As a result, DNA intercalators may be carcinogens, and in the case of thalidomide, a teratogen.[90] Others such as benzo[a]pyrene diol epoxide and aflatoxin form DNA adducts that induce errors in replication.[91] Nevertheless, due to their ability to inhibit DNA transcription and replication, other similar toxins are also used in chemotherapy to inhibit rapidly growing cancer cells.[92]

DNA usually occurs as linear chromosomes in eukaryotes, and circular chromosomes in prokaryotes. The set of chromosomes in a cell makes up its genome; the human genome has approximately 3 billion base pairs of DNA arranged into 46 chromosomes.[93] The information carried by DNA is held in the sequence of pieces of DNA called genes. Transmission of genetic information in genes is achieved via complementary base pairing. For example, in transcription, when a cell uses the information in a gene, the DNA sequence is copied into a complementary RNA sequence through the attraction between the DNA and the correct RNA nucleotides. Usually, this RNA copy is then used to make a matching protein sequence in a process called translation, which depends on the same interaction between RNA nucleotides. In alternative fashion, a cell may simply copy its genetic information in a process called DNA replication. The details of these functions are covered in other articles; here the focus is on the interactions between DNA and other molecules that mediate the function of the genome.

Genomic DNA is tightly and orderly packed in a process called DNA condensation, to fit the small available volumes of the cell. In eukaryotes, DNA is located in the cell nucleus, with small amounts in mitochondria and chloroplasts. In prokaryotes, the DNA is held within an irregularly shaped body in the cytoplasm called the nucleoid.[94] The genetic information in a genome is held within genes, and the complete set of this information in an organism is called its genotype. A gene is a unit of heredity and is a region of DNA that influences a particular characteristic in an organism. Genes contain an open reading frame that can be transcribed, and regulatory sequences such as promoters and enhancers, which control transcription of the open reading frame.

In many species, only a small fraction of the total sequence of the genome encodes protein. For example, only about 1.5% of the human genome consists of protein-coding exons, with over 50% of human DNA consisting of non-coding repetitive sequences.[95] The reasons for the presence of so much noncoding DNA in eukaryotic genomes and the extraordinary differences in genome size, or C-value, among species represent a long-standing puzzle known as the "C-value enigma".[96] However, some DNA sequences that do not code protein may still encode functional non-coding RNA molecules, which are involved in the regulation of gene expression.[97]

Some noncoding DNA sequences play structural roles in chromosomes. Telomeres and centromeres typically contain few genes, but are important for the function and stability of chromosomes.[65][99] An abundant form of noncoding DNA in humans is the pseudogene, a copy of a gene that has been disabled by mutation.[100] These sequences are usually just molecular fossils, although they can occasionally serve as raw genetic material for the creation of new genes through the process of gene duplication and divergence.[101]

A gene is a sequence of DNA that contains genetic information and can influence the phenotype of an organism. Within a gene, the sequence of bases along a DNA strand defines a messenger RNA sequence, which then defines one or more protein sequences. The relationship between the nucleotide sequences of genes and the amino-acid sequences of proteins is determined by the rules of translation, known collectively as the genetic code. The genetic code consists of three-letter 'words' called codons formed from a sequence of three nucleotides (e.g. ACT, CAG, TTT).

In transcription, the codons of a gene are copied into messenger RNA by RNA polymerase. This RNA copy is then decoded by a ribosome that reads the RNA sequence by base-pairing the messenger RNA to transfer RNA, which carries amino acids. Since there are 4 bases read in 3-letter combinations, there are 64 possible codons (4^3 combinations). These encode the twenty standard amino acids, giving most amino acids more than one possible codon. There are also three 'stop' or 'nonsense' codons signifying the end of the coding region; these are the TAA, TGA, and TAG codons.
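The codon-to-amino-acid mapping can be illustrated with a small lookup table. The sketch below uses DNA-style codons as in the text and includes only a handful of the 64 codons, just enough for the made-up example sequence.

```python
# Translating a short open reading frame, reading three bases at a time.
# Only a small subset of the 64 codons of the standard genetic code is
# included here, enough for the example sequence; "*" marks a stop codon.
CODON_TABLE = {
    "ATG": "M",  # methionine, the usual start codon
    "TTT": "F", "CAG": "Q", "ACT": "T",
    "TAA": "*", "TGA": "*", "TAG": "*",  # the three stop codons
}

def translate(dna: str) -> str:
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "*":          # stop codon: end of the coding region
            break
        protein.append(aa)
    return "".join(protein)

print(translate("ATGTTTCAGACTTAA"))  # -> MFQT
```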

Cell division is essential for an organism to grow, but, when a cell divides, it must replicate the DNA in its genome so that the two daughter cells have the same genetic information as their parent. The double-stranded structure of DNA provides a simple mechanism for DNA replication. Here, the two strands are separated and then each strand's complementary DNA sequence is recreated by an enzyme called DNA polymerase. This enzyme makes the complementary strand by finding the correct base through complementary base pairing, and bonding it onto the original strand. As DNA polymerases can only extend a DNA strand in a 5′ to 3′ direction, different mechanisms are used to copy the antiparallel strands of the double helix.[102] In this way, the base on the old strand dictates which base appears on the new strand, and the cell ends up with a perfect copy of its DNA.

Naked extracellular DNA (eDNA), most of it released by cell death, is nearly ubiquitous in the environment. Its concentration in soil may be as high as 2 μg/L, and its concentration in natural aquatic environments may be as high as 88 μg/L.[103] Various possible functions have been proposed for eDNA: it may be involved in horizontal gene transfer;[104] it may provide nutrients;[105] and it may act as a buffer to recruit or titrate ions or antibiotics.[106] Extracellular DNA acts as a functional extracellular matrix component in the biofilms of several bacterial species. It may act as a recognition factor to regulate the attachment and dispersal of specific cell types in the biofilm;[107] it may contribute to biofilm formation;[108] and it may contribute to the biofilm's physical strength and resistance to biological stress.[109]

All the functions of DNA depend on interactions with proteins. These protein interactions can be non-specific, or the protein can bind specifically to a single DNA sequence. Enzymes can also bind to DNA and of these, the polymerases that copy the DNA base sequence in transcription and DNA replication are particularly important.

Structural proteins that bind DNA are well-understood examples of non-specific DNA-protein interactions. Within chromosomes, DNA is held in complexes with structural proteins. These proteins organize the DNA into a compact structure called chromatin. In eukaryotes this structure involves DNA binding to a complex of small basic proteins called histones, while in prokaryotes multiple types of proteins are involved.[110][111] The histones form a disk-shaped complex called a nucleosome, which contains two complete turns of double-stranded DNA wrapped around its surface. These non-specific interactions are formed through basic residues in the histones, making ionic bonds to the acidic sugar-phosphate backbone of the DNA, and are thus largely independent of the base sequence.[112] Chemical modifications of these basic amino acid residues include methylation, phosphorylation and acetylation.[113] These chemical changes alter the strength of the interaction between the DNA and the histones, making the DNA more or less accessible to transcription factors and changing the rate of transcription.[114] Other non-specific DNA-binding proteins in chromatin include the high-mobility group proteins, which bind to bent or distorted DNA.[115] These proteins are important in bending arrays of nucleosomes and arranging them into the larger structures that make up chromosomes.[116]

A distinct group of DNA-binding proteins is the single-stranded DNA-binding proteins, which specifically bind single-stranded DNA. In humans, replication protein A is the best-understood member of this family and is used in processes where the double helix is separated, including DNA replication, recombination and DNA repair.[117] These binding proteins seem to stabilize single-stranded DNA and protect it from forming stem-loops or being degraded by nucleases.

In contrast, other proteins have evolved to bind to particular DNA sequences. The most intensively studied of these are the various transcription factors, which are proteins that regulate transcription. Each transcription factor binds to one particular set of DNA sequences and activates or inhibits the transcription of genes that have these sequences close to their promoters. The transcription factors do this in two ways. Firstly, they can bind the RNA polymerase responsible for transcription, either directly or through other mediator proteins; this locates the polymerase at the promoter and allows it to begin transcription.[119] Alternatively, transcription factors can bind enzymes that modify the histones at the promoter. This changes the accessibility of the DNA template to the polymerase.[120]

As these DNA targets can occur throughout an organism's genome, changes in the activity of one type of transcription factor can affect thousands of genes.[121] Consequently, these proteins are often the targets of the signal transduction processes that control responses to environmental changes or cellular differentiation and development. The specificity of these transcription factors' interactions with DNA comes from the proteins making multiple contacts to the edges of the DNA bases, allowing them to "read" the DNA sequence. Most of these base interactions are made in the major groove, where the bases are most accessible.[34]

Nucleases are enzymes that cut DNA strands by catalyzing the hydrolysis of the phosphodiester bonds. Nucleases that hydrolyse nucleotides from the ends of DNA strands are called exonucleases, while endonucleases cut within strands. The most frequently used nucleases in molecular biology are the restriction endonucleases, which cut DNA at specific sequences. For instance, the EcoRV enzyme recognizes the 6-base sequence 5′-GATATC-3′ and makes a blunt cut at the centre of that site. In nature, these enzymes protect bacteria against phage infection by digesting the phage DNA when it enters the bacterial cell, acting as part of the restriction modification system.[123] In technology, these sequence-specific nucleases are used in molecular cloning and DNA fingerprinting.
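Finding a restriction site amounts to searching a sequence for the enzyme's recognition sequence. The sketch below locates EcoRV's GATATC sites in an arbitrary example sequence and reports where the blunt cut would fall.

```python
# Locating EcoRV recognition sites (5'-GATATC-3') in a sequence and
# reporting where the blunt cut falls (between GAT and ATC, i.e. after the
# third base of the site). The input sequence is an arbitrary example.

SITE = "GATATC"

def ecorv_cut_positions(seq: str):
    seq = seq.upper()
    positions = []
    start = seq.find(SITE)
    while start != -1:
        positions.append(start + 3)        # cut after the third base of the site
        start = seq.find(SITE, start + 1)  # keep searching downstream
    return positions

print(ecorv_cut_positions("AAGATATCTTGGGATATCCA"))  # -> [5, 15]
```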

Enzymes called DNA ligases can rejoin cut or broken DNA strands.[124] Ligases are particularly important in lagging strand DNA replication, as they join together the short segments of DNA produced at the replication fork into a complete copy of the DNA template. They are also used in DNA repair and genetic recombination.[124]

Topoisomerases are enzymes with both nuclease and ligase activity. These proteins change the amount of supercoiling in DNA. Some of these enzymes work by cutting the DNA helix and allowing one section to rotate, thereby reducing its level of supercoiling; the enzyme then seals the DNA break.[46] Other types of these enzymes are capable of cutting one DNA helix and then passing a second strand of DNA through this break, before rejoining the helix.[125] Topoisomerases are required for many processes involving DNA, such as DNA replication and transcription.[47]

Helicases are proteins that are a type of molecular motor. They use the chemical energy in nucleoside triphosphates, predominantly adenosine triphosphate (ATP), to break hydrogen bonds between bases and unwind the DNA double helix into single strands.[126] These enzymes are essential for most processes where enzymes need to access the DNA bases.

Polymerases are enzymes that synthesize polynucleotide chains from nucleoside triphosphates. The sequences of their products are created based on existing polynucleotide chains, which are called templates. These enzymes function by repeatedly adding a nucleotide to the 3′ hydroxyl group at the end of the growing polynucleotide chain. As a consequence, all polymerases work in a 5′ to 3′ direction.[127] In the active site of these enzymes, the incoming nucleoside triphosphate base-pairs to the template: this allows polymerases to accurately synthesize the complementary strand of their template. Polymerases are classified according to the type of template that they use.

In DNA replication, DNA-dependent DNA polymerases make copies of DNA polynucleotide chains. To preserve biological information, it is essential that the sequence of bases in each copy is precisely complementary to the sequence of bases in the template strand. Many DNA polymerases have a proofreading activity. Here, the polymerase recognizes the occasional mistakes in the synthesis reaction by the lack of base pairing between the mismatched nucleotides. If a mismatch is detected, a 3′ to 5′ exonuclease activity is activated and the incorrect base removed.[128] In most organisms, DNA polymerases function in a large complex called the replisome that contains multiple accessory subunits, such as the DNA clamp or helicases.[129]

RNA-dependent DNA polymerases are a specialized class of polymerases that copy the sequence of an RNA strand into DNA. They include reverse transcriptase, which is a viral enzyme involved in the infection of cells by retroviruses, and telomerase, which is required for the replication of telomeres.[64][130] Telomerase is an unusual polymerase because it contains its own RNA template as part of its structure.[65]

Transcription is carried out by a DNA-dependent RNA polymerase that copies the sequence of a DNA strand into RNA. To begin transcribing a gene, the RNA polymerase binds to a sequence of DNA called a promoter and separates the DNA strands. It then copies the gene sequence into a messenger RNA transcript until it reaches a region of DNA called the terminator, where it halts and detaches from the DNA. As with human DNA-dependent DNA polymerases, RNA polymerase II, the enzyme that transcribes most of the genes in the human genome, operates as part of a large protein complex with multiple regulatory and accessory subunits.[131]

A DNA helix usually does not interact with other segments of DNA, and in human cells the different chromosomes even occupy separate areas in the nucleus called "chromosome territories".[133] This physical separation of different chromosomes is important for the ability of DNA to function as a stable repository for information, as one of the few times chromosomes interact is in chromosomal crossover which occurs during sexual reproduction, when genetic recombination occurs. Chromosomal crossover is when two DNA helices break, swap a section and then rejoin.

Recombination allows chromosomes to exchange genetic information and produces new combinations of genes, which increases the efficiency of natural selection and can be important in the rapid evolution of new proteins.[134] Genetic recombination can also be involved in DNA repair, particularly in the cell's response to double-strand breaks.[135]

The most common form of chromosomal crossover is homologous recombination, where the two chromosomes involved share very similar sequences. Non-homologous recombination can be damaging to cells, as it can produce chromosomal translocations and genetic abnormalities. The recombination reaction is catalyzed by enzymes known as recombinases, such as RAD51.[136] The first step in recombination is a double-stranded break caused by either an endonuclease or damage to the DNA.[137] A series of steps catalyzed in part by the recombinase then leads to joining of the two helices by at least one Holliday junction, in which a segment of a single strand in each helix is annealed to the complementary strand in the other helix. The Holliday junction is a tetrahedral junction structure that can be moved along the pair of chromosomes, swapping one strand for another. The recombination reaction is then halted by cleavage of the junction and re-ligation of the released DNA.[138]

DNA contains the genetic information that allows all modern living things to function, grow and reproduce. However, it is unclear how long in the 4-billion-year history of life DNA has performed this function, as it has been proposed that the earliest forms of life may have used RNA as their genetic material.[139][140] RNA may have acted as the central part of early cell metabolism as it can both transmit genetic information and carry out catalysis as part of ribozymes.[141] This ancient RNA world where nucleic acid would have been used for both catalysis and genetics may have influenced the evolution of the current genetic code based on four nucleotide bases. This would occur, since the number of different bases in such an organism is a trade-off between a small number of bases increasing replication accuracy and a large number of bases increasing the catalytic efficiency of ribozymes.[142] However, there is no direct evidence of ancient genetic systems, as recovery of DNA from most fossils is impossible because DNA survives in the environment for less than one million years, and slowly degrades into short fragments in solution.[143] Claims for older DNA have been made, most notably a report of the isolation of a viable bacterium from a salt crystal 250 million years old,[144] but these claims are controversial.[145][146]

Building blocks of DNA (adenine, guanine and related organic molecules) may have been formed extraterrestrially in outer space.[147][148][149] Complex DNA and RNA organic compounds of life, including uracil, cytosine, and thymine, have also been formed in the laboratory under conditions mimicking those found in outer space, using starting chemicals, such as pyrimidine, found in meteorites. Pyrimidine, like polycyclic aromatic hydrocarbons (PAHs), the most carbon-rich chemical found in the universe, may have been formed in red giants or in interstellar cosmic dust and gas clouds.[150]

Methods have been developed to purify DNA from organisms, such as phenol-chloroform extraction, and to manipulate it in the laboratory, such as restriction digests and the polymerase chain reaction. Modern biology and biochemistry make intensive use of these techniques in recombinant DNA technology. Recombinant DNA is a man-made DNA sequence that has been assembled from other DNA sequences. Such sequences can be transformed into organisms in the form of plasmids or, in the appropriate format, by using a viral vector.[151] The genetically modified organisms produced can be used to produce products such as recombinant proteins, used in medical research,[152] or be grown in agriculture.[153][154]

Forensic scientists can use DNA in blood, semen, skin, saliva or hair found at a crime scene to identify a matching DNA of an individual, such as a perpetrator. This process is formally termed DNA profiling, but may also be called "genetic fingerprinting". In DNA profiling, the lengths of variable sections of repetitive DNA, such as short tandem repeats and minisatellites, are compared between people. This method is usually an extremely reliable technique for identifying a matching DNA.[155] However, identification can be complicated if the scene is contaminated with DNA from several people.[156] DNA profiling was developed in 1984 by British geneticist Sir Alec Jeffreys,[157] and first used in forensic science to convict Colin Pitchfork in the 1988 Enderby murders case.[158]
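At its core, comparing short tandem repeats means counting how many copies of a motif occur back to back at a locus. The toy sketch below counts consecutive repeats of a motif; both the motif and the sequence are made up for illustration and do not represent a real forensic locus.

```python
# Counting short tandem repeats (STRs): how many consecutive copies of a
# motif occur at a given position. DNA profiles compare such counts between
# people. The motif and sequence here are illustrative only.

def count_tandem_repeats(seq: str, motif: str, start: int = 0) -> int:
    count = 0
    i = start
    while seq[i:i + len(motif)] == motif:
        count += 1
        i += len(motif)
    return count

region = "GATAGATAGATAGATACC"   # four copies of GATA followed by other sequence
print(count_tandem_repeats(region, "GATA"))  # -> 4
```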

The development of forensic science, and the ability to now obtain genetic matching on minute samples of blood, skin, saliva, or hair, has led to the re-examination of many cases. Evidence can now be uncovered that was scientifically impossible to obtain at the time of the original examination. Combined with the removal of the double jeopardy law in some places, this can allow cases to be reopened where prior trials have failed to produce sufficient evidence to convince a jury. People charged with serious crimes may be required to provide a sample of DNA for matching purposes. The most obvious defence to DNA matches obtained forensically is to claim that cross-contamination of evidence has occurred. This has resulted in meticulously strict handling procedures for new cases of serious crime. DNA profiling is also used successfully to positively identify victims of mass casualty incidents,[159] bodies or body parts in serious accidents, and individual victims in mass war graves, via matching to family members.

DNA profiling is also used in DNA paternity testing to determine whether someone is the biological parent or grandparent of a child; the probability of parentage is typically 99.99% when the alleged parent is biologically related to the child. Standard testing is carried out after birth, but newer methods make it possible to test paternity while the mother is still pregnant.[160]

Deoxyribozymes, also called DNAzymes or catalytic DNA, were first discovered in 1994.[161] They are mostly single-stranded DNA sequences isolated from a large pool of random DNA sequences through a combinatorial approach called in vitro selection or systematic evolution of ligands by exponential enrichment (SELEX). DNAzymes catalyze a variety of chemical reactions, including RNA and DNA cleavage, RNA and DNA ligation, phosphorylation and dephosphorylation of amino acids, and carbon-carbon bond formation. DNAzymes can enhance the catalytic rate of chemical reactions by up to 100,000,000,000-fold compared with the uncatalyzed reaction.[162] The most extensively studied class of DNAzymes is the RNA-cleaving type, which has been used to detect different metal ions and to design therapeutic agents. Several metal-specific DNAzymes have been reported, including the GR-5 DNAzyme (lead-specific),[161] the CA1-3 DNAzymes (copper-specific),[163] the 39E DNAzyme (uranyl-specific) and the NaA43 DNAzyme (sodium-specific).[164] The NaA43 DNAzyme, which is reported to be more than 10,000-fold selective for sodium over other metal ions, was used to make a real-time sodium sensor in living cells.

Bioinformatics involves the development of techniques to store, data mine, search and manipulate biological data, including DNA nucleic acid sequence data. These have led to widely applied advances in computer science, especially string searching algorithms, machine learning and database theory.[165] String searching or matching algorithms, which find an occurrence of a sequence of letters inside a larger sequence of letters, were developed to search for specific sequences of nucleotides.[166] The DNA sequence may be aligned with other DNA sequences to identify homologous sequences and locate the specific mutations that make them distinct. These techniques, especially multiple sequence alignment, are used in studying phylogenetic relationships and protein function.[167] Data sets representing entire genomes' worth of DNA sequences, such as those produced by the Human Genome Project, are difficult to use without the annotations that identify the locations of genes and regulatory elements on each chromosome. Regions of DNA sequence that have the characteristic patterns associated with protein- or RNA-coding genes can be identified by gene finding algorithms, which allow researchers to predict the presence of particular gene products and their possible functions in an organism even before they have been isolated experimentally.[168] Entire genomes may also be compared, which can shed light on the evolutionary history of particular organisms and permit the examination of complex evolutionary events.
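String matching of the kind described here can be as simple as sliding a query along a longer sequence and counting mismatches at each offset, as in the sketch below. Real alignment tools are far more sophisticated; the sequences shown are arbitrary examples.

```python
# Naive string search with mismatch counting: slide a short query along a
# longer sequence and record how many bases differ at each offset. Real
# bioinformatics tools use far more sophisticated alignment algorithms;
# this only illustrates the basic idea. Sequences are made up.

def best_match(text: str, query: str):
    best_offset, best_mismatches = None, len(query) + 1
    for offset in range(len(text) - len(query) + 1):
        window = text[offset:offset + len(query)]
        mismatches = sum(a != b for a, b in zip(window, query))
        if mismatches < best_mismatches:
            best_offset, best_mismatches = offset, mismatches
    return best_offset, best_mismatches

print(best_match("TTACGGATTACA", "GGATTA"))  # -> (4, 0): exact hit at offset 4
```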

DNA nanotechnology uses the unique molecular recognition properties of DNA and other nucleic acids to create self-assembling branched DNA complexes with useful properties.[169] DNA is thus used as a structural material rather than as a carrier of biological information. This has led to the creation of two-dimensional periodic lattices (both tile-based and using the DNA origami method) and three-dimensional structures in the shapes of polyhedra.[170] Nanomechanical devices and algorithmic self-assembly have also been demonstrated,[171] and these DNA structures have been used to template the arrangement of other molecules such as gold nanoparticles and streptavidin proteins.[172]

Because DNA collects mutations over time, which are then inherited, it contains historical information, and, by comparing DNA sequences, geneticists can infer the evolutionary history of organisms, their phylogeny.[173] This field of phylogenetics is a powerful tool in evolutionary biology. If DNA sequences within a species are compared, population geneticists can learn the history of particular populations. This can be used in studies ranging from ecological genetics to anthropology. For example, DNA evidence is being used to try to identify the Ten Lost Tribes of Israel.[174][175]

In a paper published in Nature in January 2013, scientists from the European Bioinformatics Institute and Agilent Technologies proposed a mechanism to use DNA's ability to code information as a means of digital data storage. The group was able to encode 739 kilobytes of data into DNA code, synthesize the actual DNA, then sequence the DNA and decode the information back to its original form, with a reported 100% accuracy. The encoded information consisted of text files and audio files. A prior experiment was published in August 2012. It was conducted by researchers at Harvard University, where the text of a 54,000-word book was encoded in DNA.[176][177]
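The underlying idea of DNA data storage can be illustrated by mapping every two bits of a byte stream onto one of the four bases and back. The sketch below is a deliberately simplified illustration; it is not the encoding scheme used in the published experiments, which employed more elaborate codes with error tolerance.

```python
# A deliberately simplified illustration of DNA data storage: map every two
# bits of a byte stream onto one of the four bases and back again. The
# published schemes described above use more elaborate, error-tolerant
# encodings; this is not their method.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {b: bits for bits, b in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    bits = "".join(BASE_TO_BITS[b] for b in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"DNA"
strand = encode(message)
print(strand)                    # -> CACACATGCAAC (4 bases per byte)
assert decode(strand) == message # round-trips back to the original bytes
```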

DNA was first isolated by the Swiss physician Friedrich Miescher who, in 1869, discovered a microscopic substance in the pus of discarded surgical bandages. As it resided in the nuclei of cells, he called it "nuclein".[178][179] In 1878, Albrecht Kossel isolated the non-protein component of "nuclein", nucleic acid, and later isolated its five primary nucleobases.[180][181] In 1919, Phoebus Levene identified the base, sugar and phosphate nucleotide unit.[182] Levene suggested that DNA consisted of a string of nucleotide units linked together through the phosphate groups. Levene thought the chain was short and the bases repeated in a fixed order. In 1937, William Astbury produced the first X-ray diffraction patterns that showed that DNA had a regular structure.[183]

In 1927, Nikolai Koltsov proposed that inherited traits would be transmitted via a "giant hereditary molecule" made up of "two mirror strands that would replicate in a semi-conservative fashion using each strand as a template".[184][185] In 1928, Frederick Griffith in his experiment discovered that traits of the "smooth" form of Pneumococcus could be transferred to the "rough" form of the same bacteria by mixing killed "smooth" bacteria with the live "rough" form.[186][187] This system provided the first clear suggestion that DNA carries genetic information (the Avery–MacLeod–McCarty experiment), when Oswald Avery, along with coworkers Colin MacLeod and Maclyn McCarty, identified DNA as the transforming principle in 1943.[188] DNA's role in heredity was confirmed in 1952, when Alfred Hershey and Martha Chase in the Hershey–Chase experiment showed that DNA is the genetic material of the T2 phage.[189]

In 1953, James Watson and Francis Crick suggested what is now accepted as the first correct double-helix model of DNA structure in the journal Nature.[11] Their double-helix, molecular model of DNA was then based on one X-ray diffraction image (labeled as "Photo 51")[190] taken by Rosalind Franklin and Raymond Gosling in May 1952, and the information that the DNA bases are paired.

Experimental evidence supporting the Watson and Crick model was published in a series of five articles in the same issue of Nature.[191] Of these, Franklin and Gosling's paper was the first publication of their own X-ray diffraction data and original analysis method that partly supported the Watson and Crick model;[50][192] this issue also contained an article on DNA structure by Maurice Wilkins and two of his colleagues, whose analysis and in vivo B-DNA X-ray patterns also supported the presence in vivo of the double-helical DNA configurations as proposed by Crick and Watson for their double-helix molecular model of DNA in the prior two pages of Nature.[51] In 1962, after Franklin's death, Watson, Crick, and Wilkins jointly received the Nobel Prize in Physiology or Medicine.[193] Nobel Prizes are awarded only to living recipients. A debate continues about who should receive credit for the discovery.[194]

In an influential presentation in 1957, Crick laid out the central dogma of molecular biology, which foretold the relationship between DNA, RNA, and proteins, and articulated the "adaptor hypothesis".[195] Final confirmation of the replication mechanism that was implied by the double-helical structure followed in 1958 through the Meselson–Stahl experiment.[196] Further work by Crick and coworkers showed that the genetic code was based on non-overlapping triplets of bases, called codons, allowing Har Gobind Khorana, Robert W. Holley and Marshall Warren Nirenberg to decipher the genetic code.[197] These findings represent the birth of molecular biology.


Human genome – Wikipedia


Genomic information (infobox): graphical representation of the idealized human diploid karyotype, showing the organization of the genome into chromosomes. The drawing shows both the female (XX) and male (XY) versions of the 23rd chromosome pair, with chromosomes aligned at their centromeres; the mitochondrial DNA is not shown. NCBI genome ID: 51. Ploidy: diploid. Genome size: 3,234.83 Mb (megabase pairs) per haploid genome.

The human genome is the complete set of nucleic acid sequence for humans (Homo sapiens), encoded as DNA within the 23 chromosome pairs in cell nuclei and in a small DNA molecule found within individual mitochondria. Human genomes include both protein-coding DNA genes and noncoding DNA. Haploid human genomes, which are contained in germ cells (the egg and sperm gamete cells created in the meiosis phase of sexual reproduction before fertilization creates a zygote) consist of three billion DNA base pairs, while diploid genomes (found in somatic cells) have twice the DNA content. While there are significant differences among the genomes of human individuals (on the order of 0.1%),[1] these are considerably smaller than the differences between humans and their closest living relatives, the chimpanzees (approximately 4%[2]) and bonobos.

The Human Genome Project produced the first complete sequences of individual human genomes, with the first draft sequence and initial analysis being published on February 12, 2001.[3] The human genome was the first of all vertebrates to be completely sequenced. As of 2012, thousands of human genomes have been completely sequenced, and many more have been mapped at lower levels of resolution. The resulting data are used worldwide in biomedical science, anthropology, forensics and other branches of science. There is a widely held expectation that genomic studies will lead to advances in the diagnosis and treatment of diseases, and to new insights in many fields of biology, including human evolution.

Although the sequence of the human genome has been (almost) completely determined by DNA sequencing, it is not yet fully understood. Most (though probably not all) genes have been identified by a combination of high throughput experimental and bioinformatics approaches, yet much work still needs to be done to further elucidate the biological functions of their protein and RNA products. Recent results suggest that most of the vast quantities of noncoding DNA within the genome have associated biochemical activities, including regulation of gene expression, organization of chromosome architecture, and signals controlling epigenetic inheritance.

There are an estimated 19,000-20,000 human protein-coding genes.[4] The estimate of the number of human genes has been repeatedly revised down from initial predictions of 100,000 or more as genome sequence quality and gene finding methods have improved, and could continue to drop further.[5][6] Protein-coding sequences account for only a very small fraction of the genome (approximately 1.5%), and the rest is associated with non-coding RNA molecules, regulatory DNA sequences, LINEs, SINEs, introns, and sequences for which as yet no function has been determined.[7]

In June 2016, scientists formally announced HGP-Write, a plan to synthesize the human genome.[8][9]

The total length of the human genome is over 3 billion base pairs. The genome is organized into 22 paired chromosomes, plus the X chromosome (one in males, two in females) and, in males only, one Y chromosome. These are all large linear DNA molecules contained within the cell nucleus. The genome also includes the mitochondrial DNA, a comparatively small circular molecule present in each mitochondrion. Basic information about these molecules and their gene content, based on a reference genome that does not represent the sequence of any specific individual, are provided in the following table. (Data source: Ensembl genome browser release 68, July 2012)

Table 1 (above) summarizes the physical organization and gene content of the human reference genome, with links to the original analysis, as published in the Ensembl database at the European Bioinformatics Institute (EBI) and Wellcome Trust Sanger Institute. Chromosome lengths were estimated by multiplying the number of base pairs by 0.34 nanometers, the distance between base pairs in the DNA double helix. The number of proteins is based on the number of initial precursor mRNA transcripts, and does not include products of alternative pre-mRNA splicing, or modifications to protein structure that occur after translation.
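The same 0.34 nanometre rise per base pair gives a quick sense of physical scale; the sketch below applies it to the 3,234.83 Mb haploid genome size quoted above.

```python
# Length estimate of the kind used for Table 1: number of base pairs times
# the 0.34 nm rise per base pair along B-form DNA.
RISE_PER_BP_NM = 0.34

def stretched_length_m(base_pairs: float) -> float:
    return base_pairs * RISE_PER_BP_NM * 1e-9   # nanometres -> metres

# Haploid genome size quoted above: 3,234.83 Mb
print(round(stretched_length_m(3_234.83e6), 2))  # -> 1.1 (metres of DNA per haploid genome)
```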

The number of variations is a summary of unique DNA sequence changes that have been identified within the sequences analyzed by Ensembl as of July 2012; that number is expected to increase as further personal genomes are sequenced and examined. In addition to the gene content shown in this table, a large number of non-expressed functional sequences have been identified throughout the human genome (see below). Links open windows to the reference chromosome sequence in the EBI genome browser. The table also describes the prevalence of genes encoding structural RNAs in the genome.

MicroRNA, or miRNA, functions as a post-transcriptional regulator of gene expression. Ribosomal RNA, or rRNA, makes up the RNA portion of the ribosome and is critical in the synthesis of proteins. Small nuclear RNA, or snRNA, is found in the nucleus of the cell. Its primary function is in the processing of pre-mRNA molecules and also in the regulation of transcription factors. Small nucleolar RNA, or snoRNA, primarily functions in guiding chemical modifications to other RNA molecules.

Although the human genome has been completely sequenced for all practical purposes, there are still hundreds of gaps in the sequence. A recent study noted more than 160 euchromatic gaps, of which 50 gaps were closed.[10] However, there are still numerous gaps in the heterochromatic parts of the genome, which are much harder to sequence due to numerous repeats and other intractable sequence features.

The content of the human genome is commonly divided into coding and noncoding DNA sequences. Coding DNA is defined as those sequences that can be transcribed into mRNA and translated into proteins during the human life cycle; these sequences occupy only a small fraction of the genome (<2%). Noncoding DNA is made up of all of those sequences (ca. 98% of the genome) that are not used to encode proteins.

Some noncoding DNA contains genes for RNA molecules with important biological functions (noncoding RNA, for example ribosomal RNA and transfer RNA). The exploration of the function and evolutionary origin of noncoding DNA is an important goal of contemporary genome research, including the ENCODE (Encyclopedia of DNA Elements) project, which aims to survey the entire human genome, using a variety of experimental tools whose results are indicative of molecular activity.

Because non-coding DNA greatly outnumbers coding DNA, the concept of the sequenced genome has become a more focused analytical concept than the classical concept of the DNA-coding gene.[11][12]

The mutation rate of the human genome is an important factor in calculating evolutionary time points. Researchers have estimated it by counting the genetic differences between humans and apes and dividing that number by the age of the fossil of their most recent common ancestor. Recent studies using next-generation sequencing technologies concluded that the mutation rate is slower than previously assumed, which does not fit the accepted time points of human migration patterns and suggests a new evolutionary time scale.[13] 100,000-year-old human fossils found in Israel have served to compound this newfound uncertainty about the human migration timeline.[13]
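The calculation described above amounts to dividing observed sequence divergence by the time available for it to accumulate. A minimal sketch of that arithmetic follows; the numbers are illustrative placeholders rather than values from the cited studies:

# Illustrative mutation-rate estimate: observed divergence divided by
# the time since the common ancestor (divergence accumulates on both lineages).
fixed_differences_per_site = 0.012   # assumed ~1.2% divergence between two genomes
divergence_time_years = 6_000_000    # assumed age of the most recent common ancestor
generation_time_years = 25           # assumed human generation time

rate_per_site_per_year = fixed_differences_per_site / (2 * divergence_time_years)
rate_per_site_per_generation = rate_per_site_per_year * generation_time_years

print(f"{rate_per_site_per_year:.2e} substitutions per site per year")
print(f"{rate_per_site_per_generation:.2e} substitutions per site per generation")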

Protein-coding sequences represent the most widely studied and best understood component of the human genome. These sequences ultimately lead to the production of all human proteins, although several biological processes (e.g. DNA rearrangements and alternative pre-mRNA splicing) can lead to the production of many more unique proteins than the number of protein-coding genes.

The complete modular protein-coding capacity of the genome is contained within the exome, and consists of DNA sequences encoded by exons that can be translated into proteins. Because of its biological importance, and the fact that it constitutes less than 2% of the genome, sequencing of the exome was the first major milepost of the Human Genome Project.

Number of protein-coding genes. About 20,000 human proteins have been annotated in databases such as Uniprot.[15] Historically, estimates for the number of protein genes have varied widely, ranging up to 2,000,000 in the late 1960s,[16] but several researchers pointed out in the early 1970s that the estimated mutational load from deleterious mutations placed an upper limit of approximately 40,000 for the total number of functional loci (this includes protein-coding and functional non-coding genes).[17]

The number of human protein-coding genes is not significantly larger than that of many less complex organisms, such as the roundworm and the fruit fly. This difference may result from the extensive use of alternative pre-mRNA splicing in humans, which provides the ability to build a very large number of modular proteins through the selective incorporation of exons.

Protein-coding capacity per chromosome. Protein-coding genes are distributed unevenly across the chromosomes, ranging from a few dozen to more than 2000, with an especially high gene density within chromosomes 19, 11, and 1 (Table 1). Each chromosome contains various gene-rich and gene-poor regions, which may be correlated with chromosome bands and GC-content. The significance of these nonrandom patterns of gene density is not well understood.[18]

Size of protein-coding genes. The size of protein-coding genes within the human genome shows enormous variability (Table 2). For example, the gene for histone H1a (HIST1H1A) is relatively small and simple, lacking introns and encoding an mRNA of 781 nt and a 215-amino-acid protein (648 nt open reading frame). Dystrophin (DMD) is the largest protein-coding gene in the human reference genome, spanning a total of 2.2 Mb, while Titin (TTN) has the longest coding sequence (114,414 bp), the largest number of exons (363),[19] and the longest single exon (17,106 bp). Over the whole genome, the median size of an exon is 122 bp (mean = 145 bp), the median number of exons is 7 (mean = 8.8), and the median coding sequence encodes 367 amino acids (mean = 447 amino acids; Table 21 in [7]).

Table 2. Examples of human protein-coding genes. Chrom, chromosome. Alt splicing, alternative pre-mRNA splicing. (Data source: Ensembl genome browser release 68, July 2012)

Noncoding DNA is defined as all of the DNA sequences within a genome that are not found within protein-coding exons, and so are never represented within the amino acid sequence of expressed proteins. By this definition, more than 98% of the human genome is composed of ncDNA.

Numerous classes of noncoding DNA have been identified, including genes for noncoding RNA (e.g. tRNA and rRNA), pseudogenes, introns, untranslated regions of mRNA, regulatory DNA sequences, repetitive DNA sequences, and sequences related to mobile genetic elements.

Numerous sequences that are included within genes are also defined as noncoding DNA. These include genes for noncoding RNA (e.g. tRNA, rRNA), and untranslated components of protein-coding genes (e.g. introns, and 5' and 3' untranslated regions of mRNA).

Protein-coding sequences (specifically, coding exons) constitute less than 1.5% of the human genome.[7] In addition, about 26% of the human genome is introns.[20] Aside from genes (exons and introns) and known regulatory sequences (8-20%), the human genome contains regions of noncoding DNA. The exact amount of noncoding DNA that plays a role in cell physiology has been hotly debated. Recent analysis by the ENCODE project indicates that 80% of the entire human genome is either transcribed, binds to regulatory proteins, or is associated with some other biochemical activity.[6]

It remains controversial, however, whether all of this biochemical activity contributes to cell physiology, or whether a substantial portion of it is the result of transcriptional and biochemical noise, which must be actively filtered out by the organism.[21] Excluding protein-coding sequences, introns, and regulatory regions, much of the remaining non-coding DNA falls into other classes. Many DNA sequences that do not play a role in gene expression have important biological functions. Comparative genomics studies indicate that about 5% of the genome contains sequences of noncoding DNA that are highly conserved, sometimes on time-scales representing hundreds of millions of years, implying that these noncoding regions are under strong evolutionary pressure and positive selection.[22]

Many of these sequences regulate the structure of chromosomes by limiting the regions of heterochromatin formation and regulating structural features of the chromosomes, such as the telomeres and centromeres. Other noncoding regions serve as origins of DNA replication. Finally, several regions are transcribed into functional noncoding RNAs that regulate the expression of protein-coding genes (for example[23]), mRNA translation and stability (see miRNA), chromatin structure (including histone modifications, for example[24]), DNA methylation (for example[25]), DNA recombination (for example[26]), and cross-regulate other noncoding RNAs (for example[27]). It is also likely that many transcribed noncoding regions do not serve any role and that this transcription is the product of non-specific RNA polymerase activity.[21]

Pseudogenes are inactive copies of protein-coding genes, often generated by gene duplication, that have become nonfunctional through the accumulation of inactivating mutations. Table 1 shows that the number of pseudogenes in the human genome is on the order of 13,000,[28] and in some chromosomes is nearly the same as the number of functional protein-coding genes. Gene duplication is a major mechanism through which new genetic material is generated during molecular evolution.

For example, the olfactory receptor gene family is one of the best-documented examples of pseudogenes in the human genome. More than 60 percent of the genes in this family are non-functional pseudogenes in humans. By comparison, only 20 percent of genes in the mouse olfactory receptor gene family are pseudogenes. Research suggests that this is a species-specific characteristic, as the most closely related primates all have proportionally fewer pseudogenes. This genetic discovery helps to explain the less acute sense of smell in humans relative to other mammals.[29]

Noncoding RNA molecules play many essential roles in cells, especially in the many reactions of protein synthesis and RNA processing. Noncoding RNAs include tRNA, ribosomal RNA, microRNA, snRNA and other non-coding RNA genes, including about 60,000 long non-coding RNAs (lncRNAs).[6][30][31][32] While the number of reported lncRNA genes continues to rise and the exact number in the human genome has yet to be defined, many of them are argued to be non-functional.[33]

Many ncRNAs are critical elements in gene regulation and expression. Noncoding RNA also contributes to epigenetics, transcription, RNA splicing, and the translational machinery. The role of RNA in genetic regulation and disease offers a new potential level of unexplored genomic complexity.[34]

In addition to the ncRNA molecules that are encoded by discrete genes, the initial transcripts of protein coding genes usually contain extensive noncoding sequences, in the form of introns, 5'-untranslated regions (5'-UTR), and 3'-untranslated regions (3'-UTR). Within most protein-coding genes of the human genome, the length of intron sequences is 10- to 100-times the length of exon sequences (Table 2).

The human genome has many different regulatory sequences which are crucial to controlling gene expression. Conservative estimates indicate that these sequences make up 8% of the genome,[35] but extrapolations from the ENCODE project suggest that 20%[36] to 40%[37] of the genome is gene regulatory sequence. Some types of non-coding DNA are genetic "switches" that do not encode proteins but regulate when and where genes are expressed (called enhancers).[38]

Regulatory sequences have been known since the late 1960s.[39] The first identification of regulatory sequences in the human genome relied on recombinant DNA technology.[40] Later, with the advent of genomic sequencing, the identification of these sequences could be inferred from evolutionary conservation. The evolutionary branch between primates and mouse, for example, occurred 70-90 million years ago.[41] Computer comparisons of gene sequences that identify conserved non-coding sequences thus provide an indication of their importance in duties such as gene regulation.[42]

Other genomes have been sequenced with the same intention of aiding conservation-guided methods, for example the pufferfish genome.[43] However, regulatory sequences disappear and re-evolve during evolution at a high rate.[44][45][46]

As of 2012, the efforts have shifted toward finding interactions between DNA and regulatory proteins by the technique ChIP-Seq, or gaps where the DNA is not packaged by histones (DNase hypersensitive sites), both of which indicate where there are active regulatory sequences in the investigated cell type.[35]

Repetitive DNA sequences comprise approximately 50% of the human genome.[47]

About 8% of the human genome consists of tandem DNA arrays or tandem repeats, low-complexity repeat sequences that have multiple adjacent copies (e.g. "CAGCAGCAG..."). The tandem sequences may be of variable lengths, from two nucleotides to tens of nucleotides. These sequences are highly variable, even among closely related individuals, and so are used for genealogical DNA testing and forensic DNA analysis.[48]

Repeated sequences of fewer than ten nucleotides (e.g. the dinucleotide repeat (AC)n) are termed microsatellite sequences. Among the microsatellite sequences, trinucleotide repeats are of particular importance, as they sometimes occur within coding regions of genes for proteins and may lead to genetic disorders. For example, Huntington's disease results from an expansion of the trinucleotide repeat (CAG)n within the Huntingtin gene on human chromosome 4. Telomeres (the ends of linear chromosomes) end with a microsatellite hexanucleotide repeat of the sequence (TTAGGG)n.
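To illustrate how a trinucleotide repeat such as (CAG)n can be counted in a raw sequence, a short Python sketch follows; the example sequence and the helper name longest_repeat_count are made up for illustration and are not part of any standard analysis pipeline:

import re

def longest_repeat_count(seq, unit="CAG"):
    """Return the largest number of consecutive copies of `unit` found in `seq`."""
    runs = re.findall(f"(?:{unit})+", seq.upper())
    return max((len(run) // len(unit) for run in runs), default=0)

# Made-up example sequence containing a (CAG)5 tract.
example = "ttgcCAGCAGCAGCAGCAGgacct"
print(longest_repeat_count(example))  # 5

In clinical testing the repeat count is what matters: expansions beyond a threshold (for Huntington's disease, roughly 36 or more CAG copies) are associated with disease, while the sketch above only shows the counting step.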

Tandem repeats of longer sequences (arrays of repeated sequences 10-60 nucleotides long) are termed minisatellites.

Transposable genetic elements, DNA sequences that can replicate and insert copies of themselves at other locations within a host genome, are an abundant component in the human genome. The most abundant transposon lineage, Alu, has about 50,000 active copies,[49] and can be inserted into intragenic and intergenic regions.[50] One other lineage, LINE-1, has about 100 active copies per genome (the number varies between people).[51] Together with non-functional relics of old transposons, they account for over half of total human DNA.[52] Sometimes called "jumping genes", transposons have played a major role in sculpting the human genome. Some of these sequences represent endogenous retroviruses, DNA copies of viral sequences that have become permanently integrated into the genome and are now passed on to succeeding generations.

Mobile elements within the human genome can be classified into LTR retrotransposons (8.3% of total genome), SINEs (13.1% of total genome) including Alu elements, LINEs (20.4% of total genome), SVAs and Class II DNA transposons (2.9% of total genome).

With the exception of identical twins, all humans show significant variation in genomic DNA sequences. The human reference genome (HRG) is used as a standard sequence reference.

There are several important points concerning the human reference genome, which is a composite that does not correspond to the genome of any single individual.

Most studies of human genetic variation have focused on single-nucleotide polymorphisms (SNPs), which are substitutions in individual bases along a chromosome. Most analyses estimate that SNPs occur about once in every 1,000 base pairs, on average, in the euchromatic human genome, although they do not occur at a uniform density. Thus follows the popular statement that "we are all, regardless of race, genetically 99.9% the same",[53] although this would be somewhat qualified by most geneticists. For example, a much larger fraction of the genome is now thought to be involved in copy number variation.[54] A large-scale collaborative effort to catalog SNP variations in the human genome is being undertaken by the International HapMap Project.
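The widely quoted 99.9% figure follows directly from the SNP density given above; treating one SNP per 1,000 base pairs as a given, the arithmetic is simply:

# One single-nucleotide difference per 1,000 base pairs, on average,
# implies that two genomes agree at about 99.9% of positions.
snp_density = 1 / 1000
identity = 1 - snp_density
print(f"{identity:.1%}")  # 99.9%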

The genomic loci and length of certain types of small repetitive sequences are highly variable from person to person, which is the basis of DNA fingerprinting and DNA paternity testing technologies. The heterochromatic portions of the human genome, which total several hundred million base pairs, are also thought to be quite variable within the human population (they are so repetitive and so long that they cannot be accurately sequenced with current technology). These regions contain few genes, and it is unclear whether any significant phenotypic effect results from typical variation in repeats or heterochromatin.

Most gross genomic mutations in gamete germ cells probably result in inviable embryos; however, a number of human diseases are related to large-scale genomic abnormalities. Down syndrome, Turner Syndrome, and a number of other diseases result from nondisjunction of entire chromosomes. Cancer cells frequently have aneuploidy of chromosomes and chromosome arms, although a cause and effect relationship between aneuploidy and cancer has not been established.

Whereas a genome sequence lists the order of every DNA base in a genome, a genome map identifies the landmarks. A genome map is less detailed than a genome sequence and aids in navigating around the genome.[55][56]

An example of a variation map is the HapMap being developed by the International HapMap Project. The HapMap is a haplotype map of the human genome, "which will describe the common patterns of human DNA sequence variation."[57] It catalogs the patterns of small-scale variations in the genome that involve single DNA letters, or bases.

Researchers published the first sequence-based map of large-scale structural variation across the human genome in the journal Nature in May 2008.[58][59] Large-scale structural variations are differences in the genome among people that range from a few thousand to a few million DNA bases; some are gains or losses of stretches of genome sequence and others appear as re-arrangements of stretches of sequence. These variations include differences in the number of copies individuals have of a particular gene, deletions, translocations and inversions.

Single-nucleotide polymorphisms (SNPs) do not occur homogeneously across the human genome. In fact, there is enormous diversity in SNP frequency between genes, reflecting different selective pressures on each gene as well as different mutation and recombination rates across the genome. However, studies on SNPs are biased towards coding regions, so the data generated from them are unlikely to reflect the overall distribution of SNPs throughout the genome. The SNP Consortium protocol was therefore designed to identify SNPs with no bias towards coding regions, and the Consortium's 100,000 SNPs generally reflect sequence diversity across the human chromosomes. The SNP Consortium aimed to expand the number of SNPs identified across the genome to 300,000 by the end of the first quarter of 2001.[60]

Changes in non-coding sequence and synonymous changes in coding sequence are generally more common than non-synonymous changes, reflecting greater selective pressure reducing diversity at positions dictating amino acid identity. Transitional changes are more common than transversions, with CpG dinucleotides showing the highest mutation rate, presumably due to deamination.

A personal genome sequence is a (nearly) complete sequence of the chemical base pairs that make up the DNA of a single person. Because medical treatments have different effects on different people due to genetic variations such as single-nucleotide polymorphisms (SNPs), the analysis of personal genomes may lead to personalized medical treatment based on individual genotypes.

The first personal genome sequence to be determined was that of Craig Venter in 2007. Personal genomes had not been sequenced in the public Human Genome Project, to protect the identity of the volunteers who provided DNA samples; that sequence was derived from the DNA of several volunteers from a diverse population.[61] However, early in the Venter-led Celera Genomics genome sequencing effort the decision was made to switch from sequencing a composite sample to using DNA from a single individual, later revealed to have been Venter himself. Thus the Celera human genome sequence released in 2000 was largely that of one man. Subsequent replacement of the early composite-derived data and determination of the diploid sequence, representing both sets of chromosomes, rather than the haploid sequence originally reported, allowed the release of the first personal genome.[62] In April 2008, that of James Watson was also completed. Since then hundreds of personal genome sequences have been released,[63] including those of Desmond Tutu,[64][65] and of a Paleo-Eskimo.[66] In November 2013, a Spanish family made their personal genomics data, obtained by direct-to-consumer genetic testing with 23andMe, publicly available under a Creative Commons public domain license. This is believed to be the first such public genomics dataset for a whole family.[67]

The sequencing of individual genomes further unveiled levels of genetic complexity that had not been appreciated before. Personal genomics helped reveal the significant level of diversity in the human genome attributed not only to SNPs but to structural variations as well. However, the application of such knowledge to the treatment of disease and in the medical field is only in its very beginnings.[68] Exome sequencing has become increasingly popular as a tool to aid in diagnosis of genetic disease because the exome contributes only 1% of the genomic sequence but accounts for roughly 85% of mutations that contribute significantly to disease.[69]

Most aspects of human biology involve both genetic (inherited) and non-genetic (environmental) factors. Some inherited variation influences aspects of our biology that are not medical in nature (height, eye color, ability to taste or smell certain compounds, etc.). Moreover, some genetic disorders only cause disease in combination with the appropriate environmental factors (such as diet). With these caveats, genetic disorders may be described as clinically defined diseases caused by genomic DNA sequence variation. In the most straightforward cases, the disorder can be associated with variation in a single gene. For example, cystic fibrosis is caused by mutations in the CFTR gene, and is the most common recessive disorder in Caucasian populations, with over 1,300 different mutations known.[70]

Disease-causing mutations in specific genes are usually severe in terms of gene function and are fortunately rare, thus genetic disorders are likewise individually rare. However, since there are many genes that can vary to cause genetic disorders, in aggregate they constitute a significant component of known medical conditions, especially in pediatric medicine. Molecularly characterized genetic disorders are those for which the underlying causal gene has been identified; currently there are approximately 2,200 such disorders annotated in the OMIM database.[70]

Studies of genetic disorders are often performed by means of family-based studies. In some instances, population-based approaches are employed, particularly in the case of so-called founder populations such as those in Finland, French Canada, Utah, Sardinia, etc. Diagnosis and treatment of genetic disorders are usually performed by a geneticist-physician trained in clinical/medical genetics. The results of the Human Genome Project are likely to provide increased availability of genetic testing for gene-related disorders, and eventually improved treatment. Parents can be screened for hereditary conditions and counselled on the consequences, the probability it will be inherited, and how to avoid or ameliorate it in their offspring.

As noted above, there are many different kinds of DNA sequence variation, ranging from complete extra or missing chromosomes down to single nucleotide changes. It is generally presumed that much naturally occurring genetic variation in human populations is phenotypically neutral, i.e. has little or no detectable effect on the physiology of the individual (although there may be fractional differences in fitness defined over evolutionary time frames). Genetic disorders can be caused by any or all known types of sequence variation. To molecularly characterize a new genetic disorder, it is necessary to establish a causal link between a particular genomic sequence variant and the clinical disease under investigation. Such studies constitute the realm of human molecular genetics.

With the advent of the Human Genome Project and the International HapMap Project, it has become feasible to explore subtle genetic influences on many common disease conditions such as diabetes, asthma, migraine, schizophrenia, etc. Although some causal links have been made between genomic sequence variants in particular genes and some of these diseases, often with much publicity in the general media, these are usually not considered to be genetic disorders per se as their causes are complex, involving many different genetic and environmental factors. Thus there may be disagreement in particular cases whether a specific medical condition should be termed a genetic disorder. The categorized table below provides the prevalence as well as the genes or chromosomes associated with some human genetic disorders.

Comparative genomics studies of mammalian genomes suggest that approximately 5% of the human genome has been conserved by evolution since the divergence of extant lineages approximately 200 million years ago, containing the vast majority of genes.[72][73] The published chimpanzee genome differs from that of the human genome by 1.23% in direct sequence comparisons.[74] Around 20% of this figure is accounted for by variation within each species, leaving only ~1.06% consistent sequence divergence between humans and chimps at shared genes.[75] This nucleotide-by-nucleotide difference is dwarfed, however, by the portion of each genome that is not shared, including around 6% of functional genes that are unique to either humans or chimps.[76]

In other words, the considerable observable differences between humans and chimps may be due as much or more to genome-level variation in the number, function and expression of genes than to DNA sequence changes in shared genes. Indeed, even within humans, there has been found to be a previously unappreciated amount of copy number variation (CNV), which can make up as much as 5-15% of the human genome. Between humans, then, there could be +/- 500,000,000 base pairs of DNA, some being active genes, others inactivated, or active at different levels. The full significance of this finding remains to be seen. On average, a typical human protein-coding gene differs from its chimpanzee ortholog by only two amino acid substitutions; nearly one third of human genes have exactly the same protein translation as their chimpanzee orthologs. A major difference between the two genomes is human chromosome 2, which is equivalent to a fusion product of chimpanzee chromosomes 12 and 13 (later renamed chromosomes 2A and 2B, respectively).[77]

Humans have undergone an extraordinary loss of olfactory receptor genes during our recent evolution, which explains our relatively crude sense of smell compared to most other mammals. Evolutionary evidence suggests that the emergence of color vision in humans and several other primate species has diminished the need for the sense of smell.[78]

In September 2016, scientists reported that, based on human DNA genetic studies, all non-Africans in the world today can be traced to a single population that exited Africa between 50,000 and 80,000 years ago.[79]

The human mitochondrial DNA is of tremendous interest to geneticists, since it undoubtedly plays a role in mitochondrial disease. It also sheds light on human evolution; for example, analysis of variation in the human mitochondrial genome has led to the postulation of a recent common ancestor for all humans on the maternal line of descent (see Mitochondrial Eve).

Due to the lack of a system for checking copying errors, mitochondrial DNA (mtDNA) has a more rapid rate of variation than nuclear DNA. This 20-fold higher mutation rate allows mtDNA to be used for more accurate tracing of maternal ancestry. Studies of mtDNA in populations have allowed ancient migration paths to be traced, such as the migration of Native Americans from Siberia or of Polynesians from southeastern Asia. It has also been used to show that there is no trace of Neanderthal DNA in the European gene mixture inherited through purely maternal lineage.[80] Due to the restrictive all-or-none manner of mtDNA inheritance, this result (no trace of Neanderthal mtDNA) would be likely unless there were a large percentage of Neanderthal ancestry, or there was strong positive selection for that mtDNA (for example, going back 5 generations, only 1 of your 32 ancestors contributed to your mtDNA, so if one of these 32 was pure Neanderthal you would expect that ~3% of your autosomal DNA would be of Neanderthal origin, yet you would have a ~97% chance of having no trace of Neanderthal mtDNA).
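The parenthetical arithmetic generalizes: g generations back a person has 2^g ancestors, each expected to contribute about 1/2^g of the autosomal genome, while exactly one of them lies on the unbroken maternal line that supplies mtDNA. A small illustrative sketch (the function name is invented for the example):

# Maternal-line arithmetic: of the 2**g ancestors g generations back,
# exactly one is the mtDNA donor, while each contributes on average
# 1 / 2**g of the autosomal genome.
def ancestry_fractions(generations):
    ancestors = 2 ** generations
    autosomal_share = 1 / ancestors       # expected autosomal contribution per ancestor
    mtdna_carrier_chance = 1 / ancestors  # chance a given ancestor is the mtDNA donor
    return ancestors, autosomal_share, mtdna_carrier_chance

anc, auto, mt = ancestry_fractions(5)
print(anc)               # 32 ancestors five generations back
print(f"{auto:.1%}")     # ~3.1% expected autosomal contribution
print(f"{1 - mt:.0%}")   # ~97% chance that a given ancestor is not the mtDNA donor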

Epigenetics describes a variety of features of the human genome that transcend its primary DNA sequence, such as chromatin packaging, histone modifications and DNA methylation, and which are important in regulating gene expression, genome replication and other cellular processes. Epigenetic markers strengthen and weaken transcription of certain genes but do not affect the actual sequence of DNA nucleotides. DNA methylation is a major form of epigenetic control over gene expression and one of the most highly studied topics in epigenetics. During development, the human DNA methylation profile experiences dramatic changes. In early germ line cells, the genome has very low methylation levels. These low levels generally describe active genes. As development progresses, parental imprinting tags lead to increased methylation activity.[81][82]

Epigenetic patterns can be identified between tissues within an individual as well as between individuals themselves. Identical genes that have differences only in their epigenetic state are called epialleles. Epialleles can be placed into three categories: those directly determined by an individual's genotype, those influenced by genotype, and those entirely independent of genotype. The epigenome is also influenced significantly by environmental factors. Diet, toxins, and hormones impact the epigenetic state. Studies in dietary manipulation have demonstrated that methyl-deficient diets are associated with hypomethylation of the epigenome. Such studies establish epigenetics as an important interface between the environment and the genome.[83]

Continue reading here:
Human genome - Wikipedia

Posted in Genome | Comments Off on Human genome – Wikipedia

Dermatitis – Wikipedia

Posted: at 11:31 pm

Dermatitis, also known as eczema, is a group of diseases that results in inflammation of the skin.[1] These diseases are characterized by itchiness, red skin, and a rash.[1] In cases of short duration there may be small blisters while in long term cases the skin may become thickened.[1] The area of skin involved can vary from small to the entire body.[1][2]

Dermatitis is a group of skin conditions that includes atopic dermatitis, allergic contact dermatitis, irritant contact dermatitis, and stasis dermatitis.[1][2] The exact cause of dermatitis is often unclear.[2] Cases are believed to often involve a combination of irritation, allergy, and poor venous return. The type of dermatitis is generally determined by the person's history and the location of the rash. For example, irritant dermatitis often occurs on the hands of people who frequently get them wet. Allergic contact dermatitis, however, can occur following brief exposures to specific substances to which a person is sensitive.[1]

Treatment of atopic dermatitis is typically with moisturizers and steroid creams.[3] The steroid creams should generally be of mid to high strength and used for less than two weeks at a time, as side effects can occur.[4] Antibiotics may be required if there are signs of skin infection.[2] Contact dermatitis is typically treated by avoiding the allergen or irritant.[5][6] Antihistamines may be used to help with sleep and to decrease nighttime scratching.[2]

Dermatitis was estimated to affect 334 million people globally in 2013.[7] Atopic dermatitis is the most common type and generally starts in childhood.[1][2] In the United States it affects about 10-30% of people.[2] Contact dermatitis is two times more common in females than males.[8] Allergic contact dermatitis affects about 7% of people at some point in time.[9] Irritant contact dermatitis is common, especially among people who do certain jobs, however exact rates are unclear.[10]

Dermatitis symptoms vary with the different forms of the condition. They range from flat rashes to bumpy rashes or blisters. Although every type of dermatitis has different symptoms, there are certain signs that are common to all of them, including redness of the skin, swelling, itching and skin lesions, sometimes with oozing and scarring. Also, the area of skin on which the symptoms appear tends to differ with each type of dermatitis, whether on the neck, wrist, forearm, thigh or ankle. Although the location may vary, the primary symptom of this condition is itchy skin. More rarely, it may appear on the genital area, such as the vulva or scrotum.[11] Symptoms of this type of dermatitis may be very intense and may come and go. Irritant contact dermatitis is usually more painful than itchy.

Although the symptoms of atopic dermatitis vary from person to person, the most common symptoms are dry, itchy, red skin. Typical affected skin areas include the folds of the arms, the back of the knees, wrists, face and hands.

Dermatitis herpetiformis symptoms include itching, stinging and a burning sensation. Papules and vesicles are commonly present. The small red bumps experienced in this type of dermatitis are usually about 1 cm in size, red in color and may be found symmetrically grouped or distributed on the upper or lower back, buttocks, elbows, knees, neck, shoulders, and scalp.[12] Less frequently, the rash may appear inside the mouth or near the hairline.

The symptoms of seborrheic dermatitis, on the other hand, tend to appear gradually, from dry or greasy scaling of the scalp (dandruff) to hair loss. In severe cases, pimples may appear along the hairline, behind the ears, on the eyebrows, on the bridge of the nose, around the nose, on the chest, and on the upper back.[13] In newborns, the condition causes a thick and yellowish scalp rash, often accompanied by a diaper rash.

Perioral dermatitis refers to a red bumpy rash around the mouth.[14]

A patch of dermatitis that has been scratched

The cause of dermatitis is unknown but is presumed to be a combination of genetic and environmental factors.[2]

The hygiene hypothesis postulates that the cause of asthma, eczema, and other allergic diseases is an unusually clean environment. It is supported by epidemiologic studies for asthma.[15] The hypothesis states that exposure to bacteria and other immune system modulators is important during development, and missing out on this exposure increases risk for asthma and allergy.

While it has been suggested that eczema may sometimes be an allergic reaction to the excrement from house dust mites,[16] with up to 5% of people showing antibodies to the mites,[17] the overall role this plays awaits further corroboration.[18]

A number of genes have been associated with eczema, one of which is filaggrin.[3] Genome-wide studies found three new genetic variants associated with eczema: OVOL1, ACTL9 and IL4-KIF3A.[19]

Eczema occurs about three times more frequently in individuals with celiac disease and about two times more frequently in relatives of those with celiac disease, potentially indicating a genetic link between the two conditions.[20][21]

Diagnosis of eczema is based mostly on the history and physical examination.[3] However, in uncertain cases, skin biopsy may be useful.[22] Those with eczema may be especially prone to misdiagnosis of food allergies.[23]

Patch tests are used in the diagnosis of allergic contact dermatitis.[24][25]

The term "eczema" refers to a set of clinical characteristics. Classification of the underlying diseases has been haphazard and unsystematic, with many synonyms being used to describe the same condition.

A type of dermatitis may be described by location (e.g. hand eczema), by specific appearance (eczema craquele or discoid), or by possible cause (varicose eczema). Further adding to the confusion, many sources use the term eczema interchangeably for the most common type of eczema (atopic dermatitis).

The European Academy of Allergology and Clinical Immunology (EAACI) published a position paper in 2001, which simplifies the nomenclature of allergy-related diseases, including atopic and allergic contact eczemas.[26] Non-allergic eczemas are not affected by this proposal.

There are several different types of dermatitis including atopic dermatitis, contact dermatitis, stasis dermatitis, and seborrheic eczema.[2] Many use the term dermatitis and eczema synonymously.[1]

Others use the term eczema to specifically mean atopic dermatitis.[27][28][29] Atopic dermatitis is also known as atopic eczema.[3] In some languages, dermatitis and eczema mean the same thing, while in other languages dermatitis implies an acute condition and eczema a chronic one.[30]

There is no good evidence that a mother's diet during pregnancy, the formula used, or breastfeeding changes the risk.[32] There is tentative evidence that probiotics in infancy may reduce rates, but it is insufficient to recommend their use.[33]

People with eczema should not get the smallpox vaccination due to risk of developing eczema vaccinatum, a potentially severe and sometimes fatal complication.[34]

There is no known cure for some types of dermatitis, with treatment aiming to control symptoms by reducing inflammation and relieving itching. Contact dermatitis is treated by avoiding what is causing it.

Bathing once or more a day is recommended.[3] It is a misconception that bathing dries the skin in people with eczema.[35] Soaps should be avoided as they tend to strip the skin of natural oils and lead to excessive dryness.[36] It is not clear whether dust mite reduction helps with eczema.

There has not been adequate evaluation of changing the diet to reduce eczema.[37][38] There is some evidence that infants with an established egg allergy may have a reduction in symptoms if eggs are eliminated from their diets.[37] Benefits have not been shown for other elimination diets, though the studies are small and poorly executed.[37][38] Establishing that there is a food allergy before dietary change could avoid unnecessary lifestyle changes.[37]

People can also wear clothing designed to manage the itching, scratching and peeling.[39]

Moisturizing agents (also known as emollients) are recommended at least once or twice a day.[3] Oilier formulations appear to be better and water-based formulations are not recommended.[3] It is unclear if moisturizers that contain ceramides are more or less effective than others.[40] Products that contain dyes, perfumes, or peanuts should not be used.[3] Occlusive dressings at night may be useful.[3]

There is little evidence for antihistamines and they are thus not generally recommended.[3] Sedative antihistamines, such as diphenhydramine, may be tried in those who are unable to sleep due to eczema.[3]

If symptoms are well controlled with moisturizers, steroids may only be required when flares occur.[3] Corticosteroids are effective in controlling and suppressing symptoms in most cases.[41] Once-daily use is generally enough.[3] For mild-to-moderate eczema a weak steroid may be used (e.g. hydrocortisone), while in more severe cases a higher-potency steroid (e.g. clobetasol propionate) may be used. In severe cases, oral or injectable corticosteroids may be used. While these usually bring about rapid improvements, they have greater side effects.

Long-term use of topical steroids may result in skin atrophy, striae, and telangiectasia.[3] Their use on delicate skin (face or groin) is therefore approached with caution.[3] They are, however, generally well tolerated.[42] Red burning skin, where the skin turns red upon stopping steroid use, has been reported among adults who use topical steroids at least daily for more than a year.[43]

Topical immunosuppressants like pimecrolimus and tacrolimus may be better in the short term and appear equal to steroids after a year of use.[44] Their use is reasonable in those who do not respond to or are not tolerant of steroids.[45] Treatments are typically recommended for short or fixed periods of time rather than indefinitely.[3] Tacrolimus 0.1% has generally proved more effective than pimecrolimus, and equal in effect to mid-potency topical steroids.[32]

The United States Food and Drug Administration has issued a health advisory about a possible risk of lymph node or skin cancer from these products;[46] however, subsequent research has not supported these concerns.[45] A major debate in the UK has been about the cost of these medications and, given only finite NHS resources, when they are most appropriate to use.[47]

When eczema is severe and does not respond to other forms of treatment, systemic immunosuppressants are sometimes used. Immunosuppressants can cause significant side effects and some require regular blood tests. The most commonly used are ciclosporin, azathioprine, and methotrexate.

Light therapy using ultraviolet light has tentative support but the quality of the evidence is not very good.[48] A number of different types of light may be used including UVA and UVB;[49] in some forms of treatment, light sensitive chemicals such as psoralen are also used. Overexposure to ultraviolet light carries its own risks, particularly that of skin cancer.[50]

There is currently no scientific evidence for the claim that sulfur treatment relieves eczema.[51] It is unclear whether Chinese herbs help or harm.[52] Dietary supplements are commonly used by people with eczema.[53] Neither evening primrose oil nor borage seed oil taken orally has been shown to be effective.[54] Both are associated with gastrointestinal upset.[54] Probiotics do not appear to be effective.[55] There is insufficient evidence to support the use of zinc, selenium, vitamin D, vitamin E, pyridoxine (vitamin B6), sea buckthorn oil, hempseed oil, sunflower oil, or fish oil as dietary supplements.[53]

Other remedies lacking evidence to support them include chiropractic spinal manipulation and acupuncture.[56] There is little evidence supporting the use of psychological treatments.[57][needs update] While dilute bleach baths have been used for infected dermatitis there is little evidence for this practice.[58]

Most cases are well managed with topical treatments and ultraviolet light.[3] About 2% of cases however are not.[3] In more than 60% the condition goes away by adolescence.[3]

Globally dermatitis affected approximately 230 million people as of 2010 (3.5% of the population).[59] Dermatitis is most commonly seen in infancy, with female predominance of eczema presentations occurring during the reproductive period of 15-49 years.[60] In the UK about 20% of children have the condition, while in the United States about 10% are affected.[3]

Although little data on the rates of eczema over time exists prior to the 1940s, the rate of eczema has been found to have increased substantially in the latter half of the 20th century, with eczema in school-aged children increasing between the late 1940s and 2000.[61] In the developed world there has been a rise in the rate of eczema over time. The incidence and lifetime prevalence of eczema in England have been seen to increase in recent times.[3][62]

Dermatitis affected about 10% of U.S. workers in 2010, representing over 15 million workers with dermatitis. Prevalence rates were higher among females than among males, and among those with some college education or a college degree compared to those with a high school diploma or less. Workers employed in healthcare and social assistance industries and life, physical, and social science occupations had the highest rates of reported dermatitis. About 6% of dermatitis cases among U.S. workers were attributed to work by a healthcare professional, indicating that the prevalence rate of work-related dermatitis among workers was at least 0.6%.[63]

From Ancient Greek ἔκζεμα (ékzema),[64] from ἐκζέ-ειν (ekzé-ein), from ἐκ (ek) "out" + ζέ-ειν (zé-ein) "to boil"

The term "atopic dermatitis" was coined in 1933 by Wise and Sulzberger.[65]Sulfur as a topical treatment for eczema was fashionable in the Victorian and Edwardian eras.[51]

The word dermatitis is from the Greek δέρμα (derma) "skin" and -ῖτις (-itis) "inflammation", and eczema is from Greek ἔκζεμα (ekzema) "eruption".[66]

The terms "hypoallergenic" and "doctor tested" are not regulated,[67] and no research has been done showing that products labeled "hypoallergenic" are in fact less problematic than any others.

Read the rest here:
Dermatitis - Wikipedia

Posted in Eczema | Comments Off on Dermatitis – Wikipedia

Portal:Libertarianism – Wikipedia

Posted: at 11:31 pm

The Ludwig von Mises Institute (LvMI), based in Auburn, Alabama, is a libertarian academic organization engaged in research and scholarship in the fields of economics, philosophy and political economy. Its scholarship is inspired by the work of Austrian School economist Ludwig von Mises. Anarcho-capitalist thinkers such as Murray Rothbard have also had a strong influence on the Institute's work. The Institute is funded entirely through private donations.

The Institute does not consider itself a traditional think tank. While it has working relationships with individuals such as U.S. Representative Ron Paul and organizations like the Foundation for Economic Education, it does not seek to implement public policy. It has no formal affiliation with any political party (including the Libertarian Party), nor does it receive funding from any. The Institute also has a formal policy of not accepting contract work from corporations or other organizations.

The Institute's official motto is Tu ne cede malis sed contra audentior ito, which comes from Virgil's Aeneid, Book VI; the motto means "do not give in to evil but proceed ever more boldly against it." Early in his life, Mises chose this sentence to be his guiding principle in life. It is prominently displayed throughout the Institute's campus, on their website and on memorabilia.

Lysander Spooner (19 January 1808 - 14 May 1887) was a libertarian,[1] individualist anarchist, entrepreneur, political philosopher, abolitionist, supporter of the labor movement, and legal theorist of the 19th century. He is also known for competing with the U.S. Post Office with his American Letter Mail Company, which was forced out of business by the United States government. He has been identified by some contemporary writers as an anarcho-capitalist,[2][3] while other writers and activists believe he was anti-capitalist for vocalizing opposition to wage labor.[4]

Later known as an early individualist anarchist, Spooner advocated what he called Natural Law or the "Science of Justice" wherein acts of initiatory coercion against individuals and their property were considered "illegal" but the so-called criminal acts that violated only man-made legislation were not.

He believed that the price of borrowing capital could be brought down by competition of lenders if the government de-regulated banking and money. This he believed would stimulate entrepreneurship. In his Letter to Cleveland, Spooner argued, "All the great establishments, of every kind, now in the hands of a few proprietors, but employing a great number of wage labourers, would be broken up; for few or no persons, who could hire capital and do business for themselves would consent to labour for wages for another."[5] Spooner took his own advice and started his own business called American Letter Mail Company which competed with the U.S. Post Office.

Go here to see the original:
Portal:Libertarianism - Wikipedia

Posted in Libertarianism | Comments Off on Portal:Libertarianism – Wikipedia

Victimless Crimes Liberal Democrats

Posted: October 19, 2016 at 4:16 am

The LDP does not generally support the criminalisation of victimless crimes and seeks to reduce the intrusion of government into these areas.

Victimless crime is a term used to refer to behaviour that is illegal but does not violate or threaten the rights of anyone else. It can include situations where an individual acts alone as well as consensual acts in which two or more persons agree to commit a criminal offence in which no other person is involved.

The issue in situations of victimless crime is the same. Society has created a formal framework of laws to prohibit types of conduct thought to be against the public interest. Laws proscribing homicide, assaults and rape are common to most cultures. Thus, when the supposed victim freely consents to be the victim in one of these crimes, the question is whether the state should make an exception from the law for this situation.

Take assisted suicide as an example. If one person intentionally takes the life of another, this is usually murder. If the motive for this is to collect the inheritance, society has no difficulty in ignoring the motive and convicting the killer. But if the motive is to relieve the suffering of the victim by providing a clean death that would otherwise be denied, can society so quickly reject the motive?

It is a case of balancing the harms. On the one hand, society could impose pain and suffering on the victim by forcing him or her to endure a long decline into death. Or society could permit a system for terminating life under controlled circumstances so that the victim's wishes could be respected without exposing others to the criminal system for assisting in realising those wishes.

But victimless crimes are not always so weighty. Some examples of low level victimless activities that may be criminalised include:

Victimless crimes usually regarded more seriously include:

This includes the elderly and seriously ill as well as less obvious scenarios. For example, helping someone such as a celebrity facing exposure for socially unacceptable behaviour who seeks a gun or other means to end life; a driver trapped in a burning tanker full of gasoline who begs a passing armed police officer to shoot him rather than let him burn to death; a person who suffers traumatic injury in a road accident and wishes to avoid the humiliation and pain of a lingering slow death.

These situations are distinguishable from soliciting the cessation of life-sustaining treatment so that an injured or ill person may die a natural death, or leaving instructions not to resuscitate in the event of death.

Consideration of victimless crime involving more than one participant needs to take account of whether all the participants are capable of giving genuine consent. This may not be the case if one or more of the participants are:

Libertarianism focuses on the autonomy of the individual, asserting each person's right to live their lives with the least possible interference from the law. Libertarians do not necessarily approve, sanction or endorse the victimless action that is criminalised. Indeed, they may strongly disapprove.

Where they differ from non-libertarians is their belief that the government should be exceedingly reluctant to intervene. People are entitled to live their lives and make their own choices whether or not those choices are wise or the same as others would make, provided they do so voluntarily and without infringing the rights of others.

Without necessarily supporting, advocating or approving of them, the LDP does not generally support the criminalisation of victimless crimes. Wherever possible it will seek to reduce the intrusion of government into these areas.

It nonetheless recognises that not all victimless crimes are capable of being entirely de-regulated. It acknowledges there may be unintended coercive consequences from re-legalisation and that some regulation may be warranted in specific instances.

The LDP also favours strong sanctions against crimes that infringe the rights of others, whether deliberately or through negligence.

Further information

Mandatory bicycle helmets: not only are such laws offensive to liberty, but they do not achieve their aim.

See the original post here:

Victimless Crimes Liberal Democrats

Posted in Victimless Crimes | Comments Off on Victimless Crimes Liberal Democrats

Phillip D. Collins — Luciferianism: The Religion of …

Posted: at 4:14 am

Other Collins Articles:

Darwinism and the Rise of Gnosticism

Engineering Evolution: The Alchemy of Eugenics

More Collins Articles

LUCIFERIANISM: THE RELIGION OF APOTHEOSIS

Phillip D. Collins January 17, 2006 NewsWithViews.com

Luciferianism constitutes the nucleus of the ruling class religion. While there are definitely political and economic rationales for elite criminality, Luciferianism can account for the longevity of many of the oligarchs' projects. Many of the longest and most brutal human endeavors have been underpinned by some form of religious zealotry. The Crusades testify to this historical fact. Likewise, the power elite's ongoing campaign to establish a socialist totalitarian global government has Luciferianism to thank for both its longevity and frequently violent character. In the mind of the modern oligarch, Luciferianism provides religious legitimacy for otherwise morally questionable plans.

Luciferianism is the product of religious engineering, which sociologist William Sims Bainbridge defines as the conscious, systematic, skilled creation of a new religion ("New Religions, Science, and Secularization," no pagination). In actuality, this is a tradition that even precedes Bainbridge. It has been the practice of Freemasonry for years. It was also the practice of Masonry's religious and philosophical progenitors, the ancient pagan Mystery cults. The inner doctrines of the Mesopotamian secret societies provided the theological foundations for the Christian and Judaic heresies, Kabbalism and Gnosticism. All modern Luciferian philosophy finds scientific legitimacy in the Gnostic myth of Darwinism. As evolutionary thought was popularized, variants of Luciferianism were popularized along with it (particularly in the form of secular humanism, which shall be examined shortly). A historical corollary of this popularization has been the rise of several cults and mass movements, exemplified by the various mystical sects and gurus of the sixties counterculture. The metastasis of Luciferian thinking continues to this very day.

Luciferianism represents a radical revaluation of humanity's ageless adversary: Satan. It is the ultimate inversion of good and evil. The formula for this inversion is reflected by the narrative paradigm of the Gnostic Hypostasis myth. As opposed to the original Biblical version, the Gnostic account represents a revaluation of the Hebraic story of the first man's temptation, the desire of mere men to be as gods by partaking of the tree of the knowledge of good and evil (Raschke 26). Carl Raschke elaborates:

In The Hypostasis of the Archons, an Egyptian Gnostic document, we read how the traditional story of mans disobedience toward God is reinterpreted as a universal conflict between knowledge (gnosis) and the dark powers (exousia) of the world, which bind the human soul in ignorance. The Hypostasis describes man as a stepchild of Sophia (Wisdom) created according to the model of aion, the imperishable realm of eternity.

On the other hand, it is neither God the Imperishable nor Sophia who actually is responsible in the making of man. On the contrary, the task is undertaken by the archons, the demonic powers who, because of their weakness, entrap man in a material body and thus cut him off from his blessed origin. They place him in paradise and enjoin him against eating of the tree of knowledge. The prohibition, however, is viewed by the author of the text not as a holy command but as a malignant effort on the part of the inferior spirits to prevent Adam from having true communion with the High God, from gaining authentic gnosis. (26)

According to this bowdlerization, Adam is consistently contacted by the High God in hopes of reinitiating man's quest for gnosis (26). The archons intervene and create Eve to distract Adam from the pursuit of gnosis (26-27). However, this Gnostic Eve is actually a sort of undercover agent for the High God, who is charged with divulging to Adam the truth that has been withheld from him (27). The archons manage to sabotage this covert operation by facilitating sexual intercourse between Adam and Eve, an act that Gnostics contend was designed to defile the woman's spiritual nature (27). At this juncture, the Hypostasis reintroduces a familiar antagonist from the original Genesis account:

But now the principle of feminine wisdom reappears in the form of the serpent, called the Instructor, who tells the mortal pair to defy the prohibition of the archons and eat of the tree of knowledge. (27)

The serpent successfully entices Adam and Eve to eat the forbidden fruit, but the bodily defilement of the woman prevents man from understanding the true motive underpinning the act (27). Thus, humanity is fettered by the archons' curse, suggesting that the orthodox theological view of the violation of the command as sin must be regarded anew as the mindless failure to commit the act rightly in the first place (27). In this revisionist context, the serpent is no longer Satan, but is an incognito savior instead (27). Meanwhile, God's role as benevolent Heavenly Father is vilified:

The God of Genesis, who comes to reprimand Adam and Eve after their transgression, is rudely caricatured in this tale as the Arrogant archon who opposes the will of the authentic heavenly father. (27)

Of course, within this Gnostic narrative, God incarnate is equally belittled. Jesus Christ, the Word made flesh, is reduced to little more than a forerunner of the coming Gnostic adept. According to the Gnostic mythology, Jesus was but a mere type of this perfect man (27). He came as a teacher and an exemplar, to show others the path to illumination (27-28). The true messiah has yet to come. Equally, the serpent is only a precursor to this messiah. He only initiates man's journey towards gnosis. The developmental voyage must be further facilitated by the serpent's successor, the Gnostic Christ. The Hypostasis provides the paradigmatic template for all Luciferian mythologies.

Like the Hypostasis, the binary opposition of Luciferian mythology caricatures Jehovah as an oppressive tyrant. He becomes the archon of arrogance, the embodiment of ignorance and religious superstition. Satan, who retains his heavenly title of Lucifer, is the liberator of humanity. Masonry, which acts as the contemporary retainer for the ancient Mystery religion, reconceptualizes Satan in a similar fashion. In Morals and Dogma, 33rd degree Freemason Albert Pike candidly exalts the fallen angel:

LUCIFER, the Light-bearer! Strange and mysterious name to give to the Spirit of Darkness! Lucifer, the Son of the Morning! Is it he who bears the Light, and with its splendors intolerable blinds feeble, sensual, or selfish Souls? Doubt it not. (321)

He makes man aware of his own innate divinity and promises to unlock the god within us all. This theme of apotheosis underpinned both Gnosticism and the pagan Mystery religions. While Gnosticism's origins with the Ancient Mystery cults remain a source of contention amongst scholars, its promises of liberation from humanity's material side are strongly akin to the old pagan Mystery's variety of "psychic therapy" (28). In addition, the Ancient Mystery religion promised the:

opportunity to erase the curse of mortality by direct encounter with the patron deity, or in many instances by actually undergoing an apotheosis, a transfiguration of human into divine (28).

Like some varieties of Satanism, Luciferianism does not depict the devil as a literal metaphysical entity. Lucifer only symbolizes the cognitive powers of man. He is the embodiment of science and reason. It is the Luciferians' religious conviction that these two facilitative forces will dethrone God and apotheosize man. It comes as little surprise that the radicals of the early revolutionary faith celebrated the arrival of Darwinism. Evolutionary theory was the edifying science of Promethean zealotry and the new secular religion of the scientific dictatorship. According to Masonic scholar Wilmshurst, the completion of human evolution involves man becoming a god-like being and unifying his consciousness with the Omniscient (94).

During the Enlightenment, Luciferianism was disseminated on the popular level as secular humanism. All of the governing precepts of Luciferianism are encompassed by secular humanism. This is made evident by the philosophy's rejection of theistic morality and enthronement of man as his own absolute moral authority. While Luciferianism has no sacred texts, Humanist Manifesto I and II succinctly delineate its central tenets. Whittaker Chambers, a former member of the communist underground in America, eloquently summarizes this truth:

Humanism is not new. It is, in fact, man's second oldest faith. Its promise was whispered in the first days of Creation under the Tree of the knowledge of Good and Evil: Ye shall be as gods. (Qtd. in Baker 206)

Transhumanism offers an updated, hi-tech variety of Luciferianism. The appellation "Transhumanism" was coined by evolutionary biologist Julian Huxley ("Transhumanism," Wikipedia: The Free Encyclopedia, no pagination). Huxley defined the transhuman condition as "man remaining man, but transcending himself, by realizing new possibilities of and for his human nature" (no pagination). However, by 1990, Dr. Max More would radically redefine Transhumanism as follows:

Transhumanism is a class of philosophies that seek to guide us towards a posthuman condition. Transhumanism shares many elements of humanism, including a respect for reason and science, a commitment to progress, and a valuing of human (or transhuman) existence in this life... Transhumanism differs from humanism in recognizing and anticipating the radical alterations in the nature and possibilities of our lives resulting from various sciences and technologies... (No pagination)

Transhumanism advocates the use of nanotechnology, biotechnology, cognitive science, and information technology to propel humanity into a posthuman condition. Once he has arrived at this condition, man will cease to be man. He will become a machine, immune to death and all the other weaknesses intrinsic to his former human condition. The ultimate objective is to become a god. Transhumanism is closely aligned with the cult of artificial intelligence. In the very influential book The Age of Spiritual Machines, AI high priest Ray Kurzweil asserts that technological immortality could be achieved through magnetic resonance imaging or some technique of reading and replicating the human brain's neural structure within a computer ("Technological Immortality," no pagination). Through the merger of computers and humans, Kurzweil believes that man will become "god-like spirits inhabiting cyberspace as well as the material universe" (no pagination).

Following the Biblical revisionist tradition of the Gnostic Hypostasis myth, Transhumanists invert the roles of God and Satan. In an essay entitled "In Praise of the Devil," Transhumanist ideologue Max More depicts Lucifer as a heroic rebel against a tyrannical God:

The Devil-Lucifer--is a force for good (where I define 'good' simply as that which I value, not wanting to imply any universal validity or necessity to the orientation). 'Lucifer' means 'light-bringer' and this should begin to clue us in to his symbolic importance. The story is that God threw Lucifer out of Heaven because Lucifer had started to question God and was spreading dissension among the angels. We must remember that this story is told from the point of view of the Godists (if I may coin a term) and not from that of the Luciferians (I will use this term to distinguish us from the official Satanists with whom I have fundamental differences). The truth may just as easily be that Lucifer resigned from heaven. (No pagination)

According to More, Lucifer probably exiled himself out of moral outrage towards the oppressive Jehovah:

God, being the well-documented sadist that he is, no doubt wanted to keep Lucifer around so that he could punish him and try to get him back under his (God's) power. Probably what really happened was that Lucifer came to hate God's kingdom, his sadism, his demand for slavish conformity and obedience, his psychotic rage at any display of independent thinking and behavior. Lucifer realized that he could never fully think for himself and could certainly not act on his independent thinking so long as he was under God's control. Therefore he left Heaven, that terrible spiritual-State ruled by the cosmic sadist Jehovah, and was accompanied by some of the angels who had had enough courage to question God's authority and his value-perspective. (No pagination)

More proceeds to reiterate 33rd Degree Mason Albert Pike's depiction of Lucifer:

Lucifer is the embodiment of reason, of intelligence, of critical thought. He stands against the dogma of God and all other dogmas. He stands for the exploration of new ideas and new perspectives in the pursuit of truth. (No pagination)

Lucifer is even considered a patron saint by some Transhumanists ("Transtopian Symbolism," no pagination). Transhumanism retains the paradigmatic character of Luciferianism, albeit in a futurist context. Worse still, Transhumanism is hardly some marginalized cult. Richard Hayes, executive director of the Center for Genetics and Society, elaborates:

Last June at Yale University, the World Transhumanist Association held its first national conference. The Transhumanists have chapters in more than 20 countries and advocate the breeding of "genetically enriched" forms of "post-human" beings. Other advocates of the new techno-eugenics, such as Princeton University professor Lee Silver, predict that by the end of this century, "All aspects of the economy, the media, the entertainment industry, and the knowledge industry [will be] controlled by members of the GenRich class. . .Naturals [will] work as low-paid service providers or as laborers. . ." (No pagination)


With a growing body of academic luminaries and a techno-eugenical vision for the future, Transhumanism is carrying the banner of Luciferianism into the 21st century. Through genetic engineering and biotechnological augmentation of the physical body, Transhumanists are attempting to achieve the very same objective of their patron saint: "I will ascend into heaven, I will exalt my throne above the stars of God:"

I will sit also upon the mount of the congregation, in the sides of the north: I will ascend above the heights of the clouds; I will be like the most High. (Isaiah 14:13-14)

This declaration reflects the aspirations of the power elite as well. Whatever form the Luciferian religion assumes throughout the years, its goal remains the same: Apotheosis.

Sources Cited:

1. Bainbridge, William Sims. "New Religions, Science, and Secularization." Excerpted from Religion and the Social Order, Volume 3A, pages 277-292, 1993.
2. Hayes, Richard. "Selective Science." TomPaine.commonsense, 12 February 2004.
3. More, Max. "Transhumanism: Towards a Futurist Philosophy." Maxmore.com, 1996.
4. More, Max. "In Praise of the Devil." Lucifer.com, 1999.
5. Pike, Albert. Morals and Dogma. 1871. Richmond, Virginia: L.H. Jenkins, Inc., 1942.
6. Raschke, Carl A. The Interruption of Eternity: Modern Gnosticism and the Origins of the New Religious Consciousness. Chicago: Nelson-Hall, 1980.
7. "Transhumanism." Wikipedia: The Free Encyclopedia. 8 January 2006.
8. "Transtopian Symbolism." Transtopia: Transhumanism Evolved, 2003-2005.
9. Wilmshurst, W.L. The Meaning of Masonry. New York: Gramercy, 1980.

2006 Phillip D. Collins - All Rights Reserved


Author Phillip D. Collins acted as the editor for The Hidden Face of Terrorism. He has also written articles for Paranoia Magazine, MKzine, NewsWithViews.com, and B.I.P.E.D.: The Official Website of Darwinian Dissent and Conspiracy Archive. He has an Associate of Arts and Science.

Currently, he is studying for a bachelor's degree in Communications at Wright State University. During the course of his seven-year college career, Phillip has studied philosophy, religion, and classic literature. He also co-authored the book, The Ascendancy of the Scientific Dictatorship: An Examination of Epistemic Autocracy, From the 19th to the 21st Century, which is available at: [Link]

E-Mail: collins.58@wright.edu



Go here to see the original:

Phillip D. Collins -- Luciferianism: The Religion of ...

Posted in Transtopian | Comments Off on Phillip D. Collins — Luciferianism: The Religion of …

Meme – Wikipedia

Posted: at 4:12 am

A meme (/ˈmiːm/ MEEM)[1] is "an idea, behavior, or style that spreads from person to person within a culture".[2] A meme acts as a unit for carrying cultural ideas, symbols, or practices that can be transmitted from one mind to another through writing, speech, gestures, rituals, or other imitable phenomena with a mimicked theme. Supporters of the concept regard memes as cultural analogues to genes in that they self-replicate, mutate, and respond to selective pressures.[3]

Proponents theorize that memes are a viral phenomenon that may evolve by natural selection in a manner analogous to that of biological evolution. Memes do this through the processes of variation, mutation, competition, and inheritance, each of which influences a meme's reproductive success. Memes spread through the behavior that they generate in their hosts. Memes that propagate less prolifically may become extinct, while others may survive, spread, and (for better or for worse) mutate. Memes that replicate most effectively enjoy more success, and some may replicate effectively even when they prove to be detrimental to the welfare of their hosts.[4]
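
As a purely illustrative aside (not part of the encyclopedia text), the variation, mutation, competition, and inheritance cycle described above can be mimicked with a toy simulation. The variant names, "appeal" values, mutation rate, and population size below are invented for this sketch and are not drawn from any memetics source.

import random

random.seed(1)

VARIANTS = {"A": 0.9, "B": 0.6, "C": 0.3}   # hypothetical "appeal" (ease of successful imitation)
MUTATION_RATE = 0.01                        # chance of an imperfect copy
POP_SIZE = 500
GENERATIONS = 50

population = [random.choice(list(VARIANTS)) for _ in range(POP_SIZE)]

def step(pop):
    new_pop = []
    for _ in pop:
        model = random.choice(pop)                # each host imitates a randomly chosen host
        if random.random() < VARIANTS[model]:     # imitation succeeds in proportion to appeal
            meme = model
        else:
            meme = random.choice(list(VARIANTS))  # failed imitation: adopt an arbitrary variant
        if random.random() < MUTATION_RATE:       # copying error ("mutation")
            meme = random.choice(list(VARIANTS))
        new_pop.append(meme)
    return new_pop

for _ in range(GENERATIONS):
    population = step(population)

print({v: population.count(v) for v in VARIANTS})  # the high-appeal variant tends to dominate

Run repeatedly, the highest-"appeal" variant usually crowds out the others, which is the differential replication success the paragraph describes.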

A field of study called memetics[5] arose in the 1990s to explore the concepts and transmission of memes in terms of an evolutionary model. Criticism from a variety of fronts has challenged the notion that academic study can examine memes empirically. However, developments in neuroimaging may make empirical study possible.[6] Some commentators in the social sciences question the idea that one can meaningfully categorize culture in terms of discrete units, and are especially critical of the biological nature of the theory's underpinnings.[7] Others have argued that this use of the term is the result of a misunderstanding of the original proposal.[8]

The word meme originated with Richard Dawkins' 1976 book The Selfish Gene. Dawkins's own position is somewhat ambiguous: he welcomed N. K. Humphrey's suggestion that "memes should be considered as living structures, not just metaphorically"[9] and proposed to regard memes as "physically residing in the brain".[10] Later, he argued that his original intentions, presumably before his approval of Humphrey's opinion, had been simpler.[11] At the New Directors' Showcase 2013 in Cannes, Dawkins' opinion on memetics was deliberately ambiguous.[12]

The word meme is a shortening (modeled on gene) of mimeme (from Ancient Greek μίμημα mīmēma, "imitated thing", from μιμεῖσθαι mimeisthai, "to imitate", from μῖμος mimos, "mime")[13] coined by British evolutionary biologist Richard Dawkins in The Selfish Gene (1976)[1][14] as a concept for discussion of evolutionary principles in explaining the spread of ideas and cultural phenomena. Examples of memes given in the book included melodies, catchphrases, fashion, and the technology of building arches.[15] Kenneth Pike coined the related terms emic and etic, generalizing the linguistic idea of phoneme, morpheme and tagmeme (as set out by Leonard Bloomfield), characterizing them as the insider view and the outside view of behaviour and extending the concept into a tagmemic theory of human behaviour (culminating in Language in Relation to a Unified Theory of the Structure of Human Behaviour, 1954).

The word meme originated with Richard Dawkins' 1976 book The Selfish Gene. Dawkins cites as inspiration the work of geneticist L. L. Cavalli-Sforza, anthropologist F. T. Cloak[16] and ethologist J. M. Cullen.[17] Dawkins wrote that evolution depended not on the particular chemical basis of genetics, but only on the existence of a self-replicating unit of transmission (in the case of biological evolution, the gene). For Dawkins, the meme exemplified another self-replicating unit with potential significance in explaining human behavior and cultural evolution. Although Dawkins invented the term 'meme' and developed meme theory, the possibility that ideas were subject to the same pressures of evolution as were biological attributes was discussed in Darwin's time. T. H. Huxley claimed that 'The struggle for existence holds as much in the intellectual as in the physical world. A theory is a species of thinking, and its right to exist is coextensive with its power of resisting extinction by its rivals.'[18]

Dawkins used the term to refer to any cultural entity that an observer might consider a replicator. He hypothesized that one could view many cultural entities as replicators, and pointed to melodies, fashions and learned skills as examples. Memes generally replicate through exposure to humans, who have evolved as efficient copiers of information and behavior. Because humans do not always copy memes perfectly, and because they may refine, combine or otherwise modify them with other memes to create new memes, they can change over time. Dawkins likened the process by which memes survive and change through the evolution of culture to the natural selection of genes in biological evolution.[15]

Dawkins defined the meme as a unit of cultural transmission, or a unit of imitation and replication, but later definitions would vary. The lack of a consistent, rigorous, and precise understanding of what typically makes up one unit of cultural transmission remains a problem in debates about memetics.[20] In contrast, the concept of genetics gained concrete evidence with the discovery of the biological functions of DNA. Meme transmission requires a physical medium, such as photons, sound waves, touch, taste or smell because memes can be transmitted only through the senses.

Dawkins noted that in a society with culture a person need not have descendants to remain influential in the actions of individuals thousands of years after their death:

But if you contribute to the world's culture, if you have a good idea...it may live on, intact, long after your genes have dissolved in the common pool. Socrates may or may not have a gene or two alive in the world today, as G.C. Williams has remarked, but who cares? The meme-complexes of Socrates, Leonardo, Copernicus and Marconi are still going strong.[21]

Memes, analogously to genes, vary in their aptitude to replicate; successful memes remain and spread, whereas unfit ones stall and are forgotten. Thus memes that prove more effective at replicating and surviving are selected in the meme pool.

Memes first need retention. The longer a meme stays in its hosts, the higher its chances of propagation are. When a host uses a meme, the meme's life is extended.[22] The reuse of the neural space hosting a certain meme's copy to host different memes is the greatest threat to that meme's copy.[23]

A meme which increases the longevity of its hosts will generally survive longer. Conversely, a meme which shortens the longevity of its hosts will tend to disappear faster. However, as hosts are mortal, retention is not sufficient to perpetuate a meme in the long term; memes also need transmission.

Life-forms can transmit information both vertically (from parent to child, via replication of genes) and horizontally (through viruses and other means). Memes can replicate vertically or horizontally within a single biological generation. They may also lie dormant for long periods of time.

Memes reproduce by copying from one nervous system to another, either by communication or by imitation. Imitation often involves the copying of an observed behavior of another individual. Communication may be direct or indirect, where memes transmit from one individual to another through a copy recorded in an inanimate source, such as a book or a musical score. Adam McNamara has suggested that memes can thereby be classified as either internal or external memes (i-memes or e-memes).[6]

Some commentators have likened the transmission of memes to the spread of contagions.[24] Social contagions such as fads, hysteria, copycat crime, and copycat suicide exemplify memes seen as the contagious imitation of ideas. Observers distinguish the contagious imitation of memes from instinctively contagious phenomena such as yawning and laughing, which they consider innate (rather than socially learned) behaviors.[25]
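
The contagion analogy above is often made concrete with standard epidemic models. The following is a minimal, hedged sketch of a discrete-time SIR-style recursion; the rates and initial shares are chosen arbitrarily for illustration and are not taken from any cited study.

# S = susceptible hosts, I = hosts currently "carrying" the meme, R = hosts who have dropped it.
beta, gamma = 0.3, 0.1           # invented adoption and drop-out rates
s, i, r = 0.99, 0.01, 0.0        # population shares

peak = i
for day in range(120):
    new_adoptions = beta * s * i     # contacts between carriers and susceptibles
    drop_outs = gamma * i            # carriers who stop transmitting the meme
    s -= new_adoptions
    i += new_adoptions - drop_outs
    r += drop_outs
    peak = max(peak, i)

print(f"peak share of hosts carrying the meme: {peak:.2f}")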

Aaron Lynch described seven general patterns of meme transmission, or "thought contagion":[26]

Dawkins initially defined meme as a noun that "conveys the idea of a unit of cultural transmission, or a unit of imitation".[15] John S. Wilkins retained the notion of meme as a kernel of cultural imitation while emphasizing the meme's evolutionary aspect, defining the meme as "the least unit of sociocultural information relative to a selection process that has favorable or unfavorable selection bias that exceeds its endogenous tendency to change".[27] The meme as a unit provides a convenient means of discussing "a piece of thought copied from person to person", regardless of whether that thought contains others inside it, or forms part of a larger meme. A meme could consist of a single word, or a meme could consist of the entire speech in which that word first occurred. This forms an analogy to the idea of a gene as a single unit of self-replicating information found on the self-replicating chromosome.

While the identification of memes as "units" conveys their nature to replicate as discrete, indivisible entities, it does not imply that thoughts somehow become quantized or that "atomic" ideas exist that cannot be dissected into smaller pieces. A meme has no given size. Susan Blackmore writes that melodies from Beethoven's symphonies are commonly used to illustrate the difficulty involved in delimiting memes as discrete units. She notes that while the first four notes of Beethoven's Fifth Symphony form a meme widely replicated as an independent unit, one can regard the entire symphony as a single meme as well.[20]

The inability to pin an idea or cultural feature to quantifiable key units is widely acknowledged as a problem for memetics. It has been argued, however, that the traces of memetic processing can be quantified utilizing neuroimaging techniques which measure changes in the connectivity profiles between brain regions.[6] Blackmore meets such criticism by stating that memes compare with genes in this respect: that while a gene has no particular size, nor can we ascribe every phenotypic feature directly to a particular gene, it has value because it encapsulates that key unit of inherited expression subject to evolutionary pressures. To illustrate, she notes evolution selects for the gene for features such as eye color; it does not select for the individual nucleotide in a strand of DNA. Memes play a comparable role in understanding the evolution of imitated behaviors.[20]

The 1981 book Genes, Mind, and Culture: The Coevolutionary Process by Charles J. Lumsden and E. O. Wilson proposed the theory that genes and culture co-evolve, and that the fundamental biological units of culture must correspond to neuronal networks that function as nodes of semantic memory. They coined their own word, "culturgen", which did not catch on. Coauthor Wilson later acknowledged the term meme as the best label for the fundamental unit of cultural inheritance in his 1998 book Consilience: The Unity of Knowledge, which elaborates upon the fundamental role of memes in unifying the natural and social sciences.[28]

Dawkins noted the three conditions that must exist for evolution to occur:[29]

Dawkins emphasizes that the process of evolution naturally occurs whenever these conditions co-exist, and that evolution does not apply only to organic elements such as genes. He regards memes as also having the properties necessary for evolution, and thus sees meme evolution as not simply analogous to genetic evolution, but as a real phenomenon subject to the laws of natural selection. Dawkins noted that as various ideas pass from one generation to the next, they may either enhance or detract from the survival of the people who obtain those ideas, or influence the survival of the ideas themselves. For example, a certain culture may develop unique designs and methods of tool-making that give it a competitive advantage over another culture. Each tool-design thus acts somewhat similarly to a biological gene in that some populations have it and others do not, and the meme's function directly affects the presence of the design in future generations. In keeping with the thesis that in evolution one can regard organisms simply as suitable "hosts" for reproducing genes, Dawkins argues that one can view people as "hosts" for replicating memes. Consequently, a successful meme may or may not need to provide any benefit to its host.[29]

Unlike genetic evolution, memetic evolution can show both Darwinian and Lamarckian traits. Cultural memes will have the characteristic of Lamarckian inheritance when a host aspires to replicate the given meme through inference rather than by exactly copying it. Take, for example, the transmission of a simple skill such as hammering a nail, a skill that a learner imitates from watching a demonstration without necessarily imitating every discrete movement modeled by the teacher in the demonstration, stroke for stroke.[30] Susan Blackmore distinguishes between the two modes of inheritance in the evolution of memes, characterizing the Darwinian mode as "copying the instructions" and the Lamarckian as "copying the product."[20]
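
Blackmore's contrast between "copying the instructions" and "copying the product" can be caricatured in code. The recipe/product representation below is invented solely for this sketch and is not drawn from the cited sources.

import random

def build(recipe):
    # The observable "product": here simply the cumulative effect of all steps.
    return sum(recipe)

def darwinian_copy(recipe, error_rate=0.05):
    # "Copying the instructions": reproduce the steps themselves, with occasional copying errors.
    return [step + 1 if random.random() < error_rate else step for step in recipe]

def lamarckian_copy(product):
    # "Copying the product": observe only the outcome and infer *some* sequence of steps
    # that reproduces it; the inferred steps need not match the teacher's original ones.
    return [1] * product

teacher = [3, 2, 5]                       # the teacher's actual sequence of actions
print(darwinian_copy(teacher))            # e.g. [3, 2, 5] -- instructions preserved
print(lamarckian_copy(build(teacher)))    # [1, 1, ..., 1] -- same product, different instructions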

Clusters of memes, or memeplexes (also known as meme complexes or as memecomplexes), such as cultural or political doctrines and systems, may also play a part in the acceptance of new memes. Memeplexes comprise groups of memes that replicate together and coadapt.[20] Memes that fit within a successful memeplex may gain acceptance by "piggybacking" on the success of the memeplex. As an example, John D. Gottsch discusses the transmission, mutation and selection of religious memeplexes and the theistic memes they contain.[31] Theistic memes discussed include the "prohibition of aberrant sexual practices such as incest, adultery, homosexuality, bestiality, castration, and religious prostitution", which may have increased vertical transmission of the parent religious memeplex. Similar memes are thereby included in the majority of religious memeplexes, and harden over time; they become an "inviolable canon" or set of dogmas, eventually finding their way into secular law. This could also be referred to as the propagation of a taboo.

The discipline of memetics, which dates from the mid-1980s, provides an approach to evolutionary models of cultural information transfer based on the concept of the meme. Memeticists have proposed that just as memes function analogously to genes, memetics functions analogously to genetics. Memetics attempts to apply conventional scientific methods (such as those used in population genetics and epidemiology) to explain existing patterns and transmission of cultural ideas.

Principal criticisms of memetics include the claim that memetics ignores established advances in other fields of cultural study, such as sociology, cultural anthropology, cognitive psychology, and social psychology. Questions remain whether or not the meme concept counts as a validly disprovable scientific theory. This view regards memetics as a theory in its infancy: a protoscience to proponents, or a pseudoscience to some detractors.

An objection to the study of the evolution of memes in genetic terms (although not to the existence of memes) involves a perceived gap in the gene/meme analogy: the cumulative evolution of genes depends on biological selection-pressures neither too great nor too small in relation to mutation-rates. There seems no reason to think that the same balance will exist in the selection pressures on memes.[32]

Luis Benitez-Bribiesca M.D., a critic of memetics, calls the theory a "pseudoscientific dogma" and "a dangerous idea that poses a threat to the serious study of consciousness and cultural evolution". As a factual criticism, Benitez-Bribiesca points to the lack of a "code script" for memes (analogous to the DNA of genes), and to the excessive instability of the meme mutation mechanism (that of an idea going from one brain to another), which would lead to a low replication accuracy and a high mutation rate, rendering the evolutionary process chaotic.[33]

British political philosopher John Gray has characterized Dawkins' memetic theory of religion as "nonsense" and "not even a theory... the latest in a succession of ill-judged Darwinian metaphors", comparable to Intelligent Design in its value as a science.[34]

Another critique comes from semiotic theorists such as Deacon[35] and Kull.[36] This view regards the concept of "meme" as a primitivized concept of "sign". The meme is thus described in memetics as a sign lacking a triadic nature. Semioticians can regard a meme as a "degenerate" sign, which includes only its ability of being copied. Accordingly, in the broadest sense, the objects of copying are memes, whereas the objects of translation and interpretation are signs.[clarification needed]

Fracchia and Lewontin regard memetics as reductionist and inadequate.[37] Evolutionary biologist Ernst Mayr disapproved of Dawkins' gene-based view and usage of the term "meme", asserting it to be an "unnecessary synonym" for "concept", reasoning that concepts are not restricted to an individual or a generation, may persist for long periods of time, and may evolve.[38]

Opinions differ as to how best to apply the concept of memes within a "proper" disciplinary framework. One view sees memes as providing a useful philosophical perspective with which to examine cultural evolution. Proponents of this view (such as Susan Blackmore and Daniel Dennett) argue that considering cultural developments from a meme's-eye view (as if memes themselves respond to pressure to maximise their own replication and survival) can lead to useful insights and yield valuable predictions into how culture develops over time. Others such as Bruce Edmonds and Robert Aunger have focused on the need to provide an empirical grounding for memetics to become a useful and respected scientific discipline.[39][40]

A third approach, described by Joseph Poulshock as "radical memetics", seeks to place memes at the centre of a materialistic theory of mind and of personal identity.[41]

Prominent researchers in evolutionary psychology and anthropology, including Scott Atran, Dan Sperber, Pascal Boyer, John Tooby and others, argue that modularity of mind and memetics may be incompatible.[citation needed] In their view, minds structure certain communicable aspects of the ideas produced, and these communicable aspects generally trigger or elicit ideas in other minds through inference (to relatively rich structures generated from often low-fidelity input) and not high-fidelity replication or imitation. Atran discusses communication involving religious beliefs as a case in point. In one set of experiments he asked religious people to write down on a piece of paper the meanings of the Ten Commandments. Despite the subjects' own expectations of consensus, interpretations of the commandments showed wide ranges of variation, with little evidence of consensus. In another experiment, subjects with autism and subjects without autism interpreted ideological and religious sayings (for example, "Let a thousand flowers bloom" or "To everything there is a season"). People with autism showed a significant tendency to closely paraphrase and repeat content from the original statement (for example: "Don't cut flowers before they bloom"). Controls tended to infer a wider range of cultural meanings with little replicated content (for example: "Go with the flow" or "Everyone should have equal opportunity"). Only the subjects with autism (who lack the degree of inferential capacity normally associated with aspects of theory of mind) came close to functioning as "meme machines".[42]

In his book The Robot's Rebellion, Stanovich uses the memes and memeplex concepts to describe a program of cognitive reform that he refers to as a "rebellion". Specifically, Stanovich argues that the use of memes as a descriptor for cultural units is beneficial because it serves to emphasize transmission and acquisition properties that parallel the study of epidemiology. These properties make salient the sometimes parasitic nature of acquired memes, and as a result individuals should be motivated to reflectively acquire memes using what he calls a "Neurathian bootstrap" process.[43]

Although social scientists such as Max Weber sought to understand and explain religion in terms of a cultural attribute, Richard Dawkins called for a re-analysis of religion in terms of the evolution of self-replicating ideas apart from any resulting biological advantages they might bestow.

As an enthusiastic Darwinian, I have been dissatisfied with explanations that my fellow-enthusiasts have offered for human behaviour. They have tried to look for 'biological advantages' in various attributes of human civilization. For instance, tribal religion has been seen as a mechanism for solidifying group identity, valuable for a pack-hunting species whose individuals rely on cooperation to catch large and fast prey. Frequently the evolutionary preconception in terms of which such theories are framed is implicitly group-selectionist, but it is possible to rephrase the theories in terms of orthodox gene selection.

He argued that the role of key replicator in cultural evolution belongs not to genes, but to memes replicating thought from person to person by means of imitation. These replicators respond to selective pressures that may or may not affect biological reproduction or survival.[15]

In her book The Meme Machine, Susan Blackmore regards religions as particularly tenacious memes. Many of the features common to the most widely practiced religions provide built-in advantages in an evolutionary context, she writes. For example, religions that preach of the value of faith over evidence from everyday experience or reason inoculate societies against many of the most basic tools people commonly use to evaluate their ideas. By linking altruism with religious affiliation, religious memes can proliferate more quickly because people perceive that they can reap societal as well as personal rewards. The longevity of religious memes improves with their documentation in revered religious texts.[20]

Aaron Lynch attributed the robustness of religious memes in human culture to the fact that such memes incorporate multiple modes of meme transmission. Religious memes pass down the generations from parent to child and across a single generation through the meme-exchange of proselytism. Most people will hold the religion taught them by their parents throughout their life. Many religions feature adversarial elements, punishing apostasy, for instance, or demonizing infidels. In Thought Contagion Lynch identifies the memes of transmission in Christianity as especially powerful in scope. Believers view the conversion of non-believers both as a religious duty and as an act of altruism. The promise of heaven to believers and threat of hell to non-believers provide a strong incentive for members to retain their belief. Lynch asserts that belief in the Crucifixion of Jesus in Christianity amplifies each of its other replication advantages through the indebtedness believers have to their Savior for sacrifice on the cross. The image of the crucifixion recurs in religious sacraments, and the proliferation of symbols of the cross in homes and churches potently reinforces the wide array of Christian memes.[26]

Although religious memes have proliferated in human cultures, the modern scientific community has been relatively resistant to religious belief. Robertson (2007) [44] reasoned that if evolution is accelerated in conditions of propagative difficulty,[45] then we would expect to encounter variations of religious memes, established in general populations, addressed to scientific communities. Using a memetic approach, Robertson deconstructed two attempts to privilege religiously held spirituality in scientific discourse. Advantages of a memetic approach as compared to more traditional "modernization" and "supply side" theses in understanding the evolution and propagation of religion were explored.

In Cultural Software: A Theory of Ideology, Jack Balkin argued that memetic processes can explain many of the most familiar features of ideological thought. His theory of "cultural software" maintained that memes form narratives, social networks, metaphoric and metonymic models, and a variety of different mental structures. Balkin maintains that the same structures used to generate ideas about free speech or free markets also serve to generate racistic beliefs. To Balkin, whether memes become harmful or maladaptive depends on the environmental context in which they exist rather than in any special source or manner to their origination. Balkin describes racist beliefs as "fantasy" memes that become harmful or unjust "ideologies" when diverse peoples come together, as through trade or competition.[46]

In A Theory of Architecture, Nikos Salingaros speaks of memes as "freely propagating clusters of information" which can be beneficial or harmful. He contrasts memes to patterns and true knowledge, characterizing memes as "greatly simplified versions of patterns" and as "unreasoned matching to some visual or mnemonic prototype".[47] Taking reference to Dawkins, Salingaros emphasizes that they can be transmitted due to their own communicative properties, that "the simpler they are, the faster they can proliferate", and that the most successful memes "come with a great psychological appeal".[48]

Architectural memes, according to Salingaros, can have destructive power. "Images portrayed in architectural magazines representing buildings that could not possibly accommodate everyday uses become fixed in our memory, so we reproduce them unconsciously."[49] He lists various architectural memes that have circulated since the 1920s and which, in his view, have led to contemporary architecture becoming quite decoupled from human needs. They lack connection and meaning, thereby preventing "the creation of true connections necessary to our understanding of the world". He sees them as no different from antipatterns in software design: solutions that are false but are re-utilized nonetheless.[50]

An "Internet meme" is a concept that spreads rapidly from person to person via the Internet, largely through Internet-based E-mailing, blogs, forums, imageboards like 4chan, social networking sites like Facebook, Instagram or Twitter, instant messaging, and video hosting services like YouTube and Twitch.tv.[51]

In 2013 Richard Dawkins characterized an Internet meme as one deliberately altered by human creativity, distinguished from Dawkins's original idea involving mutation by random change and a form of Darwinian selection.[52]

One technique of meme mapping represents the evolution and transmission of a meme across time and space.[53] Such a meme map uses a figure-8 diagram (an analemma) to map the gestation (in the lower loop), birth (at the choke point), and development (in the upper loop) of the selected meme. Such meme maps are nonscalar, with time mapped onto the y-axis and space onto the x-axis transect. One can read the temporal progression of the mapped meme from south to north on such a meme map. Paull has published a worked example using the "organics meme" (as in organic agriculture).[53]
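
As a rough visual aid (not taken from Paull's paper), the figure-8 layout described above can be sketched with matplotlib. The lemniscate curve below is arbitrary; only the labeling follows the description, with gestation in the lower loop, birth at the crossing, development in the upper loop, time on the y-axis and space on the x-axis.

import numpy as np
import matplotlib.pyplot as plt

# Parametric figure-8; the exact geometry is schematic, not a real meme map.
t = np.linspace(0, 2 * np.pi, 400)
x = np.sin(2 * t)   # "space" transect on the x-axis
y = np.sin(t)       # "time" on the y-axis, read from south (bottom) to north (top)

plt.plot(x, y)
plt.text(0.55, -0.7, "gestation\n(lower loop)", ha="center")
plt.text(0.12, 0.0, "birth (choke point)")
plt.text(-0.55, 0.7, "development\n(upper loop)", ha="center")
plt.xlabel("space")
plt.ylabel("time")
plt.title("Schematic meme map (figure-8 layout)")
plt.show()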

Follow this link:

Meme - Wikipedia

Posted in Memetics | Comments Off on Meme – Wikipedia

Free Speech TV

Posted: at 4:10 am

So glad to have found [FSTV] because there's nothing else out there telling us what's really going on. - Rita

I'm watching RING OF FIRE ... That and all your other shows are the best things on TV!! - John

I am so excited that I found your station flipping through the channels ... Keep up the good work. - Susan

Thom Hartmann is one of my heroes. - John

The Most informative and honest news station on American TV. No B.S. and great documentaries. - Kevin

[FSTV] is the best channel on tv. - Patricia

Most of us seek out media that tell us what we already believe to be true. Free Speech TV actually helps us think. - Alice

I want to thank Mike Papantonio for his wit and razor-sharp intellect, Amy Goodman for the highest standards of journalism... - Gail/Michigan

(Stephanie Miller) is why I started watching. Now watch Democracy Now! and Hartmann as well. - Deborah/Texas

"Free Speech TV is the best source of information that nobody knows about. We need to spread the word and educate the people." - Lorelei S.

"A little known TV station that offers an alternative viewpoint to the usual propaganda of network and cable news." - Ron S.

FSTV is the source. I'm grateful for the access these last four months. - Philadelphia, PA.

See original here:
Free Speech TV

Posted in Free Speech | Comments Off on Free Speech TV

New Atheism – Wikipedia

Posted: at 4:10 am

New Atheism is the journalistic term used to describe the positions promoted by atheists of the twenty-first century. This modern-day atheism and secularism is advanced by critics of religion and religious belief,[1] a group of modern atheist thinkers and writers who advocate the view that superstition, religion and irrationalism should not simply be tolerated but should be countered, criticized, and exposed by rational argument wherever its influence arises in government, education and politics.[2]

New Atheism lends itself to and often overlaps with secular humanism and antitheism, particularly in its criticism of what many New Atheists regard as the indoctrination of children and the perpetuation of ideologies founded on belief in the supernatural.

The 2004 publication of The End of Faith: Religion, Terror, and the Future of Reason by Sam Harris, a bestseller in the United States, was joined over the next couple years by a series of popular best-sellers by atheist authors.[3] Harris was motivated by the events of September 11, 2001, which he laid directly at the feet of Islam, while also directly criticizing Christianity and Judaism.[4] Two years later Harris followed up with Letter to a Christian Nation, which was also a severe criticism of Christianity.[5] Also in 2006, following his television documentary The Root of All Evil?, Richard Dawkins published The God Delusion, which was on the New York Times best-seller list for 51 weeks.[6]

In a 2010 column entitled "Why I Don't Believe in the New Atheism", Tom Flynn contends that what has been called "New Atheism" is neither a movement nor new, and that what was new was the publication of atheist material by big-name publishers, read by millions, and appearing on bestseller lists.[7]

These are some of the significant books on the subject of atheism and religion:

On September 30, 2007 four prominent atheists (Richard Dawkins, Christopher Hitchens, Sam Harris, and Daniel Dennett) met at Hitchens' residence for a private two-hour unmoderated discussion. The event was videotaped and titled "The Four Horsemen".[9] During "The God Debate" in 2010 featuring Christopher Hitchens vs Dinesh D'Souza the men were collectively referred to as the "Four Horsemen of the Non-Apocalypse",[10] an allusion to the biblical Four Horsemen from the Book of Revelation.[11]

Sam Harris is the author of the bestselling non-fiction books The End of Faith, Letter to a Christian Nation, The Moral Landscape, and Waking Up: A Guide to Spirituality Without Religion, as well as two shorter works, initially published as e-books, Free Will[12] and Lying.[13] Harris is a co-founder of the Reason Project.

Richard Dawkins is the author of The God Delusion,[14] which was preceded by a Channel 4 television documentary titled The Root of all Evil?. He is also the founder of the Richard Dawkins Foundation for Reason and Science.

Christopher Hitchens was the author of God Is Not Great[15] and was named among the "Top 100 Public Intellectuals" by Foreign Policy and Prospect magazine. In addition, Hitchens served on the advisory board of the Secular Coalition for America. In 2010 Hitchens published his memoir Hitch-22 (a nickname provided by close personal friend Salman Rushdie, whom Hitchens always supported during and following The Satanic Verses controversy).[16] Shortly after its publication, Hitchens was diagnosed with esophageal cancer, which led to his death in December 2011.[17] Before his death, Hitchens published a collection of essays and articles in his book Arguably;[18] a short edition Mortality[19] was published posthumously in 2012. These publications and numerous public appearances provided Hitchens with a platform to remain an astute atheist during his illness, even speaking specifically on the culture of deathbed conversions and condemning attempts to convert the terminally ill, which he opposed as "bad taste".[20][21]

Daniel Dennett, author of Darwin's Dangerous Idea,[22] Breaking the Spell,[23] and many others, has also been a vocal supporter of The Clergy Project,[24] an organization that provides support for clergy in the US who no longer believe in God and cannot fully participate in their communities any longer.[25]

The "Four Horsemen" video, convened by Dawkins' Foundation, can be viewed free online at his web site: Part 1, Part 2.

After the death of Hitchens, Ayaan Hirsi Ali (who attended the 2012 Global Atheist Convention, which Hitchens was scheduled to attend) was referred to as the "plus one horse-woman", since she was originally invited to the 2007 meeting of the "Horsemen" atheists but had to cancel at the last minute.[26] Hirsi Ali was born in Mogadishu, Somalia, fleeing in 1992 to the Netherlands in order to escape an arranged marriage.[27] She became involved in Dutch politics, rejected faith, and became vocal in opposing Islamic ideology, especially concerning women, as exemplified by her books Infidel and The Caged Virgin.[28] Hirsi Ali was later involved in the production of the film Submission, for which her friend Theo Van Gogh was murdered, with a death threat to Hirsi Ali pinned to his chest.[29] This resulted in Hirsi Ali going into hiding and later immigrating to the United States, where she now resides and remains a prolific critic of Islam[30] and of the treatment of women in Islamic doctrine and society,[31] and a proponent of free speech and the freedom to offend.[32][33]

While "The Four Horsemen" are arguably the foremost proponents of atheism, there are a number of other current, notable atheists including: Lawrence M. Krauss, (author of A Universe from Nothing),[34]James Randi (paranormal debunker and former illusionist),[35]Jerry Coyne (Why Evolution is True[36] and its complementary blog,[37] which specifically includes polemics against topical religious issues), Greta Christina (Why are you Atheists so Angry?),[38]Victor J. Stenger (The New Atheism),[39]Michael Shermer (Why People Believe Weird Things),[40]David Silverman (President of the American Atheists and author of Fighting God: An Atheist Manifesto for a Religious World), Ibn Warraq (Why I Am Not a Muslim),[41]Matt Dillahunty (host of the Austin-based webcast and cable-access television show The Atheist Experience),[42]Bill Maher (writer and star of the 2008 documentary Religulous),[43]Steven Pinker (noted cognitive scientist, linguist, psychologist and author),[44]Julia Galef (co-host of the podcast Rationally Speaking), A.C. Grayling (philosopher and considered to be the "Fifth Horseman of New Atheism"), and Michel Onfray (Atheist Manifesto: The Case Against Christianity, Judaism, and Islam).

Many contemporary atheists write from a scientific perspective. Unlike previous writers, many of whom thought that science was indifferent, or even incapable of dealing with the "God" concept, Dawkins argues to the contrary, claiming the "God Hypothesis" is a valid scientific hypothesis,[45] having effects in the physical universe, and like any other hypothesis can be tested and falsified. Other contemporary atheists such as Victor Stenger propose that the personal Abrahamic God is a scientific hypothesis that can be tested by standard methods of science. Both Dawkins and Stenger conclude that the hypothesis fails any such tests,[46] and argue that naturalism is sufficient to explain everything we observe in the universe, from the most distant galaxies to the origin of life, species, and the inner workings of the brain and consciousness. Nowhere, they argue, is it necessary to introduce God or the supernatural to understand reality. Atheists have been associated with the argument from divine hiddenness and the idea that "absence of evidence is evidence of absence" when evidence can be expected.[citation needed]

Non-believers assert that many religious or supernatural claims (such as the virgin birth of Jesus and the afterlife) are scientific claims in nature. They argue, as do deists and Progressive Christians, for instance, that the issue of Jesus' supposed parentage is not a question of "values" or "morals", but a question of scientific inquiry.[47] Rational thinkers believe science is capable of investigating at least some, if not all, supernatural claims.[48] Institutions such as the Mayo Clinic and Duke University are attempting to find empirical support for the healing power of intercessory prayer.[49] According to Stenger, these experiments have found no evidence that intercessory prayer works.[50]

Stenger also argues in his book, God: The Failed Hypothesis, that a God having omniscient, omnibenevolent and omnipotent attributes, which he termed a 3O God, cannot logically exist.[51] A similar series of logical disproofs of the existence of a God with various attributes can be found in Michael Martin and Ricki Monnier's The Impossibility of God,[52] or Theodore M. Drange's article, "Incompatible-Properties Arguments".[53]

Richard Dawkins has been particularly critical of the conciliatory view that science and religion are not in conflict, noting, for example, that the Abrahamic religions constantly deal in scientific matters. In a 1998 article published in Free Inquiry magazine,[47] and later in his 2006 book The God Delusion, Dawkins expresses disagreement with the view advocated by Stephen Jay Gould that science and religion are two non-overlapping magisteria (NOMA) each existing in a "domain where one form of teaching holds the appropriate tools for meaningful discourse and resolution". In Gould's proposal, science and religion should be confined to distinct non-overlapping domains: science would be limited to the empirical realm, including theories developed to describe observations, while religion would deal with questions of ultimate meaning and moral value. Dawkins contends that NOMA does not describe empirical facts about the intersection of science and religion, "it is completely unrealistic to claim, as Gould and many others do, that religion keeps itself away from science's turf, restricting itself to morals and values. A universe with a supernatural presence would be a fundamentally and qualitatively different kind of universe from one without. The difference is, inescapably, a scientific difference. Religions make existence claims, and this means scientific claims." Matt Ridley notes that religion does more than talk about ultimate meanings and morals, and science is not proscribed from doing the same. After all, morals involve human behavior, an observable phenomenon, and science is the study of observable phenomena. Ridley notes that there is substantial scientific evidence on evolutionary origins of ethics and morality.[54]

Sam Harris has popularized the view that science, and thereby currently unknown objective facts, may instruct human morality in a globally comparable way. Harris' book The Moral Landscape[55] and accompanying TED Talk How Science can Determine Moral Values[56] propose that human well-being and, conversely, suffering may be thought of as a landscape with peaks and valleys representing numerous ways to achieve extremes in human experience, and that there are objective states of well-being.

New atheism is politically engaged in a variety of ways. These include campaigns to reduce the influence of religion in the public sphere, attempts to promote cultural change (centering, in the United States, on the mainstream acceptance of atheism), and efforts to promote the idea of an "atheist identity". Internal strategic divisions over these issues have also been notable, as are questions about the diversity of the movement in terms of its gender and racial balance.[57]

Edward Feser's book The Last Superstition presents arguments based on the philosophy of Aristotle and Thomas Aquinas against New Atheism.[58] According to Feser, it necessarily follows from Aristotelian-Thomistic metaphysics that God exists, that the human soul is immortal, and that the highest end of human life (and therefore the basis of morality) is to know God. Feser argues that science never disproved Aristotle's metaphysics, but rather that modern philosophers decided to reject it on the basis of wishful thinking. In the latter chapters Feser proposes that scientism and materialism are based on premises that are inconsistent and self-contradictory and that these conceptions lead to absurd consequences.

Cardinal William Levada believes that New Atheism has misrepresented the doctrines of the church.[59] Cardinal Walter Kasper described New Atheism as "aggressive", and he believed it to be the primary source of discrimination against Christians.[60] In a Salon interview, the journalist Chris Hedges argued that New Atheism propaganda is just as extreme as that of Christian right propaganda.[61]

The theologians Jeffrey Robbins and Christopher Rodkey take issue with what they regard as "the evangelical nature of the new atheism, which assumes that it has a Good News to share, at all cost, for the ultimate future of humanity by the conversion of as many people as possible." They believe they have found similarities between new atheism and evangelical Christianity and conclude that the all-consuming nature of both "encourages endless conflict without progress" between both extremities.[62] Sociologist William Stahl said "What is striking about the current debate is the frequency with which the New Atheists are portrayed as mirror images of religious fundamentalists."[63]

The atheist philosopher of science Michael Ruse has made the claim that Richard Dawkins would fail "introductory" courses on the study of "philosophy or religion" (such as courses on the philosophy of religion), courses which are offered, for example, at many educational institutions such as colleges and universities around the world.[64][65] Ruse also claims that the movement of New Atheism, which he perceives to be a "bloody disaster", makes him ashamed, as a professional philosopher of science, to be among those who hold to an atheist position, particularly as New Atheism does science a "grave disservice" and does a "disservice to scholarship" at a more general level.[64][65]

Glenn Greenwald,[66][67] Toronto-based journalist and Mideast commentator Murtaza Hussain,[66][67] Salon columnist Nathan Lean,[67] scholars Wade Jacoby and Hakan Yavuz,[68] and historian of religion William Emilsen[69] have accused the New Atheist movement of Islamophobia. Wade Jacoby and Hakan Yavuz assert that "a group of 'new atheists' such as Richard Dawkins, Sam Harris, and Christopher Hitchens" have "invoked Samuel Huntington's 'clash of civilizations' theory to explain the current political contestation" and that this forms part of a trend toward "Islamophobia [...] in the study of Muslim societies".[68] William W. Emilsen argues that "the 'new' in the new atheists' writings is not their aggressiveness, nor their extraordinary popularity, nor even their scientific approach to religion, rather it is their attack not only on militant Islamism but also on Islam itself under the cloak of its general critique of religion".[69] Murtaza Hussain has alleged that leading figures in the New Atheist movement "have stepped in to give a veneer of scientific respectability to today's politically useful bigotry".[66][70]

See the rest here:
New Atheism - Wikipedia

Posted in Atheism | Comments Off on New Atheism – Wikipedia

National Security Agency – Wikipedia

Posted: at 4:09 am

Not to be confused with NASA.

The National Security Agency (NSA) is an intelligence organization of the United States government, responsible for global monitoring, collection, and processing of information and data for foreign intelligence and counterintelligence purposes, a discipline known as signals intelligence (SIGINT). NSA is concurrently charged with protection of U.S. government communications and information systems against penetration and network warfare.[8][9] Although many of NSA's programs rely on "passive" electronic collection, the agency is authorized to accomplish its mission through active clandestine means,[10] among which are physically bugging electronic systems[11] and allegedly engaging in sabotage through subversive software.[12][13] Moreover, NSA maintains physical presence in a large number of countries across the globe, where its Special Collection Service (SCS) inserts eavesdropping devices in difficult-to-reach places. SCS collection tactics allegedly encompass "close surveillance, burglary, wiretapping, breaking and entering".[14][15]

Unlike the Defense Intelligence Agency (DIA) and the Central Intelligence Agency (CIA), both of which specialize primarily in foreign human espionage, NSA does not unilaterally conduct human-source intelligence gathering, despite often being portrayed so in popular culture. Instead, NSA is entrusted with assistance to and coordination of SIGINT elements at other government organizations, which are prevented by law from engaging in such activities without the approval of the NSA via the Defense Secretary.[16] As part of these streamlining responsibilities, the agency has a co-located organization called the Central Security Service (CSS), which was created to facilitate cooperation between NSA and other U.S. military cryptanalysis components. Additionally, the NSA Director simultaneously serves as the Commander of the United States Cyber Command and as Chief of the Central Security Service.

Originating as a unit to decipher coded communications in World War II, it was officially formed as the NSA by President Harry S. Truman in 1952. Since then, it has become one of the largest U.S. intelligence organizations in terms of personnel and budget,[6][17] operating as part of the Department of Defense and simultaneously reporting to the Director of National Intelligence.

NSA surveillance has been a matter of political controversy on several occasions, such as its spying on anti-Vietnam war leaders or economic espionage. In 2013, the extent of some of the NSA's secret surveillance programs was revealed to the public by Edward Snowden. According to the leaked documents, the NSA intercepts the communications of over a billion people worldwide, many of whom are American citizens, and tracks the movement of hundreds of millions of people using cellphones. Internationally, research has pointed to the NSA's ability to surveil the domestic Internet traffic of foreign countries through "boomerang routing".[18]

The origins of the National Security Agency can be traced back to April 28, 1917, three weeks after the U.S. Congress declared war on Germany in World War I. A code and cipher decryption unit was established as the Cable and Telegraph Section, also known as the Cipher Bureau and later as MI-8. It was headquartered in Washington, D.C. and was part of the war effort under the executive branch without direct Congressional authorization. During the course of the war it was relocated several times in the army's organizational chart. On July 5, 1917, Herbert O. Yardley was assigned to head the unit. At that point, the unit consisted of Yardley and two civilian clerks. It absorbed the navy's cryptanalysis functions in July 1918. World War I ended on November 11, 1918, and MI-8 moved to New York City on May 20, 1919, where it continued intelligence activities as the Code Compilation Company under the direction of Yardley.[19][20]

MI-8 also operated the so-called "Black Chamber".[22] The Black Chamber was located on East 37th Street in Manhattan. Its purpose was to crack the communications codes of foreign governments. Jointly supported by the State Department and the War Department, the chamber persuaded Western Union, the largest U.S. telegram company, to allow government officials to monitor private communications passing through the company's wires.[23]

Other "Black Chambers" were also found in Europe. They were established by the French and British governments to read the letters of targeted individuals, employing a variety of techniques to surreptitiously open, copy, and reseal correspondence before forwarding it to unsuspecting recipients.[24]

Despite the American Black Chamber's initial successes, it was shut down in 1929 by U.S. Secretary of State Henry L. Stimson, who defended his decision by stating: "Gentlemen do not read each other's mail".[21]

During World War II, the Signal Security Agency (SSA) was created to intercept and decipher the communications of the Axis powers.[25] When the war ended, the SSA was reorganized as the Army Security Agency (ASA), and it was placed under the leadership of the Director of Military Intelligence.[25]

On May 20, 1949, all cryptologic activities were centralized under a national organization called the Armed Forces Security Agency (AFSA).[25] This organization was originally established within the U.S. Department of Defense under the command of the Joint Chiefs of Staff.[26] The AFSA was tasked to direct Department of Defense communications and electronic intelligence activities, except those of U.S. military intelligence units.[26] However, the AFSA was unable to centralize communications intelligence and failed to coordinate with civilian agencies that shared its interests such as the Department of State, Central Intelligence Agency (CIA) and the Federal Bureau of Investigation (FBI).[26] In December 1951, President Harry S. Truman ordered a panel to investigate how AFSA had failed to achieve its goals. The results of the investigation led to improvements and its redesignation as the National Security Agency.[27]

The agency was formally established by Truman in a memorandum of October 24, 1952, that revised National Security Council Intelligence Directive (NSCID) 9.[28] Since President Truman's memo was a classified document,[28] the existence of the NSA was not known to the public at that time. Due to its ultra-secrecy the U.S. intelligence community referred to the NSA as "No Such Agency".[29]

In the 1960s, the NSA played a key role in expanding America's commitment to the Vietnam War by providing evidence of a North Vietnamese attack on the American destroyer USS Maddox during the Gulf of Tonkin incident.[30]

A secret operation, code-named "MINARET", was set up by the NSA to monitor the phone communications of Senators Frank Church and Howard Baker, as well as major civil rights leaders, including Martin Luther King, Jr., and prominent U.S. journalists and athletes who criticized the Vietnam War.[31] However, the project turned out to be controversial, and an internal review by the NSA concluded that its Minaret program was "disreputable if not outright illegal".[31]

In the aftermath of the Watergate scandal, a congressional hearing in 1975 led by Sen. Frank Church[32] revealed that the NSA, in collaboration with Britain's SIGINT intelligence agency Government Communications Headquarters (GCHQ), had routinely intercepted the international communications of prominent anti-Vietnam war leaders such as Jane Fonda and Dr. Benjamin Spock.[33] Following the resignation of President Richard Nixon, there were several investigations of suspected misuse of FBI, CIA and NSA facilities.[34] Senator Frank Church uncovered previously unknown activity,[34] such as a CIA plot (ordered by the administration of President John F. Kennedy) to assassinate Fidel Castro.[35] The investigation also uncovered NSA's wiretaps on targeted American citizens.[36]

After the Church Committee hearings, the Foreign Intelligence Surveillance Act of 1978 was passed into law. This was designed to limit the practice of mass surveillance in the United States.[34]

In 1986, the NSA intercepted the communications of the Libyan government during the immediate aftermath of the Berlin discotheque bombing. The White House asserted that the NSA interception had provided "irrefutable" evidence that Libya was behind the bombing, which U.S. President Ronald Reagan cited as a justification for the 1986 United States bombing of Libya.[37][38]

In 1999, a multi-year investigation by the European Parliament highlighted the NSA's role in economic espionage in a report entitled 'Development of Surveillance Technology and Risk of Abuse of Economic Information'.[39] That year, the NSA founded the NSA Hall of Honor, a memorial at the National Cryptologic Museum in Fort Meade, Maryland.[40] The memorial is a "tribute to the pioneers and heroes who have made significant and long-lasting contributions to American cryptology".[40] NSA employees must be retired for more than fifteen years to qualify for the memorial.[40]

NSA's infrastructure deteriorated in the 1990s as defense budget cuts resulted in maintenance deferrals. On January 24, 2000, NSA headquarters suffered a total network outage for three days caused by an overloaded network. Incoming traffic was successfully stored on agency servers, but it could not be directed and processed. The agency carried out emergency repairs at a cost of $3 million to get the system running again. (Some incoming traffic was also directed instead to Britain's GCHQ for the time being.) Director Michael Hayden called the outage a "wake-up call" for the need to invest in the agency's infrastructure.[41]

In the aftermath of the September 11 attacks, the NSA created new IT systems to deal with the flood of information from new technologies like the Internet and cellphones. ThinThread contained advanced data mining capabilities. It also had a "privacy mechanism"; surveillance was stored encrypted; decryption required a warrant. The research done under this program may have contributed to the technology used in later systems. ThinThread was cancelled when Michael Hayden chose Trailblazer, which did not include ThinThread's privacy system.[43]

Trailblazer Project ramped up in 2002. SAIC, Boeing, CSC, IBM, and Litton worked on it. Some NSA whistleblowers complained internally about major problems surrounding Trailblazer. This led to investigations by Congress and the NSA and DoD Inspectors General. The project was cancelled in early 2004; it was late, over budget, and did not do what it was supposed to do. The government then raided the whistleblowers' houses. One of them, Thomas Drake, was charged with violating 18 U.S.C. § 793(e) in 2010 in an unusual use of espionage law. He and his defenders claim that he was actually being persecuted for challenging the Trailblazer Project. In 2011, all ten original charges against Drake were dropped.[44][45]

Turbulence started in 2005. It was developed in small, inexpensive "test" pieces, rather than one grand plan like Trailblazer. It also included offensive cyber-warfare capabilities, like injecting malware into remote computers. Congress criticized Turbulence in 2007 for having similar bureaucratic problems as Trailblazer.[45] It was intended to realize information processing at higher speeds in cyberspace.[46]

The massive extent of the NSA's spying, both foreign and domestic, was revealed to the public in a series of detailed disclosures of internal NSA documents beginning in June 2013. Most of the disclosures were leaked by former NSA contractor Edward Snowden.

It was revealed that the NSA intercepts telephone and Internet communications of over a billion people worldwide, seeking information on terrorism as well as foreign politics, economics[47] and "commercial secrets".[48] In a declassified document it was revealed that 17,835 phone lines were on an improperly permitted "alert list" from 2006 to 2009 in breach of compliance, which tagged these phone lines for daily monitoring.[49][50][51] Eleven percent of these monitored phone lines met the agency's legal standard for "reasonably articulable suspicion" (RAS).[49][52]
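Combining the two reported figures gives a rough sense of scale; this is a back-of-the-envelope calculation implied by the numbers above, not a count stated in the documents themselves:

\[ 0.11 \times 17{,}835 \approx 1{,}960 \text{ phone lines meeting the RAS standard} \]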

A dedicated unit of the NSA locates targets for the CIA for extrajudicial assassination in the Middle East.[53] The NSA has also spied extensively on the European Union, the United Nations and numerous governments including allies and trading partners in Europe, South America and Asia.[54][55]

The NSA tracks the locations of hundreds of millions of cellphones per day, allowing them to map people's movements and relationships in detail.[56] It reportedly has access to all communications made via Google, Microsoft, Facebook, Yahoo, YouTube, AOL, Skype, Apple and Paltalk,[57] and collects hundreds of millions of contact lists from personal email and instant messaging accounts each year.[58] It has also managed to weaken much of the encryption used on the Internet (by collaborating with, coercing or otherwise infiltrating numerous technology companies), so that the majority of Internet privacy is now vulnerable to the NSA and other attackers.[59][60]

Domestically, the NSA collects and stores metadata records of phone calls,[61] including over 120 million US Verizon subscribers,[62] as well as Internet communications,[57] relying on a secret interpretation of the Patriot Act whereby the entirety of US communications may be considered "relevant" to a terrorism investigation if it is expected that even a tiny minority may relate to terrorism.[63] The NSA supplies foreign intercepts to the DEA, IRS and other law enforcement agencies, who use these to initiate criminal investigations. Federal agents are then instructed to "recreate" the investigative trail via parallel construction.[64]

The NSA also spies on influential Muslims to obtain information that could be used to discredit them, such as their use of pornography. The targets, both in the United States and abroad, are not suspected of any crime but hold religious or political views deemed "radical" by the NSA.[65]

Although the NSA's surveillance activities are controversial, government agencies and private enterprises have common needs and sometimes cooperate at subtle and complex technical levels. Big data has become more advantageous as the cost of the required computer hardware has fallen, and social media lead the trend. The interests of the NSA and Silicon Valley began to converge as advances in computer storage technology drastically reduced the costs of storing enormous amounts of data and, at the same time, the value of that data for use in consumer marketing began to rise. Meanwhile, social media sites have grown into voluntary data mining operations on a scale that rivals or exceeds anything the government could attempt on its own.[66]

According to a report in The Washington Post in July 2014, relying on information provided by Snowden, 90% of those placed under surveillance in the U.S. are ordinary Americans, and are not the intended targets. The newspaper said it had examined documents including emails, text messages, and online accounts that support the claim.[67]

Despite President Obama's claims that these programs have congressional oversight, members of Congress were unaware of the existence of these NSA programs or the secret interpretation of the Patriot Act, and have consistently been denied access to basic information about them.[68] Obama has also claimed that there are legal checks in place to prevent inappropriate access of data and that there have been no examples of abuse;[69] however, the secret FISC court charged with regulating the NSA's activities is, according to its chief judge, incapable of investigating or verifying how often the NSA breaks even its own secret rules.[70] It has since been reported that the NSA violated its own rules on data access thousands of times a year, many of these violations involving large-scale data interceptions;[71] and that NSA officers have even used data intercepts to spy on love interests.[72] The NSA has "generally disregarded the special rules for disseminating United States person information" by illegally sharing its intercepts with other law enforcement agencies.[73] A March 2009 opinion of the FISC court, released by court order, states that protocols restricting data queries had been "so frequently and systemically violated that it can be fairly said that this critical element of the overall ... regime has never functioned effectively."[74][75] In 2011 the same court noted that the "volume and nature" of the NSA's bulk foreign Internet intercepts was "fundamentally different from what the court had been led to believe".[73] Email contact lists (including those of US citizens) are collected at numerous foreign locations to work around the illegality of doing so on US soil.[58]

Legal opinions on the NSA's bulk collection program have differed. In mid-December 2013, U.S. District Court Judge Richard Leon ruled that the "almost-Orwellian" program likely violates the Constitution, and wrote, "I cannot imagine a more 'indiscriminate' and 'arbitrary invasion' than this systematic and high-tech collection and retention of personal data on virtually every single citizen for purposes of querying and analyzing it without prior judicial approval. Surely, such a program infringes on 'that degree of privacy' that the Founders enshrined in the Fourth Amendment. Indeed, I have little doubt that the author of our Constitution, James Madison, who cautioned us to beware 'the abridgement of freedom of the people by gradual and silent encroachments by those in power,' would be aghast."[76]

Later that month, U.S. District Judge William Pauley ruled that the NSA's collection of telephone records is legal and valuable in the fight against terrorism. In his opinion, he wrote, "a bulk telephony metadata collection program [is] a wide net that could find and isolate gossamer contacts among suspected terrorists in an ocean of seemingly disconnected data" and noted that a similar collection of data prior to 9/11 might have prevented the attack.[77]

An October 2014 United Nations report condemned mass surveillance by the United States and other countries as violating multiple international treaties and conventions that guarantee core privacy rights.[78]

On March 20, 2013 the Director of National Intelligence, Lieutenant General James Clapper, testified before Congress that the NSA does not wittingly collect any kind of data on millions or hundreds of millions of Americans, but he retracted this in June after details of the PRISM program were published, and stated instead that meta-data of phone and Internet traffic are collected, but no actual message contents.[79] This was corroborated by the NSA Director, General Keith Alexander, before it was revealed that the XKeyscore program collects the contents of millions of emails from US citizens without warrant, as well as "nearly everything a user does on the Internet". Alexander later admitted that "content" is collected, but stated that it is simply stored and never analyzed or searched unless there is "a nexus to al-Qaida or other terrorist groups".[69]

Regarding the necessity of these NSA programs, Alexander stated on June 27 that the NSA's bulk phone and Internet intercepts had been instrumental in preventing 54 terrorist "events", including 13 in the US, and in all but one of these cases had provided the initial tip to "unravel the threat stream".[80] On July 31 NSA Deputy Director John Inglis conceded to the Senate that these intercepts had not been vital in stopping any terrorist attacks, but were "close" to vital in identifying and convicting four San Diego men for sending US$8,930 to Al-Shabaab, a militia that conducts terrorism in Somalia.[81][82][83]

The U.S. government has aggressively sought to dismiss and challenge Fourth Amendment cases raised against it, and has granted retroactive immunity to ISPs and telecoms participating in domestic surveillance.[84][85] The U.S. military has acknowledged blocking access to parts of The Guardian website for thousands of defense personnel across the country,[86][87] and blocking the entire Guardian website for personnel stationed throughout Afghanistan, the Middle East, and South Asia.[88]

The NSA is led by the Director of the National Security Agency (DIRNSA), who also serves as Chief of the Central Security Service (CHCSS) and Commander of the United States Cyber Command (USCYBERCOM) and is the highest-ranking military official of these organizations. He is assisted by a Deputy Director, who is the highest-ranking civilian within the NSA/CSS.

NSA also has an Inspector General, head of the Office of the Inspector General (OIG), a General Counsel, head of the Office of the General Counsel (OGC) and a Director of Compliance, who is head of the Office of the Director of Compliance (ODOC).[89]

Unlike other intelligence organizations such as CIA or DIA, NSA has always been particularly reticent concerning its internal organizational structure.

As of the mid-1990s, the National Security Agency was organized into five directorates.

Each of these directorates consisted of several groups or elements, designated by a letter. There were, for example, the A Group, which was responsible for all SIGINT operations against the Soviet Union and Eastern Europe, and G Group, which was responsible for SIGINT related to all non-communist countries. These groups were divided into units designated by an additional number, such as unit A5 for breaking Soviet codes and G6, the office for the Middle East, North Africa, Cuba, and Central and South America.[91][92]

As of 2013, NSA has about a dozen directorates, which are designated by a letter, although not all of them are publicly known. The directorates are divided into divisions and units whose designations start with the letter of the parent directorate, followed by a number for the division, the sub-unit or a sub-sub-unit.

The main elements of the organizational structure of the NSA are:[93]

In the year 2000, a leadership team was formed, consisting of the Director, the Deputy Director and the directors of the Signals Intelligence Directorate (SID), the Information Assurance Directorate (IAD) and the Technical Directorate (TD). The chiefs of other main NSA divisions became associate directors of the senior leadership team.[101]

After President George W. Bush initiated the President's Surveillance Program (PSP) in 2001, the NSA created a 24-hour Metadata Analysis Center (MAC), followed in 2004 by the Advanced Analysis Division (AAD), with the mission of analyzing content, Internet metadata and telephone metadata. Both units were part of the Signals Intelligence Directorate.[102]

A 2016 proposal would combine the Signals Intelligence Directorate with the Information Assurance Directorate into a Directorate of Operations.[103]

The NSA maintains at least two watch centers.

The number of NSA employees is officially classified[4] but there are several sources providing estimates. In 1961, NSA had 59,000 military and civilian employees, which grew to 93,067 in 1969, of which 19,300 worked at the headquarters at Fort Meade. In the early 1980s NSA had roughly 50,000 military and civilian personnel. By 1989 this number had grown again to 75,000, of which 25,000 worked at the NSA headquarters. Between 1990 and 1995 the NSA's budget and workforce were cut by one third, which led to a substantial loss of experience.[106]

In 2012, the NSA said more than 30,000 employees worked at Fort Meade and other facilities.[2] That year John C. Inglis, the deputy director, joked that the total number of NSA employees is "somewhere between 37,000 and one billion",[4] and stated that the agency is "probably the biggest employer of introverts."[4] In 2013 Der Spiegel stated that the NSA had 40,000 employees.[5] More widely, it has been described as the world's largest single employer of mathematicians.[107] Some NSA employees form part of the workforce of the National Reconnaissance Office (NRO), the agency that provides the NSA with satellite signals intelligence.

As of 2013 about 1,000 system administrators work for the NSA.[108]

The NSA received criticism early on, in 1960, after two agents defected to the Soviet Union. Investigations by the House Un-American Activities Committee and a special subcommittee of the United States House Committee on Armed Services revealed severe cases of ignorance of personnel security regulations, prompting the former personnel director and the director of security to step down and leading to the adoption of stricter security practices.[109] Nonetheless, security breaches recurred only a year later when, in an issue of Izvestia of July 23, 1963, a former NSA employee published several cryptologic secrets.

The very same day, an NSA clerk-messenger committed suicide as ongoing investigations disclosed that he had sold secret information to the Soviets on a regular basis. The reluctance of the Congressional houses to look into these affairs prompted a journalist to write, "If a similar series of tragic blunders occurred in any ordinary agency of Government an aroused public would insist that those responsible be officially censured, demoted, or fired." David Kahn criticized the NSA's tactics of concealing its doings as smug and Congress's blind faith in the agency's right-doing as shortsighted, and pointed out the necessity of Congressional oversight to prevent abuse of power.[109]

Edward Snowden's leaking of the existence of PRISM in 2013 caused the NSA to institute a "two-man rule", where two system administrators are required to be present when one accesses certain sensitive information.[108] Snowden claims he suggested such a rule in 2009.[110]

The NSA conducts polygraph tests of employees. For new employees, the tests are meant to discover enemy spies who are applying to the NSA and to uncover any information that could make an applicant susceptible to coercion.[111] As part of the latter, historically EPQs or "embarrassing personal questions" about sexual behavior had been included in the NSA polygraph.[111] The NSA also conducts five-year periodic reinvestigation polygraphs of employees, focusing on counterintelligence programs. In addition, the NSA conducts periodic polygraph investigations in order to find spies and leakers; those who refuse to take them may receive "termination of employment", according to a 1982 memorandum from the director of the NSA.[112]

There are also "special access examination" polygraphs for employees who wish to work in highly sensitive areas, and those polygraphs cover counterintelligence questions and some questions about behavior.[112] NSA's brochure states that the average test length is between two and four hours.[113] A 1983 report of the Office of Technology Assessment stated that "It appears that the NSA [National Security Agency] (and possibly CIA) use the polygraph not to determine deception or truthfulness per se, but as a technique of interrogation to encourage admissions."[114] Sometimes applicants in the polygraph process confess to committing felonies such as murder, rape, and selling of illegal drugs. Between 1974 and 1979, of the 20,511 job applicants who took polygraph tests, 695 (3.4%) confessed to previous felony crimes; almost all of those crimes had been undetected.[111]

In 2010 the NSA produced a video explaining its polygraph process.[115] The video, ten minutes long, is titled "The Truth About the Polygraph" and was posted to the Web site of the Defense Security Service. Jeff Stein of The Washington Post said that the video portrays "various applicants, or actors playing them – it's not clear – describing everything bad they had heard about the test, the implication being that none of it is true."[116] AntiPolygraph.org argues that the NSA-produced video omits some information about the polygraph process; it produced a video responding to the NSA video.[115] George Maschke, the founder of the Web site, accused the NSA polygraph video of being "Orwellian".[116]

After Edward Snowden revealed his identity in 2013, the NSA began requiring polygraphing of employees once per quarter.[117]

The number of exemptions from legal requirements has been criticized. When in 1964 Congress was hearing a bill giving the director of the NSA the power to fire at will any employee, The Washington Post wrote: "This is the very definition of arbitrariness. It means that an employee could be discharged and disgraced on the basis of anonymous allegations without the slightest opportunity to defend himself." Yet the bill was accepted by an overwhelming majority.[109]

The heraldic insignia of NSA consists of an eagle inside a circle, grasping a key in its talons.[118] The eagle represents the agency's national mission.[118] Its breast features a shield with bands of red and white, taken from the Great Seal of the United States and representing Congress.[118] The key is taken from the emblem of Saint Peter and represents security.[118]

When the NSA was created, the agency had no emblem and used that of the Department of Defense.[119] The agency adopted its first of two emblems in 1963.[119] The current NSA insignia has been in use since 1965, when then-Director LTG Marshall S. Carter (USA) ordered the creation of a device to represent the agency.[120]

The NSA's flag consists of the agency's seal on a light blue background.

Crews associated with NSA missions have been involved in a number of dangerous and deadly situations.[121] The USS Liberty incident in 1967 and USS Pueblo incident in 1968 are examples of the losses endured during the Cold War.[121]

The National Security Agency/Central Security Service Cryptologic Memorial honors and remembers the fallen personnel, both military and civilian, of these intelligence missions.[122] It is made of black granite, and has 171 names carved into it, as of 2013.[122] It is located at NSA headquarters. A tradition of declassifying the stories of the fallen was begun in 2001.[122]

NSANet stands for National Security Agency Network and is the official NSA intranet.[123] It is a classified network[124] for information up to the level of TS/SCI,[125] supporting the use and sharing of intelligence data between NSA and the signals intelligence agencies of the four other nations of the Five Eyes partnership. The management of NSANet has been delegated to the Central Security Service Texas (CSSTEXAS).[126]

NSANet is a highly secured computer network consisting of fiber-optic and satellite communication channels which are almost completely separated from the public Internet. The network allows NSA personnel and civilian and military intelligence analysts anywhere in the world to have access to the agency's systems and databases. This access is tightly controlled and monitored. For example, every keystroke is logged, activities are audited at random and downloading and printing of documents from NSANet are recorded.[127]
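As a rough illustration of the kind of per-action monitoring described above (every keystroke logged, random audits, downloads and printing recorded), the sketch below shows how such audit logging could be structured in Python. It is purely illustrative: the names record_action and AUDIT_SAMPLE_RATE are hypothetical, and nothing here is drawn from NSA's actual systems.

import logging
import random
from datetime import datetime, timezone

# Illustrative sketch only: log every user action with a timestamp and
# randomly flag a small fraction of actions for manual audit review.
logging.basicConfig(level=logging.INFO)
audit_logger = logging.getLogger("audit")

AUDIT_SAMPLE_RATE = 0.05  # hypothetical fraction of actions selected for review


def record_action(user: str, action: str, resource: str) -> None:
    """Write one audit-log entry and decide whether it gets a random audit."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "action": action,
        "resource": resource,
        "flagged_for_audit": random.random() < AUDIT_SAMPLE_RATE,
    }
    audit_logger.info("%s", entry)


# Example: downloads and print jobs are recorded the same way.
record_action("analyst42", "download", "report-2013-06.pdf")
record_action("analyst42", "print", "summary.docx")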

In 1998, NSANet, along with NIPRNET and SIPRNET, had "significant problems with poor search capabilities, unorganized data and old information".[128] In 2004, the network was reported to have used over twenty commercial off-the-shelf operating systems.[129] Some universities that do highly sensitive research are allowed to connect to it.[130]

The thousands of Top Secret internal NSA documents that were taken by Edward Snowden in 2013 were stored in "a file-sharing location on the NSA's intranet site" so they could easily be read online by NSA personnel. Everyone with a TS/SCI-clearance had access to these documents and as a system administrator, Snowden was responsible for moving accidentally misplaced highly sensitive documents to more secure storage locations.[131]

The DoD Computer Security Center was founded in 1981 and renamed the National Computer Security Center (NCSC) in 1985. NCSC was responsible for computer security throughout the federal government.[132] NCSC was part of NSA,[133] and during the late 1980s and the 1990s, NSA and NCSC published the Trusted Computer System Evaluation Criteria in a six-foot-high Rainbow Series of books that detailed trusted computing and network platform specifications.[134] The Rainbow books were, however, replaced by the Common Criteria in the early 2000s.[134]

On July 18, 2013, Greenwald said that Snowden held "detailed blueprints of how the NSA does what they do", thereby sparking fresh controversy.[135]

Headquarters for the National Security Agency is located at 39°6′32″N 76°46′17″W (39.10889, -76.77139) in Fort George G. Meade, Maryland, although it is separate from other compounds and agencies that are based within this same military installation. Ft. Meade is about 20 mi (32 km) southwest of Baltimore,[136] and 25 mi (40 km) northeast of Washington, DC.[137] The NSA has its own exit off Maryland Route 295 South labeled "NSA Employees Only".[138][139] The exit may only be used by people with the proper clearances, and security vehicles parked along the road guard the entrance.[140]
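The decimal coordinates above follow from the degrees-minutes-seconds form by the standard conversion, shown here only as a consistency check between the two notations rather than as additional source data:

\[ 39 + \tfrac{6}{60} + \tfrac{32}{3600} \approx 39.10889^{\circ}\,\mathrm{N}, \qquad 76 + \tfrac{46}{60} + \tfrac{17}{3600} \approx 76.77139^{\circ}\,\mathrm{W} \]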

NSA is the largest employer in the U.S. state of Maryland, and two-thirds of its personnel work at Ft. Meade.[141] Built on 350 acres (140 ha; 0.55 sq mi)[142] of Ft. Meade's 5,000 acres (2,000 ha; 7.8 sq mi),[143] the site has 1,300 buildings and an estimated 18,000 parking spaces.[137][144]

The main NSA headquarters and operations building is what James Bamford, author of Body of Secrets, describes as "a modern boxy structure" that appears similar to "any stylish office building."[145] The building is covered with one-way dark glass, which is lined with copper shielding in order to prevent espionage by trapping in signals and sounds.[145] It contains 3,000,000 square feet (280,000 m²), or more than 68 acres (28 ha), of floor space; Bamford said that the U.S. Capitol "could easily fit inside it four times over."[145]

The facility has over 100 watchposts,[146] one of them being the visitor control center, a two-story area that serves as the entrance.[145] At the entrance, a white pentagonal structure,[147] visitor badges are issued to visitors and security clearances of employees are checked.[148] The visitor center includes a painting of the NSA seal.[147]

The OPS2A building, the tallest building in the NSA complex and the location of much of the agency's operations directorate, is accessible from the visitor center. Bamford described it as a "dark glass Rubik's Cube".[149] The facility's "red corridor" houses non-security operations such as concessions and the drug store. The name refers to the "red badge" which is worn by someone without a security clearance. The NSA headquarters includes a cafeteria, a credit union, ticket counters for airlines and entertainment, a barbershop, and a bank.[147] NSA headquarters has its own post office, fire department, and police force.[150][151][152]

The employees at the NSA headquarters reside in various places in the Baltimore-Washington area, including Annapolis, Baltimore, and Columbia in Maryland and the District of Columbia, including the Georgetown community.[153]

Following the major power outage in 2000, The Baltimore Sun reported in 2003, and in follow-up stories through 2007, that the NSA was at risk of electrical overload because of insufficient internal electrical infrastructure at Fort Meade to support the amount of equipment being installed. This problem was apparently recognized in the 1990s but not made a priority, and "now the agency's ability to keep its operations going is threatened."[154]

Baltimore Gas & Electric (BGE, now Constellation Energy) provided NSA with 65 to 75 megawatts at Ft. Meade in 2007, and expected that an increase of 10 to 15 megawatts would be needed later that year.[155] In 2011, NSA at Ft. Meade was Maryland's largest consumer of power.[141] In 2007, as BGE's largest customer, NSA bought as much electricity as Annapolis, the capital city of Maryland.[154]

One estimate put the potential for power consumption by the new Utah Data Center at US$40 million per year.[156]

When the agency was established, its headquarters and cryptographic center were in the Naval Security Station in Washington, D.C. The COMINT functions were located in Arlington Hall in Northern Virginia, which served as the headquarters of the U.S. Army's cryptographic operations.[157] Because the Soviet Union had detonated a nuclear bomb and because the facilities were crowded, the federal government wanted to move several agencies, including the AFSA/NSA. A planning committee considered Fort Knox, but Fort Meade, Maryland, was ultimately chosen as NSA headquarters because it was far enough away from Washington, D.C. in case of a nuclear strike and was close enough so its employees would not have to move their families.[158]

Construction of additional buildings began after the agency occupied buildings at Ft. Meade in the late 1950s, which they soon outgrew.[158] In 1963 the new headquarters building, nine stories tall, opened. NSA workers referred to the building as the "Headquarters Building" and since the NSA management occupied the top floor, workers used "Ninth Floor" to refer to their leaders.[159] COMSEC remained in Washington, D.C., until its new building was completed in 1968.[158] In September 1986, the Operations 2A and 2B buildings, both copper-shielded to prevent eavesdropping, opened with a dedication by President Ronald Reagan.[160] The four NSA buildings became known as the "Big Four."[160] The NSA director moved to 2B when it opened.[160]

On March 30, 2015, shortly before 9 a.m., a stolen sport utility vehicle approached an NSA police vehicle blocking the road near the gate of Fort Meade, after being told to leave the area. NSA officers fired on the SUV, killing the 27-year-old driver, Ricky Hall (a transgender person also known as Mya), and seriously injuring his 20-year-old male passenger. An NSA officer's arm was injured when Hall subsequently crashed into his vehicle.[161][162]

The two, dressed in women's clothing after a night of partying at a motel with the man they'd stolen the SUV from that morning, "attempted to drive a vehicle into the National Security Agency portion of the installation without authorization", according to an NSA statement.[163] FBI spokeswoman Amy Thoreson said the incident is not believed to be related to terrorism.[164] In June 2015 the FBI closed its investigation into the incident and federal prosecutors have declined to bring charges against anyone involved.[165]

An anonymous police official told The Washington Post, "This was not a deliberate attempt to breach the security of NSA. This was not a planned attack." The two are believed to have made a wrong turn off the highway while fleeing from the motel after stealing the vehicle. A small amount of cocaine was found in the SUV. A local CBS reporter initially said a gun was found,[166] but her later reporting did not repeat that claim.[167] Dozens of journalists were corralled into a parking lot blocks away from the scene and were barred from photographing the area.[168]

In 1995, The Baltimore Sun reported that the NSA is the owner of the single largest group of supercomputers.[169]

NSA held a groundbreaking ceremony at Ft. Meade in May 2013 for its High Performance Computing Center 2, expected to open in 2016.[170] Called Site M, the center has a 150 megawatt power substation, 14 administrative buildings and 10 parking garages.[150] It cost $3.2 billion and covers 227 acres (92 ha; 0.355 sq mi).[150] The center is 1,800,000 square feet (17 ha; 0.065 sq mi)[150] and initially uses 60 megawatts of electricity.[171]

Increments II and III are expected to be completed by 2030, and would quadruple the space, covering 5,800,000 square feet (54 ha; 0.21 sq mi) with 60 buildings and 40 parking garages.[150] Defense contractors are also establishing or expanding cybersecurity facilities near the NSA and around the Washington metropolitan area.[150]

As of 2012, NSA collected intelligence from four geostationary satellites.[156] Satellite receivers were at Roaring Creek Station in Catawissa, Pennsylvania and Salt Creek Station in Arbuckle, California.[156] It operated ten to twenty taps on U.S. telecom switches. NSA had installations in several U.S. states and from them observed intercepts from Europe, the Middle East, North Africa, Latin America, and Asia.[156]

NSA had facilities at Friendship Annex (FANX) in Linthicum, Maryland, which is a 20 to 25-minute drive from Ft. Meade;[172] the Aerospace Data Facility at Buckley Air Force Base in Aurora outside Denver, Colorado; NSA Texas in the Texas Cryptology Center at Lackland Air Force Base in San Antonio, Texas; NSA Georgia at Fort Gordon in Augusta, Georgia; NSA Hawaii in Honolulu; the Multiprogram Research Facility in Oak Ridge, Tennessee, and elsewhere.[153][156]

On January 6, 2011 a groundbreaking ceremony was held to begin construction on NSA's first Comprehensive National Cyber-security Initiative (CNCI) Data Center, known as the "Utah Data Center" for short. The $1.5 billion data center is being built at Camp Williams, Utah, located 25 miles (40 km) south of Salt Lake City, and will help support the agency's National Cyber-security Initiative.[173] It is expected to be operational by September 2013.[156]

In 2009, to protect its assets and to access more electricity, NSA sought to decentralize and expand its existing facilities in Ft. Meade and Menwith Hill,[174] the latter expansion expected to be completed by 2015.[175]

The Yakima Herald-Republic cited Bamford, saying that many of NSA's bases for its Echelon program were a legacy system, using outdated, 1990s technology.[176] In 2004, NSA closed its operations at Bad Aibling Station (Field Station 81) in Bad Aibling, Germany.[177] In 2012, NSA began to move some of its operations at Yakima Research Station, Yakima Training Center, in Washington state to Colorado, with plans to eventually close the Yakima site.[178] As of 2013, NSA also intended to close operations at Sugar Grove, West Virginia.[176]

Following the signing in 1946–1956[179] of the UKUSA Agreement between the United States, United Kingdom, Canada, Australia and New Zealand, which then cooperated on signals intelligence and ECHELON,[180] NSA stations were built at GCHQ Bude in Morwenstow, United Kingdom; Geraldton, Pine Gap and Shoal Bay, Australia; Leitrim and Ottawa, Canada; Misawa, Japan; and Waihopai and Tangimoana,[181] New Zealand.[182]

NSA operates RAF Menwith Hill in North Yorkshire, United Kingdom, which was, according to BBC News in 2007, the largest electronic monitoring station in the world.[183] Planned in 1954, and opened in 1960, the base covered 562 acres (227 ha; 0.878 sq mi) in 1999.[184]

The agency's European Cryptologic Center (ECC), with 240 employees in 2011, is headquartered at a US military compound in Griesheim, near Frankfurt in Germany. A 2011 NSA report indicates that the ECC is responsible for the "largest analysis and productivity in Europe" and focuses on various priorities, including Africa, Europe, the Middle East and counterterrorism operations.[185]

In 2013, a new Consolidated Intelligence Center, also to be used by NSA, was being built at the headquarters of the United States Army Europe in Wiesbaden, Germany.[186] NSA's partnership with the Bundesnachrichtendienst (BND), the German foreign intelligence service, was confirmed by BND president Gerhard Schindler.[186]

Thailand is a "3rd party partner" of the NSA along with nine other nations.[187] These are non-English-speaking countries that have made security agreements for the exchange of SIGINT raw material and end product reports.

Thailand is the site of at least two US SIGINT collection stations. One is at the US Embassy in Bangkok, a joint NSA-CIA Special Collection Service (SCS) unit. It presumably eavesdrops on foreign embassies, governmental communications, and other targets of opportunity.[188]

Read the original post:
National Security Agency - Wikipedia

Posted in NSA | Comments Off on National Security Agency – Wikipedia