The World Of Quantum Physics: EVERYTHING Is Energy – In5D …

by John Assaraf,

Nobel Prize-winning physicists have proven beyond doubt that the physical world is one large sea of energy that flashes into and out of being in milliseconds, over and over again.

This is the world of Quantum Physics.

They have proven that thoughts are what put together and hold together this ever-changing energy field into the objects that we see.

So why do we see a person instead of a flashing cluster of energy?

A movie is a sequence of still frames, about 24 per second. Each frame is separated by a gap, but because of the speed at which one frame replaces the next, our eyes are cheated into seeing a continuous, moving picture.

A TV tube is simply a tube with heaps of electrons hitting the screen in a certain way, creating the illusion of form and motion.

This is what all objects are anyway. You have five physical senses (sight, sound, touch, smell, and taste).

Each of these senses has a specific spectrum (for example, a dog hears a different range of sound than you do; a snake sees a different spectrum of light than you do; and so on).

In other words, your set of senses perceives the sea of energy from a certain limited standpoint and makes up an image from that.

It is not complete, nor is it accurate. It is just an interpretation.

All of our interpretations are solely based on the internal map of reality that we have, and not the real truth. Our map is a result of our personal life's collective experiences.

Our thoughts are linked to this invisible energy and they determine what the energy forms. Your thoughts literally shift the universe on a particle-by-particle basis to create your physical life.

Look around you.

Everything you see in our physical world started as an idea, an idea that grew as it was shared and expressed, until it grew enough into a physical object through a number of steps.

You literally become what you think about most.

Your life becomes what you have imagined and believed in most.

The world is literally your mirror, enabling you to experience in the physical plane what you hold as your truth until you change it.

Quantum physics shows us that the world is not the hard and unchangeable thing it may appear to be. Instead, it is a very fluid place continuously built up using our individual and collective thoughts.

What we think is true is really an illusion, almost like a magic trick.

Fortunately we have begun to uncover the illusion and most importantly, how to change it.

The human body comprises nine systems: circulatory, digestive, endocrine, muscular, nervous, reproductive, respiratory, skeletal, and urinary.

Tissues and organs.

Cells.

Molecules.

Atoms.

Sub-atomic particles.

Energy!

You and I are pure energy-light in its most beautiful and intelligent configuration: energy that is constantly changing beneath the surface, and you control it all with your powerful mind.

If you could see yourself under a powerful electron microscope and conduct other experiments on yourself, you would see that you are made up of a cluster of ever-changing energy in the form of electrons, neutrons, photons and so on.

So is everything else around you. Quantum physics tells us that it is the act of observing an object that causes it to be there where and how we observe it.

An object does not exist independently of its observer! So, as you can see, your observation, your attention to something, and your intention, literally creates that thing.

This is scientific and proven.

Your world is made of spirit, mind and body.

Each of those three, spirit, mind and body, has a function that is unique to it and not shared with the other. What you see with your eyes and experience with your body is the physical world, which we shall call Body. Body is an effect, created by a cause.

This cause is Thought.

Body cannot create. It can only experience and be experienced; that is its unique function.

Thought cannot experience; it can only make up, create, and interpret. It needs a world of relativity (the physical world, Body) to experience itself.

Spirit is All That Is, that which gives Life to Thought and Body.

Body has no power to create, although it gives the illusion of power to do so. This illusion is the cause of much frustration. Body is purely an effect and has no power to cause or create.

The key question in all of this information is: how do you learn to see the universe differently than you do now, so that you can manifest everything you truly desire?



Nothing Is Solid & Everything Is Energy Scientists …


It has been written about before, over and over again, but cannot be emphasized enough. The world of quantum physics is an eerie one, one that sheds light on the truth about our world in ways that challenge the existing framework of accepted knowledge.

What we perceive as our physical, material world is really not physical or material at all; in fact, it is far from it. This has been proven time and time again by multiple Nobel Prize-winning physicists (among many other scientists around the world), one of them being Niels Bohr, a Danish physicist who made significant contributions to understanding atomic structure and quantum theory.

"If quantum mechanics hasn't profoundly shocked you, you haven't understood it yet. Everything we call real is made of things that cannot be regarded as real." – Niels Bohr

At the turn of the twentieth century, physicists started to explore the relationship between energy and the structure of matter. In doing so, they dropped the belief in a physical, Newtonian material universe that had been at the very heart of scientific knowing, replacing it with the realization that matter is nothing but an illusion. Scientists began to recognize that everything in the Universe is made out of energy.

"Despite the unrivaled empirical success of quantum theory, the very suggestion that it may be literally true as a description of nature is still greeted with cynicism, incomprehension and even anger." (T. Folger, "Quantum Shmantum"; Discover 22:37-43, 2001)

Quantum physicists discovered that physical atoms are made up of vortices of energy that are constantly spinning and vibrating, each one radiating its own unique energy signature. Therefore, if we really want to observe ourselves and find out what we are, we are really beings of energy and vibration, radiating our own unique energy signature; this is what quantum physics has shown us time and time again. We are much more than what we perceive ourselves to be, and it's time we began to see ourselves in that light.

If you observed the composition of an atom with a microscope, you would see a small, invisible, tornado-like vortex, with a number of infinitely small energy vortices called quarks and photons. These are what make up the structure of the atom. As you focused in closer and closer on the structure of the atom, you would see nothing; you would observe a physical void. The atom has no physical structure, we have no physical structure, physical things really don't have any physical structure! Atoms are made out of invisible energy, not tangible matter.

"Get over it, and accept the inarguable conclusion. The universe is immaterial: mental and spiritual." (1) – Richard Conn Henry, Professor of Physics and Astronomy at Johns Hopkins University (quote taken from "The Mental Universe")

It's quite the conundrum, isn't it? Our experience tells us that our reality is made up of physical, material things, and that our world is an independently existing, objective one. The revelation that the universe is not an assembly of physical parts, as suggested by Newtonian physics, but instead comes from a holistic entanglement of immaterial energy waves stems from the work of Albert Einstein, Max Planck and Werner Heisenberg, among others. (0)

What does it mean that our physical, material reality isn't really physical at all? It could mean a number of things, and concepts such as this cannot be explored if scientists remain within the boundaries of the only perceived world existing, the world we see. As Nikola Tesla supposedly said:

"The day science begins to study non-physical phenomena, it will make more progress in one decade than in all the previous centuries of its existence."

Fortunately, many scientists have already taken the leap, and have already questioned the meaning and implications of what we've discovered with quantum physics. One of these potential revelations is that the observer creates the reality.

A fundamental conclusion of the new physics also acknowledges that the observer creates the reality. As observers, we are personally involved with the creation of our own reality. Physicists are being forced to admit that the universe is a mental construction. Pioneering physicist Sir James Jeans wrote: "The stream of knowledge is heading toward a non-mechanical reality; the universe begins to look more like a great thought than like a great machine. Mind no longer appears to be an accidental intruder into the realm of matter; we ought rather hail it as the creator and governor of the realm of matter." (R. C. Henry, "The Mental Universe"; Nature 436:29, 2005)

One great example that illustrates the role of consciousness within the physical, material world (which we know not to be so physical) is the double-slit experiment. This experiment has been used multiple times to explore the role of consciousness in shaping the nature of physical reality. (2)

A double-slit optical system was used to test the possible role of consciousness in the collapse of the quantum wave-function. The ratio of the interference pattern's double-slit spectral power to its single-slit spectral power was predicted to decrease when attention was focused toward the double slit as compared to away from it. The study found that factors associated with consciousness, such as meditation experience, electrocortical markers of focused attention, and psychological factors such as openness and absorption, significantly correlated in predicted ways with perturbations in the double-slit interference pattern. (2)
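The interference pattern that study measured is standard wave optics: the far-field double-slit intensity is a single-slit envelope multiplied by a two-slit interference term. A minimal sketch of the textbook (Fraunhofer) formula follows; the wavelength and slit dimensions are illustrative assumptions, not the cited study's apparatus:

```python
import math

# Far-field double-slit intensity (Fraunhofer approximation).
# All numbers below are illustrative, not taken from the cited study.
wavelength = 633e-9   # light wavelength in metres (red HeNe laser)
a = 20e-6             # slit width in metres
d = 100e-6            # centre-to-centre slit separation in metres

def intensity(theta):
    """Normalized intensity at observation angle theta (radians)."""
    beta = math.pi * a * math.sin(theta) / wavelength   # single-slit term
    delta = math.pi * d * math.sin(theta) / wavelength  # two-slit term
    envelope = 1.0 if beta == 0 else (math.sin(beta) / beta) ** 2
    return envelope * math.cos(delta) ** 2

# Central maximum, and the first dark fringe, which sits where the path
# difference between the slits is half a wavelength: sin(theta) = wavelength / (2 * d).
print(intensity(0.0))                                        # -> 1.0
print(round(intensity(math.asin(wavelength / (2 * d))), 6))  # -> 0.0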

This is just the beginning. I wrote another article earlier this year with much more sourced information regarding the role of consciousness and our physical, material world:

10 Scientific Studies That Prove Consciousness Can Alter Our Physical Material World.

The significance of this information is for us to wake up and realize that we are all energy, radiating our own unique energy signature. Feelings, thoughts and emotions play a vital role; quantum physics helps us see the significance of how we all feel. If all of us are in a peaceful, loving state inside, it will no doubt impact the external world around us, and influence how others feel as well.

"If you want to know the secrets of the universe, think in terms of energy, frequency and vibration." – Nikola Tesla

Studies have shown that positive emotions and operating from a place of peace within oneself can lead to a very different experience for the person emitting those emotions and for those around them. At our subatomic level, does the vibrational frequency change the manifestation of physical reality? If so, in what way? We know that when an atom changes its state, it absorbs or emits electromagnetic frequencies, which are responsible for changing its state. Do different states of emotion, perception and feelings result in different electromagnetic frequencies? Yes! This has been proven. (3)
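The uncontroversial core of that last claim is textbook physics: when an atom changes state, it emits or absorbs a photon whose frequency is fixed by the energy gap, E = hν. A small sketch for hydrogen, using standard constants (the function name is mine):

```python
import math  # only used implicitly via plain arithmetic; stdlib only

# Photon emitted when hydrogen drops from level n2 to n1, using the
# Rydberg energy E_n = -13.605693 eV / n^2 (standard textbook value).
RYDBERG_EV = 13.605693      # ionization energy of hydrogen, eV
EV_TO_J = 1.602176634e-19   # joules per electronvolt
H_PLANCK = 6.62607015e-34   # Planck constant, J*s
C_LIGHT = 2.99792458e8      # speed of light, m/s

def emitted_wavelength_nm(n2, n1):
    """Wavelength (nm) of the photon for the n2 -> n1 transition."""
    energy_ev = RYDBERG_EV * (1 / n1**2 - 1 / n2**2)  # photon energy, eV
    energy_j = energy_ev * EV_TO_J
    frequency = energy_j / H_PLANCK      # E = h * nu
    return C_LIGHT / frequency * 1e9     # lambda = c / nu, in nm

# The n = 3 -> 2 transition is the red Balmer-alpha line near 656 nm.
print(round(emitted_wavelength_nm(3, 2), 1))
```

The point is only that atomic state changes and electromagnetic frequencies are linked by a precise, measurable relation; the leap from this to emotions "emitting frequencies" is the article's claim, not something the formula establishes.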


"Space is just a construct that gives the illusion that there are separate objects." – Dr. Quantum (source)

Sources:

(1)http://henry.pha.jhu.edu/The.mental.Universe.pdf

(2)http://media.noetic.org/uploads/files/PhysicsEssays-Radin-DoubleSlit-2012.pdf

(3)http://www.heartmath.org/research/research-publications/energetic-heart-bioelectromagnetic-communication-within-and-between-people.html

communities.washingtontimes.com/neighborhood/energy-harnassed/2012/sep/30/secrets-universe-unlocked/



Quantum mind – Wikipedia

The quantum mind or quantum consciousness[1] is a group of hypotheses proposing that classical mechanics cannot explain consciousness. It posits that quantum-mechanical phenomena, such as quantum entanglement and superposition, may play an important part in the brain's function and could form the basis of an explanation of consciousness.

Hypotheses have been proposed about ways for quantum effects to be involved in the process of consciousness, but even those who advocate them admit that the hypotheses remain unproven, and possibly unprovable. Some of the proponents propose experiments that could demonstrate quantum consciousness, but the experiments have not yet been possible to perform.

Terms used in the theory of quantum mechanics can be misinterpreted by laymen in ways that are not valid but that sound mystical or religious, and therefore may seem to be related to consciousness. These misinterpretations of the terms are not justified in the theory of quantum mechanics. According to Sean Carroll, "No theory in the history of science has been more misused and abused by cranks and charlatans, and misunderstood by people struggling in good faith with difficult ideas, than quantum mechanics."[2] Lawrence Krauss says, "No area of physics stimulates more nonsense in the public arena than quantum mechanics."[3] Some proponents of pseudoscience use quantum mechanical terms in an effort to justify their statements, but this effort is misleading, and it is a false interpretation of the physical theory. Quantum mind theories of consciousness that are based on this kind of misinterpretation of terms cannot be validated by scientific methods or empirical experiments.

Eugene Wigner developed the idea that quantum mechanics has something to do with the workings of the mind. He proposed that the wave function collapses due to its interaction with consciousness. Freeman Dyson argued that “mind, as manifested by the capacity to make choices, is to some extent inherent in every electron.”[4]

Other contemporary physicists and philosophers considered these arguments to be unconvincing.[5] Victor Stenger characterized quantum consciousness as a “myth” having “no scientific basis” that “should take its place along with gods, unicorns and dragons.”[6]

David Chalmers argued against quantum consciousness. He instead discussed how quantum mechanics may relate to dualistic consciousness.[7] Chalmers is skeptical of the ability of any new physics to resolve the hard problem of consciousness.[8][9]

David Bohm viewed quantum theory and relativity as contradictory, which implied a more fundamental level in the universe.[10] He claimed both quantum theory and relativity pointed towards this deeper theory, which he formulated as a quantum field theory. This more fundamental level was proposed to represent an undivided wholeness and an implicate order, from which arises the explicate order of the universe as we experience it.

Bohm’s proposed implicate order applies both to matter and consciousness. He suggested that it could explain the relationship between them. He saw mind and matter as projections into our explicate order from the underlying implicate order. Bohm claimed that when we look at matter, we see nothing that helps us to understand consciousness.

Bohm discussed the experience of listening to music. He believed the feeling of movement and change that make up our experience of music derive from holding the immediate past and the present in the brain together. The musical notes from the past are transformations rather than memories. The notes that were implicate in the immediate past become explicate in the present. Bohm viewed this as consciousness emerging from the implicate order.

Bohm saw the movement, change or flow, and the coherence of experiences, such as listening to music, as a manifestation of the implicate order. He claimed to derive evidence for this from Jean Piaget’s[11] work on infants. He held these studies to show that young children learn about time and space because they have a “hard-wired” understanding of movement as part of the implicate order. He compared this “hard-wiring” to Chomsky’s theory that grammar is “hard-wired” into human brains.

Bohm never proposed a specific means by which his proposal could be falsified, nor a neural mechanism through which his “implicate order” could emerge in a way relevant to consciousness.[10] Bohm later collaborated on Karl Pribram’s holonomic brain theory as a model of quantum consciousness.[12]

According to philosopher Paavo Pylkkänen, Bohm's suggestion "leads naturally to the assumption that the physical correlate of the logical thinking process is at the classically describable level of the brain, while the basic thinking process is at the quantum-theoretically describable level."[13]

Theoretical physicist Roger Penrose and anaesthesiologist Stuart Hameroff collaborated to produce the theory known as Orchestrated Objective Reduction (Orch-OR). Penrose and Hameroff initially developed their ideas separately and later collaborated to produce Orch-OR in the early 1990s. The theory was reviewed and updated by the authors in late 2013.[14][15]

Penrose's argument stemmed from Gödel's incompleteness theorems. In Penrose's first book on consciousness, The Emperor's New Mind (1989),[16] he argued that while a formal system cannot prove its own consistency, Gödel-unprovable results are provable by human mathematicians.[17] He took this disparity to mean that human mathematicians are not formal proof systems and are not running a computable algorithm. According to Bringsjord and Xiao, this line of reasoning is based on fallacious equivocation on the meaning of computation.[18] In the same book, Penrose wrote, "One might speculate, however, that somewhere deep in the brain, cells are to be found of single quantum sensitivity. If this proves to be the case, then quantum mechanics will be significantly involved in brain activity."[16]:p.400

Penrose determined that wave function collapse was the only possible physical basis for a non-computable process. Dissatisfied with its randomness, Penrose proposed a new form of wave function collapse that occurred in isolation and called it objective reduction. He suggested that each quantum superposition has its own piece of spacetime curvature and that when these become separated by more than one Planck length they become unstable and collapse.[19] Penrose suggested that objective reduction represented neither randomness nor algorithmic processing but instead a non-computable influence in spacetime geometry from which mathematical understanding and, by later extension, consciousness derived.[19]
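Penrose's collapse criterion comes with a quantitative timescale in his published work; as a rough sketch of the scaling, not a derivation:

```latex
% Penrose's objective-reduction timescale: a superposition is expected
% to self-collapse after roughly
\tau \approx \frac{\hbar}{E_G}
% where E_G is the gravitational self-energy of the difference between
% the two superposed mass distributions: the more mass displaced between
% the branches, the larger E_G and the sooner the collapse.
```

On this scaling, macroscopic superpositions collapse almost instantly while well-isolated microscopic ones can persist; Orch-OR locates the brain-relevant regime in between.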

Hameroff provided a hypothesis that microtubules would be suitable hosts for quantum behavior.[20] Microtubules are composed of tubulin protein dimer subunits. The dimers each have hydrophobic pockets that are 8 nm apart and that may contain delocalized pi electrons. Tubulins have other smaller non-polar regions that contain pi electron-rich indole rings separated by only about 2 nm. Hameroff proposed that these electrons are close enough to become entangled.[21] Hameroff originally suggested the tubulin-subunit electrons would form a Bose-Einstein condensate, but this was discredited.[22] He then proposed a Fröhlich condensate, a hypothetical coherent oscillation of dipolar molecules. However, this too was experimentally discredited.[23]

However, Orch-OR made numerous false biological predictions, and is not an accepted model of brain physiology;[24] in other words, there is a missing link between physics and neuroscience.[25] For instance, the proposed predominance of 'A' lattice microtubules, which are more suitable for information processing, was falsified by Kikkawa et al.,[26][27] who showed that all in vivo microtubules have a 'B' lattice and a seam. The proposed existence of gap junctions between neurons and glial cells was also falsified.[28] Orch-OR predicted that microtubule coherence reaches the synapses via dendritic lamellar bodies (DLBs); however, De Zeeuw et al. proved this impossible[29] by showing that DLBs are located micrometers away from gap junctions.[30]

In January 2014, Hameroff and Penrose claimed that the discovery of quantum vibrations in microtubules by Anirban Bandyopadhyay of the National Institute for Materials Science in Japan in March 2013[31] corroborates the Orch-OR theory.[15][32]

Although these theories are stated in a scientific framework, it is difficult to separate them from the personal opinions of the scientist. The opinions are often based on intuition or subjective ideas about the nature of consciousness. For example, Penrose wrote,

my own point of view asserts that you can't even simulate conscious activity. What's going on in conscious thinking is something you couldn't properly imitate at all by computer…. If something behaves as though it's conscious, do you say it is conscious? People argue endlessly about that. Some people would say, 'Well, you've got to take the operational viewpoint; we don't know what consciousness is. How do you judge whether a person is conscious or not? Only by the way they act. You apply the same criterion to a computer or a computer-controlled robot.' Other people would say, 'No, you can't say it feels something merely because it behaves as though it feels something.' My view is different from both those views. The robot wouldn't even behave convincingly as though it was conscious unless it really was, which I say it couldn't be, if it's entirely computationally controlled.[33]

Penrose continues,

A lot of what the brain does you could do on a computer. I'm not saying that all the brain's action is completely different from what you do on a computer. I am claiming that the actions of consciousness are something different. I'm not saying that consciousness is beyond physics, either, although I'm saying that it's beyond the physics we know now…. My claim is that there has to be something in physics that we don't yet understand, which is very important, and which is of a noncomputational character. It's not specific to our brains; it's out there, in the physical world. But it usually plays a totally insignificant role. It would have to be in the bridge between quantum and classical levels of behavior, that is, where quantum measurement comes in.[34]

In response, W. Daniel Hillis replied, “Penrose has committed the classical mistake of putting humans at the center of the universe. His argument is essentially that he can’t imagine how the mind could be as complicated as it is without having some magic elixir brought in from some new principle of physics, so therefore it must involve that. It’s a failure of Penrose’s imagination…. It’s true that there are unexplainable, uncomputable things, but there’s no reason whatsoever to believe that the complex behavior we see in humans is in any way related to uncomputable, unexplainable things.”[34]

Lawrence Krauss is also blunt in criticizing Penrose’s ideas. He said, “Well, Roger Penrose has given lots of new-age crackpots ammunition by suggesting that at some fundamental scale, quantum mechanics might be relevant for consciousness. When you hear the term ‘quantum consciousness,’ you should be suspicious…. Many people are dubious that Penrose’s suggestions are reasonable, because the brain is not an isolated quantum-mechanical system.”[3]

Hiroomi Umezawa and collaborators proposed a quantum field theory of memory storage.[35][36] Giuseppe Vitiello and Walter Freeman proposed a dialog model of the mind. This dialog takes place between the classical and the quantum parts of the brain.[37][38][39] Their quantum field theory models of brain dynamics are fundamentally different from the Penrose-Hameroff theory.

Karl Pribram’s holonomic brain theory (quantum holography) invoked quantum mechanics to explain higher order processing by the mind.[40][41] He argued that his holonomic model solved the binding problem.[42] Pribram collaborated with Bohm in his work on the quantum approaches to mind and he provided evidence on how much of the processing in the brain was done in wholes.[43] He proposed that ordered water at dendritic membrane surfaces might operate by structuring Bose-Einstein condensation supporting quantum dynamics.[44]

Although Subhash Kak’s work is not directly related to that of Pribram, he likewise proposed that the physical substrate to neural networks has a quantum basis,[45][46] but asserted that the quantum mind has machine-like limitations.[47] He points to a role for quantum theory in the distinction between machine intelligence and biological intelligence, but that in itself cannot explain all aspects of consciousness.[48][49] He has proposed that the mind remains oblivious of its quantum nature due to the principle of veiled nonlocality.[50][51]

Henry Stapp proposed that quantum waves are reduced only when they interact with consciousness. He argues from the orthodox quantum mechanics of John von Neumann that the quantum state collapses when the observer selects one among the alternative quantum possibilities as a basis for future action; the collapse, therefore, is tied to the expectation of the observer associated with the state. Stapp's work drew criticism from scientists such as David Bourget and Danko Georgiev.[52] Georgiev[53][54][55] criticized Stapp's model in two respects.

Stapp has responded to both of Georgiev’s objections.[56][57]

British philosopher David Pearce defends what he calls physicalistic idealism ("the non-materialist physicalist claim that reality is fundamentally experiential and that the natural world is exhaustively described by the equations of physics and their solutions […]"), and has conjectured that unitary conscious minds are physical states of quantum coherence (neuronal superpositions).[58][59][60][61] This conjecture is, according to Pearce, amenable to falsification, unlike most theories of consciousness, and Pearce has outlined an experimental protocol describing how the hypothesis could be tested.[62] Pearce admits that his ideas are "highly speculative," "counterintuitive," and "incredible."[60]

These hypotheses of the quantum mind remain hypothetical speculation, as Penrose and Pearce admitted in their discussion. Until they make a prediction that is tested by experiment, the hypotheses aren’t based in empirical evidence. According to Lawrence Krauss, “It is true that quantum mechanics is extremely strange, and on extremely small scales for short times, all sorts of weird things happen. And in fact we can make weird quantum phenomena happen. But what quantum mechanics doesn’t change about the universe is, if you want to change things, you still have to do something. You can’t change the world by thinking about it.”[3]

The process of testing the hypotheses with experiments is fraught with problems, including conceptual/theoretical, practical, and ethical issues.

The idea that a quantum effect is necessary for consciousness to function is still in the realm of philosophy. Penrose proposes that it is necessary, but other theories of consciousness do not indicate that it is needed. For example, Daniel Dennett proposed a theory called the multiple drafts model, which does not require quantum effects; it is described in his 1991 book Consciousness Explained.[63] A philosophical argument on either side isn't scientific proof, although philosophical analysis can indicate key differences between the types of models and show what kinds of experimental differences might be observed. But since there isn't a clear consensus among philosophers, there is no conceptual support that a quantum mind theory is needed.

There are computers that are specifically designed to compute using quantum mechanical effects. Quantum computing is computing using quantum-mechanical phenomena, such as superposition and entanglement.[64] They are different from binary digital electronic computers based on transistors. Whereas common digital computing requires that the data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses quantum bits, which can be in superpositions of states. One of the greatest challenges is controlling or removing quantum decoherence. This usually means isolating the system from its environment, as interactions with the external world cause the system to decohere. Currently, some quantum computers require their qubits to be cooled to 20 millikelvins in order to prevent significant decoherence.[65] As a result, time-consuming tasks may render some quantum algorithms inoperable, as maintaining the state of qubits for a long enough duration will eventually corrupt the superpositions.[66] There aren't any obvious analogies between the functioning of quantum computers and the human brain. Some of the hypothetical models of quantum mind have proposed mechanisms for maintaining quantum coherence in the brain, but they have not been shown to operate.
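The bit-versus-qubit distinction described above can be sketched in plain Python: a qubit is two complex amplitudes, and measurement returns a definite bit with probabilities given by the squared amplitudes. This is an illustrative classical simulation of the rule, not a claim about hardware or the brain:

```python
import math
import random

# A single qubit as two amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1. Measuring yields 0 with probability
# |alpha|^2 and 1 with probability |beta|^2, unlike a classical bit,
# which is always definitely 0 or 1.
def measure(alpha, beta, rng):
    p0 = abs(alpha) ** 2
    return 0 if rng.random() < p0 else 1

# Equal superposition, as produced by a Hadamard gate acting on |0>.
alpha = beta = 1 / math.sqrt(2)

rng = random.Random(42)  # fixed seed so the sketch is reproducible
samples = [measure(alpha, beta, rng) for _ in range(10_000)]
print(samples.count(1) / len(samples))  # empirical frequency of 1, near 0.5
```

The sketch keeps only the amplitude-to-probability rule; real qubits also carry relative phase, which a computational-basis measurement like this one never sees.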

Quantum entanglement is a physical phenomenon often invoked for quantum mind models. This effect occurs when pairs or groups of particles interact so that the quantum state of each particle cannot be described independently of the other(s), even when the particles are separated by a large distance. Instead, a quantum state has to be described for the whole system. Measurements of physical properties such as position, momentum, spin, and polarization, performed on entangled particles, are found to be correlated. If one of the particles is measured, the same property of the other particle immediately adjusts to maintain the conservation of the physical phenomenon. According to the formalism of quantum theory, the effect of measurement happens instantly, no matter how far apart the particles are.[67][68] It is not possible to use this effect to transmit classical information at faster-than-light speeds[69] (see Faster-than-light § Quantum mechanics). Entanglement is broken when the entangled particles decohere through interaction with the environment; for example, when a measurement is made[70] or the particles undergo random collisions or interactions. According to David Pearce, "In neuronal networks, ion-ion scattering, ion-water collisions, and long-range Coulomb interactions from nearby ions all contribute to rapid decoherence times; but thermally-induced decoherence is even harder experimentally to control than collisional decoherence." He anticipated that quantum effects would have to be measured in femtoseconds, a trillion times faster than the rate at which neurons function (milliseconds).[62]

Another possible conceptual approach is to use quantum mechanics as an analogy to understand a different field of study like consciousness, without expecting that the laws of quantum physics will apply. An example of this approach is the idea of Schrödinger's cat. Erwin Schrödinger described how one could, in principle, create entanglement of a large-scale system by making it dependent on an elementary particle in a superposition. He proposed a scenario with a cat in a locked steel chamber, wherein the cat's life or death depended on the state of a radioactive atom, whether it had decayed and emitted radiation or not. According to Schrödinger, the Copenhagen interpretation implies that the cat remains both alive and dead until the state has been observed. Schrödinger did not wish to promote the idea of dead-and-alive cats as a serious possibility; on the contrary, he intended the example to illustrate the absurdity of the existing view of quantum mechanics.[71] However, since Schrödinger's time, other interpretations of the mathematics of quantum mechanics have been advanced by physicists, some of which regard the "alive and dead" cat superposition as quite real.[72][73]

Schrödinger's famous thought experiment poses the question, "when does a quantum system stop existing as a superposition of states and become one or the other?" In the same way, it is possible to ask whether the brain's act of making a decision is analogous to having a superposition of states of two decision outcomes, so that making a decision means "opening the box" to reduce the brain from a combination of states to one state. But even Schrödinger didn't think this really happened to the cat; he didn't think the cat was literally alive and dead at the same time. This analogy about making a decision uses a formalism that is derived from quantum mechanics, but it doesn't indicate the actual mechanism by which the decision is made. In this way, the idea is similar to quantum cognition.
This field clearly distinguishes itself from the quantum mind in that it does not rely on the hypothesis that anything micro-physical and quantum mechanical is happening in the brain. Quantum cognition is based on the quantum-like paradigm,[74][75] generalized quantum paradigm,[76] or quantum structure paradigm[77]: the idea that information processing by complex systems such as the brain can be mathematically described in the framework of quantum information and quantum probability theory. This model uses quantum mechanics only as an analogy and does not propose that quantum mechanics is the physical mechanism by which the brain operates. For example, quantum cognition proposes that some decisions can be analyzed as if there were interference between two alternatives, but this is not a physical quantum interference effect.
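As a toy illustration of this quantum-like paradigm, decision probabilities can be modeled with complex "amplitudes" whose combined squared magnitude need not equal the classical sum of the two alternatives. The sketch assumes nothing about the brain; the amplitudes and their phases are purely illustrative.

```python
import cmath

def interference_probability(a1, a2):
    """Quantum-like combination of two complex 'decision amplitudes':
    P = |a1 + a2|^2, which differs from the classical sum
    |a1|^2 + |a2|^2 by the interference term 2*Re(a1 * conj(a2))."""
    return abs(a1 + a2) ** 2

# Two alternatives with equal weight but opposite phase cancel almost
# entirely, something no sum of nonnegative classical probabilities can do.
a1 = cmath.rect(0.5, 0.0)         # amplitude for alternative 1
a2 = cmath.rect(0.5, cmath.pi)    # amplitude for alternative 2, phase pi
classical = abs(a1) ** 2 + abs(a2) ** 2   # 0.5
quantum = interference_probability(a1, a2)  # ~0 (destructive interference)
```

The formalism here is borrowed as mathematics only, exactly as the paragraph above describes; no physical quantum interference is implied.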

A demonstration of a quantum mind effect by experiment is necessary. Is there a way to show that consciousness is impossible without a quantum effect? Can a sufficiently complex digital, non-quantum computer be shown to be incapable of consciousness? Perhaps a quantum computer will show that quantum effects are needed. In any case, complex computers, whether digital or quantum, may eventually be built; these could demonstrate which type of computer is capable of conscious, intentional thought. But they don't exist yet, and no experimental test has been demonstrated.

Quantum mechanics is a mathematical model that can provide some extremely accurate numerical predictions. Richard Feynman called quantum electrodynamics, based on the quantum mechanics formalism, "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen.[78]:Ch1 So it is not impossible that the model could provide an accurate prediction about consciousness that would confirm that a quantum effect is involved. If the mind depends on quantum mechanical effects, the true test is an experiment whose calculated prediction can be compared to a measurement. Such an experiment has to show a measurable difference between a classical computation result in a brain and one that involves quantum effects.

The main theoretical argument against the quantum mind hypothesis is the assertion that quantum states in the brain would lose coherency before they reached a scale where they could be useful for neural processing. This supposition was elaborated by Tegmark, whose calculations indicate that quantum systems in the brain decohere at sub-picosecond timescales.[79][80] No brain response has been shown to produce computation results or reactions on such a fast timescale. Typical reactions are on the order of milliseconds, trillions of times longer than sub-picosecond timescales.[81]

Daniel Dennett cites an experimental result concerning an optical illusion that unfolds on a time scale of about a second in support of his Multiple Drafts Model. In this experiment, two differently colored lights, with an angular separation of a few degrees at the eye, are flashed in succession. If the interval between the flashes is less than a second or so, the first light that is flashed appears to move across to the position of the second light. Furthermore, the light seems to change color as it moves across the visual field. A green light will appear to turn red as it seems to move across to the position of a red light. Dennett asks how we could see the light change color before the second light is observed.[63] Velmans argues that the cutaneous rabbit illusion, another illusion that happens in about a second, demonstrates that there is a delay while modelling occurs in the brain and that this delay was discovered by Libet.[82] These slow illusions, which unfold over times approaching a second, don't support a proposal that the brain functions on the picosecond time scale.

According to David Pearce, a demonstration of picosecond effects is "the fiendishly hard part - feasible in principle, but an experimental challenge still beyond the reach of contemporary molecular matter-wave interferometry. …The conjecture predicts that we'll discover the interference signature of sub-femtosecond macro-superpositions."[62]

Penrose says,

The problem with trying to use quantum mechanics in the action of the brain is that if it were a matter of quantum nerve signals, these nerve signals would disturb the rest of the material in the brain, to the extent that the quantum coherence would get lost very quickly. You couldn’t even attempt to build a quantum computer out of ordinary nerve signals, because they’re just too big and in an environment that’s too disorganized. Ordinary nerve signals have to be treated classically. But if you go down to the level of the microtubules, then there’s an extremely good chance that you can get quantum-level activity inside them.

For my picture, I need this quantum-level activity in the microtubules; the activity has to be a large scale thing that goes not just from one microtubule to the next but from one nerve cell to the next, across large areas of the brain. We need some kind of coherent activity of a quantum nature which is weakly coupled to the computational activity that Hameroff argues is taking place along the microtubules.

A demonstration of a quantum effect in the brain has to explain this problem or explain why it is not relevant, or that the brain somehow circumvents the problem of the loss of quantum coherency at body temperature. As Penrose proposes, it may require a new type of physical theory.

Can self-awareness, or understanding of a self in the surrounding environment, be done by a classical parallel processor, or are quantum effects needed to have a sense of “oneness”? According to Lawrence Krauss, “You should be wary whenever you hear something like, ‘Quantum mechanics connects you with the universe’ … or ‘quantum mechanics unifies you with everything else.’ You can begin to be skeptical that the speaker is somehow trying to use quantum mechanics to argue fundamentally that you can change the world by thinking about it.”[3] A subjective feeling is not sufficient to make this determination. Humans don’t have a reliable subjective feeling for how we do a lot of functions. According to Daniel Dennett, “On this topic, Everybody’s an expert… but they think that they have a particular personal authority about the nature of their own conscious experiences that can trump any hypothesis they find unacceptable.”[83]

Since humans are the only animals known to be conscious, performing experiments to demonstrate quantum effects in consciousness requires experimentation on a living human brain. This is not automatically excluded or impossible, but it seriously limits the kinds of experiments that can be done. Studies of the ethics of brain studies are being actively solicited[84] by the BRAIN Initiative, a U.S. federal government funded effort to document the connections of neurons in the brain.

An ethically objectionable practice of some proponents of quantum mind theories is the use of quantum mechanical terms to make an argument sound more impressive, even when they know those terms are irrelevant. Dale DeBakcsy notes that "trendy parapsychologists, academic relativists, and even the Dalai Lama have all taken their turn at robbing modern physics of a few well-sounding phrases and stretching them far beyond their original scope in order to add scientific weight to various pet theories."[85] At the very least, these proponents must state clearly whether quantum formalism is being used as an analogy or as an actual physical mechanism, and what evidence they are using for support. An ethical statement by a researcher should specify what kind of relationship their hypothesis has to the physical laws.

Misleading statements of this type have been given by, for example, Deepak Chopra. Chopra has commonly referred to topics such as quantum healing or quantum effects of consciousness. Seeing the human body as being undergirded by a "quantum mechanical body" composed not of matter but of energy and information, he believes that "human aging is fluid and changeable; it can speed up, slow down, stop for a time, and even reverse itself," as determined by one's state of mind.[86] Robert Carroll states Chopra attempts to integrate Ayurveda with quantum mechanics to justify his teachings.[87] Chopra argues that what he calls "quantum healing" cures any manner of ailments, including cancer, through effects that he claims are literally based on the same principles as quantum mechanics.[88] This has led physicists to object to his use of the term quantum in reference to medical conditions and the human body.[88] Chopra said, "I think quantum theory has a lot of things to say about the observer effect, about non-locality, about correlations. So I think there's a school of physicists who believe that consciousness has to be equated, or at least brought into the equation, in understanding quantum mechanics."[89] On the other hand, he also claims "[Quantum effects are] just a metaphor. Just like an electron or a photon is an indivisible unit of information and energy, a thought is an indivisible unit of consciousness."[89] In his book Quantum Healing, Chopra concluded that quantum entanglement links everything in the Universe, and therefore it must create consciousness.[90] In either case, the references to the word "quantum" don't mean what a physicist would claim, and arguments that use the word "quantum" shouldn't be taken as scientifically proven.

Chris Carter includes statements in his book, Science and Psychic Phenomena,[91] of quotes from quantum physicists in support of psychic phenomena. In a review of the book, Benjamin Radford wrote that Carter used such references to “quantum physics, which he knows nothing about and which he (and people like Deepak Chopra) love to cite and reference because it sounds mysterious and paranormal…. Real, actual physicists I’ve spoken to break out laughing at this crap…. If Carter wishes to posit that quantum physics provides a plausible mechanism for psi, then it is his responsibility to show that, and he clearly fails to do so.”[92] Sharon Hill has studied amateur paranormal research groups, and these groups like to use “vague and confusing language: ghosts ‘use energy,’ are made up of ‘magnetic fields’, or are associated with a ‘quantum state.'”[93][94]

Statements like these about quantum mechanics indicate a temptation to misinterpret technical, mathematical terms like entanglement in terms of mystical feelings. This approach can be interpreted as a kind of Scientism, using the language and authority of science when the scientific concepts don’t apply.

A larger problem in the popular press is that quantum mind hypotheses are extracted without scientific support or justification and used to support areas of pseudoscience. In brief, for example, the property of quantum entanglement refers to the connection between two particles that share a property such as angular momentum. If the particles collide, then they are no longer entangled. Extrapolating this property from the entanglement of two elementary particles to the functioning of neurons in the brain is not simple. It is a long chain of reasoning from entangled elementary particles to a macroscopic effect that influences human consciousness. It is also necessary to show how sensory inputs affect the coupled particles and how a computation is then accomplished.

Perhaps the final question is, what difference does it make if quantum effects are involved in computations in the brain? It is already known that quantum mechanics plays a role in the brain, since quantum mechanics determines the shapes and properties of molecules like neurotransmitters and proteins, and these molecules affect how the brain works. This is the reason that drugs such as morphine affect consciousness. As Daniel Dennett said, "quantum effects are there in your car, your watch, and your computer. But most things, most macroscopic objects, are, as it were, oblivious to quantum effects. They don't amplify them; they don't hinge on them."[34] Lawrence Krauss said, "We're also connected to the universe by gravity, and we're connected to the planets by gravity. But that doesn't mean that astrology is true…. Often, people who are trying to sell whatever it is they're trying to sell try to justify it on the basis of science. Everyone knows quantum mechanics is weird, so why not use that to justify it? … I don't know how many times I've heard people say, 'Oh, I love quantum mechanics because I'm really into meditation, or I love the spiritual benefits that it brings me.' But quantum mechanics, for better or worse, doesn't bring any more spiritual benefits than gravity does."[3]

But it appears that these molecular quantum effects are not what the proponents of the quantum mind are interested in. Proponents seem to want to use the nonlocal, nonclassical aspects of quantum mechanics to connect the human consciousness to a kind of universal consciousness or to long-range supernatural abilities. Although it isn’t impossible that these effects may be observed, they have not been found at present, and the burden of proof is on those who claim that these effects exist. The ability of humans to transfer information at a distance without a known classical physical mechanism has not been shown.

Read more here:

Quantum mind – Wikipedia

Quantum – Wikipedia

In physics, a quantum (plural: quanta) is the minimum amount of any physical entity involved in an interaction. The fundamental notion that a physical property may be “quantized” is referred to as “the hypothesis of quantization”.[1] This means that the magnitude of the physical property can take on only discrete values consisting of integer multiples of one quantum.

For example, a photon is a single quantum of light (or of any other form of electromagnetic radiation), and can be referred to as a “light quantum”. Similarly, the energy of an electron bound within an atom is also quantized, and thus can only exist in certain discrete values. Atoms and matter in general are stable because electrons can only exist at discrete energy levels in an atom. Quantization is one of the foundations of the much broader physics of quantum mechanics. Quantization of the energy and its influence on how energy and matter interact (quantum electrodynamics) is part of the fundamental framework for understanding and describing nature.
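The discreteness of bound-electron energies described above can be made concrete with the Bohr-model energy levels of hydrogen, E_n = -13.6 eV / n², which allow only integer n. A minimal sketch (the function names are illustrative):

```python
RYDBERG_EV = 13.605693  # hydrogen ground-state binding energy in eV

def hydrogen_level(n):
    """Bound-electron energy in hydrogen (Bohr model): only the discrete
    values E_n = -13.6 eV / n^2 are allowed."""
    if n < 1:
        raise ValueError("principal quantum number n must be >= 1")
    return -RYDBERG_EV / n ** 2

def photon_energy(n_from, n_to):
    """Energy of the single light quantum emitted when the electron
    drops from level n_from to level n_to."""
    return hydrogen_level(n_from) - hydrogen_level(n_to)

# The 2 -> 1 (Lyman-alpha) transition emits a photon of about 10.2 eV;
# no intermediate photon energies between levels are possible.
lyman_alpha = photon_energy(2, 1)
```

Because only these discrete levels exist, an atom can emit or absorb only the corresponding discrete photon energies, which is why atomic spectra consist of sharp lines.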

The word quantum comes from the Latin quantus, meaning “how great”. “Quanta”, short for “quanta of electricity” (electrons), was used in a 1902 article on the photoelectric effect by Philipp Lenard, who credited Hermann von Helmholtz for using the word in the area of electricity. However, the word quantum in general was well known before 1900.[2] It was often used by physicians, such as in the term quantum satis. Both Helmholtz and Julius von Mayer were physicians as well as physicists. Helmholtz used quantum with reference to heat in his article[3] on Mayer’s work, and the word quantum can be found in the formulation of the first law of thermodynamics by Mayer in his letter[4] dated July 24, 1841. Max Planck used quanta to mean “quanta of matter and electricity”,[5] gas, and heat.[6] In 1905, in response to Planck’s work and the experimental work of Lenard (who explained his results by using the term quanta of electricity), Albert Einstein suggested that radiation existed in spatially localized packets which he called “quanta of light” (“Lichtquanta”).[7]

The concept of quantization of radiation was discovered in 1900 by Max Planck, who had been trying to understand the emission of radiation from heated objects, known as black-body radiation. By assuming that energy can only be absorbed or released in tiny, differential, discrete packets he called "bundles" or "energy elements",[8] Planck accounted for certain objects changing colour when heated.[9] On December 14, 1900, Planck reported his findings to the German Physical Society, and introduced the idea of quantization for the first time as a part of his research on black-body radiation.[10] As a result of his experiments, Planck deduced the numerical value of h, known as the Planck constant, and could also report a more precise value for the Avogadro-Loschmidt number, the number of real molecules in a mole, and the unit of electrical charge, to the German Physical Society. After his theory was validated, Planck was awarded the Nobel Prize in Physics for his discovery in 1918.

While quantization was first discovered in electromagnetic radiation, it describes a fundamental aspect of energy not just restricted to photons.[11] In the attempt to bring theory into agreement with experiment, Max Planck postulated that electromagnetic energy is absorbed or emitted in discrete packets, or quanta.[12]

Follow this link:

Quantum – Wikipedia

Understanding Memetics – SCP Foundation

Summary, for those in a hurry:

Memetics deals with information transfer, specifically cultural information in society. The basic idea is to treat the exchange of information between people as analogous to the exchange of genetic material, tracking the mutation of ideas as they pass from one person to the next the way you could track viral transmissions and mutations. A meme also provides benefits to the carrier who spreads it.

Meme : Memetics :: Gene : Genetics

Memetics does NOT refer to telepathy, ESP or any imaginary psychic mental magic. These words are memetic, and if you understand them then they are having a completely ordinary memetic effect on you.

Memetics in regards to SCP objects tends to focus on the impossible rather than the mundane, regarding effects that are transmitted via information. In general, the effects themselves should remain in the realm of information. A memetic SCP would be more likely to be a phrase that makes you think you have wings as opposed to a phrase that makes you actually grow a pair of wings. If you write up magic words that make people grow wings, it should be described as something other than memetic.

Memetic SCPs do not emanate auras or project beams. They are SCPs which involve ideas and symbols which trigger a response in those who understand them.

Memetic is often incorrectly used by new personnel as the official sounding term for “Weird Mind Shit.” However, that is not actually what memetic means. These words are memetic. They are producing a memetic effect in your mind right now, without any magical mind rays lashing out of your computer monitor to grasp your fragile consciousness. Memes are information, more specifically, cultural information.

Outside of the Foundation’s walls the concept of memetics is not taken very seriously; it is a theory that conflates the transfer of cultural information with evolutionary biology.

meme : memetic :: gene : genetic

The idea was that certain memes prosper and others wither in the same way certain genes produce stronger offspring that out-compete creatures with different genes. It is also easy to compare the spread and mutation of information to the spread of a virus. This is largely why we use the term memetic in our work: the truly dangerous memes out there can spread like wildfire, because the very knowledge of them can count as an infection.

Understanding the true nature of memetic threats is critical to surviving them. You cannot wear a special set of magical goggles made of telekill to protect yourself from a meme. THE GOGGLES DO NOTHING. If you just read those words in your head with a bad Teutonic accent, congratulations on being victim to yet another memetic effect. If you did not know that phrase was an oft-repeated quote from the Simpsons then congratulations; you are now infected with that knowledge and are free to participate in its spread.

A meme perpetuates itself by making it beneficial for the carrier to spread it to new hosts. You now understand that THE GOGGLES DO NOTHING; you're in on the joke. However, you might have friends who aren't, and don't get it. It benefits you to explain it to them, because then you both have something new to laugh about together when it gets brought up. This is what makes a meme effective – how much incentive a carrier has to spread it. Unless an anomalous meme's effect is a compulsive urge for the carrier to infect others, there needs to be an incentive to spread it.
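The role of incentive can be caricatured with a deterministic toy model (purely illustrative, with made-up parameters, and nothing to do with any actual containment procedure): each round, every carrier exposes a fixed number of contacts, who adopt the meme with a probability standing in for the incentive to pass it on.

```python
def spread(carriers, population, share_prob, contacts, rounds):
    """Toy deterministic meme-spread model: each round, every carrier
    exposes `contacts` people, and each exposed person adopts the meme
    with probability `share_prob` (a stand-in for the incentive the meme
    gives its carrier to share it). Growth slows as fewer people remain
    uninfected (logistic saturation)."""
    infected = float(carriers)
    for _ in range(rounds):
        susceptible_fraction = 1.0 - infected / population
        infected += infected * contacts * share_prob * susceptible_fraction
        infected = min(infected, float(population))
    return infected

# Higher incentive to share -> wider spread, all else held fixed.
low = spread(carriers=1, population=1000, share_prob=0.1, contacts=3, rounds=8)
high = spread(carriers=1, population=1000, share_prob=0.6, contacts=3, rounds=8)
```

The point of the sketch is only the comparison: a joke nobody has a reason to retell stays with a handful of people, while one with strong incentive saturates the population.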

An artifact can no more have a memetic aura or project a memetic beam than a creature could have a genetic aura or genetic beam. Even though you could imagine a creature with genes that allow it to produce some kind of aura or beam like a big doofy X-man, remember that the examples we have of such creatures in containment are not getting their super-powered emanations from anything resembling our scientific understanding of genetics and biology. Neither are the memetic artifacts. We contain these things specifically because we cannot understand or explain them yet. At the end of the day we’re still using a clumsy concept to describe things we don’t have a full grasp of.

It is very rare that anything with a dangerous memetic component could be described as hostile to begin with. We do not contain memetic threats because they are out to get us. They are threats because it is dangerous for us to merely perceive them. It is exceptionally rare for dangerous memes to even have anything resembling sapience with the exception of certain known entities which exist entirely within the medium of “cultural information” such as SCP-, SCP-732 and SCP-423.

A dangerous meme is basically a trigger that sets off something inside of you that you may or may not have been aware of. What would your knee jerk reaction be to knowing that your rival is sleeping with your one true love? How would you react if you were to unwittingly catch them in the act? That kind of sudden revelation can make a mild mannered citizen into a killer, so don't be surprised that there are other strange bits of information out there that can break the human mind in different yet equally drastic ways.

Protecting yourself from memetic threat is very tricky and can be worse than the threat itself. There are reasons that we behave the way we do, there are reasons our emotions soar when we hear just the right combination of sounds in a piece of music. Do you want to stop thinking about the Simpsons or your obnoxious nerdy friends that quote it every time you hear the phrase THE GOGGLES DO NOTHING? That would require forgetting about the Simpsons and your friends.

Do you want to survive hearing or reading the phrase ” ?” Well, sadly we don’t quite know what other information you need to forget or know to prevent [DATA EXPUNGED] but we’re getting better. Lobotomies and pills help, and are one of the few times that the cure is not worse than the disease. The sum total of our human condition; our cultural knowledge and upbringing and memories and identity; this is what makes us susceptible to the occasional memetic compulsion.

So it’s not the basalt monolith or its bizarre carvings that is making you strangle your companions with your own intestines, the problem was within you all along.

Should you ever find yourself under a memetic compulsion and aware of the fact, remember that there are certain mental exercises that you can perform which may save your life or the lives of your companions. Changing the information your mind is being presented with may just change how you react to it, and the more abrupt or absurd the change is the better.

Imagine the fearsome entity is wearing a bright pink nightgown. Draw a mustache on the haunted painting. Pee on the stone altar. Wear the terrible sculpture like a hat.

And if all else fails, bend over and kiss your ass goodbye. I’m not kidding. That could actually help.

– Dr. Johannes Sorts received a special dispensation to use the word “doofy” in this document

But seriously

This was originally intended as a piece of fiction on its own before it got stuck into the information bar with plenty of other plainly out-of-character writing guides. So here are the important things to take away:

1 – “Memetics” is a specific concept regarding information exchange. It has nothing to do with telepathy or ESP or psychic compulsions.

2 – SCP-148 has no effect on anything memetic. Don’t screw this up or we will give you an incredibly hard time about it.

3 – Psychic compulsions are lame and you should think twice before using them in your new SCP, even if you avoid misusing the term “memetic” when you do it.

4 – Sorts’ Rule for all memetic SCPs is “Memetic effect + crazy to death = failure.”

5 – Wear it like a haaaaat!!

The rest is here:

Understanding Memetics – SCP Foundation

Negentropy – Wikipedia

Negentropy has different meanings in information theory and theoretical biology. In a biological context, the negentropy (also negative entropy, syntropy, extropy, ectropy or entaxy[1]) of a living system is the entropy that it exports to keep its own entropy low; it lies at the intersection of entropy and life. In other words, negentropy is the reverse of entropy: it means things becoming more orderly. By 'order' is meant organisation, structure and function: the opposite of randomness or chaos. The concept and phrase "negative entropy" was introduced by Erwin Schrödinger in his 1944 popular-science book What is Life?[2] Later, Léon Brillouin shortened the phrase to negentropy,[3][4] to express it in a more "positive" way: a living system imports negentropy and stores it.[5] In 1974, Albert Szent-Györgyi proposed replacing the term negentropy with syntropy. That term may have originated in the 1940s with the Italian mathematician Luigi Fantappiè, who tried to construct a unified theory of biology and physics. Buckminster Fuller tried to popularize this usage, but negentropy remains common.

In a note to What is Life? Schrödinger explained his use of this phrase.

In 2009, Mahulikar & Herwig redefined negentropy of a dynamically ordered sub-system as the specific entropy deficit of the ordered sub-system relative to its surrounding chaos.[6] Thus, negentropy has SI units of J·kg⁻¹·K⁻¹ when defined based on specific entropy per unit mass, and K⁻¹ when defined based on specific entropy per unit energy. This definition enabled: i) scale-invariant thermodynamic representation of dynamic order existence, ii) formulation of physical principles exclusively for dynamic order existence and evolution, and iii) mathematical interpretation of Schrödinger's negentropy debt.

In information theory and statistics, negentropy is used as a measure of distance to normality.[7][8][9] Out of all distributions with a given mean and variance, the normal or Gaussian distribution is the one with the highest entropy. Negentropy measures the difference in entropy between a given distribution and the Gaussian distribution with the same mean and variance. Thus, negentropy is always nonnegative, is invariant by any linear invertible change of coordinates, and vanishes if and only if the signal is Gaussian.

Negentropy is defined as

J(p_x) = S(φ_x) - S(p_x),

where S(φ_x) is the differential entropy of the Gaussian density φ_x with the same mean and variance as p_x, and S(p_x) is the differential entropy of p_x.
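The definition can be checked in closed form for a simple non-Gaussian case. For a uniform distribution of width a, the differential entropy is ln(a) and the variance is a²/12, so the negentropy works out to ½·ln(πe/6) ≈ 0.176 nats regardless of a, illustrating both nonnegativity and invariance under rescaling. A sketch using analytic entropies rather than estimates:

```python
import math

def gaussian_entropy(variance):
    """Differential entropy (in nats) of a Gaussian with the given
    variance: S = 0.5 * ln(2*pi*e*variance)."""
    return 0.5 * math.log(2 * math.pi * math.e * variance)

def uniform_negentropy(width):
    """Negentropy J = S(Gaussian) - S(p) for the uniform distribution on
    an interval of the given width: entropy ln(width), variance width^2/12."""
    s_uniform = math.log(width)
    s_gauss = gaussian_entropy(width ** 2 / 12)
    return s_gauss - s_uniform

# J is positive (the uniform density is not Gaussian) and does not depend
# on the width, consistent with invariance under linear changes of scale.
```

A Gaussian input would give J = 0 exactly, matching the statement that negentropy vanishes if and only if the signal is Gaussian.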

Negentropy is used in statistics and signal processing. It is related to network entropy, which is used in independent component analysis.[10][11]

There is a physical quantity closely linked to free energy (free enthalpy) that has units of entropy and is isomorphic to the negentropy known in statistics and information theory. In 1873, Willard Gibbs created a diagram illustrating the concept of free energy corresponding to free enthalpy. On the diagram one can see the quantity called capacity for entropy. This quantity is the amount by which the entropy may be increased without changing the internal energy or increasing the volume.[12] In other words, it is the difference between the maximum possible entropy, under assumed conditions, and the actual entropy. It corresponds exactly to the definition of negentropy adopted in statistics and information theory. A similar physical quantity was introduced in 1869 by Massieu for the isothermal process[13][14][15] (the two quantities differ only in sign) and then by Planck for the isothermal-isobaric process.[16] More recently, the Massieu-Planck thermodynamic potential, known also as free entropy, has been shown to play a great role in the so-called entropic formulation of statistical mechanics,[17] applied among others in molecular biology[18] and thermodynamic non-equilibrium processes.[19]

In 1953, Léon Brillouin derived a general equation[20] stating that changing the value of an information bit requires at least kT ln(2) energy. This is the same energy as the work Leó Szilárd's engine produces in the idealistic case. In his book,[21] he further explored this problem, concluding that any cause of this bit value change (measurement, decision about a yes/no question, erasure, display, etc.) will require the same amount of energy.
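Brillouin's kT ln(2) bound is easy to evaluate numerically; a minimal sketch using the exact SI value of the Boltzmann constant:

```python
import math

BOLTZMANN = 1.380649e-23  # J/K (exact value in the 2019 SI)

def bit_energy_bound(temperature_kelvin):
    """Minimum energy in joules to change one bit of information at the
    given temperature: E = k * T * ln(2), the bound Brillouin derived."""
    return BOLTZMANN * temperature_kelvin * math.log(2)

# At room temperature (300 K) the bound is about 2.87e-21 J per bit.
room_temp_bound = bit_energy_bound(300.0)
```

The bound scales linearly with temperature, which is why proposals for ultra-low-power computing often involve operating at cryogenic temperatures.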

See original here:

Negentropy – Wikipedia

Artificial intelligence – Wikipedia

Artificial intelligence (AI, also machine intelligence, MI) is intelligence demonstrated by machines, in contrast to the natural intelligence (NI) displayed by humans and other animals. In computer science AI research is defined as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.
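The "intelligent agent" definition above (a device that perceives its environment and takes actions that maximize its chance of achieving its goals) can be sketched in a deliberately tiny form; the one-dimensional world and function names here are invented for illustration only.

```python
def greedy_agent(position, goal):
    """Minimal 'intelligent agent' in the textbook sense: it perceives
    its environment (its own position and the goal on a number line)
    and chooses the action that best advances its objective."""
    actions = {-1: "left", 0: "stay", +1: "right"}
    # Pick the step that minimizes the remaining distance to the goal.
    best_step = min(actions, key=lambda step: abs((position + step) - goal))
    return actions[best_step]

# An agent at 2 with a goal at 5 moves right; once at the goal it stays put.
```

Real AI systems differ only in scale: richer percepts, larger action spaces, and learned rather than hand-written objective functions.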

The scope of AI is disputed: as machines become increasingly capable, tasks considered to require "intelligence" are often removed from the definition, a phenomenon known as the AI effect, leading to the quip, "AI is whatever hasn't been done yet."[3] For instance, optical character recognition is frequently excluded from "artificial intelligence", having become a routine technology.[4] Capabilities generally classified as AI as of 2017 include successfully understanding human speech, competing at the highest level in strategic game systems (such as chess and Go[6]), autonomous cars, intelligent routing in content delivery networks, and military simulations.

Artificial intelligence was founded as an academic discipline in 1956, and in the years since has experienced several waves of optimism,[7][8] followed by disappointment and the loss of funding (known as an “AI winter”),[9][10] followed by new approaches, success and renewed funding.[8][11] For most of its history, AI research has been divided into subfields that often fail to communicate with each other.[12] These sub-fields are based on technical considerations, such as particular goals (e.g. “robotics” or “machine learning”),[13] the use of particular tools (“logic” or “neural networks”), or deep philosophical differences.[14][15][16] Subfields have also been based on social factors (particular institutions or the work of particular researchers).[12]

The traditional problems (or goals) of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception and the ability to move and manipulate objects.[13] General intelligence is among the field’s long-term goals.[17] Approaches include statistical methods, computational intelligence, and traditional symbolic AI. Many tools are used in AI, including versions of search and mathematical optimization, neural networks and methods based on statistics, probability and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy and many others.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it”.[18] This raises philosophical arguments about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues which have been explored by myth, fiction and philosophy since antiquity.[19] Some people also consider AI to be a danger to humanity if it progresses unabated.[20] Others believe that AI, unlike previous technological revolutions, will create a risk of mass unemployment.[21]

In the twenty-first century, AI techniques have experienced a resurgence following concurrent advances in computer power, large amounts of data, and theoretical understanding; and AI techniques have become an essential part of the technology industry, helping to solve many challenging problems in computer science.[22][11]

Thought-capable artificial beings appeared as storytelling devices in antiquity,[23] and have been common in fiction, as in Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots).[24] These characters and their fates raised many of the same issues now discussed in the ethics of artificial intelligence.[19]

The study of mechanical or “formal” reasoning began with philosophers and mathematicians in antiquity. The study of mathematical logic led directly to Alan Turing’s theory of computation, which suggested that a machine, by shuffling symbols as simple as “0” and “1”, could simulate any conceivable act of mathematical deduction. This insight, that digital computers can simulate any process of formal reasoning, is known as the Church–Turing thesis.[25] Along with concurrent discoveries in neurobiology, information theory and cybernetics, this led researchers to consider the possibility of building an electronic brain. Turing proposed that “if a human could not distinguish between responses from a machine and a human, the machine could be considered intelligent”.[26] The first work that is now generally recognized as AI was McCulloch and Pitts’ 1943 formal design for Turing-complete “artificial neurons”.

The field of AI research was born at a workshop at Dartmouth College in 1956.[28] Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT) and Arthur Samuel (IBM) became the founders and leaders of AI research.[29] They and their students produced programs that the press described as “astonishing”: computers were learning checkers strategies (c. 1954)[31] (and by 1959 were reportedly playing better than the average human),[32] solving word problems in algebra, proving logical theorems (Logic Theorist, first run c. 1956) and speaking English.[33] By the middle of the 1960s, research in the U.S. was heavily funded by the Department of Defense[34] and laboratories had been established around the world.[35] AI’s founders were optimistic about the future: Herbert Simon predicted, “machines will be capable, within twenty years, of doing any work a man can do”. Marvin Minsky agreed, writing, “within a generation… the problem of creating ‘artificial intelligence’ will substantially be solved”.[7]

They failed to recognize the difficulty of some of the remaining tasks. Progress slowed and in 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the US Congress to fund more productive projects, both the U.S. and British governments cut off exploratory research in AI. The next few years would later be called an “AI winter”,[9] a period when obtaining funding for AI projects was difficult.

In the early 1980s, AI research was revived by the commercial success of expert systems,[37] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985 the market for AI had reached over a billion dollars. At the same time, Japan’s fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[8] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting hiatus began.[10]

In the late 1990s and early 21st century, AI began to be used for logistics, data mining, medical diagnosis and other areas.[22] The success was due to increasing computational power (see Moore’s law), greater emphasis on solving specific problems, new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards.[38] On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.

In 2011, in a Jeopardy! quiz show exhibition match, IBM’s question answering system Watson defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin. Faster computers, algorithmic improvements, and access to large amounts of data enabled advances in machine learning and perception; data-hungry deep learning methods started to dominate accuracy benchmarks around 2012.[41] The Kinect, which provides a 3D body-motion interface for the Xbox 360 and the Xbox One, uses algorithms that emerged from lengthy AI research,[42] as do intelligent personal assistants in smartphones.[43] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps.[6][44] In the 2017 Future of Go Summit, AlphaGo won a three-game match with Ke Jie,[45] who at the time had continuously held the world No. 1 ranking for two years.[46][47] This marked a significant milestone in the development of artificial intelligence, as Go is an extremely complex game, more so than chess.

According to Bloomberg’s Jack Clark, 2015 was a landmark year for artificial intelligence, with the number of software projects that use AI within Google increasing from “sporadic usage” in 2012 to more than 2,700 projects. Clark also presents data indicating that error rates in image processing tasks have fallen significantly since 2011.[48] He attributes this to an increase in affordable neural networks, due to a rise in cloud computing infrastructure and to an increase in research tools and datasets.[11] Other cited examples include Microsoft’s development of a Skype system that can automatically translate from one language to another and Facebook’s system that can describe images to blind people.[48]

A typical AI perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[1] An AI’s intended goal function can be simple (“1 if the AI wins a game of Go, 0 otherwise”) or complex (“Do actions mathematically similar to the actions that got you rewards in the past”). Goals can be explicitly defined, or can be induced. If the AI is programmed for “reinforcement learning”, goals can be implicitly induced by rewarding some types of behavior and punishing others.[a] Alternatively, an evolutionary system can induce goals by using a “fitness function” to mutate and preferentially replicate high-scoring AI systems; this is similar to how animals evolved to innately desire certain goals such as finding food, or how dogs can be bred via artificial selection to possess desired traits. Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are somehow implicit in their training data. Such systems can still be benchmarked if the non-goal system is framed as a system whose “goal” is to successfully accomplish its narrow classification task.[51]

AI often revolves around the use of algorithms. An algorithm is a set of unambiguous instructions that a mechanical computer can execute.[b] A complex algorithm is often built on top of other, simpler, algorithms. A simple example of an algorithm is the following recipe for optimal play at tic-tac-toe: if someone has a “threat” (that is, two in a row), take the remaining square; otherwise, if a move “forks” to create two threats at once, play that move; otherwise, take the center square if it is free; otherwise, if your opponent has played in a corner, take the opposite corner; otherwise, take an empty corner if one exists; otherwise, take any empty square.
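The opening rules of such a recipe can be written out as an unambiguous procedure. The sketch below is illustrative only (the board encoding and the rule ordering beyond the first three steps are assumptions, not taken from the source):

```python
# Board: list of 9 cells, each "X", "O", or "" (empty), indexed 0-8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def choose_move(board, me, opponent):
    """Return the index of the square to play, following fixed rules."""
    def winning_square(player):
        # A "threat": two of player's marks in a line with the third empty.
        for a, b, c in LINES:
            cells = [board[a], board[b], board[c]]
            if cells.count(player) == 2 and cells.count("") == 1:
                return (a, b, c)[cells.index("")]
        return None

    move = winning_square(me)                # complete our own threat
    if move is None:
        move = winning_square(opponent)      # otherwise block theirs
    if move is None and board[4] == "":
        move = 4                             # otherwise take the centre
    if move is None:
        # otherwise take the first free corner, then any free square
        for i in (0, 2, 6, 8, 1, 3, 5, 7):
            if board[i] == "":
                move = i
                break
    return move
```

Each rule is a mechanical test a computer can execute without judgement, which is exactly what makes the recipe an algorithm.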

Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or “rules of thumb”, that have worked well in the past), or can themselves write other algorithms. Some of the “learners” described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically, if given infinite data, time, and memory, learn to approximate any function, including whatever combination of mathematical functions would best describe the entire world. These learners could therefore, in theory, derive all possible knowledge, by considering every possible hypothesis and matching it against the data. In practice, it is almost never possible to consider every possibility, because of the phenomenon of “combinatorial explosion”, where the amount of time needed to solve a problem grows exponentially. Much of AI research involves figuring out how to identify and avoid considering broad swaths of possibilities that are unlikely to be fruitful.[53] For example, when viewing a map and looking for the shortest driving route from Denver to New York in the East, one can in most cases skip looking at any path through San Francisco or other areas far to the West; thus, an AI wielding a pathfinding algorithm like A* can avoid the combinatorial explosion that would ensue if every possible route had to be ponderously considered in turn.[55]
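A minimal sketch of A* makes the pruning concrete. Here the search runs on a small 2-D grid rather than a road map (an assumption for illustration); the Manhattan-distance heuristic steers expansion toward the goal, so paths leading far in the wrong direction are considered late or not at all:

```python
import heapq

def a_star(grid, start, goal):
    """grid: 2-D list where 0 = open cell, 1 = wall.
    Returns the length of a shortest path, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # admissible heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start)]   # entries are (f = g + h, g, cell)
    best_g = {start: 0}
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0):
                ng = g + 1
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None  # goal unreachable
```

Because the heuristic never overestimates the remaining distance, the first time the goal is popped the path found is optimal, without enumerating every route.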

The earliest (and easiest to understand) approach to AI was symbolism (such as formal logic): “If an otherwise healthy adult has a fever, then they may have influenza”. A second, more general, approach is Bayesian inference: “If the current patient has a fever, adjust the probability they have influenza in such-and-such way”. The third major approach, extremely popular in routine business AI applications, is analogizers such as SVM and nearest-neighbor: “After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza”. A fourth approach is harder to intuitively understand, but is inspired by how the brain’s machinery works: the neural network approach uses artificial “neurons” that can learn by comparing their output to the desired output and altering the strengths of the connections between internal neurons to “reinforce” connections that seemed to be useful. These four main approaches can overlap with each other and with evolutionary systems; for example, neural nets can learn to make inferences, to generalize, and to make analogies. Some systems implicitly or explicitly use several of these approaches, alongside many other AI and non-AI algorithms; the best approach is often different depending on the problem.[57]
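The analogizer approach can be sketched as a k-nearest-neighbour vote over past patient records. The feature choice and all the records below are invented purely for illustration (note also that without feature scaling, the age axis dominates the distance, a common practical pitfall):

```python
def knn_fraction(records, query, k):
    """records: list of (feature_vector, has_flu). Returns the fraction of
    the k most similar past patients who had influenza."""
    def distance(a, b):
        # Euclidean distance between raw (unscaled) feature vectors
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    nearest = sorted(records, key=lambda r: distance(r[0], query))[:k]
    return sum(1 for _, flu in nearest if flu) / k

# Hypothetical past records: ((temperature_C, age), diagnosis)
past = [((39.1, 30), True), ((38.8, 45), True),
        ((36.9, 25), False), ((37.0, 60), False), ((39.5, 35), True)]
```

For a new patient, the returned fraction plays the role of the “X% of matching past patients had influenza” statement in the text.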

Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future. These inferences can be obvious, such as “since the sun rose every morning for the last 10,000 days, it will probably rise tomorrow morning as well”. They can be nuanced, such as “X% of families have geographically separate species with color variants, so there is a Y% chance that undiscovered black swans exist”. Learners also work on the basis of “Occam’s razor”: the simplest theory that explains the data is the likeliest. Therefore, to be successful, a learner must be designed such that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better. Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is. Besides classic overfitting, learners can also disappoint by “learning the wrong lesson”. A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers don’t determine the spatial relationship between components of the picture; instead, they learn abstract patterns of pixels that humans are oblivious to, but that linearly correlate with images of certain types of real objects. Faintly superimposing such a pattern on a legitimate image results in an “adversarial” image that the system misclassifies.[c][60][61][62]
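The reward-fit-but-penalize-complexity idea can be reduced to a one-line scoring rule. The candidate “theories” and all numbers below are invented for illustration; the point is only that an overly complex theory wins the comparison solely when its fit is substantially better:

```python
def select_theory(candidates, penalty=1.0):
    """candidates: list of (name, training_error, n_parameters).
    Score each theory by fit error plus a complexity penalty and
    return the name of the best-scoring one."""
    return min(candidates, key=lambda c: c[1] + penalty * c[2])[0]

theories = [
    ("constant", 5.0, 1),   # underfits: simple but a poor fit
    ("line",     1.0, 2),   # a good fit that is still simple
    ("degree-9", 0.1, 10),  # near-perfect fit, heavily penalized
]
```

With these numbers the penalized scores are 6.0, 3.0, and 10.1, so the simple “line” theory is chosen even though the degree-9 theory fits the training data better; real systems implement the same trade-off with regularization terms or priors.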

Compared with humans, existing AI lacks several features of human “commonsense reasoning”; most notably, humans have powerful mechanisms for reasoning about “naïve physics” such as space, time, and physical interactions. This enables even young children to easily make inferences like “If I roll this pen off a table, it will fall on the floor”. Humans also have a powerful mechanism of “folk psychology” that helps them to interpret natural-language sentences such as “The city councilmen refused the demonstrators a permit because they advocated violence”. (A generic AI has difficulty inferring whether the councilmen or the demonstrators are the ones alleged to be advocating violence.)[65][66][67] This lack of “common knowledge” means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location nor the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[68][69][70]

The overall research goal of artificial intelligence is to create technology that allows computers and machines to function in an intelligent manner. The general problem of simulating (or creating) intelligence has been broken down into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention.[13]

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[71] By the late 1980s and 1990s, AI research had developed methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.[72]

These algorithms proved to be insufficient for solving large reasoning problems, because they experienced a “combinatorial explosion”: they became exponentially slower as the problems grew larger.[53] In fact, even humans rarely use the step-by-step deduction that early AI research was able to model. They solve most of their problems using fast, intuitive judgements.[73]

Knowledge representation[74] and knowledge engineering[75] are central to classical AI research. Some “expert systems” attempt to gather together explicit knowledge possessed by experts in some narrow domain. In addition, some projects attempt to gather the “commonsense knowledge” known to the average person into a database containing extensive knowledge about the world. Among the things a comprehensive commonsense knowledge base would contain are: objects, properties, categories and relations between objects;[76] situations, events, states and time;[77] causes and effects;[78] knowledge about knowledge (what we know about what other people know);[79] and many other, less well researched domains. A representation of “what exists” is an ontology: the set of objects, relations, concepts, and properties formally described so that software agents can interpret them. The semantics of these are captured as description logic concepts, roles, and individuals, and typically implemented as classes, properties, and individuals in the Web Ontology Language.[80] The most general ontologies are called upper ontologies, which attempt to provide a foundation for all other knowledge[81] by acting as mediators between domain ontologies that cover specific knowledge about a particular knowledge domain (field of interest or area of concern). Such formal knowledge representations can be used in content-based indexing and retrieval,[82] scene interpretation,[83] clinical decision support,[84] knowledge discovery (mining “interesting” and actionable inferences from large databases),[85] and other areas.[86]

Among the most difficult problems in knowledge representation are:

Intelligent agents must be able to set goals and achieve them.[93] They need a way to visualize the future (a representation of the state of the world, with the ability to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or “value”) of the available choices.[94]

In classical planning problems, the agent can assume that it is the only system acting in the world, allowing the agent to be certain of the consequences of its actions.[95] However, if the agent is not the only actor, then it must be able to reason under uncertainty. This calls for an agent that can not only assess its environment and make predictions, but also evaluate its predictions and adapt based on its assessment.[96]

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal. Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.[97]

Machine learning, a fundamental concept of AI research since the field’s inception,[98] is the study of computer algorithms that improve automatically through experience.[99][100]

Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression is the attempt to produce a function that describes the relationship between inputs and outputs and predicts how the outputs should change as the inputs change. In reinforcement learning[101] the agent is rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and punishments to form a strategy for operating in its problem space.
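The reinforcement-learning loop of rewards and punishments can be sketched with tabular Q-learning on a toy problem. Everything here (the five-state line world, the reward of 1 at the final state, the learning-rate and exploration parameters) is an invented illustration, not a method described in the source:

```python
import random

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """States 0..4 lie on a line; action +1 or -1 moves the agent;
    reaching state 4 yields reward 1 and ends the episode. The agent
    learns a value q[(state, action)] from the rewards it receives."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    for _ in range(episodes):
        s = 0
        while s != 4:
            # epsilon-greedy: mostly exploit current values, sometimes explore
            if rng.random() < epsilon:
                a = rng.choice((-1, 1))
            else:
                a = max((1, -1), key=lambda act: q[(s, act)])
            s2 = min(4, max(0, s + a))           # moves off the line are clamped
            r = 1.0 if s2 == 4 else 0.0
            future = 0.0 if s2 == 4 else gamma * max(q[(s2, -1)], q[(s2, 1)])
            q[(s, a)] += alpha * (r + future - q[(s, a)])  # Q-learning update
            s = s2
    return q
```

After training, the learned values prefer moving right in every state, i.e. the sequence of rewards has induced a strategy, which is exactly the behaviour the paragraph describes.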

Natural language processing[102] gives machines the ability to read and understand human language. A sufficiently powerful natural language processing system would enable natural language user interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts. Some straightforward applications of natural language processing include information retrieval, text mining, question answering[103] and machine translation.[104] Many current approaches use word co-occurrence frequencies to construct syntactic representations of text. “Keyword spotting” strategies for search are popular and scalable but dumb; a search query for “dog” might only match documents with the literal word “dog” and miss a document with the word “poodle”. “Lexical affinity” strategies use the occurrence of words such as “accident” to assess the sentiment of a document. Modern statistical NLP approaches can combine all these strategies as well as others, and often achieve acceptable accuracy at the page or paragraph level, but continue to lack the semantic understanding required to classify isolated sentences well. Besides the usual difficulties with encoding semantic commonsense knowledge, existing semantic NLP sometimes scales too poorly to be viable in business applications. Beyond semantic NLP, the ultimate goal of “narrative” NLP is to embody a full understanding of commonsense reasoning.[105]
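The “keyword spotting” limitation mentioned above is easy to demonstrate; the documents, query, and synonym table below are invented for the example:

```python
docs = ["my dog chewed the couch",
        "the poodle won best in show",
        "stock prices fell sharply"]

def keyword_search(query, documents):
    """Literal keyword spotting: return documents containing the exact word."""
    return [d for d in documents if query in d.split()]

# A crude mitigation: expand the query with known related terms first.
SYNONYMS = {"dog": {"dog", "poodle", "terrier"}}

def expanded_search(query, documents):
    terms = SYNONYMS.get(query, {query})
    return [d for d in documents if terms & set(d.split())]
```

The literal search for “dog” misses the poodle document entirely; the expanded search catches it, though a hand-built synonym table is itself a shallow substitute for the semantic understanding the paragraph says current NLP lacks.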

Machine perception[106] is the ability to use input from sensors (such as cameras, microphones, tactile sensors, sonar and others) to deduce aspects of the world. Computer vision[107] is the ability to analyze visual input. A few selected subproblems are speech recognition,[108] facial recognition and object recognition.[109]

The field of robotics[110] is closely related to AI. Intelligence is required for robots to handle tasks such as object manipulation[111] and navigation, with sub-problems such as localization, mapping, and motion planning. These systems require that an agent be able to: be spatially cognizant of its surroundings, learn from and build a map of its environment, figure out how to get from one point in space to another, and execute that movement (which often involves compliant motion, a process where movement requires maintaining physical contact with an object).[113]

Within developmental robotics, developmental learning approaches are elaborated upon to allow robots to accumulate repertoires of novel skills through autonomous self-exploration, social interaction with human teachers, and the use of guidance mechanisms (active learning, maturation, motor synergies, etc.).[114][115]

Affective computing is the study and development of systems that can recognize, interpret, process, and simulate human affects. It is an interdisciplinary field spanning computer science, psychology, and cognitive science. While the origins of the field may be traced as far back as the early philosophical inquiries into emotion, the more modern branch of computer science originated with Rosalind Picard’s 1995 paper on “affective computing”.[124][125] A motivation for the research is the ability to simulate empathy, where the machine would be able to interpret human emotions and adapt its behavior to give an appropriate response to those emotions.

Emotion and social skills[126] are important to an intelligent agent for two reasons. First, being able to predict the actions of others by understanding their motives and emotional states allows an agent to make better decisions. Concepts such as game theory and decision theory require that an agent be able to detect and model human emotions. Second, in an effort to facilitate human–computer interaction, an intelligent machine may want to display emotions (even if it does not experience those emotions itself) to appear more sensitive to the emotional dynamics of human interaction.

Many researchers think that their work will eventually be incorporated into a machine with artificial general intelligence, combining all the skills mentioned above and even exceeding human ability in most or all these areas.[17][127] A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.[128][129]

Many of the problems above may also require general intelligence, if machines are to solve the problems as well as people do. For example, even specific straightforward tasks, like machine translation, require that a machine read and write in both languages (NLP), follow the author’s argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author’s original intent (social intelligence). A problem like machine translation is considered “AI-complete”, because all of these problems need to be solved simultaneously in order to reach human-level machine performance.

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues.[130] A few of the longest-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence by studying psychology or neurobiology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering?[14] Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems?[15] Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require “sub-symbolic” processing?[16] John Haugeland, who coined the term GOFAI (Good Old-Fashioned Artificial Intelligence), also proposed that AI should more properly be referred to as synthetic intelligence,[131] a term which has since been adopted by some non-GOFAI researchers.

Stuart Shapiro divides AI research into three approaches, which he calls computational psychology, computational philosophy, and computer science. Computational psychology is used to make computer programs that mimic human behavior.[134] Computational philosophy is used to develop an adaptive, free-flowing computer mind.[134] The computer science approach serves the goal of creating computers that can perform tasks that only people could previously accomplish.[134] Together, the humanesque behavior, mind, and actions make up artificial intelligence.

In the 1940s and 1950s, a number of researchers explored the connection between neurobiology, information theory, and cybernetics. Some of them built machines that used electronic networks to exhibit rudimentary intelligence, such as W. Grey Walter’s turtles and the Johns Hopkins Beast. Many of these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio Club in England.[135] By 1960, this approach was largely abandoned, although elements of it would be revived in the 1980s.

When access to digital computers became possible in the middle 1950s, AI research began to explore the possibility that human intelligence could be reduced to symbol manipulation. The research was centered in three institutions: Carnegie Mellon University, Stanford and MIT, and each one developed its own style of research. John Haugeland named these approaches to AI “good old fashioned AI” or “GOFAI”.[136] During the 1960s, symbolic approaches had achieved great success at simulating high-level thinking in small demonstration programs. Approaches based on cybernetics or neural networks were abandoned or pushed into the background.[137] Researchers in the 1960s and the 1970s were convinced that symbolic approaches would eventually succeed in creating a machine with artificial general intelligence and considered this the goal of their field.

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to formalize them, and their work laid the foundations of the field of artificial intelligence, as well as cognitive science, operations research and management science. Their research team used the results of psychological experiments to develop programs that simulated the techniques that people used to solve problems. This tradition, centered at Carnegie Mellon University would eventually culminate in the development of the Soar architecture in the middle 1980s.[138][139]

Unlike Newell and Simon, John McCarthy felt that machines did not need to simulate human thought, but should instead try to find the essence of abstract reasoning and problem solving, regardless of whether people used the same algorithms.[14] His laboratory at Stanford (SAIL) focused on using formal logic to solve a wide variety of problems, including knowledge representation, planning and learning.[140] Logic was also the focus of the work at the University of Edinburgh and elsewhere in Europe which led to the development of the programming language Prolog and the science of logic programming.[141]

Researchers at MIT (such as Marvin Minsky and Seymour Papert)[142] found that solving difficult problems in vision and natural language processing required ad hoc solutions; they argued that there was no simple and general principle (like logic) that would capture all the aspects of intelligent behavior. Roger Schank described their “anti-logic” approaches as “scruffy” (as opposed to the “neat” paradigms at CMU and Stanford).[15] Commonsense knowledge bases (such as Doug Lenat’s Cyc) are an example of “scruffy” AI, since they must be built by hand, one complicated concept at a time.[143]

When computers with large memories became available around 1970, researchers from all three traditions began to build knowledge into AI applications.[144] This “knowledge revolution” led to the development and deployment of expert systems (introduced by Edward Feigenbaum), the first truly successful form of AI software.[37] The knowledge revolution was also driven by the realization that enormous amounts of knowledge would be required by many simple AI applications.

By the 1980s progress in symbolic AI seemed to stall and many believed that symbolic systems would never be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition. A number of researchers began to look into “sub-symbolic” approaches to specific AI problems.[16] Sub-symbolic methods manage to approach intelligence without specific representations of knowledge.

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering problems that would allow robots to move and survive.[145] Their work revived the non-symbolic viewpoint of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This coincided with the development of the embodied mind thesis in the related field of cognitive science: the idea that aspects of the body (such as movement, perception and visualization) are required for higher intelligence.

Interest in neural networks and “connectionism” was revived by David Rumelhart and others in the middle of the 1980s.[146] Neural networks are an example of soft computing — they are solutions to problems which cannot be solved with complete logical certainty, and where an approximate solution is often sufficient. Other soft computing approaches to AI include fuzzy systems, evolutionary computation and many statistical tools. The application of soft computing to AI is studied collectively by the emerging discipline of computational intelligence.[147]

In the 1990s, AI researchers developed sophisticated mathematical tools to solve specific subproblems. These tools are truly scientific, in the sense that their results are both measurable and verifiable, and they have been responsible for many of AI’s recent successes. The shared mathematical language has also permitted a high level of collaboration with more established fields (like mathematics, economics or operations research). Stuart Russell and Peter Norvig describe this movement as nothing less than a “revolution” and “the victory of the neats”.[38] Critics argue that these techniques (with few exceptions) are too focused on particular problems and have failed to address the long-term goal of general intelligence. There is an ongoing debate about the relevance and validity of statistical approaches in AI, exemplified in part by exchanges between Peter Norvig and Noam Chomsky.

In the course of 60 or so years of research, AI has developed a large number of tools to solve the most difficult problems in computer science. A few of the most general of these methods are discussed below.

Many problems in AI can be solved in theory by intelligently searching through many possible solutions:[155] Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule.[156] Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[157] Robotics algorithms for moving limbs and grasping objects use local searches in configuration space.[111] Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches[158] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use “heuristics” or “rules of thumb” that prioritize choices in favor of those that are more likely to reach a goal, and to do so in a smaller number of steps. In some search methodologies heuristics can also serve to entirely eliminate some choices that are unlikely to lead to a goal (called “pruning the search tree”). Heuristics supply the program with a “best guess” for the path on which the solution lies.[159] Heuristics limit the search for solutions to a smaller sample size.

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill until we reach the top. Other optimization algorithms include simulated annealing, beam search and random optimization.[160]
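The refine-a-guess idea can be sketched in a few lines. The one-dimensional “landscape” below is hypothetical, chosen so the peak is known to sit at x = 3:

```python
def hill_climb(f, x, step=0.1, iters=1000):
    """Move to a better neighbouring guess until no improvement is found."""
    for _ in range(iters):
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):  # no uphill neighbour: we are at a (local) peak
            return x
        x = best
    return x

# Maximize a simple landscape with a single peak at x = 3.
peak = hill_climb(lambda x: -(x - 3.0) ** 2, x=0.0)
print(round(peak, 1))  # 3.0
```

Because the climber only looks at its immediate neighbours, it can get stuck on a local peak; simulated annealing and random restarts are standard ways around that.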

Evolutionary computation uses a form of optimization search. For example, it may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization)[161] and evolutionary algorithms (such as genetic algorithms, gene expression programming, and genetic programming).[162]

Logic[163] is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning[164] and inductive logic programming is a method for learning.[165]

Several different forms of logic are used in AI research. Propositional or sentential logic[166] is the logic of statements which can be true or false. First-order logic[167] also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic[168] is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Subjective logic[citation needed] models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial opinion satisfies belief + disbelief + uncertainty = 1 within a Beta distribution. By this method, ignorance can be distinguished from probabilistic statements that an agent makes with high confidence.
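The binomial-opinion constraint can be sketched directly. This is a minimal illustration of subjective logic’s representation, not a full calculus; the numbers are invented, and the projected probability formula (belief plus base rate times uncertainty) is the standard way an opinion is collapsed to a single probability.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """Binomial opinion: belief + disbelief + uncertainty must equal 1."""
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    def __post_init__(self):
        assert abs(self.belief + self.disbelief + self.uncertainty - 1.0) < 1e-9

    def projected_probability(self):
        # Uncertainty mass is apportioned according to the base rate.
        return self.belief + self.base_rate * self.uncertainty

confident = Opinion(belief=0.7, disbelief=0.2, uncertainty=0.1)
ignorant = Opinion(belief=0.0, disbelief=0.0, uncertainty=1.0)
print(confident.projected_probability())  # 0.75
print(ignorant.projected_probability())   # 0.5
```

Both opinions can project to similar point probabilities, yet the second one encodes total ignorance; that distinction is exactly what plain probabilities lose.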

Default logics, non-monotonic logics and circumscription[88] are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics;[76] situation calculus, event calculus and fluent calculus (for representing events and time);[77] causal calculus;[78] belief calculus;[169] and modal logics.[79]

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of powerful tools to solve these problems using methods from probability theory and economics.[170]

Bayesian networks[171] are a very general tool that can be used for a large number of problems: reasoning (using the Bayesian inference algorithm),[172] learning (using the expectation-maximization algorithm),[d][174] planning (using decision networks)[175] and perception (using dynamic Bayesian networks).[176] Bayesian networks are used in AdSense to choose what ads to place and on Xbox Live to rate and match players. Probabilistic algorithms can also be used for filtering, prediction, smoothing and finding explanations for streams of data, helping perception systems to analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[176]
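At the core of all of these is Bayes’ rule. A worked example with hypothetical numbers (a rare condition, a fairly accurate test) shows why the posterior can be surprisingly low even when the test is good:

```python
def bayes(prior, sensitivity, false_positive_rate):
    """P(hypothesis | positive evidence) via Bayes' rule."""
    p_evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_evidence

# Hypothetical: 1% prior, 90% sensitivity, 5% false-positive rate.
posterior = bayes(prior=0.01, sensitivity=0.9, false_positive_rate=0.05)
print(round(posterior, 3))  # 0.154
```

A positive result raises the probability from 1% to only about 15%, because false positives among the large healthy population outnumber true positives. Bayesian networks chain many such updates across a graph of variables.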

A key concept from the science of economics is “utility”: a measure of how valuable something is to an intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[178] and information value theory.[94] These tools include models such as Markov decision processes,[179] dynamic decision networks,[176] game theory and mechanism design.[180]
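Value iteration ties utility and planning together. The two-state MDP below is invented for illustration: from “low”, the risky action “go” usually reaches the rewarding “high” state, and the algorithm computes how valuable each state is under the best policy.

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, eps=1e-6):
    """Value iteration for a small Markov decision process."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (reward(s, a, s2) + gamma * V[s2])
                    for s2, p in transition(s, a).items())
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# Toy two-state MDP: reaching "high" pays 10; "go" from "low" succeeds 80% of the time.
states = ["low", "high"]
T = {("low", "stay"): {"low": 1.0},
     ("low", "go"): {"high": 0.8, "low": 0.2},
     ("high", "stay"): {"high": 1.0},
     ("high", "go"): {"low": 1.0}}
R = lambda s, a, s2: 10.0 if s2 == "high" else 0.0
V = value_iteration(states, lambda s: ["stay", "go"], lambda s, a: T[(s, a)], R)
print(round(V["high"], 1), round(V["low"], 1))  # "high" is worth more than "low"
```

Dynamic decision networks and game-theoretic solvers generalize this same expected-utility computation to richer settings.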

The simplest AI applications can be divided into two types: classifiers (“if shiny then diamond”) and controllers (“if shiny then pick up”). Controllers do, however, also classify conditions before inferring actions, and therefore classification forms a central part of many AI systems. Classifiers are functions that use pattern matching to determine a closest match. They can be tuned according to examples, making them very attractive for use in AI. These examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain predefined class. A class can be seen as a decision that has to be made. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[181]

A classifier can be trained in various ways; there are many statistical and machine learning approaches. The decision tree[182] is perhaps the most widely used machine learning algorithm. Other widely used classifiers are the neural network,[184] k-nearest neighbor algorithm,[e][186] kernel methods such as the support vector machine (SVM),[f][188] Gaussian mixture model[189] and the extremely popular naive Bayes classifier.[g][191] The performance of these classifiers has been compared over a wide range of tasks. Classifier performance depends greatly on the characteristics of the data to be classified. There is no single classifier that works best on all given problems; this is also referred to as the “no free lunch” theorem. Determining a suitable classifier for a given problem is still more an art than a science.[192]
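The k-nearest-neighbor algorithm is the simplest of these to sketch: classify a new observation by majority vote of the k closest labeled examples. The 2-D data set below is hypothetical, just two well-separated clusters.

```python
from collections import Counter

def knn_classify(train, query, k=3):
    """Label a query point by majority vote of its k nearest training examples."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda ex: dist(ex[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Hypothetical data set: two clusters labeled "A" and "B".
train = [((1, 1), "A"), ((1, 2), "A"), ((2, 1), "A"),
         ((8, 8), "B"), ((8, 9), "B"), ((9, 8), "B")]
print(knn_classify(train, (2, 2)))  # A
print(knn_classify(train, (7, 8)))  # B
```

No training step is needed at all: the data set itself is the model, which is why classifier choice depends so heavily on the data’s characteristics.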

Neural networks, or neural nets, were inspired by the architecture of neurons in the human brain. A simple “neuron” N accepts input from multiple other neurons, each of which, when activated (or “fired”), casts a weighted “vote” for or against whether neuron N should itself activate. Learning requires an algorithm to adjust these weights based on the training data; one simple algorithm (dubbed “fire together, wire together”) is to increase the weight between two connected neurons when the activation of one triggers the successful activation of another. The net forms “concepts” that are distributed among a subnetwork of shared[h] neurons that tend to fire together; a concept meaning “leg” might be coupled with a subnetwork meaning “foot” that includes the sound for “foot”. Neurons have a continuous spectrum of activation; in addition, neurons can process inputs in a nonlinear way rather than weighing straightforward votes. Modern neural nets can learn both continuous functions and, surprisingly, digital logical operations. Neural networks’ early successes included predicting the stock market and (in 1995) a mostly self-driving car.[i] In the 2010s, advances in neural networks using deep learning thrust AI into widespread public consciousness and contributed to an enormous upshift in corporate AI spending; for example, AI-related M&A in 2017 was over 25 times as large as in 2015.[194][195]
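The weighted-vote idea fits in a few lines. This is a single threshold unit with hand-set (not learned) weights, chosen here so that it computes logical AND of two binary inputs:

```python
def neuron(inputs, weights, threshold=0.0):
    """A unit fires when the weighted 'votes' of its inputs exceed a threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Weights chosen by hand so the unit computes AND: only (1, 1) clears 1.5.
and_weights = [1.0, 1.0]
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], and_weights, threshold=1.5))
```

A learning algorithm’s job is simply to find such weights automatically from examples, rather than setting them by hand.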

The study of non-learning artificial neural networks[184] began in the decade before the field of AI research was founded, in the work of Walter Pitts and Warren McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw, Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others.

The main categories of networks are acyclic or feedforward neural networks (where the signal passes in only one direction) and recurrent neural networks (which allow feedback and short-term memories of previous input events). Among the most popular feedforward networks are perceptrons, multi-layer perceptrons and radial basis networks.[196] Neural networks can be applied to the problem of intelligent control (for robotics) or learning, using such techniques as Hebbian learning (“fire together, wire together”), GMDH or competitive learning.[197]

Today, neural networks are often trained by the backpropagation algorithm, which had been around since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa,[198][199] and was introduced to neural networks by Paul Werbos.[200][201][202]
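A minimal backpropagation sketch: a 2-2-1 network in pure Python, trained by gradient descent on the XOR function (a classic toy problem a single-layer perceptron cannot solve). The architecture and learning rate are illustrative choices, not from any cited source.

```python
import math, random

random.seed(1)
sig = lambda z: 1.0 / (1.0 + math.exp(-z))

# Two hidden units and one output; each weight list ends with a bias weight.
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w2 = [random.uniform(-1, 1) for _ in range(3)]
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

def forward(x):
    h = [sig(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    y = sig(w2[0] * h[0] + w2[1] * h[1] + w2[2])
    return h, y

def train_step(lr=0.5):
    total = 0.0
    for x, t in data:
        h, y = forward(x)
        total += (y - t) ** 2
        # Backpropagation: chain rule through the output, then the hidden layer.
        dy = (y - t) * y * (1 - y)
        for j in range(2):
            dh = dy * w2[j] * h[j] * (1 - h[j])
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            w1[j][2] -= lr * dh
        w2[2] -= lr * dy
    return total  # squared error over the training set

first = train_step()
for _ in range(5000):
    last = train_step()
print(last < first)  # True: training error has decreased
```

Modern frameworks compute exactly these gradients automatically (reverse-mode automatic differentiation), which is what makes much larger networks practical.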

Hierarchical temporal memory is an approach that models some of the structural and algorithmic properties of the neocortex.[203]

In short, most neural networks use some form of gradient descent on a hand-created neural topology. However, some research groups, such as Uber, argue that simple neuroevolution to mutate new neural network topologies and weights may be competitive with sophisticated gradient descent approaches. One advantage of neuroevolution is that it may be less prone to get caught in “dead ends”.[204]

Deep learning is any artificial neural network that can learn a long chain of causal links. For example, a feedforward network with six hidden layers can learn a seven-link causal chain (six hidden layers + output layer) and has a “credit assignment path” (CAP) depth of seven. Many deep learning systems need to be able to learn chains ten or more causal links in length.[205] Deep learning has transformed many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing and others.[206][207][205]

According to one overview,[208] the expression “Deep Learning” was introduced to the Machine Learning community by Rina Dechter in 1986[209] and gained traction after Igor Aizenberg and colleagues introduced it to Artificial Neural Networks in 2000.[210] The first functional Deep Learning networks were published by Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965.[211][page needed] These networks are trained one layer at a time. Ivakhnenko’s 1971 paper[212] describes the learning of a deep feedforward multilayer perceptron with eight layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and Ruslan Salakhutdinov introduced another way of pre-training many-layered feedforward neural networks (FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then using supervised backpropagation for fine-tuning. Similar to shallow artificial neural networks, deep neural networks can model complex non-linear relationships. Over the last few years, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer.[214]

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to the Neocognitron introduced by Kunihiko Fukushima in 1980.[215] In 1989, Yann LeCun and colleagues applied backpropagation to such an architecture. In the early 2000s, in an industrial application CNNs already processed an estimated 10% to 20% of all the checks written in the US.[216] Since 2011, fast implementations of CNNs on GPUs have won many visual pattern recognition competitions.[205]

CNNs with 12 convolutional layers were used in conjunction with reinforcement learning by DeepMind’s “AlphaGo Lee”, the program that beat a top Go champion in 2016.[217]

Early on, deep learning was also applied to sequence learning with recurrent neural networks (RNNs)[218] which are in theory Turing complete[219] and can run arbitrary programs to process arbitrary sequences of inputs. The depth of an RNN is unlimited and depends on the length of its input sequence; thus, an RNN is an example of deep learning.[205] RNNs can be trained by gradient descent[220][221][222] but suffer from the vanishing gradient problem.[206][223] In 1992, it was shown that unsupervised pre-training of a stack of recurrent neural networks can speed up subsequent supervised learning of deep sequential problems.[224]

Numerous researchers now use variants of a deep learning recurrent NN called the long short-term memory (LSTM) network published by Hochreiter & Schmidhuber in 1997.[225] LSTM is often trained by Connectionist Temporal Classification (CTC).[226] At Google, Microsoft and Baidu this approach has revolutionised speech recognition.[227][228][229] For example, in 2015, Google’s speech recognition experienced a dramatic performance jump of 49% through CTC-trained LSTM, which is now available through Google Voice to billions of smartphone users.[230] Google also used LSTM to improve machine translation,[231] language modeling[232] and multilingual language processing.[233] LSTM combined with CNNs also improved automatic image captioning[234] and a plethora of other applications.

Early symbolic AI inspired Lisp[235] and Prolog,[236] which dominated early AI programming. Modern AI development often uses mainstream languages such as Python or C++,[237] or niche languages such as Wolfram Language.[238]

In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent now known as the Turing test. This procedure allows almost all the major problems of artificial intelligence to be tested. However, it is a very difficult challenge and at present all agents fail.[239]

Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry, hand-writing recognition and game-playing. Such tests have been termed subject matter expert Turing tests. Smaller problems provide more achievable goals and there are an ever-increasing number of positive results.[citation needed]

For example, performance at draughts (i.e. checkers) is optimal,[citation needed] performance at chess is high-human and nearing super-human (see computer chess: computers versus humans), and performance at many everyday tasks (such as recognizing a face or crossing a room without bumping into something) is sub-human.

A quite different approach measures machine intelligence through tests developed from mathematical definitions of intelligence. Such tests began appearing in the late 1990s, devised using notions from Kolmogorov complexity and data compression.[240] Two major advantages of mathematical definitions are their applicability to nonhuman intelligences and the absence of any requirement for human testers.

A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a computer posing as a human. In contrast to the standard Turing test, CAPTCHA is administered by a machine and targeted to a human as opposed to being administered by a human and targeted to a machine. A computer asks a user to complete a simple test then generates a grade for that test. Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person taking the test. A common type of CAPTCHA is the test that requires the typing of distorted letters, numbers or symbols that appear in an image undecipherable by a computer.

AI is relevant to any intellectual task. Modern artificial intelligence techniques are pervasive and are too numerous to list here. Frequently, when a technique reaches mainstream use, it is no longer considered artificial intelligence; this phenomenon is described as the AI effect.

High-profile examples of AI include autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art (such as poetry), proving mathematical theorems, playing games (such as Chess or Go), search engines (such as Google search), online assistants (such as Siri), image recognition in photographs, spam filtering, prediction of judicial decisions[244] and targeting online advertisements.[245][246]

With social media sites overtaking TV as a source of news for young people, and news organisations increasingly reliant on social media platforms for distribution,[247] major publishers now use artificial intelligence (AI) technology to post stories more effectively and generate higher volumes of traffic.[248]

There are a number of competitions and prizes to promote research in artificial intelligence. The main areas promoted are: general machine intelligence, conversational behavior, data-mining, robotic cars, robot soccer and games.

Artificial intelligence is breaking into the healthcare industry by assisting doctors. According to Bloomberg Technology, Microsoft has developed AI to help doctors find the right treatments for cancer.[249] A great amount of research and drug development relates to cancer; there are more than 800 medicines and vaccines to treat it. This abundance affects doctors negatively, because there are too many options to choose from, making it harder to pick the right drugs for a patient. Microsoft is working on a project to develop a machine called “Hanover”. Its goal is to memorize all the papers relevant to cancer and help predict which combinations of drugs will be most effective for each patient. One project currently under way targets myeloid leukemia, a fatal cancer whose treatment has not improved in decades. Another study reportedly found that artificial intelligence was as good as trained doctors in identifying skin cancers.[250] A further study uses artificial intelligence to monitor multiple high-risk patients by asking each patient numerous questions based on data acquired from live doctor-patient interactions.[251]

According to CNN, a recent study by surgeons at the Children’s National Medical Center in Washington successfully demonstrated surgery with an autonomous robot. The team supervised the robot while it performed soft-tissue surgery, stitching together a pig’s bowel during open surgery, and doing so better than a human surgeon, the team claimed.[252] IBM has created its own artificial intelligence computer, IBM Watson, which has beaten human intelligence (at some levels). Watson not only won the game show Jeopardy! against former champions,[253] but was also declared a hero after successfully diagnosing a woman who was suffering from leukemia.[254]

Advancements in AI have contributed to the growth of the automotive industry through the creation and evolution of self-driving vehicles. As of 2016, there were over 30 companies using AI in the creation of driverless cars. A few companies involved with AI include Tesla, Google, and Apple.[255]

Many components contribute to the functioning of self-driving cars. These vehicles incorporate systems such as braking, lane changing, collision prevention, navigation and mapping. Together, these systems, as well as high performance computers, are integrated into one complex vehicle.[256]

Recent developments in autonomous automobiles have made the innovation of self-driving trucks possible, though they are still in the testing phase. The UK government has passed legislation to begin testing of self-driving truck platoons in 2018.[257] Self-driving truck platoons are a fleet of self-driving trucks following the lead of one non-self-driving truck, so truck platoons are not entirely autonomous yet. Meanwhile, Daimler, a German automobile corporation, is testing the Freightliner Inspiration, a semi-autonomous truck that will only be used on the highway.[258]

One main factor that influences the ability of a driverless automobile to function is mapping. In general, the vehicle is pre-programmed with a map of the area being driven. This map includes data on the approximate heights of street lights and curbs so that the vehicle is aware of its surroundings. However, Google has been working on an algorithm with the purpose of eliminating the need for pre-programmed maps, instead creating a device able to adjust to a variety of new surroundings.[259] Some self-driving cars are not equipped with steering wheels or brake pedals, so there has also been research focused on creating an algorithm capable of maintaining a safe environment for the passengers through awareness of speed and driving conditions.[260]

Another factor influencing driverless automobiles is passenger safety. To make a driverless automobile, engineers must program it to handle high-risk situations, such as a head-on collision with pedestrians. The car’s main goal should be to make a decision that avoids hitting the pedestrians while saving the passengers in the car. But there is a possibility the car will need to make a decision that puts someone in danger; in other words, it would need to decide whether to save the pedestrians or the passengers.[261] The programming of the car for these situations is crucial to a successful driverless automobile.

Financial institutions have long used artificial neural network systems to detect charges or claims outside of the norm, flagging these for human investigation. The use of AI in banking can be traced back to 1987, when Security Pacific National Bank in the US set up a Fraud Prevention Task Force to counter the unauthorised use of debit cards. Programs like Kasisto and Moneystream are using AI in financial services.

Banks use artificial intelligence systems today to organize operations, maintain book-keeping, invest in stocks, and manage properties. AI can react to changes overnight or when business is not taking place.[262] In August 2001, robots beat humans in a simulated financial trading competition.[263] AI has also reduced fraud and financial crimes by monitoring behavioral patterns of users for any abnormal changes or anomalies.[264]

The use of AI machines in the market, in applications such as online trading and decision making, has changed major economic theories.[265] For example, AI-based buying and selling platforms have changed the law of supply and demand in that it is now possible to easily estimate individualized demand and supply curves, and thus individualized pricing. Furthermore, AI machines reduce information asymmetry in the market, making markets more efficient while reducing the volume of trades. AI in the markets also limits the consequences of market behavior, again making markets more efficient. Other theories where AI has had an impact include rational choice, rational expectations, game theory, the Lewis turning point, portfolio optimization and counterfactual thinking.

In video games, artificial intelligence is routinely used to generate dynamic purposeful behavior in non-player characters (NPCs). In addition, well-understood AI techniques are routinely used for pathfinding. Some researchers consider NPC AI in games to be a “solved problem” for most production tasks. Games with more atypical AI include the AI director of Left 4 Dead (2008) and the neuroevolutionary training of platoons in Supreme Commander 2 (2010).[266][267]

Worldwide annual military spending on robotics rose from 5.1 billion USD in 2010 to 7.5 billion USD in 2015.[268][269] Military drones capable of autonomous action are widely considered a useful asset. In 2017, Vladimir Putin stated that “Whoever becomes the leader in (artificial intelligence) will become the ruler of the world”.[270][271] Many artificial intelligence researchers seek to distance themselves from military applications of AI.[272]

A platform (or “computing platform”) is defined as “some sort of hardware architecture or software framework (including application frameworks), that allows software to run”. As Rodney Brooks pointed out many years ago, it is not just the artificial intelligence software that defines the AI features of the platform, but rather the actual platform itself that affects the AI that results, i.e., there needs to be work in AI problems on real-world platforms rather than in isolation.

A wide variety of platforms has allowed different aspects of AI to develop, ranging from expert systems such as Cyc to deep-learning frameworks to robot platforms such as the Roomba with open interface.[274] Recent advances in deep artificial neural networks and distributed computing have led to a proliferation of software libraries, including Deeplearning4j, TensorFlow, Theano and Torch.

Collective AI is a platform architecture that combines individual AI into a collective entity, in order to achieve global results from individual behaviors.[275][276] With its collective structure, developers can crowdsource information and extend the functionality of existing AI domains on the platform for their own use, as well as continue to create and share new domains and capabilities for the wider community and greater good.[277] As developers continue to contribute, the overall platform grows more intelligent and is able to perform more requests, providing a scalable model for greater communal benefit.[276] Organizations like SoundHound Inc. and the Harvard John A. Paulson School of Engineering and Applied Sciences have used this collaborative AI model.[278][276]

Original post:

Artificial intelligence – Wikipedia

Posted in Ai

AI File – What is it and how do I open it?

Did your computer fail to open an AI file? We explain what AI files are and recommend software that we know can open or convert your AI files.

AI is the acronym for Adobe Illustrator. Files that have the .ai extension are drawing files that the Adobe Illustrator application has created.

The Adobe Illustrator application was developed by Adobe Systems. The files created by this application are composed of paths that are connected by points and are saved in vector format. The technology used to create these files allows the user to re-size the AI image without losing any of the image’s quality.

Some third-party programs allow users to “rasterize” the images created in Adobe Illustrator, which allows them to convert the AI file into bitmap format. While this may make the file size smaller and easier to open across multiple applications, some of the file quality may be lost in the process.



What is Artificial Intelligence (AI)? – Definition from …

Artificial intelligence is a branch of computer science that aims to create intelligent machines. It has become an essential part of the technology industry.

Research associated with artificial intelligence is highly technical and specialized. The core problems of artificial intelligence include programming computers for certain traits, which are discussed below.

Knowledge engineering is a core part of AI research. Machines can often act and react like humans only if they have abundant information relating to the world. Artificial intelligence must have access to objects, categories, properties and relations between all of them to implement knowledge engineering. Initiating common sense, reasoning and problem-solving power in machines is a difficult and tedious task.

Machine learning is also a core part of AI. Learning without any kind of supervision requires an ability to identify patterns in streams of inputs, whereas learning with adequate supervision involves classification and numerical regressions. Classification determines the category an object belongs to and regression deals with obtaining a set of numerical input or output examples, thereby discovering functions enabling the generation of suitable outputs from respective inputs. Mathematical analysis of machine learning algorithms and their performance is a well-defined branch of theoretical computer science often referred to as computational learning theory.

Machine perception deals with the capability to use sensory inputs to deduce the different aspects of the world, while computer vision is the power to analyze visual inputs with a few sub-problems such as facial, object and gesture recognition.

Robotics is also a major field related to AI. Robots require intelligence to handle tasks such as object manipulation and navigation, along with sub-problems of localization, motion planning and mapping.


Commune – Wikipedia

Not to be confused with comune.

A commune (the French word appearing in the 12th century from Medieval Latin communia, meaning a large gathering of people sharing a common life; from Latin communis, things held in common)[1] is an intentional community of people living together, sharing common interests, often having common values and beliefs, as well as shared property, possessions, resources, and, in some communes, work, income or assets.

In addition to the communal economy, consensus decision-making, non-hierarchical structures and ecological living have become important core principles for many communes. There are many contemporary intentional communities all over the world, a list of which can be found at the Fellowship for Intentional Community (FIC).[2] For the usually larger-scale, political entities in communist political theory, see socialist communes, which are similar but distinct social organizations.

Benjamin Zablocki categorized communities this way:[3]

Many communal ventures encompass more than one of these categorizations. Some communes, such as the ashrams of the Vedanta Society or the Theosophical commune Lomaland, formed around spiritual leaders, while others formed around political ideologies. For others, the “glue” is simply the desire for a more shared, sociable lifestyle.

The central characteristics of communes, or core principles that define communes, have been expressed in various forms over the years. Before 1840 such communities were known as “communist and socialist settlements”; by 1860, they were also called “communitarian” and by around 1920 the term “intentional community”[citation needed] had been added to the vernacular of some theorists. The term “communitarian” was invented by the Suffolk-born radical John Goodwyn Barmby, subsequently a Unitarian minister.[4]

At the start of the 1970s, “The New Communes” author Ron E. Roberts classified communes as a subclass of a larger category of Utopias. He listed three main characteristics. Communes of this period tended to develop their own theoretical characteristics, however, so while many strove for variously expressed forms of egalitarianism, Roberts’ list should not be read as typical. His three listed items were: first, egalitarianism, in that communes specifically rejected hierarchy or gradations of social status as being necessary to social order; second, human scale, in that members of some communes saw the scale of society as then organized as too industrialized (or factory sized) and therefore unsympathetic to human dimensions; and third, that communes were consciously anti-bureaucratic.

Twenty-five years later, Dr. Bill Metcalf, in his edited book “Shared Visions, Shared Lives”, defined communes as having the following core principles: the importance of the group as opposed to the nuclear family unit, a “common purse”, a collective household, and group decision making in both general and intimate affairs. Sharing everyday life and facilities, a commune is an idealized form of family, being a new sort of “primary group” (generally with fewer than 20 people, although there are outstanding examples of much larger communes, or communes that experienced episodes with much larger populations). Commune members have emotional bonds to the whole group rather than to any sub-group, and the commune is experienced with emotions which go beyond mere social collectivity.

With the simple definition of a commune as an intentional community with 100% income sharing, the online directory of the Fellowship for Intentional Community (FIC)[2] lists 186 communes worldwide (17 August 2011).[7] Some of these are religious institutions such as abbeys and monasteries. Others are based in anthroposophic philosophy, including Camphill villages that provide support for the education, employment, and daily lives of adults and children with developmental disabilities, mental health problems or other special needs.[8] Many communes are part of the New Age movement.

Many cultures naturally practice communal or tribal living, and would not designate their way of life as a planned ‘commune’ per se, though their living situation may have many characteristics of a commune.

In Germany, a large number of the intentional communities define themselves as communes, and there is a network of political communes called “Kommuja”[9] with about 30 member groups (May 2009). Germany has a long tradition of intentional communities going back to the groups inspired by the principles of Lebensreform in the 19th century. Later, about 100 intentional communities were started in the Weimar Republic after World War I, many of which had a communal economy. In the 1960s, there was a resurgence of communities calling themselves communes, starting with the Kommune 1 in Berlin, followed by Kommune 2 (also Berlin) and Kommune 3 in Wolfsburg.

In the German commune book, Das KommuneBuch, communes are defined by Elisabeth Voß as communities which:

Kibbutzim in Israel, (sing., kibbutz) are examples of officially organized communes, the first of which were based on agriculture. Today, there are dozens of urban communes growing in the cities of Israel, often called urban kibbutzim. The urban kibbutzim are smaller and more anarchist.[11] Most of the urban communes in Israel emphasize social change, education, and local involvement in the cities where they live. Some of the urban communes have members who are graduates of zionist-socialist youth movements, like HaNoar HaOved VeHaLomed, HaMahanot HaOlim and Hashomer Hatsair.[12]

In 1831 John Vandeleur (a landlord) established a commune on his Ralahine Estate at Newmarket-on-Fergus, Co. Clare. Vandeleur asked Edward Thomas Craig, an English socialist, to formulate rules and regulations for the commune. It was set up with a population of 22 adult single men, 7 married men and their 7 wives, 5 single women, 4 orphan boys and 5 children under the age of 9 years. No money was employed, only credit notes which could be used in the commune shop. All occupants were committed to a life with no alcohol, tobacco, snuff or gambling. All were required to work for 12 hours a day during the summer and from dawn to dusk in winter. The social experiment prospered for a time and 29 new members joined. However, in 1833 the experiment collapsed due to the gambling debts of John Vandeleur. The members of the commune met for the last time on 23 November 1833 and placed on record a declaration of the contentment, peace and happiness they had experienced for two years under the arrangements introduced by Mr. Vandeleur and Mr. Craig and which through no fault of the Association was now at an end.[13]

In imperial Russia, the vast majority of Russian peasants held their land in communal ownership within a mir community, which acted as a village government and a cooperative.[14][15] The very widespread and influential pre-Soviet Russian tradition of monastic communities of both sexes could also be considered a form of communal living. After the end of Communism in Russia, monastic communities have again become more common, populous and, to a lesser degree, more influential in Russian society. Various traditional patterns of Russian behavior, such as toloka, pomochi, and artel’, are also based on communal traditions.

A 19th-century advocate and practitioner of communal living was the utopian socialist John Goodwyn Barmby, who founded a Communist Church before becoming a Unitarian minister.[16] The UK today has several communes or intentional communities, their number increasing since the New Towns Act 1946 as people sought to recover a sense of community lost in the centralization of population in post-war New Towns such as Crawley or Corby.

The Simon Community in London is an example of social cooperation, created to ease homelessness within London. It provides food and religion and is staffed by homeless people and volunteers. Mildly nomadic, they run street “cafés” which distribute food to their known members and to the general public.

The Bruderhof has three locations in the UK[18] and follows the example of the earliest Christians in the Book of Acts by living in community and sharing all things in common.[19] In Glandwr, near Crymych, Pembrokeshire, a co-op called Lammas Ecovillage focuses on planning and sustainable development. Granted planning permission by the Welsh Government in 2009, it has since created 9 holdings and is a central communal hub for its community.[20] In Scotland, the Findhorn Foundation founded by Peter and Eileen Caddy and Dorothy Maclean in 1962[21] is prominent for its educational centre and experimental architectural community project based at The Park, in Moray, Scotland, near the village of Findhorn.[22]

The Findhorn Ecovillage community at The Park, Findhorn, a village in Moray, Scotland, and at Cluny Hill in Forres, now houses more than 400 people.[23]

There is a long history of communes in America (see this short discussion of Utopian communities), which led to the rise of the communes of the hippie movement: the “back-to-the-land” ventures of the 1960s and 1970s.[24] One commune that played a large role in the hippie movement was Kaliflower, a utopian living cooperative that existed in San Francisco between 1967 and 1973, built on values of free love and anti-capitalism.

Andrew Jacobs of The New York Times wrote that “after decades of contraction, the American commune movement has been expanding since the mid-1990s, spurred by the growth of settlements that seek to marry the utopian-minded commune of the 1960s with the American predilection for privacy and capital appreciation.”[25] (See Intentional community). The Fellowship for Intentional Community (FIC) is the best source for listings of and more information about communes in the United States.

While many American communes are short lived, some have been in operation for over 50 years. The Bruderhof was established in the US in 1954,[26] Twin Oaks in 1967[27] and Koinonia Farm in 1942.[28] Twin Oaks is a rare example of a non-religious commune surviving for longer than 30 years.

As of 2010, the Venezuelan state has initiated the construction of almost 200 “socialist communes” which are billed as autonomous and independent from the government. The communes have their own “productive gardens” that grow their own vegetables as a method of self-supply. The communes also make independent decisions in regards to administration and the use of funding.[29] The idea has been denounced as an attempt to undermine elected local governments, since the central government could shift its funding away from these in favor of communes, which are overseen by the federal Ministry of Communes and Social Protection.[30]

See the article here:

Commune – Wikipedia

xkcd: Free Speech


Free Speech

[[A person speaking to the reader.]]
Person: Public Service Announcement: The *right to free speech* means the government can’t arrest you for what you say.
[[Close-up on person’s face.]]
Person: It doesn’t mean that anyone else has to listen to your bullshit, or host you while you share it.
[[Back to full figure.]]
Person: The 1st Amendment doesn’t shield you from criticism or consequences.
[[Close-up.]]
Person: If you’re yelled at, boycotted, have your show canceled, or get banned from an internet community, your free speech rights aren’t being violated.
[[Person, holding palm upward.]]
Person: It’s just that the people listening think you’re an asshole,
[[A door that is ajar.]]
Person: And they’re showing you the door.
{{Title text: I can’t remember where I heard this, but someone once said that defending a position by citing free speech is sort of the ultimate concession; you’re saying that the most compelling thing you can say for your position is that it’s not literally illegal to express.}}

This work is licensed under a Creative Commons Attribution-NonCommercial 2.5 License.

This means you’re free to copy and share these comics (but not to sell them). More details.

More here:

xkcd: Free Speech

Salman Spiritual :: Diamond Jubilee Sparks :: 2017-2018 …

Bismillahir Rahmanir Rahim. In the name of Allah, the Most Beneficent, the Most Merciful.

SalmanSpiritual.Com :: Towards the Inner Vision of the Truth

Welcome to SalmanSpiritual.com. We are pleased to announce the launch of the Higher Spiritual Enlightenment Posts project. The project started with a jolt of aspiration and inspiration that came on New Year’s Eve, which led to the commitment of making 2017 the Year for Higher Spiritual Enlightenment through the publication of Candle Post #138. My prayer for the Enlightenment Posts project is summarized below:

Let us pray to Noor Mowlana Hazar Imam to bless all of us with drops of Light into our spiritual hearts. The spiritual heart in each person is the representative of the soul, thus these luminous drops will start to illuminate and purify the ego, the vital, the mind and the body. The precise process is to usher Light into each cell of one’s body through intense dhikr. This is the proven mechanism to eliminate all types of darknesses in our own personal world. This purification process will bring greater peace, light, delight and awareness within ourselves and will increase our aspiration to reach our own spiritual and luminous potential in the field of Higher Spiritual Enlightenment.

The Diamond Jubilee of Noor Mowlana Shah Karim Al-Hussaini Hazar Imam (a.s.) has inspired me to write posts on the topic of Higher Spiritual Enlightenment. This project was started on January 9, 2017 to increase our knowledge and enhance our yearning through Dhikr and Angelic Salwat. The web page and PDF links to the 7 latest posts for this project are shown below. All posts can be accessed through the Table of Contents section on the index page of Higher Spiritual Enlightenment Posts project.

Please share information about the Higher Spiritual Enlightenment project with family and friends! May Noor Mowlana Hazar Imam bless you for this seva and may he fill your spiritual heart with his NOOR and nothing else! Ameen.

[Table of web page and PDF download links, with dates, for the 7 latest posts.]

SalmanSpiritual.Com :: Focus & Contents

New Dhikrs/Tasbis

Download mp3 audio track titled ‘Noore Karim, Ya Majma al-Nurayn’ from audio.salmanspiritual.com. In this mp3 track, Noore Karim, Ya Majma al-Nurayn is recited 40 times. Click here to see the explanation of this dhikr.

Download mp3 audio track titled ‘Noore Karim Plus 21 Tasbis’ from audio.salmanspiritual.com. Download PDF or click here to see the lyrics and explanation of the dhikr, and how to spread the benefit of this dhikr.

The Candle Posts: A Vital Resource. In his Irshad on July 11, 2007 and his firman of December 13, 2008, Noor Mowlana Hazar Imam has put the onus on us to search for higher spiritual enlightenment under his supervision and then use this spiritual enlightenment as a continuous internal guide in our daily lives. This calling from our beloved Holy Imam needs tremendous effort. Therefore, I have made a very humble niyat to send motivational gems for higher spiritual enlightenment in the form of candle posts. Click here to see the candle postings index page for Volumes 1-4.

New Candle Posts

Candle Post #138 :: 2017, The Year for Higher Spiritual Enlightenment. The Diamond Jubilee of Noor Mowlana Hazar Imam has inspired me to make 2017 the Year of Higher Spiritual Enlightenment. Let us pray to Noor Mowlana Hazar Imam to bless all of us with drops of Light into our spiritual hearts. The spiritual heart in each person is the representative of the soul, thus these luminous drops will start to illuminate and purify the ego, the vital, the mind and the body. Read more…

Candle Post #137 :: Hazrat Bibi Fatimat-az-Zahra (a.s.), Majma al-Nurayn. This candle post is about a new title, majma al-nurayn (the confluence of two lights), of Hazrat Bibi Fatimat-az-Zahra, Khatun-i-Jannat (‘Alayhi-s-salam). She is also the ‘mother of Imamate’. Read more…

Candle Posts PDF Archive. The PDF versions of Candle Posts 115 to 138 are now available on a separate page. Read more…

Holy Quran Resources. SalmanSpiritual.com is pleased to bring to you searchable versions of Yusufali’s and Pickthall’s translations of the Holy Quran. In addition to this, hyperlinks are provided to Quran Explorer, which is like an electronic Quran complete with meanings and sound. Arabic text and English transliteration and translation of the Holy Qur’an are also available on SalmanSpiritual.com. The original source of these materials is at http://www.sacred-texts.com. Click here to explore these resources.

Resources for Holy Ramadan & Idd-ul Fitr. In order to be spiritually and esoterically engaged in the month of Holy Ramadan, a number of resources have been created over the past four years. Click here to see and download these resources.

Foundation of Faith :: Curriculum for Spiritual Enlightenment. These resources provide foundational knowledge on key aspects of our faith and are arranged in a sequential manner. The knowledge base addresses the requirements of being an active Ismaili who is searching for higher spiritual enlightenment under the supervision and guidance of NOOR Mowlana Shah Karim Al-Hussaini (a.s.). Precious gems have been compiled from our literature, which spans a period of 1,400 years. Click here to see and download these resources.

Higher Spiritual Enlightenment :: Educational Resources. Over the past 10 years, especially over the extended Golden Jubilee year, a massive effort was expended to develop the concept of a Golden Noorani Didar in the forehead of a seeker of higher spiritual enlightenment. In addition to this, the concept of a spiritual and a luminous nazrana was also articulated to augment the concepts of material, and time and knowledge, nazranas. Click here to see and download these resources.

Audio.SalmanSpiritual.Com. The audio subdomain of SalmanSpiritual.Com has mp3 tracks of Ayatul Kursi, Anant Akhado (500 verses), Anant Naa Nav Chhuga (90 verses), Moti Venti (50 verses), Durood O Salaam Qasida, and Dhikr tasbis. Click here to explore these resources.

Our ardent prayer is: May our beloved NOOR Mowlana Shah Karim Al-Hussaini (a.s.) Hazar Imam bless us all with Noorani Didars during Bandagi and in Zaheri Noorani Didars with the different Jamats across the world during the Diamond Jubilee! Ameen.

Rakh Mowla je Noor te Yaqeen (Certainly, we trust in Mowla’s Light only)

Haizinda Qayampaya (Our Present Imam is Living and His NOOR is Eternal)

Your spiritual brother in religion,
Noorallah Juma (noor-allah@salmanspiritual.com)
SalmanSpiritual.com
Wednesday, April 18, 2018

See the original post:

Salman Spiritual :: Diamond Jubilee Sparks :: 2017-2018 …

Enlightenment in Buddhism – Wikipedia

The English term enlightenment is the western translation of the term bodhi, “awakening”, which was popularised in the Western world through the 19th-century translations of Max Müller. It has the western connotation of a sudden insight into a transcendental truth.

The term is also used to translate several other Buddhist terms and concepts: insight (prajna, kensho and satori); knowledge (vidyā); the “blowing out” (Nirvana) of disturbing emotions and desires and the subsequent freedom or release (vimutti); and the attainment of Buddhahood, as exemplified by Gautama Buddha.

What exactly constituted the Buddha’s awakening is unknown. It probably involved the knowledge that liberation was attained by the combination of mindfulness and dhyāna, applied to the understanding of the arising and ceasing of craving. The relation between dhyāna and insight is a core problem in the study of Buddhism, and is one of the fundamentals of Buddhist practice.

In the western world the concept of (spiritual) enlightenment has taken on a romantic meaning. It has become synonymous with self-realization and the true self, which is regarded as a substantial essence covered over by social conditioning.

Robert S. Cohen notes that the majority of English books on Buddhism use the term “enlightenment” to translate the term bodhi. The root budh, from which both bodhi and Buddha are derived, means “to wake up” or “to recover consciousness”. Cohen notes that bodhi is not the result of an illumination, but of a path of realization, or coming to understanding. The term “enlightenment” is event-oriented, whereas the term “awakening” is process-oriented. The western use of the term “enlighten” has Christian roots, as in Calvin’s “It is God alone who enlightens our minds to perceive his truths”.

In the early 19th century, bodhi was translated as “intelligence”. The term “enlighten” was first used in 1835, in an English translation of a French article, while the first recorded use of the term “enlightenment” is credited (by the Oxford English Dictionary) to the Journal of the Asiatic Society of Bengal (February 1836). In 1857 The Times used the term “the Enlightened” for the Buddha in a short article, which was reprinted the following year by Max Müller. Thereafter, the use of the term subsided, but reappeared with the publication of Max Müller’s Chips from a German Workshop, which included a reprint of the Times article. The book was translated in 1969 into German, using the term “der Erleuchtete”. Max Müller was an essentialist, who believed in a natural religion and saw religion as an inherent capacity of human beings. “Enlightenment” was a means to capture natural religious truths, as distinguished from mere mythology.[note 1]

By the mid-1870s it had become commonplace to call the Buddha “enlightened”, and by the end of the 1880s the terms “enlightened” and “enlightenment” dominated the English literature.

Bodhi (Sanskrit, Pāli), from the verbal root budh, “to awaken”, “to understand”, means literally “to have woken up and understood”. According to Johannes Bronkhorst, Tillman Vetter, and K.R. Norman, bodhi was at first not specified. K.R. Norman:

It is not at all clear what gaining bodhi means. We are accustomed to the translation “enlightenment” for bodhi, but this is misleading … It is not clear what the buddha was awakened to, or at what particular point the awakening came.[18]

According to Norman, bodhi may basically have meant the knowledge that nibbana was attained, due to the practice of dhyāna. Originally only “prajna” may have been mentioned, and Tillman Vetter even concludes that originally dhyāna itself was deemed liberating, with the stilling of pleasure and pain in the fourth jhana. Gombrich also argues that the emphasis on insight is a later development.

In Theravada Buddhism, bodhi refers to the realisation of the four stages of enlightenment and becoming an Arahant. In Theravada Buddhism, bodhi is equal to supreme insight, and the realisation of the four noble truths, which leads to deliverance. According to Nyanatiloka,

(Through Bodhi) one awakens from the slumber or stupor (inflicted upon the mind) by the defilements (kilesa, q.v.) and comprehends the Four Noble Truths (sacca, q.v.).

This equation of bodhi with the four noble truths is a later development, in response to developments within Indian religious thought, where “liberating insight” was deemed essential for liberation. The four noble truths as the liberating insight of the Buddha eventually were superseded by Pratītyasamutpāda, the twelvefold chain of causation, and still later by anatta, the emptiness of the self.

In Mahayana Buddhism, bodhi is equal to prajna, insight into the Buddha-nature, sunyata and tathatā. This is equal to the realisation of the non-duality of absolute and relative.

In Theravada Buddhism paññā (Pali) means “understanding”, “wisdom”, “insight”. “Insight” is equivalent to vipassanā, insight into the three marks of existence, namely anicca, dukkha and anatta. Insight leads to the four stages of enlightenment and Nirvana.

In Mahayana Buddhism Prajna (Sanskrit) means “insight” or “wisdom”, and entails insight into sunyata. The attainment of this insight is often seen as the attainment of “enlightenment”.

Kensho and satori are Japanese terms used in Zen traditions. Kensho means “seeing into one’s true nature”: ken means “seeing”, sho means “nature” or “essence”, i.e. Buddha-nature. Satori (Japanese) is often used interchangeably with kensho, but refers to the experience of kensho. The Rinzai tradition regards kensho as essential to the attainment of Buddhahood, but considers further practice essential to fully attain it.

East-Asian (Chinese) Buddhism emphasizes insight into Buddha-nature. This term is derived from Indian tathagata-garbha thought, “the womb of the thus-gone” (the Buddha), the inherent potential of every sentient being to become a Buddha. This idea was integrated with the Yogacara idea of the ālaya-vijñāna, and further developed in Chinese Buddhism, which integrated Indian Buddhism with native Chinese thought. Buddha-nature came to mean both the potential of awakening and the whole of reality, a dynamic interpenetration of absolute and relative. In this awakening it is realized that observer and observed are not distinct entities, but mutually co-dependent.

The term vidyā is used in contrast to avidyā, ignorance or the lack of knowledge, which binds us to samsara. The Mahasaccaka Sutta[note 2] describes the three knowledges which the Buddha attained.

According to Bronkhorst, the first two knowledges are later additions, while insight into the four truths represents a later development, in response to concurring religious traditions, in which “liberating insight” came to be stressed over the practice of dhyana.

Vimutti, also called moksha, means “freedom”, “release”,[note 3] “deliverance”. Sometimes a distinction is made between ceto-vimutti, “liberation of the mind”, and panna-vimutti, “liberation by understanding”. The Buddhist tradition recognises two kinds of ceto-vimutti, one temporary and one permanent, the latter being equivalent to panna-vimutti.[note 4]

Yogacara uses the term āśraya-parāvṛtti, “revolution of the basis”,

… a sudden revulsion, turning, or re-turning of the laya vijna back into its original state of purity […] the Mind returns to its original condition of non-attachment, non-discrimination and non-duality”.

Nirvana is the “blowing out” of disturbing emotions, which is the same as liberation.[web 1] The usage of the term “enlightenment” to translate “nirvana” was popularized in the 19th century, due, in part, to the efforts of Max Müller, who used the term consistently in his translations.

Siddhartha Gautama, known as the Buddha, is said to have achieved full awakening, known as samyaksaṃbodhi (Sanskrit; Pāli: sammāsaṃbodhi), “perfect Buddhahood”, or anuttarā-samyak-saṃbodhi, “highest perfect awakening”.

The term buddha has acquired somewhat different meanings in the various Buddhist traditions. An equivalent term for Buddha is Tathāgata, “the thus-gone”. The way to Buddhahood is likewise understood somewhat differently in the various Buddhist traditions.

In the suttapitaka, the Buddhist canon as preserved in the Theravada-tradition, a couple of texts can be found in which the Buddha’s attainment of liberation forms part of the narrative.[40][note 5]

The Ariyapariyesana Sutta[note 6] describes how the Buddha was dissatisfied with the teachings of Alara Kalama and Uddaka Ramaputta, wandered further through Magadhan country, and then found “an agreeable piece of ground” which served for striving. The sutra then only says that he attained Nibbana.

The Mahasaccaka Sutta[note 7] describes his ascetic practices, which he abandoned. Thereafter he remembered a spontaneous state of jhana, and set out for jhana practice. After destroying the disturbances of the mind, and attaining concentration of the mind, he attained three knowledges (vidyā).

According to the Mahasaccaka Sutta, these insights, including the way to attain liberation, led the Buddha himself straight to liberation, which came to be called “awakening.”

Schmithausen[note 8] notes that the mention of the four noble truths as constituting “liberating insight”, which is attained after mastering the Rupa Jhanas, is a later addition to texts such as Majjhima Nikaya 36. Bronkhorst notices that

…the accounts which include the Four Noble Truths had a completely different conception of the process of liberation than the one which includes the Four Dhyanas and the destruction of the intoxicants.

This calls into question the reliability of these accounts, and the relation between dhyana and insight, which is a core problem in the study of early Buddhism. Originally the term prajna may have been used, which came to be replaced by the four truths in those texts where “liberating insight” was preceded by the four jhanas. Bronkhorst also notices that the conception of what exactly this “liberating insight” consisted of developed over time. Whereas originally it may not have been specified, later on the four truths served as such, to be superseded by pratityasamutpada, and still later, in the Hinayana schools, by the doctrine of the non-existence of a substantial self or person. Schmithausen notes that still other descriptions of this “liberating insight” exist in the Buddhist canon:

“that the five Skandhas are impermanent, disagreeable, and neither the Self nor belonging to oneself”;[note 9] “the contemplation of the arising and disappearance (udayabbaya) of the five Skandhas”;[note 10] “the realisation of the Skandhas as empty (rittaka), vain (tucchaka) and without any pith or substance (asaraka)”.[note 11]

An example of this substitution, and its consequences, is Majjhima Nikaya 36:42-43, which gives an account of the awakening of the Buddha.

In Theravada Buddhism, reaching full awakening is equivalent in meaning to reaching Nirvāṇa.[web 2] Attaining Nirvāṇa is the ultimate goal of Theravada and other śrāvaka traditions.[web 3] It involves the abandonment of the ten fetters and the cessation of dukkha, or suffering. Full awakening is reached in four stages.

In Mahāyāna Buddhism the Bodhisattva is the ideal. The ultimate goal is not only one’s own liberation in Buddhahood, but also the liberation of all living beings.

In time, the Buddha’s awakening came to be understood as an immediate full awakening and liberation, instead of the insight into and certainty about the way to follow to reach enlightenment. However, in some Zen traditions this perfection came to be relativized again; according to one contemporary Zen master, “Shakyamuni buddha and Bodhidharma are still practicing.”

But Mahayana Buddhism also developed a cosmology with a wide range of buddhas and bodhisattvas, who assist humans on their way to liberation.

In the western world the concept of enlightenment has taken on a romantic meaning. It has become synonymous with self-realization and the true self, regarded as a substantial essence covered over by social conditioning.

The use of the western word enlightenment is based on the supposed resemblance of bodhi with Aufklärung, the independent use of reason to gain insight into the true nature of our world. In fact there are more resemblances with Romanticism than with the Enlightenment: the emphasis on feeling, on intuitive insight, on a true essence beyond the world of appearances.

The equivalent term “awakening” has also been used in a Christian context, namely the Great Awakenings, several periods of religious revival in American religious history. Historians and theologians identify three or four waves of increased religious enthusiasm occurring between the early 18th century and the late 19th century. Each of these “Great Awakenings” was characterized by widespread revivals led by evangelical Protestant ministers, a sharp increase of interest in religion, a profound sense of conviction and redemption on the part of those affected, an increase in evangelical church membership, and the formation of new religious movements and denominations.

The romantic idea of enlightenment as insight into a timeless, transcendent reality has been popularized especially by D.T. Suzuki.[web 4][web 5] Further popularization was due to the writings of Heinrich Dumoulin.[web 6] Dumoulin viewed metaphysics as the expression of a transcendent truth, which according to him was expressed by Mahayana Buddhism, but not by the pragmatic analysis of the oldest Buddhism, which emphasizes anatta. This romantic vision is also recognizable in the works of Ken Wilber.

In the oldest Buddhism this essentialism is not recognizable.[web 7] According to critics it doesn’t really contribute to a real insight into Buddhism:[web 8]

…most of them labour under the old cliché that the goal of Buddhist psychological analysis is to reveal the hidden mysteries in the human mind and thereby facilitate the development of a transcendental state of consciousness beyond the reach of linguistic expression.

A common reference in western culture is the notion of “enlightenment experience”. This notion can be traced back to William James, who used the term “religious experience” in his book, The Varieties of Religious Experience. Wayne Proudfoot traces the roots of the notion of “religious experience” further back to the German theologian Friedrich Schleiermacher (1768-1834), who argued that religion is based on a feeling of the infinite. Schleiermacher used the notion of “religious experience” to defend religion against the growing scientific and secular critique.

It was popularised by the Transcendentalists, and exported to Asia via missionaries. Transcendentalism developed as a reaction against 18th-century rationalism, John Locke’s philosophy of Sensualism, and the predestinationism of New England Calvinism. It draws fundamentally on a variety of diverse sources, such as Hindu texts like the Vedas, the Upanishads and the Bhagavad Gita, various religions, and German idealism.

It was adopted by many scholars of religion, of which William James was the most influential.[note 12]

The notion of “experience” has been criticised. Robert Sharf points out that “experience” is a typical western term, which has found its way into Asian religiosity via western influences.[note 13]

The notion of “experience” introduces a false notion of duality between “experiencer” and “experienced”, whereas the essence of kensho is the realisation of the “non-duality” of observer and observed. “Pure experience” does not exist; all experience is mediated by intellectual and cognitive activity. The specific teachings and practices of a specific tradition may even determine what “experience” someone has, which means that this “experience” is not the proof of the teaching, but a result of it. A pure consciousness without concepts, reached by “cleaning the doors of perception” as per the romantic poet William Blake,[note 14] would, according to Mohr, be an overwhelming chaos of sensory input without coherence.

Sakyamuni’s Buddhahood is celebrated on Bodhi Day. In Sri Lanka and Japan different days are used for this celebration.

According to the Theravada tradition in Sri Lanka, Sakyamuni reached Buddhahood at the full moon in May. This is celebrated at Wesak Poya, the full moon in May, as Sambuddhatva jayanthi (also known as Sambuddha jayanthi).[web 9]

According to the Zen tradition, the Buddha reached his decisive insight on 8 December. This is celebrated in Zen monasteries with a very intensive eight-day session of Rōhatsu.

It rests upon the notion of the primacy of religious experiences, preferably spectacular ones, as the origin and legitimation of religious action. But this presupposition has a natural home, not in Buddhism, but in Christian and especially Protestant Christian movements which prescribe a radical conversion.

See Sekida for an example of this influence of William James and Christian conversion stories, mentioning Luther and St. Paul. See also McMahan for the influence of Christian thought on Buddhism.

[T]he role of experience in the history of Buddhism has been greatly exaggerated in contemporary scholarship. Both historical and ethnographic evidence suggests that the privileging of experience may well be traced to certain twentieth-century reform movements, notably those that urge a return to zazen or vipassana meditation, and these reforms were profoundly influenced by religious developments in the west […] While some adepts may indeed experience “altered states” in the course of their training, critical analysis shows that such states do not constitute the reference point for the elaborate Buddhist discourse pertaining to the “path”.

See more here:

Enlightenment in Buddhism – Wikipedia

Spiritual Enlightenment: What It Is and How to Experience It



ONE TRIBE FESTIVAL – 130 FOR 4 DAYS

As One Tribe Festival takes a fallow year Audio Farm keeps to its evolving nomadic roots and reopens the gates under the original alias of Audio Farm Festival.

2017 saw One Tribe Festival blossom like a magical flower from its Audio Farm tree, from a seed that was planted at the first festival in 2013. So from the crew that brought you One Tribe, expect more of the same magic, music, people, energy, love, laughter and beauty, but in a more intimate space at Audio Farm Festival 2018.


Cystic fibrosis – Wikipedia, the free encyclopedia

Cystic fibrosis (CF) is a genetic disorder that affects mostly the lungs, but also the pancreas, liver, kidneys, and intestine.[1][5] Long-term issues include difficulty breathing and coughing up mucus as a result of frequent lung infections.[1] Other signs and symptoms may include sinus infections, poor growth, fatty stool, clubbing of the fingers and toes, and infertility in most males.[1] Different people may have different degrees of symptoms.[1]

CF is inherited in an autosomal recessive manner.[1] It is caused by the presence of mutations in both copies of the gene for the cystic fibrosis transmembrane conductance regulator (CFTR) protein.[1] Those with a single working copy are carriers and otherwise mostly normal.[3] CFTR is involved in production of sweat, digestive fluids, and mucus.[6] When CFTR is not functional, secretions which are usually thin instead become thick.[7] The condition is diagnosed by a sweat test and genetic testing.[1] Screening of infants at birth takes place in some areas of the world.[1]

There is no known cure for cystic fibrosis.[3] Lung infections are treated with antibiotics which may be given intravenously, inhaled, or by mouth.[1] Sometimes, the antibiotic azithromycin is used long term.[1] Inhaled hypertonic saline and salbutamol may also be useful.[1] Lung transplantation may be an option if lung function continues to worsen.[1] Pancreatic enzyme replacement and fat-soluble vitamin supplementation are important, especially in the young.[1] Airway clearance techniques such as chest physiotherapy have some short-term benefit, but long-term effects are unclear.[8] The average life expectancy is between 42 and 50 years in the developed world.[4][9] Lung problems are responsible for death in 80% of people with cystic fibrosis.[1]

CF is most common among people of Northern European ancestry and affects about one out of every 3,000 newborns.[1] About one in 25 people is a carrier.[3] It is least common in Africans and Asians.[1] It was first recognized as a specific disease by Dorothy Andersen in 1938, with descriptions that fit the condition occurring at least as far back as 1595.[5] The name “cystic fibrosis” refers to the characteristic fibrosis and cysts that form within the pancreas.[5][10]
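The relationship between the quoted carrier frequency and newborn incidence follows from basic autosomal recessive genetics: a child is affected only when both parents are carriers and each passes on the mutated allele (probability 1/4). A minimal sketch of that arithmetic, assuming random mating and the rounded figures above (illustrative only, not epidemiological data):

```python
# Expected CF incidence from carrier frequency, assuming random mating
# and the rounded 1-in-25 carrier figure quoted above (illustrative).
carrier_freq = 1 / 25          # probability a given parent is a carrier

# Both parents carriers, and each transmits the mutated allele (1/4 chance
# the child inherits two mutated copies):
incidence = carrier_freq * carrier_freq * 0.25

print(f"Expected incidence: about 1 in {round(1 / incidence)}")  # about 1 in 2500
```

The result, about 1 in 2,500 births, is in the same range as the roughly 1-in-3,000 figure quoted above; the gap reflects rounding of the carrier frequency and population variation.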

The main signs and symptoms of cystic fibrosis are salty-tasting skin,[11] poor growth, and poor weight gain despite normal food intake,[12] accumulation of thick, sticky mucus,[13] frequent chest infections, and coughing or shortness of breath.[14] Males can be infertile due to congenital absence of the vas deferens.[15] Symptoms often appear in infancy and childhood, such as bowel obstruction due to meconium ileus in newborn babies.[16] As children grow, exercise is encouraged to help release mucus from the alveoli.[17] Ciliated epithelial cells in the person have a mutated protein that leads to abnormally viscous mucus production.[13] The poor growth in children typically presents as an inability to gain weight or height at the same rate as their peers, and is occasionally not diagnosed until investigation is initiated for poor growth. The causes of growth failure are multifactorial and include chronic lung infection, poor absorption of nutrients through the gastrointestinal tract, and increased metabolic demand due to chronic illness.[12]

In rare cases, cystic fibrosis can manifest itself as a coagulation disorder. Vitamin K is normally absorbed from breast milk, formula, and later, solid foods. This absorption is impaired in some cystic fibrosis patients. Young children are especially sensitive to vitamin K malabsorptive disorders because only a very small amount of vitamin K crosses the placenta, leaving the child with very low reserves and limited ability to absorb vitamin K from dietary sources after birth. Because factors II, VII, IX, and X (clotting factors) are vitamin K-dependent, low levels of vitamin K can result in coagulation problems. Consequently, when a child presents with unexplained bruising, a coagulation evaluation may be warranted to determine whether an underlying disease is present.[18]

Lung disease results from clogging of the airways due to mucus build-up, decreased mucociliary clearance, and resulting inflammation.[19][20] Inflammation and infection cause injury and structural changes to the lungs, leading to a variety of symptoms. In the early stages, incessant coughing, copious phlegm production, and decreased ability to exercise are common. Many of these symptoms occur when bacteria that normally inhabit the thick mucus grow out of control and cause pneumonia. In later stages, changes in the architecture of the lung, such as pathology in the major airways (bronchiectasis), further exacerbate difficulties in breathing. Other signs include coughing up blood (hemoptysis), high blood pressure in the lung (pulmonary hypertension), heart failure, difficulties getting enough oxygen to the body (hypoxia), and respiratory failure requiring support with breathing masks, such as bilevel positive airway pressure machines or ventilators.[21] Staphylococcus aureus, Haemophilus influenzae, and Pseudomonas aeruginosa are the three most common organisms causing lung infections in CF patients.[20] In addition to typical bacterial infections, people with CF more commonly develop other types of lung disease. Among these is allergic bronchopulmonary aspergillosis, in which the body’s response to the common fungus Aspergillus fumigatus causes worsening of breathing problems. Another is infection with Mycobacterium avium complex, a group of bacteria related to tuberculosis, which can cause lung damage and does not respond to common antibiotics.[22] People with CF are susceptible to getting a pneumothorax.[23]

Mucus in the paranasal sinuses is equally thick and may also cause blockage of the sinus passages, leading to infection. This may cause facial pain, fever, nasal drainage, and headaches. Individuals with CF may develop overgrowth of the nasal tissue (nasal polyps) due to inflammation from chronic sinus infections.[24] Recurrent sinonasal polyps can occur in 10% to 25% of CF patients.[20] These polyps can block the nasal passages and increase breathing difficulties.[25][26]

Cardiorespiratory complications are the most common cause of death (about 80%) in patients at most CF centers in the United States.[20]

Prior to prenatal and newborn screening, cystic fibrosis was often diagnosed when a newborn infant failed to pass feces (meconium). Meconium may completely block the intestines and cause serious illness. This condition, called meconium ileus, occurs in 5–10%[20] of newborns with CF. In addition, protrusion of internal rectal membranes (rectal prolapse) is more common, occurring in as many as 10% of children with CF,[20] and it is caused by increased fecal volume, malnutrition, and increased intra-abdominal pressure due to coughing.[27]

The thick mucus seen in the lungs has a counterpart in thickened secretions from the pancreas, an organ responsible for providing digestive juices that help break down food. These secretions block the exocrine movement of the digestive enzymes into the duodenum and result in irreversible damage to the pancreas, often with painful inflammation (pancreatitis).[28] The pancreatic ducts are totally plugged in more advanced cases, usually seen in older children or adolescents.[20] This causes atrophy of the exocrine glands and progressive fibrosis.[20]

The lack of digestive enzymes leads to difficulty absorbing nutrients with their subsequent excretion in the feces, a disorder known as malabsorption. Malabsorption leads to malnutrition and poor growth and development because of calorie loss. Resultant hypoproteinemia may be severe enough to cause generalized edema.[20] Individuals with CF also have difficulties absorbing the fat-soluble vitamins A, D, E, and K.[29]

In addition to the pancreas problems, people with cystic fibrosis experience more heartburn,[29] intestinal blockage by intussusception, and constipation.[30] Older individuals with CF may develop distal intestinal obstruction syndrome when thickened feces cause intestinal blockage.[29]

Exocrine pancreatic insufficiency occurs in the majority (85% to 90%) of patients with CF.[20] It is mainly associated with “severe” CFTR mutations, where both alleles are completely nonfunctional (e.g. ΔF508/ΔF508).[20] It occurs in 10% to 15% of patients with one “severe” and one “mild” CFTR mutation where little CFTR activity still occurs, or where two “mild” CFTR mutations exist.[20] In these milder cases, sufficient pancreatic exocrine function is still present so that enzyme supplementation is not required.[20] Usually, no other GI complications occur in pancreas-sufficient phenotypes, and in general, such individuals usually have excellent growth and development.[20] Despite this, idiopathic chronic pancreatitis can occur in a subset of pancreas-sufficient individuals with CF, and is associated with recurrent abdominal pain and life-threatening complications.[20]

Thickened secretions also may cause liver problems in patients with CF. Bile secreted by the liver to aid in digestion may block the bile ducts, leading to liver damage. Over time, this can lead to scarring and nodularity (cirrhosis). The liver fails to rid the blood of toxins and does not make important proteins, such as those responsible for blood clotting.[31][32] Liver disease is the third-most common cause of death associated with CF.[20]

The pancreas contains the islets of Langerhans, which are responsible for making insulin, a hormone that helps regulate blood glucose. Damage of the pancreas can lead to loss of the islet cells, leading to a type of diabetes unique to those with the disease.[33] This cystic fibrosis-related diabetes shares characteristics that can be found in type 1 and type 2 diabetics, and is one of the principal nonpulmonary complications of CF.[34]

Vitamin D is involved in calcium and phosphate regulation. Poor uptake of vitamin D from the diet because of malabsorption can lead to the bone disease osteoporosis in which weakened bones are more susceptible to fractures.[35] In addition, people with CF often develop clubbing of their fingers and toes due to the effects of chronic illness and low oxygen in their tissues.[36][37]

Infertility affects both men and women. At least 97% of men with cystic fibrosis are infertile, but not sterile, and can have children with assisted reproductive techniques.[38] The main cause of infertility in men with CF is congenital absence of the vas deferens (which normally connects the testes to the ejaculatory ducts of the penis), but infertility can also result from other mechanisms, such as no sperm production, abnormally shaped sperm, and few sperm with poor motility.[39] Many men found to have congenital absence of the vas deferens during evaluation for infertility have a mild, previously undiagnosed form of CF.[40] Around 20% of women with CF have fertility difficulties due to thickened cervical mucus or malnutrition. In severe cases, malnutrition disrupts ovulation and causes a lack of menstruation.[41]

CF is caused by a mutation in the gene cystic fibrosis transmembrane conductance regulator (CFTR). The most common mutation, ΔF508, is a deletion (Δ signifying deletion) of three nucleotides[42] that results in a loss of the amino acid phenylalanine (F) at the 508th position on the protein. This mutation accounts for two-thirds (66–70%[20]) of CF cases worldwide and 90% of cases in the United States; however, over 1500 other mutations can produce CF.[43] Although most people have two working copies (alleles) of the CFTR gene, only one is needed to prevent cystic fibrosis. CF develops when neither allele can produce a functional CFTR protein. Thus, CF is considered an autosomal recessive disease.

The CFTR gene, found at the q31.2 locus of chromosome 7, is 230,000 base pairs long, and creates a protein that is 1,480 amino acids long. More specifically, the location is between base pair 117,120,016 and 117,308,718 on the long arm of chromosome 7, region 3, band 1, subband 2, represented as 7q31.2. Structurally, CFTR is a type of gene known as an ABC gene. The product of this gene (the CFTR protein) is a chloride ion channel important in creating sweat, digestive juices, and mucus. This protein possesses two ATP-hydrolyzing domains, which allows the protein to use energy in the form of ATP. It also contains two domains comprising six alpha helices apiece, which allow the protein to cross the cell membrane. A regulatory binding site on the protein allows activation by phosphorylation, mainly by cAMP-dependent protein kinase.[21] The carboxyl terminal of the protein is anchored to the cytoskeleton by a PDZ domain interaction.[44]

In addition, the evidence is increasing that genetic modifiers besides CFTR modulate the frequency and severity of the disease. One example is mannan-binding lectin, which is involved in innate immunity by facilitating phagocytosis of microorganisms. Polymorphisms in one or both mannan-binding lectin alleles that result in lower circulating levels of the protein are associated with a threefold higher risk of end-stage lung disease, as well as an increased burden of chronic bacterial infections.[20]

Several mutations in the CFTR gene can occur, and different mutations cause different defects in the CFTR protein, sometimes causing a milder or more severe disease. These protein defects are also targets for drugs which can sometimes restore their function. ΔF508-CFTR, which occurs in >90% of patients in the U.S., creates a protein that does not fold normally and is not appropriately transported to the cell membrane, resulting in its degradation. Other mutations result in proteins that are too short (truncated) because production is ended prematurely. Other mutations produce proteins that do not use energy (in the form of ATP) normally, do not allow chloride, iodide, and thiocyanate to cross the membrane appropriately,[45] and degrade at a faster rate than normal. Mutations may also lead to fewer copies of the CFTR protein being produced.[21]

The protein created by this gene is anchored to the outer membrane of cells in the sweat glands, lungs, pancreas, and all other remaining exocrine glands in the body. The protein spans this membrane and acts as a channel connecting the inner part of the cell (cytoplasm) to the surrounding fluid. This channel is primarily responsible for controlling the movement of halogens from inside to outside of the cell; however, in the sweat ducts, it facilitates the movement of chloride from the sweat duct into the cytoplasm. When the CFTR protein does not resorb ions in sweat ducts, chloride and thiocyanate[46] released from sweat glands are trapped inside the ducts and pumped to the skin. Additionally, hypothiocyanite (OSCN−) cannot be produced by the immune defense system.[47][48] Because chloride is negatively charged, this modifies the electrical potential inside and outside the cell that normally causes cations to cross into the cell. Sodium is the most common cation in the extracellular space. The excess chloride within sweat ducts prevents sodium resorption by epithelial sodium channels, and the combination of sodium and chloride creates the salt, which is lost in high amounts in the sweat of individuals with CF. This lost salt forms the basis for the sweat test.[21]

Most of the damage in CF is due to blockage of the narrow passages of affected organs with thickened secretions. These blockages lead to remodeling and infection in the lung, damage by accumulated digestive enzymes in the pancreas, blockage of the intestines by thick feces, etc. Several theories have been posited on how the defects in the protein and cellular function cause the clinical effects. The most current theory suggests that defective ion transport leads to dehydration in the airway epithelia, thickening mucus. In airway epithelial cells, the cilia exist in between the cell’s apical surface and mucus in a layer known as airway surface liquid (ASL). The flow of ions from the cell and into this layer is determined by ion channels such as CFTR. CFTR not only allows chloride ions to be drawn from the cell and into the ASL, but it also regulates another channel called ENaC, which allows sodium ions to leave the ASL and enter the respiratory epithelium. CFTR normally inhibits this channel, but if the CFTR is defective, then sodium flows freely from the ASL and into the cell. As water follows sodium, the depth of ASL will be depleted and the cilia will be left in the mucous layer.[49] As cilia cannot effectively move in a thick, viscous environment, mucociliary clearance is deficient and a buildup of mucus occurs, clogging small airways.[50] The accumulation of more viscous, nutrient-rich mucus in the lungs allows bacteria to hide from the body’s immune system, causing repeated respiratory infections. The presence of the same CFTR proteins in the pancreatic duct and sweat glands in the skin also causes symptoms in these systems.

The lungs of individuals with cystic fibrosis are colonized and infected by bacteria from an early age. These bacteria, which often spread among individuals with CF, thrive in the altered mucus, which collects in the small airways of the lungs. This mucus leads to the formation of bacterial microenvironments known as biofilms that are difficult for immune cells and antibiotics to penetrate. Viscous secretions and persistent respiratory infections repeatedly damage the lung by gradually remodeling the airways, which makes infection even more difficult to eradicate.[51]

Over time, both the types of bacteria and their individual characteristics change in individuals with CF. In the initial stage, common bacteria such as S. aureus and H. influenzae colonize and infect the lungs.[20] Eventually, Pseudomonas aeruginosa (and sometimes Burkholderia cepacia) dominates. By 18 years of age, 80% of patients with classic CF harbor P. aeruginosa, and 3.5% harbor B. cepacia.[20] Once within the lungs, these bacteria adapt to the environment and develop resistance to commonly used antibiotics. Pseudomonas can develop special characteristics that allow the formation of large colonies, known as “mucoid” Pseudomonas, which are rarely seen in people who do not have CF.[51] Scientific evidence suggests the interleukin 17 pathway plays a key role in resistance and modulation of the inflammatory response during P. aeruginosa infection in CF.[52] In particular, interleukin 17-mediated immunity plays a double-edged role during chronic airway infection; on one side, it contributes to the control of P. aeruginosa burden, while on the other, it propagates exacerbated pulmonary neutrophilia and tissue remodeling.[52]

Infection can spread by passing between different individuals with CF.[53] In the past, people with CF often participated in summer “CF camps” and other recreational gatherings.[54][55] Hospitals grouped patients with CF into common areas and routine equipment (such as nebulizers)[56] was not sterilized between individual patients.[57] This led to transmission of more dangerous strains of bacteria among groups of patients. As a result, individuals with CF are now routinely isolated from one another in the healthcare setting, and healthcare providers are encouraged to wear gowns and gloves when examining patients with CF to limit the spread of virulent bacterial strains.[58]

CF patients may also have their airways chronically colonized by filamentous fungi (such as Aspergillus fumigatus, Scedosporium apiospermum, Aspergillus terreus) and/or yeasts (such as Candida albicans); other filamentous fungi less commonly isolated include Aspergillus flavus and Aspergillus nidulans (which occur transiently in CF respiratory secretions) and Exophiala dermatitidis and Scedosporium prolificans (chronic airway-colonizers); some filamentous fungi such as Penicillium emersonii and Acrophialophora fusispora are encountered in patients almost exclusively in the context of CF.[59] Defective mucociliary clearance characterizing CF is associated with local immunological disorders. In addition, the prolonged therapy with antibiotics and the use of corticosteroid treatments may also facilitate fungal growth. Although the clinical relevance of the fungal airway colonization is still a matter of debate, filamentous fungi may contribute to the local inflammatory response and therefore to the progressive deterioration of the lung function, as often happens with allergic bronchopulmonary aspergillosis, the most common fungal disease in the context of CF, involving a Th2-driven immune response to Aspergillus species.[59][60]

Cystic fibrosis may be diagnosed by many different methods, including newborn screening, sweat testing, and genetic testing.[61] As of 2006 in the United States, 10% of cases are diagnosed shortly after birth as part of newborn screening programs. The newborn screen initially measures for raised blood concentration of immunoreactive trypsinogen.[62] Infants with an abnormal newborn screen need a sweat test to confirm the CF diagnosis. In many cases, a parent makes the diagnosis because the infant tastes salty.[20] Immunoreactive trypsinogen levels can be increased in individuals who have a single mutated copy of the CFTR gene (carriers) or, in rare instances, in individuals with two normal copies of the CFTR gene. Due to these false positives, CF screening in newborns can be controversial.[63][64] Most U.S. states and countries do not screen for CF routinely at birth. Therefore, most individuals are diagnosed after symptoms (e.g. sinopulmonary disease and GI manifestations[20]) prompt an evaluation for cystic fibrosis. The most commonly used form of testing is the sweat test. Sweat testing involves application of a medication that stimulates sweating (pilocarpine). To deliver the medication through the skin, iontophoresis is used, whereby one electrode is placed onto the applied medication and an electric current is passed to a separate electrode on the skin. The resultant sweat is then collected on filter paper or in a capillary tube and analyzed for abnormal amounts of sodium and chloride. People with CF have increased amounts of them in their sweat. In contrast, people with CF have less thiocyanate and hypothiocyanite in their saliva[65] and mucus (Banfi et al.). In the case of milder forms of CF, transepithelial potential difference measurements can be helpful. CF can also be diagnosed by identification of mutations in the CFTR gene.[66]

People with CF may be listed in a disease registry that allows researchers and doctors to track health results and identify candidates for clinical trials.[67]

Women who are pregnant or couples planning a pregnancy can have themselves tested for the CFTR gene mutations to determine the risk that their child will be born with CF. Testing is typically performed first on one or both parents and, if the risk of CF is high, testing on the fetus is performed. The American College of Obstetricians and Gynecologists recommends all people thinking of becoming pregnant be tested to see if they are a carrier.[68]

Because development of CF in the fetus requires each parent to pass on a mutated copy of the CFTR gene and because CF testing is expensive, testing is often performed initially on one parent. If testing shows that parent is a CFTR gene mutation carrier, the other parent is tested to calculate the risk that their children will have CF. CF can result from more than a thousand different mutations.[69] As of 2016, typically only the most common mutations are tested for, such as ΔF508.[69] Most commercially available tests look for 32 or fewer different mutations. If a family has a known uncommon mutation, specific screening for that mutation can be performed. Because not all known mutations are found on current tests, a negative screen does not guarantee that a child will not have CF.[70]
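The caveat about negative screens can be quantified with Bayes’ rule. As a rough sketch, assume the prior carrier risk of 1 in 25 quoted earlier and a hypothetical panel that detects 90% of carrier mutations (the true detection rate varies by panel and ancestry; both figures here are assumptions for illustration):

```python
# Residual carrier risk after a negative mutation panel (illustrative).
# Assumed for the sketch: prior carrier risk 1/25, panel detects 90% of
# carriers, and non-carriers always test negative.
prior = 1 / 25        # carrier probability before testing (from population data)
detection = 0.90      # fraction of carriers the panel would catch (assumed)

# P(carrier | negative test) = P(negative | carrier) P(carrier) / P(negative)
p_negative = prior * (1 - detection) + (1 - prior)
residual = prior * (1 - detection) / p_negative

print(f"Residual carrier risk: about 1 in {round(1 / residual)}")  # about 1 in 241
```

Under these assumptions, a negative result shrinks the carrier risk from 1 in 25 to roughly 1 in 240, but not to zero, which is exactly the point made above.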

During pregnancy, testing can be performed on the placenta (chorionic villus sampling) or the fluid around the fetus (amniocentesis). However, chorionic villus sampling has a risk of fetal death of one in 100 and amniocentesis of one in 200;[71] a recent study has indicated this may be much lower, about one in 1,600.[72]

Economically, for carrier couples of cystic fibrosis, when comparing preimplantation genetic diagnosis (PGD) with natural conception (NC) followed by prenatal testing and abortion of affected pregnancies, PGD provides net economic benefits up to a maternal age around 40 years, after which NC, prenatal testing, and abortion have higher economic benefit.[73]

While no cures for CF are known, several treatment methods are used. The management of CF has improved significantly over the past 70 years. While infants born with it 70 years ago would have been unlikely to live beyond their first year, infants today are likely to live well into adulthood. Recent advances in the treatment of cystic fibrosis have meant that individuals with cystic fibrosis can live a fuller life less encumbered by their condition. The cornerstones of management are the proactive treatment of airway infection, and encouragement of good nutrition and an active lifestyle. Pulmonary rehabilitation as a management of CF continues throughout a person’s life, and is aimed at maximizing organ function, and therefore the quality of life. At best, current treatments delay the decline in organ function. Because of the wide variation in disease symptoms, treatment typically occurs at specialist multidisciplinary centers and is tailored to the individual. Targets for therapy are the lungs, gastrointestinal tract (including pancreatic enzyme supplements), the reproductive organs (including assisted reproductive technology), and psychological support.[62]

The most consistent aspect of therapy in CF is limiting and treating the lung damage caused by thick mucus and infection, with the goal of maintaining quality of life. Intravenous, inhaled, and oral antibiotics are used to treat chronic and acute infections. Mechanical devices and inhalation medications are used to alter and clear the thickened mucus. These therapies, while effective, can be extremely time-consuming.

Many people with CF are on one or more antibiotics at all times, even when healthy, to prophylactically suppress infection. Antibiotics are absolutely necessary whenever pneumonia is suspected or a noticeable decline in lung function is seen, and are usually chosen based on the results of a sputum analysis and the person’s past response. This prolonged therapy often necessitates hospitalization and insertion of a more permanent IV such as a peripherally inserted central catheter or Port-a-Cath. Inhaled therapy with antibiotics such as tobramycin, colistin, and aztreonam is often given for months at a time to improve lung function by impeding the growth of colonized bacteria.[74][75][76] Inhaled antibiotic therapy helps lung function by fighting infection, but also has significant drawbacks such as development of antibiotic resistance, tinnitus, and changes in the voice.[77] Inhaled levofloxacin may be used to treat Pseudomonas aeruginosa in people with cystic fibrosis who are infected.[78] Early treatment of Pseudomonas aeruginosa infection is easier and more effective; nebulised antibiotics, with or without oral antibiotics, may sustain its eradication for up to two years.[79]

Antibiotics by mouth such as ciprofloxacin or azithromycin are given to help prevent infection or to control ongoing infection.[80] The aminoglycoside antibiotics (e.g. tobramycin) used can cause hearing loss, damage to the balance system in the inner ear or kidney failure with long-term use.[81] To prevent these side-effects, the amount of antibiotics in the blood is routinely measured and adjusted accordingly.

These factors, related to antibiotic use, the chronicity of the disease, and the emergence of resistant bacteria, demand further exploration of different strategies, such as antibiotic adjuvant therapy.[82]

Aerosolized medications that help loosen secretions include dornase alfa and hypertonic saline.[83] Dornase is a recombinant human deoxyribonuclease, which breaks down DNA in the sputum, thus decreasing its viscosity.[84] Denufosol, an investigational drug, opens an alternative chloride channel, helping to liquefy mucus.[85] Whether inhaled corticosteroids are useful is unclear, but stopping inhaled corticosteroid therapy is safe.[86] There is weak evidence that corticosteroid treatment may cause harm by interfering with growth.[86] Pneumococcal vaccination has not been studied as of 2014.[87] As of 2014, there is no clear evidence from randomized controlled trials that the influenza vaccine is beneficial for people with cystic fibrosis.[88]

Ivacaftor is a medication taken by mouth for the treatment of CF due to a number of specific mutations.[89][90] It improves lung function by about 10%; however, as of 2014 it is expensive.[89] The first year it was on the market, the list price was over $300,000 per year in the United States.[89] In July 2015, the U.S. Food and Drug Administration approved lumacaftor, a chaperone for protein folding, for use in combination with ivacaftor.

In 2018, the FDA approved the combination ivacaftor/tezacaftor; the manufacturer announced a list price of $292,000 per year.[91] Tezacaftor helps move the CFTR protein to the correct position on the cell surface, and is designed to treat people with the F508del mutation.[92]

Several mechanical techniques are used to dislodge sputum and encourage its expectoration. One technique is chest physiotherapy, in which a respiratory therapist percusses an individual’s chest by hand several times a day to loosen up secretions. This “percussive effect” can also be administered through specific devices that deliver chest wall oscillation or intrapulmonary percussive ventilation. Other methods, such as biphasic cuirass ventilation and the associated clearance mode available in such devices, integrate a cough assistance phase as well as a vibration phase for dislodging secretions. These are portable and adapted for home use. Chest physiotherapy is beneficial for short-term airway clearance.[8]

Another technique is positive expiratory pressure physiotherapy, which consists of providing back pressure to the airways during expiration. This effect is provided by devices that consist of a mask or a mouthpiece in which resistance is applied only during the expiration phase.[93] The operating principle of this technique seems to be an increase of gas pressure behind mucus through collateral ventilation, along with a temporary increase in functional residual capacity that prevents the early collapse of small airways during exhalation.[94][95]

As lung disease worsens, mechanical breathing support may become necessary. Individuals with CF may need to wear special masks at night to help push air into their lungs. These machines, known as bilevel positive airway pressure (BiPAP) ventilators, help prevent low blood oxygen levels during sleep. Non-invasive ventilators may be used during physical therapy to improve sputum clearance.[96] It is not known whether this type of therapy has an impact on pulmonary exacerbations or disease progression, or what role it has in improving exercise capacity in people with cystic fibrosis.[96] During severe illness, a tube may be placed in the throat (a procedure known as a tracheostomy) to enable breathing supported by a ventilator.

For children, preliminary studies show massage therapy may improve quality of life for them and their families.[97]

Some lung infections require surgical removal of the infected part of the lung. If this is necessary many times, lung function is severely reduced.[98] The most effective treatment options for people with CF who have spontaneous or recurrent pneumothoraces are not clear.[23]

Lung transplantation often becomes necessary for individuals with CF as lung function and exercise tolerance decline. Although single lung transplantation is possible in other diseases, individuals with CF must have both lungs replaced because the remaining lung might contain bacteria that could infect the transplanted lung. A pancreatic or liver transplant may be performed at the same time to alleviate liver disease and/or diabetes.[99] Lung transplantation is considered when lung function declines to the point where assistance from mechanical devices is required or someone’s survival is threatened.[100]

Newborns with intestinal obstruction typically require surgery, whereas adults with distal intestinal obstruction syndrome typically do not. Treatment of pancreatic insufficiency by replacement of missing digestive enzymes allows the duodenum to properly absorb nutrients and vitamins that would otherwise be lost in the feces. However, the best dosage and form of pancreatic enzyme replacement is unclear, as are the risks and long-term effectiveness of this treatment.[101]

So far, no large-scale research involving the incidence of atherosclerosis and coronary heart disease in adults with cystic fibrosis has been conducted. This is likely because the vast majority of people with cystic fibrosis do not live long enough to develop clinically significant atherosclerosis or coronary heart disease.

Diabetes is the most common nonpulmonary complication of CF. It mixes features of type 1 and type 2 diabetes, and is recognized as a distinct entity, cystic fibrosis-related diabetes.[34][102] While oral antidiabetic drugs are sometimes used, the recommended treatment is the use of insulin injections or an insulin pump,[103] and, unlike in type 1 and 2 diabetes, dietary restrictions are not recommended.[34]

There is no strong evidence that people with cystic fibrosis can prevent osteoporosis by increasing their intake of vitamin D.[104] Bisphosphonates taken by mouth or intravenously can be used to improve the bone mineral density in people with cystic fibrosis.[105] When taking bisphosphonates intravenously, adverse effects such as pain and flu-like symptoms can be an issue.[105] The adverse effects of bisphosphonates taken by mouth on the gastrointestinal tract are not known.[105]

Poor growth may be avoided by insertion of a feeding tube for increasing food energy through supplemental feeds or by administration of injected growth hormone.[106]

Sinus infections are treated by prolonged courses of antibiotics. The development of nasal polyps or other chronic changes within the nasal passages may severely limit airflow through the nose, and over time reduce the person’s sense of smell. Sinus surgery is often used to alleviate nasal obstruction and to limit further infections. Nasal steroids such as fluticasone are used to decrease nasal inflammation.[107]

Female infertility may be overcome by assisted reproduction technology, particularly embryo transfer techniques. Male infertility caused by absence of the vas deferens may be overcome with testicular sperm extraction, collecting sperm cells directly from the testicles. If the collected sample contains too few sperm cells for spontaneous fertilization to be likely, intracytoplasmic sperm injection can be performed.[108] Third party reproduction is also a possibility for women with CF. Whether taking antioxidants affects outcomes is unclear.[109]

The prognosis for cystic fibrosis has improved due to earlier diagnosis through screening, better treatment, and improved access to health care. In 1959, the median age of survival of children with CF in the United States was six months.[110] In 2010, survival was estimated to be 37 years for women and 40 for men.[111] In Canada, median survival increased from 24 years in 1982 to 47.7 in 2007.[112]

In the US, of those with CF who are more than 18 years old as of 2009, 92% had graduated from high school, 67% had at least some college education, 15% were disabled, 9% were unemployed, 56% were single, and 39% were married or living with a partner.[113]

Chronic illnesses can be very difficult to manage. CF is a chronic illness that affects the “digestive and respiratory tracts resulting in generalized malnutrition and chronic respiratory infections”.[114] The thick secretions clog the airways in the lungs, which often causes inflammation and severe lung infections.[115][116] If lung function is compromised, it affects the quality of life (QOL) of someone with CF and their ability to complete such tasks as everyday chores. According to Schmitz and Goldbeck (2006), CF significantly increases emotional stress on both the individual and the family, “and the necessary time-consuming daily treatment routine may have further negative effects on quality of life”.[117] However, Havermans and colleagues (2006) have shown that young outpatients with CF who completed the Cystic Fibrosis Questionnaire-Revised “rated some QOL domains higher than did their parents”.[118] Consequently, outpatients with CF can have a more positive outlook than their caregivers expect. Quality of life in CF can also be improved in many ways: exercise is promoted to increase lung function, and integrating an exercise regimen into a CF patient’s daily routine can significantly improve QOL.[119] No definitive cure for CF is known, but diverse medications are used, such as mucolytics, bronchodilators, steroids, and antibiotics, with the purpose of loosening mucus, expanding airways, decreasing inflammation, and fighting lung infections, respectively.[120]

Cystic fibrosis is the most common life-limiting autosomal recessive disease among people of European heritage.[122] In the United States, about 30,000 individuals have CF; most are diagnosed by six months of age. In Canada, about 4,000 people have CF.[123] Around one in 25 people of European descent, and one in 30 Caucasian Americans,[124] are carriers of a CF mutation. Although CF is less common in other groups, roughly one in 46 Hispanics, one in 65 Africans, and one in 90 Asians carry at least one abnormal CFTR gene.[125][126] Ireland has the world’s highest prevalence of CF, at one in 1353.[127]
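The carrier frequencies and incidence figures quoted here are linked by Hardy-Weinberg genetics: for a rare autosomal recessive disease, the expected incidence is roughly the square of half the carrier frequency. A minimal Python sketch of that arithmetic, using the one-in-25 carrier figure from the text (the resulting estimate is illustrative; observed incidence varies by population and screening practice):

```python
# Hardy-Weinberg sketch: estimate autosomal recessive disease incidence
# from a carrier frequency. Illustrative only.

def incidence_from_carrier_freq(carrier_freq):
    """For a rare allele, p ~ 1, so carriers 2pq ~ 2q => q ~ carrier_freq / 2.
    Affected individuals (homozygotes) occur at frequency q^2."""
    q = carrier_freq / 2
    return q * q

# About one in 25 people of European descent carry a CFTR mutation.
inc = incidence_from_carrier_freq(1 / 25)
print(f"Expected incidence: about 1 in {round(1 / inc)}")  # about 1 in 2500
```

The estimate (about 1 in 2,500 births) is in the same range as the observed figures of one in 2,000 to 3,500 reported below, with the gap attributable to population structure and the rare-allele approximation.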

Although technically a rare disease, CF is ranked as one of the most widespread life-shortening genetic diseases. It is most common among nations in the Western world. An exception is Finland, where only one in 80 people carries a CF mutation.[128] The World Health Organization states, “In the European Union, one in 2000–3000 newborns is found to be affected by CF”.[129] In the United States, one in 3,500 children is born with CF.[130] In 1997, about one in 3,300 Caucasian children in the United States was born with CF. In contrast, only one in 15,000 African American children suffered from it, and in Asian Americans, the rate was even lower at one in 32,000.[131]

Cystic fibrosis is diagnosed in males and females equally. For reasons that remain unclear, data have shown that males tend to have a longer life expectancy than females,[132][133] but recent studies suggest this gender gap may no longer exist, perhaps due to improvements in health care facilities,[134][135] while a recent study from Ireland identified a link between the female hormone estrogen and worse outcomes in CF.[136]

The distribution of CF alleles varies among populations. The frequency of F508del carriers has been estimated at one in 200 in northern Sweden, one in 143 in Lithuania, and one in 38 in Denmark. No F508del carriers were found among 171 Finns and 151 Saami people.[137] F508del does occur in Finland, but it is a minority allele there. CF is known to occur in only 20 families (pedigrees) in Finland.[138]

The F508del mutation is estimated to be up to 52,000 years old.[139] Numerous hypotheses have been advanced as to why such a lethal mutation has persisted and spread in the human population. Other common autosomal recessive diseases, such as sickle-cell anemia, have been found to protect carriers from other diseases, an evolutionary trade-off known as heterozygote advantage. Resistance to each of the following has been proposed as a possible source of heterozygote advantage:

CF is thought to have appeared around 3,000 BC because of migration of peoples, gene mutations, and new dietary conditions.[148] Although the entire clinical spectrum of CF was not recognized until the 1930s, certain aspects of CF were identified much earlier. Indeed, literature from Germany and Switzerland in the 18th century warned “Wehe dem Kind, das beim Kuß auf die Stirn salzig schmeckt, es ist verhext und muss bald sterben” (“Woe to the child who tastes salty from a kiss on the brow, for he is cursed and soon must die”), recognizing the association between the salt loss in CF and illness.[148]

In the 19th century, Carl von Rokitansky described a case of fetal death with meconium peritonitis, a complication of meconium ileus associated with CF. Meconium ileus was first described in 1905 by Karl Landsteiner.[148] In 1936, Guido Fanconi described a connection between celiac disease, cystic fibrosis of the pancreas, and bronchiectasis.[149]

In 1938, Dorothy Hansine Andersen published an article, “Cystic Fibrosis of the Pancreas and Its Relation to Celiac Disease: a Clinical and Pathological Study”, in the American Journal of Diseases of Children. She was the first to describe the characteristic cystic fibrosis of the pancreas and to correlate it with the lung and intestinal disease prominent in CF.[10] She also first hypothesized that CF was a recessive disease and first used pancreatic enzyme replacement to treat affected children. In 1952, Paul di Sant’Agnese discovered abnormalities in sweat electrolytes; a sweat test was developed and improved over the next decade.[150]

The first linkage between CF and another marker (paraoxonase) was found in 1985 by Hans Eiberg, indicating that only one locus exists for CF. In 1988, the first mutation for CF, F508del, was discovered by Francis Collins, Lap-Chee Tsui, and John R. Riordan on the seventh chromosome. Subsequent research has found over 1,000 different mutations that cause CF.

Because mutations in the CFTR gene are typically small, classical genetics techniques had been unable to accurately pinpoint the mutated gene.[151] Using protein markers, gene-linkage studies were able to map the mutation to chromosome 7. Chromosome-walking and -jumping techniques were then used to identify and sequence the gene.[152] In 1989, Lap-Chee Tsui led a team of researchers at the Hospital for Sick Children in Toronto that discovered the gene responsible for CF. CF represents a classic example of how a human genetic disorder was elucidated strictly by the process of forward genetics.


Gene therapy has been explored as a potential cure for CF. Results from clinical trials have shown limited success as of 2016, and using gene therapy as routine therapy is not suggested.[153] A small study published in 2015 found a small benefit.[154]

The focus of much CF gene therapy research is to place a normal copy of the CFTR gene into affected cells. Transferring the normal CFTR gene into the affected epithelial cells would result in the production of functional CFTR protein in all target cells, without adverse reactions or an inflammatory response. To prevent the lung manifestations of CF, only 5–10% of the normal amount of CFTR gene expression is needed.[155] Multiple approaches have been tested for gene transfer, such as liposomes and viral vectors, in animal models and clinical trials. However, both methods were found to be relatively inefficient treatment options,[156] mainly because very few cells take up the vector and express the gene, so the treatment has little effect. Additionally, problems have been noted in cDNA recombination, such that the gene introduced by the treatment is rendered unusable.[157] CFTR has been functionally repaired in culture by CRISPR/Cas9 in intestinal stem cell organoids of cystic fibrosis patients.[158]

A number of small molecules that aim to compensate for various mutations of the CFTR gene are under development. One approach is to develop drugs that get the ribosome to read through the stop codon and synthesize a full-length CFTR protein. About 10% of CF cases result from a premature stop codon in the DNA, leading to early termination of protein synthesis and truncated proteins. These drugs target nonsense mutations such as G542X, in which the codon for glycine at position 542 is replaced by a stop codon. Aminoglycoside antibiotics interfere with protein synthesis and error correction. In some cases, they can cause the cell to overcome a premature stop codon by inserting a random amino acid, thereby allowing expression of a full-length protein.[159] The aminoglycoside gentamicin has been used to treat lung cells from CF patients in the laboratory, inducing the cells to grow full-length proteins.[160] Another drug targeting nonsense mutations is ataluren, which was undergoing Phase III clinical trials as of October 2011.[161] Lumacaftor/ivacaftor was approved by the FDA in July 2015.[162]
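The read-through idea above rests on how translation terminates: a nonsense mutation such as G542X converts an amino acid codon into a stop codon, so the ribosome releases a truncated protein. A toy Python sketch of that effect, using a deliberately tiny codon table and made-up sequences (not the real CFTR sequence; the GGA→TGA substitution mirrors the glycine-to-stop pattern of G542X):

```python
# Toy model of a nonsense mutation: a single-base change (GGA -> TGA)
# introduces a premature stop codon, truncating the translated protein.
# Codon table is deliberately minimal; sequences are illustrative only.

CODON_TABLE = {"ATG": "M", "TTT": "F", "GGA": "G", "AAA": "K", "TGA": "*"}

def translate(dna):
    """Translate codon by codon, stopping at the first stop codon ('*')."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "*":
            break  # ribosome terminates here
        protein.append(aa)
    return "".join(protein)

normal = "ATGTTTGGAAAA"  # M-F-G-K: full-length product
mutant = "ATGTTTTGAAAA"  # one G->T change turns GGA (Gly) into TGA (stop)
print(translate(normal))  # MFGK
print(translate(mutant))  # MF (truncated)
```

Read-through drugs such as ataluren aim to make the ribosome treat the premature stop like an ordinary codon, which in this toy model would correspond to not breaking at `"*"` and continuing translation to full length.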

It is unclear as of 2014 if ursodeoxycholic acid is useful for those with cystic fibrosis-related liver disease.[163]

Cystic fibrosis – Wikipedia, the free encyclopedia

Gene Therapy – Sumanas, Inc.

Gene Therapy

A few years ago, a clinical trial began in France in the hope of curing children with a type of genetic immune deficiency called SCID-X1. Children with this disease have a defective gene, called gamma-c, which prevents a subset of the cells of the immune system from forming, and predisposes the children to life-threatening infections. In an attempt to cure the children, who would otherwise die at a young age, physicians used gene therapy to provide them with normal gamma-c genes.

This particular trial has had striking success as well as tragedy. Eight of the eleven children are currently thriving. However, in two cases the therapy successfully introduced gamma-c genes, but these children have since developed leukemia. In both children, a gamma-c gene inserted next to another gene, called LMO2. The LMO2 gene has previously been linked to leukemia, and scientists speculate that the insertion of the gamma-c gene next to LMO2 may have overstimulated the gene, causing T cells to proliferate in excess. An LMO2 effect, in combination with the proliferation-inducing effects of the gamma-c gene itself, may be the cause of the leukemia in these two patients. Scientists are still investigating other possible causes.

From this single trial, it is clear that gene therapy holds significant promise, yet it is also clear that it poses significant risks. To learn more about the application of gene therapy in SCID, view the accompanying animation.
