Nano Medicine – Treatments for Antibiotic Resistant Bacteria

Antibiotic resistance is now a bigger crisis than the AIDS epidemic of the 1980s, a landmark report recently warned. The spread of deadly superbugs that evade even the most powerful antibiotics is happening across the world, United Nations officials have confirmed. The effects will be devastating: a simple scratch or urinary tract infection could kill.

Tuberculosis (TB) is a scourge that is threatening to get ugly. TB is usually cured by taking antibiotics for six to nine months; however, if that treatment is interrupted or the dose is cut down, the stubborn bacteria battle back and mutate into a tougher strain that can no longer be killed by drugs. Such strains are scaring the medical community for good reason: tuberculosis is highly contagious and, in an epidemic of drug-resistant strains, holds the potential to wipe out wide swaths of humanity.

Australia's first victim of a killer strain of drug-resistant tuberculosis died amid warnings of a looming health epidemic on Queensland's doorstep. Medical experts are seriously concerned about the handling of the TB epidemic in Papua New Guinea after Catherina Abraham died of an incurable form of the illness, known as XDR-TB (extensively drug resistant TB), in Cairns Base Hospital. Of course, we always get big scares from the mainstream medical press, which cheerleads for the big pharmaceutical companies, as do our governmental medical officials.

Now medical experts are warning that drug resistant tuberculosis is such a problem in the Asia Pacific region that it could overwhelm health systems.

A drug-resistant TB case did touch off a scare in the U.S. We don't know much about the Nepalese man who is in medical isolation in Texas while being treated for extensively drug-resistant tuberculosis, or XDR-TB, the most difficult-to-treat kind.

XDR-TB is resistant not only to isoniazid and rifampin but also to a class of drugs called fluoroquinolones and to one or more potent injectable antibiotics. It is one of the nastiest of all resistant infections, capable of destroying a person's life on its own.
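The resistance definitions above follow a simple rule that can be sketched in code. This is a simplified illustration, not a clinical tool: the drug lists and the classification logic are reduced from the text (multidrug-resistant TB resists both isoniazid and rifampin; XDR-TB additionally resists a fluoroquinolone and at least one potent injectable agent), and the specific drug names in the sets are assumptions for the example.

```python
# Illustrative drug classes; the exact members are assumptions for this sketch.
FLUOROQUINOLONES = {"moxifloxacin", "levofloxacin", "ofloxacin"}
INJECTABLES = {"amikacin", "kanamycin", "capreomycin"}

def classify_tb(resistances):
    """Classify a TB strain from the set of drugs it resists."""
    r = {d.lower() for d in resistances}
    # MDR-TB: resistant to both first-line drugs named in the text.
    mdr = {"isoniazid", "rifampin"} <= r
    # XDR-TB: MDR plus a fluoroquinolone plus an injectable agent.
    xdr = mdr and bool(r & FLUOROQUINOLONES) and bool(r & INJECTABLES)
    if xdr:
        return "XDR-TB"
    if mdr:
        return "MDR-TB"
    return "drug-resistant TB" if r else "drug-susceptible TB"

print(classify_tb({"isoniazid", "rifampin"}))                           # MDR-TB
print(classify_tb({"isoniazid", "rifampin", "ofloxacin", "amikacin"}))  # XDR-TB
```

The point of the sketch is that XDR-TB is defined relative to MDR-TB: the second classification only applies once the first already does.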

TB germs become drug-resistant when patients fail to complete a course of treatment. When a partly resistant strain is treated with the wrong drugs, it can become extensively resistant. There are about 60,000 people with XDR-TB strains like the one infecting the Nepalese man who is in isolation, which means other people with XDR-TB are traveling the world at any given time.

China and India Will Spread TB around the World


Believe What the Jewish Apostles Taught — Why Conditional …

Why Tradition about the fate of the lost (as torment forever) is unbiblical and not hermeneutically correct.

Why Conditional Immortality is absolutely true and all unsaved souls will one day be "destroyed."

Why there is no immortal soul doctrine in the Bible, for the lost, at all.

A CHALLENGE TO THOSE WHO DISAGREE

We are so persuaded of our position, and so confident in the Scriptural evidence presented on this site, that we honestly do not believe that anyone who shares our faith in the final authority of Scripture will be able to cling to endless torment after reading this entire site and the links.

Greek Philosophers Taught:

It is clear that PLATO and many Greek philosophers taught the soul was indestructible:

"The belief in the immortality of the soul came to the Jews from contact with Greek thought and chiefly through the philosophy of Plato, its principal exponent." (The Jewish Encyclopedia, www.jewishencyclopedia.com, search "immortality")


Human Genetic Engineering Effects


Some people think of Human Genetic Engineering as something that could let them live a healthier life for a long time. Others see it as something straight from heaven, or as producing programmed human beings. Genetic engineering is a concept that can be used to enhance the lives of human beings.

However, Human Genetic Engineering also has effects that can harm humans. Many doctors and scientists involved in gene engineering believe that if the research produces accurate and effective manipulation of DNA in humans, they will be able to make medicines for diseases that have no cure. It would also enable doctors to make changes in the genes of a child before birth, so the child would have no birth defects.

This process can also be applied to curing hereditary disease, preventing the disease from being carried forward to coming generations. The research is primarily focused on families with a history of such diseases, correcting the faulty genes involved. Human Genetic Engineering effects also appear in its application to animals and plants that have been genetically modified. When farmers use gene engineering for breeding plants, the result is faster production of food; faster and increased production will also bring down the prices of several food items. Genetic engineering can also add taste and nutrition to different foods.

Human Genetic Engineering can also help in fighting severe, currently incurable diseases. Those who suffer from life-threatening diseases like cancer or AIDS could gain a better idea of how to manage their lives according to their circumstances, something its proponents argue only genetic engineering can provide.

Hereditary diseases would not trouble anyone, nor would there be fear of a deadly virus arising among people in any corner of the world. In theory, Human Genetic Engineering can achieve all of these things. Its effects can also be seen in societies' health, where it promises tremendous benefits.

Human Genetic Engineering can help people fight cystic fibrosis, diabetes, and many other specific diseases. "Bubble boy" disease, formally termed Severe Combined Immunodeficiency (SCID), can also be treated successfully with its help.

A gene mutation is responsible for this deadly disease; the mutation causes ADA deficiency, which in turn destroys immune system cells. The effects of Human Genetic Engineering also include ecological problems that might arise in organisms it develops or generates. Even so, it can have a positive impact on a great many diseases.

One cannot predict the changes that can occur in species generated with the help of Human Genetic Engineering. A newly generated species can create ecological imbalances, much as exotic or introduced natural species do.


Gene – Wikipedia, the free encyclopedia


A gene is a locus (or region) of DNA that encodes a functional RNA or protein product, and is the molecular unit of heredity.[1][2]:Glossary The transmission of genes to an organism's offspring is the basis of the inheritance of phenotypic traits. Most biological traits are under the influence of polygenes (many different genes) as well as gene-environment interactions. Some genetic traits are instantly visible, such as eye colour or number of limbs, and some are not, such as blood type, risk for specific diseases, or the thousands of basic biochemical processes that comprise life.

Genes can acquire mutations in their sequence, leading to different variants, known as alleles, in the population. These alleles encode slightly different versions of a protein, which may cause different phenotypic traits. Colloquial usage of the term "having a gene" (e.g., "good genes," "hair colour gene") typically refers to having a different allele of the gene. Genes evolve due to natural selection, or survival of the fittest, of the alleles.

The concept of a gene continues to be refined as new phenomena are discovered.[3] For example, regulatory regions of a gene can be far removed from its coding regions, and coding regions can be split into several exons. Some viruses store their genome in RNA instead of DNA and some gene products are functional non-coding RNAs. Therefore, a broad, modern working definition of a gene is any discrete locus of heritable, genomic sequence which affects an organism's traits by being expressed as a functional product or by regulation of gene expression.[4][5]

The existence of discrete inheritable units was first suggested by Gregor Mendel (1822-1884).[6] From 1857 to 1864, he studied inheritance patterns in 8000 common edible pea plants, tracking distinct traits from parent to offspring. He described these mathematically as 2^n combinations, where n is the number of differing characteristics in the original peas. Although he did not use the term gene, he explained his results in terms of discrete inherited units that give rise to observable physical characteristics. This description prefigured the distinction between genotype (the genetic material of an organism) and phenotype (the visible traits of that organism). Mendel was also the first to demonstrate independent assortment, the distinction between dominant and recessive traits, the distinction between a heterozygote and homozygote, and the phenomenon of discontinuous inheritance.
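Mendel's 2^n count can be checked directly by enumeration. A minimal sketch, using three of the pea traits Mendel is known to have studied (the trait names are chosen for illustration):

```python
from itertools import product

# Each of n characteristics shows one of two forms, so the number of
# distinct trait combinations is 2^n. Here n = 3.
traits = {
    "seed shape":  ("round", "wrinkled"),
    "seed colour": ("yellow", "green"),
    "pod shape":   ("inflated", "constricted"),
}

combos = list(product(*traits.values()))
print(len(combos))  # 2^3 = 8 distinct combinations
for combo in combos:
    print(dict(zip(traits, combo)))
```

Adding a fourth differing characteristic doubles the count to 16, which is exactly the exponential growth Mendel's formula captures.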

Prior to Mendel's work, the dominant theory of heredity was one of blending inheritance, which suggested that each parent contributed fluids to the fertilisation process and that the traits of the parents blended and mixed to produce the offspring. Charles Darwin developed a theory of inheritance he termed pangenesis, which used the term gemmule to describe hypothetical particles that would mix during reproduction. Although Mendel's work was largely unrecognized after its first publication in 1866, it was 'rediscovered' in 1900 by three European scientists, Hugo de Vries, Carl Correns, and Erich von Tschermak, who claimed to have reached similar conclusions in their own research.

The word gene is derived (via pangene) from the Ancient Greek word γένος (génos), meaning "race, offspring".[7] Gene was coined in 1909 by Danish botanist Wilhelm Johannsen to describe the fundamental physical and functional unit of heredity,[8] while the related word genetics was first used by William Bateson in 1905.[9]

Advances in understanding genes and inheritance continued throughout the 20th century. Deoxyribonucleic acid (DNA) was shown to be the molecular repository of genetic information by experiments in the 1940s to 1950s.[10][11] The structure of DNA was studied by Rosalind Franklin using X-ray crystallography, which led James D. Watson and Francis Crick to publish a model of the double-stranded DNA molecule whose paired nucleotide bases indicated a compelling hypothesis for the mechanism of genetic replication.[12][13] Collectively, this body of research established the central dogma of molecular biology, which states that proteins are translated from RNA, which is transcribed from DNA. This dogma has since been shown to have exceptions, such as reverse transcription in retroviruses. The modern study of genetics at the level of DNA is known as molecular genetics.
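The central dogma described above (DNA transcribed to RNA, RNA translated to protein) can be sketched in a few lines. This is a toy illustration: only a handful of codons are included, and real transcription and translation involve much more machinery (promoters, splicing, ribosomes) than this suggests.

```python
# A few real codon assignments, for illustration only.
CODON_TABLE = {
    "AUG": "Met",  # methionine, also the start codon
    "UUU": "Phe",  # phenylalanine
    "GGC": "Gly",  # glycine
    "UAA": "STOP",
}

def transcribe(dna):
    """DNA coding strand -> mRNA: thymine (T) is replaced by uracil (U)."""
    return dna.upper().replace("T", "U")

def translate(mrna):
    """Read the mRNA three bases (one codon) at a time until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3])
        if residue in (None, "STOP"):
            break
        protein.append(residue)
    return "-".join(protein)

mrna = transcribe("ATGTTTGGCTAA")
print(mrna)             # AUGUUUGGCUAA
print(translate(mrna))  # Met-Phe-Gly
```

Reverse transcription in retroviruses, the exception noted above, runs the first arrow backwards: RNA is copied back into DNA.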

In 1972, Walter Fiers and his team at the University of Ghent were the first to determine the sequence of a gene: the gene for Bacteriophage MS2 coat protein.[14] The subsequent development of chain-termination DNA sequencing in 1977 by Frederick Sanger improved the efficiency of sequencing and turned it into a routine laboratory tool.[15] An automated version of the Sanger method was used in early phases of the Human Genome Project.[16]

The theories developed in the 1930s and 1940s to integrate molecular genetics with Darwinian evolution are called the modern evolutionary synthesis, a term introduced by Julian Huxley.[17] Evolutionary biologists subsequently refined this concept, such as George C. Williams' gene-centric view of evolution. He proposed an evolutionary concept of the gene as a unit of natural selection with the definition: "that which segregates and recombines with appreciable frequency."[18]:24 In this view, the molecular gene transcribes as a unit, and the evolutionary gene inherits as a unit. Related ideas emphasizing the centrality of genes in evolution were popularized by Richard Dawkins.[19][20]


Applied behavior analysis – Wikipedia, the free encyclopedia

Applied behavior analysis (ABA) is defined as the process of systematically applying interventions based upon the principles of learning theory to improve socially significant behaviors to a meaningful degree, and to demonstrate that the interventions employed are responsible for the improvement in behavior.[1] Despite much confusion throughout the mental health community, ABA was previously called behavior modification; the name was revised because the earlier approach superimposed consequences to change behavior without first determining the behavior-environment interactions. Moreover, the current approach also seeks to establish replacement behaviors which serve the same function as the aberrant behaviors.[2][3][4] By functionally assessing the relationship between a targeted behavior and the environment, as well as identifying antecedents and consequences, the methods of ABA can be used to change that behavior.[3]

Methods in applied behavior analysis range from validated intensive behavioral interventions, most notably utilized for children with an autism spectrum disorder (ASD),[5] to basic research which investigates the rules by which humans adapt and maintain behavior. However, ABA contributes to a full range of areas including: HIV prevention,[6] conservation of natural resources,[7] education,[8] gerontology,[9] health and exercise,[10] organizational behavior management (i.e., industrial safety),[11] language acquisition,[12] littering,[13] medical procedures,[14] parenting,[15] psychotherapy, seatbelt use,[16] severe mental disorders,[17] sports,[18] substance abuse, and zoo management and care of animals.[19]

ABA is defined as an applied natural science devoted to developing and analyzing procedures that produce effective and beneficial changes in behavior.[1] It is one of the three fields of behavior analysis. The other two are radical behaviorism, or the philosophy of the science; and experimental analysis of behavior, or basic experimental research.[5] ABA is also based on operant and respondent conditioning and social learning theory. While radical behaviorism forms the conceptual piece for behavior analysis and acknowledges the presence of cognition and emotions, methodological behaviorism only recognized observable behaviors; the latter was the basis behind behavior modification throughout the 1960s and 1970s.

Contrary to popular belief, behavior analysts emphasize that the science of behavior must be a natural science as opposed to a social science.[20] As such, behavior analysts focus on the observable relationship of behavior to the environment, including antecedents and consequences, without resort to "hypothetical constructs".[21]

Although deriving from a similar philosophy, behavior modification only changed behavior by superimposing consequential procedures; instead, ABA seeks to understand environmental contingencies.[2][3] More specifically, it analyzes the function of behavior, such as what prompts that behavior (the antecedent) as well as promoting replacement behaviors and consequential strategies.[4] Typically, ABA is based on data collection and assessments to accurately examine a behavior's function and to discover the procedures that will produce measurable behavioral changes.
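The antecedent-behavior-consequence analysis described above lends itself to a simple data-collection sketch. This is an illustration of the idea, not a clinical instrument; the record fields and the tallying heuristic are assumptions made for the example.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ABCRecord:
    antecedent: str   # what happened just before the behavior
    behavior: str     # the observed, measurable behavior
    consequence: str  # what followed (and may be maintaining the behavior)

def hypothesize_function(records, behavior):
    """Tally consequences for one behavior to suggest its likely function."""
    tally = Counter(r.consequence for r in records if r.behavior == behavior)
    return tally.most_common(1)[0][0] if tally else None

# A hypothetical observation log.
log = [
    ABCRecord("asked to do homework", "tantrum", "task removed"),
    ABCRecord("asked to tidy room", "tantrum", "task removed"),
    ABCRecord("left alone", "tantrum", "adult attention"),
]
print(hypothesize_function(log, "tantrum"))  # task removed
```

Here the most frequent consequence is task removal, which would suggest an escape-maintained behavior and point toward a replacement behavior (such as asking for a break) that serves the same function.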

Much of the beginnings of ABA can be traced to a group of faculty and researchers at the University of Washington including Don Baer, Sidney Bijou, Bill Hopkins, Jay Birnbrauer, Ivar Lovaas, Todd Risley, James Sherman, and Montrose Wolf.[22] In the 1960s, Baer, Hopkins, Risley, Sherman, and Wolf became faculty in the Department of Human Development and Family Life at the University of Kansas.[23] They and their colleagues began a concentrated effort at developing and perfecting the application of behavior analysis to address a wide variety of human problems. They also founded the Journal of Applied Behavior Analysis in 1968, which publishes research examining the application of behavior analysis to socially relevant behavior.

ABA is a science used in a wide range of fields to change behavior with various subtypes, such as organizational behavior management, positive behavior support,[24][25][26] and clinical behavior analysis (including contingency management, acceptance and commitment therapy, and habit reversal training). Most of the time people use the subtype term early intensive behavioral intervention (including discrete trial teaching), a treatment procedure used for young children with autism, interchangeably with ABA. However, the latter is a distinct psychological science.[5]

Ole Ivar Løvaas is considered a grandfather of ABA and developed standardized teaching interventions based on those behavioral principles. Lovaas devoted nearly half a century to groundbreaking research and practice aimed at improving the lives of children with autism and their families. In 1965, Lovaas published a series of articles that described therapeutic approaches to autism. The first two articles presented his system for coding behaviors during direct observations and a pioneering investigation of superimposed antecedents and consequences that maintained a problem behavior.[27] The subsequent articles built upon these methods and reported the first demonstration of an effective way to teach nonverbal children to speak, a study on establishing social (secondary) reinforcers, a procedure for teaching children to imitate, and several studies on interventions to reduce life-threatening self-injury and aggression.

Early in his career, Lovaas was cited for using low dosages of electroshock on children with extreme self-injurious behavior.[27] In 1973, Lovaas published a long-term follow-up of the behavior modification intervention and was dismayed to find that most of the subjects had reverted to their pre-intervention behaviors. After these findings, Lovaas and his colleagues proposed several ways to improve outcomes, such as starting intervention during the children's preschool years instead of later in childhood or adolescence, involving parents in the intervention, and implementing the intervention in the family's home rather than an institutional setting. Subsequent articles, such as the 1987 "Behavioral Treatment and Normal Educational and Intellectual Functioning in Young Autistic Children," reinforced this proposal: early and intensive intervention, without the use of aversives (such as electric shocks) and paired with continual therapy, yields the most effective results for children with autism.[27] Lovaas strongly believed that the support and involvement of parents applying therapy at home contributed to a higher success rate. He dedicated his life to the study of autism and was a strong advocate for people with autism, even co-founding what is today the Autism Society of America.[27]

Baer, Wolf, and Risley's 1968 article[28] is still used as the standard description of ABA.[29] It describes the seven dimensions of ABA: application; a focus on behavior; the use of analysis; and its technological, conceptually systematic, effective, and general approach.


Neurology – Wikipedia, the free encyclopedia

Neurology (from Greek: νεῦρον, neuron, and the suffix -λογία, -logia, "study of") is a branch of medicine dealing with disorders of the nervous system. Neurology deals with the diagnosis and treatment of all categories of conditions and disease involving the central and peripheral nervous system (and its subdivisions, the autonomic nervous system and the somatic nervous system), including their coverings, blood vessels, and all effector tissue, such as muscle.[1] Neurological practice relies heavily on the field of neuroscience, which is the scientific study of the nervous system.

A neurologist is a physician specializing in neurology and trained to investigate, or diagnose and treat neurological disorders.[2] Neurologists may also be involved in clinical research, clinical trials, and basic or translational research. While neurology is a non-surgical specialty, its corresponding surgical specialty is neurosurgery.[2]

A large number of neurological disorders have been described. These can affect the central nervous system (brain and spinal cord), the peripheral nervous system, the autonomic nervous system and the muscular system.


Many neurologists also have additional training or interest in one area of neurology, such as stroke, epilepsy, neuromuscular, sleep medicine, pain management, or movement disorders.

In the United States and Canada, neurologists are physicians having completed postgraduate training in neurology after graduation from medical school. Neurologists complete, on average, at least 10 to 13 years of college education and clinical training. This training includes obtaining a four-year undergraduate degree, a medical degree (D.O. or M.D.), which comprises an additional four years of study, and then completing a one-year internship and a three- or four-year residency in neurology.[6] The four-year residency consists of one year of internal medicine training followed by three years of training in neurology.

Some neurologists receive additional subspecialty training focusing on a particular area of neurology. These training programs are called fellowships, and are one to two years in duration. Sub-specialties include: brain injury medicine, clinical neurophysiology, epilepsy, hospice and palliative medicine, neurodevelopmental disabilities, neuromuscular medicine, pain medicine and sleep medicine, vascular neurology (stroke),[7] behavioral neurology, child neurology, headache, multiple sclerosis, neuroimaging, neurorehabilitation, and interventional neurology.

In Germany, a compulsory year of psychiatry must be done to complete a residency of neurology.


Nanotechnology – Wikipedia, the free encyclopedia

Nanotechnology ("nanotech") is the manipulation of matter on an atomic, molecular, and supramolecular scale. The earliest, widespread description of nanotechnology[1][2] referred to the particular technological goal of precisely manipulating atoms and molecules for fabrication of macroscale products, also now referred to as molecular nanotechnology. A more generalized description of nanotechnology was subsequently established by the National Nanotechnology Initiative, which defines nanotechnology as the manipulation of matter with at least one dimension sized from 1 to 100 nanometers. This definition reflects the fact that quantum mechanical effects are important at this quantum-realm scale, and so the definition shifted from a particular technological goal to a research category inclusive of all types of research and technologies that deal with the special properties of matter that occur below the given size threshold. It is therefore common to see the plural form "nanotechnologies" as well as "nanoscale technologies" to refer to the broad range of research and applications whose common trait is size. Because of the variety of potential applications (including industrial and military), governments have invested billions of dollars in nanotechnology research. Through 2012, the United States invested 3.7 billion dollars via its National Nanotechnology Initiative, the European Union invested 1.2 billion, and Japan 750 million dollars.[3]
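The National Nanotechnology Initiative's size criterion quoted above ("at least one dimension sized from 1 to 100 nanometers") is easy to encode as a predicate. A toy sketch, with approximate example sizes chosen for illustration:

```python
def is_nanoscale(dimensions_nm):
    """True if at least one dimension falls between 1 and 100 nm, per the
    NNI-style criterion quoted in the text."""
    return any(1 <= d <= 100 for d in dimensions_nm)

# Rough, order-of-magnitude sizes in nanometers (illustrative values).
examples = {
    "DNA helix diameter": (2,),
    "typical virus": (80,),
    "red blood cell": (7000,),
    "carbon nanotube (diameter, length)": (1.5, 10000),
}
for name, dims in examples.items():
    verdict = "nanoscale" if is_nanoscale(dims) else "not nanoscale"
    print(f"{name}: {verdict}")
```

The carbon nanotube case shows why the definition says "at least one dimension": a nanotube can be many micrometers long yet still counts, because its diameter sits inside the 1 to 100 nm band.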

Nanotechnology as defined by size is naturally very broad, including fields of science as diverse as surface science, organic chemistry, molecular biology, semiconductor physics, microfabrication, etc.[4] The associated research and applications are equally diverse, ranging from extensions of conventional device physics to completely new approaches based upon molecular self-assembly, from developing new materials with dimensions on the nanoscale to direct control of matter on the atomic scale.

Scientists currently debate the future implications of nanotechnology. Nanotechnology may be able to create many new materials and devices with a vast range of applications, such as in medicine, electronics, biomaterials, energy production, and consumer products. On the other hand, nanotechnology raises many of the same issues as any new technology, including concerns about the toxicity and environmental impact of nanomaterials,[5] and their potential effects on global economics, as well as speculation about various doomsday scenarios. These concerns have led to a debate among advocacy groups and governments on whether special regulation of nanotechnology is warranted.

The concepts that seeded nanotechnology were first discussed in 1959 by renowned physicist Richard Feynman in his talk There's Plenty of Room at the Bottom, in which he described the possibility of synthesis via direct manipulation of atoms. The term "nano-technology" was first used by Norio Taniguchi in 1974, though it was not widely known.

Inspired by Feynman's concepts, K. Eric Drexler used the term "nanotechnology" in his 1986 book Engines of Creation: The Coming Era of Nanotechnology, which proposed the idea of a nanoscale "assembler" which would be able to build a copy of itself and of other items of arbitrary complexity with atomic control. Also in 1986, Drexler co-founded The Foresight Institute (with which he is no longer affiliated) to help increase public awareness and understanding of nanotechnology concepts and implications.

Thus, emergence of nanotechnology as a field in the 1980s occurred through convergence of Drexler's theoretical and public work, which developed and popularized a conceptual framework for nanotechnology, and high-visibility experimental advances that drew additional wide-scale attention to the prospects of atomic control of matter. In the 1980s, two major breakthroughs sparked the growth of nanotechnology in modern era.

First came the invention of the scanning tunneling microscope in 1981, which provided unprecedented visualization of individual atoms and bonds and was successfully used to manipulate individual atoms in 1989. The microscope's developers, Gerd Binnig and Heinrich Rohrer at IBM Zurich Research Laboratory, received a Nobel Prize in Physics in 1986.[6][7] Binnig, Quate and Gerber also invented the analogous atomic force microscope that year.

Second, fullerenes were discovered in 1985 by Harry Kroto, Richard Smalley, and Robert Curl, who together won the 1996 Nobel Prize in Chemistry.[8][9] C60 was not initially described as nanotechnology; the term was applied to subsequent work with related graphene tubes (called carbon nanotubes, or sometimes Bucky tubes) which suggested potential applications for nanoscale electronics and devices.

In the early 2000s, the field garnered increased scientific, political, and commercial attention that led to both controversy and progress. Controversies emerged regarding the definitions and potential implications of nanotechnologies, exemplified by the Royal Society's report on nanotechnology.[10] Challenges were raised regarding the feasibility of applications envisioned by advocates of molecular nanotechnology, which culminated in a public debate between Drexler and Smalley in 2001 and 2003.[11]

Meanwhile, commercialization of products based on advancements in nanoscale technologies began emerging. These products are limited to bulk applications of nanomaterials and do not involve atomic control of matter. Some examples include the Silver Nano platform for using silver nanoparticles as an antibacterial agent, nanoparticle-based transparent sunscreens, and carbon nanotubes for stain-resistant textiles.[12][13]


List of superhuman features and abilities in fiction …

Many forms of fiction feature characters attributed with superhuman, supernatural, or paranormal abilities, often referred to as "superpowers" (also spelled "super powers" and "super-powers") or "powers." This tradition is especially rich in the fictional universes of various comic book stories. This is a list of many of the powers that have been depicted in fiction. Some of these categories overlap.

Examples of ways in which a character has gained the ability to generate an effect.

Examples of methods by which a character generates an effect.

This section refers to the ability to manipulate or otherwise interact with superpowers themselves, not "power" such as electrical power or gravitational power.

Powers which affect an individual's body.

The abilities of extra-sensory perception (ESP) and communication.

These powers may be manifested by various methods, including: by some method of molecular control; by access to, or partially or fully shifting to another dimension; by manipulating the geometric dimensions of time or space; or by some other unnamed method.

Ability to control or manipulate the elements of nature.

These powers deal with energy generation, conversion and manipulation. In addition to generic energy, versions of these powers exist that deal with such things as light, sound, electricity, nuclear energy, and the Darkforce dimension.

The following powers could be manifested in any number of ways.


History of chemistry – Wikipedia, the free encyclopedia

The history of chemistry represents a time span from ancient history to the present. By 1000 BC, civilizations used technologies that would eventually form the basis of the various branches of chemistry. Examples include extracting metals from ores, making pottery and glazes, fermenting beer and wine, extracting chemicals from plants for medicine and perfume, rendering fat into soap, making glass, and making alloys like bronze.

The protoscience of chemistry, alchemy, was unsuccessful in explaining the nature of matter and its transformations. However, by performing experiments and recording the results, alchemists set the stage for modern chemistry. The distinction began to emerge when a clear differentiation was made between chemistry and alchemy by Robert Boyle in his work The Sceptical Chymist (1661). While both alchemy and chemistry are concerned with matter and its transformations, chemists are seen as applying scientific method to their work.

Chemistry is considered to have become an established science with the work of Antoine Lavoisier, who developed a law of conservation of mass that demanded careful measurement and quantitative observations of chemical phenomena. The history of chemistry is intertwined with the history of thermodynamics, especially through the work of Willard Gibbs.[1]

The earliest recorded metal employed by humans seems to be gold which can be found free or "native". Small amounts of natural gold have been found in Spanish caves used during the late Paleolithic period, c. 40,000 BC.[2]

Silver, copper, tin and meteoric iron can also be found native, allowing a limited amount of metalworking in ancient cultures.[3] Egyptian weapons made from meteoric iron in about 3000 BC were highly prized as "Daggers from Heaven".[4]

Arguably the first chemical reaction used in a controlled manner was fire. However, for millennia fire was seen simply as a mystical force that could transform one substance into another (burning wood, or boiling water) while producing heat and light. Fire affected many aspects of early societies. These ranged from the simplest facets of everyday life, such as cooking and habitat lighting, to more advanced technologies, such as pottery, bricks, and melting of metals to make tools.

It was fire that led to the discovery of glass and the purification of metals which in turn gave way to the rise of metallurgy.[citation needed] During the early stages of metallurgy, methods of purification of metals were sought, and gold, known in ancient Egypt as early as 2900 BC, became a precious metal.

Certain metals can be recovered from their ores by simply heating the rocks in a fire: notably tin, lead and (at a higher temperature) copper, a process known as smelting. The first evidence of this extractive metallurgy dates from the 5th and 6th millennia BC, and was found in the archaeological sites of Majdanpek, Yarmovac and Pločnik, all three in Serbia. To date, the earliest copper smelting is found at the Belovode site;[5] examples include a copper axe from 5500 BC belonging to the Vinča culture.[6] Other signs of early metals are found from the third millennium BC in places like Palmela (Portugal), Los Millares (Spain), and Stonehenge (United Kingdom). However, as often happens with the study of prehistoric times, the ultimate beginnings cannot be clearly defined and new discoveries are continuous and ongoing.

These first metals were used singly, as found. By combining copper and tin, a superior metal could be made: an alloy called bronze, a major technological shift which began the Bronze Age about 3500 BC. The Bronze Age was a period in human cultural development when the most advanced metalworking (at least in systematic and widespread use) included techniques for smelting copper and tin from naturally occurring outcroppings of ore, and then combining those metals to cast bronze. These naturally occurring ores typically included arsenic as a common impurity. Copper/tin ores are rare, as reflected in the fact that there were no tin bronzes in western Asia before 3000 BC.

After the Bronze Age, the history of metallurgy was marked by armies seeking better weaponry. Countries in Eurasia prospered when they made the superior alloys, which, in turn, made better armor and better weapons.[citation needed] This often determined the outcomes of battles.[citation needed] Significant progress in metallurgy and alchemy was made in ancient India.[7]

See original here:
History of chemistry - Wikipedia, the free encyclopedia

Plato: Immortality and the Forms

The most illustrious student Socrates had in philosophy was Plato, whose beautifully written dialogues not only offered an admiring account of the teachings of his master but also provided Plato with an opportunity to develop and express his own insightful philosophical views. In the remainder of our readings from Platonic dialogues, we will assume that the "Socrates" who speaks is merely a fictional character created by the author, attributing the philosophical doctrines to Plato himself. In the middle and late dialogues, Plato employed the conversational structure as a way of presenting dialectic, a pattern of argumentation that examines each issue from several sides, exploring the interplay of alternative ideas while subjecting all of them to evaluation by reason.

Plato was a more nearly systematic thinker than Socrates had been. He established his own school of philosophy, the Academy, during the fourth century, and he did not hesitate to offer a generation of young Athenians the positive results of his brilliant reasoning. Although he shared Socrates's interest in ethical and social philosophy, Plato was much more concerned to establish his views on matters of metaphysics and epistemology, trying to discover the ultimate constituents of reality and the grounds for our knowledge of them.

Plato's Meno is a transitional dialogue: although it is Socratic in tone, it introduces some of the epistemological and metaphysical themes that we will see developed more fully in the middle dialogues, which are clearly Plato's own. In a setting uncluttered by concern for Socrates's fate, it centers on the general problem of the origins of our moral knowledge.

The Greek notion of aretē, or virtue, is that of an ability or skill in some particular respect. The virtue of a baker is what enables the baker to produce good bread; the virtue of the gardener is what enables the gardener to grow nice flowers; etc. In this sense, virtues clearly differ from person to person and from goal to goal. But Socrates is interested in true virtue, which (like genuine health) should be the same for everyone. This broad concept of virtue may include such specific virtues as courage, wisdom, or moderation, but it should nevertheless be possible to offer a perfectly general description of virtue as a whole, the skill or ability to be fully human. But what is that?

When Meno suggests that virtue is simply the desire for good things, Socrates argues that this cannot be the case. Since different human beings are unequal in virtue, virtue must be something that varies among them, he argues, but the desire for what one believes to be good is perfectly universal. Since no human being ever knowingly desires what is bad, differences in their conduct must be a consequence of differences in what they know. (Meno 77e) This is a remarkable claim. Socrates holds that knowing what is right automatically results in the desire to do it, even though this feature of our moral experience could be doubted. (Aristotle, for example, would later explicitly disagree with this view, carefully outlining the conditions under which weakness of will interferes with moral conduct.) In this context, however, the Socratic position effectively shifts the focus of the dialogue from morality to epistemology: the question really at stake is how we know what virtue is.

For questions of this sort, Socrates raises a serious dilemma: how can we ever learn what we do not know? Either we already know what we are looking for, in which case we don't need to look, or we don't know what we're looking for, in which case we wouldn't recognize it if we found it. (Meno 80e) The paradox of knowledge is that, in the most fundamental questions about our own nature and function, it seems impossible for us to learn anything. The only escape, Socrates proposed, is to acknowledge that we already know what we need to know. This is the doctrine of recollection, Plato's conviction that our most basic knowledge comes when we bring back to mind our acquaintance with eternal realities during a previous existence of the soul.

The example offered in this dialogue is the discovery of an irrational number, the square root of 2. Socrates leads an uneducated boy through a sophisticated geometrical demonstration with careful questions, showing that the boy somehow already knows the correct answers on his own. All of us have had the experience (usually in mathematical contexts, Plato believed) of suddenly realizing the truth of something of which we had been unaware, and it does often feel as if we are not really discovering something entirely new but rather merely remembering something we already knew. Such experiences lend some plausibility to Plato's claim that recollection may be the source of our true opinions about the most fundamental features of reality. (Meno 85d) What is more, this doctrine provides an explanation of the effectiveness of Socratic method: the goal is not to convey new information but rather to elicit awareness of something that an individual already knows implicitly.
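The geometry lesson Socrates conducts in the Meno concerns doubling the area of a square, and the answer turns out to be the square built on the original's diagonal. A minimal numerical sketch of that result (illustrative only; the dialogue itself proceeds by drawing figures, not by calculation):

```python
import math

# Socrates' demonstration with the slave boy (Meno 82b-85b), checked
# numerically: doubling a square's area requires a side equal to the
# original square's diagonal.
side = 2.0                         # original square, area 4
diagonal = math.hypot(side, side)  # diagonal length = side * sqrt(2)

original_area = side ** 2
square_on_diagonal = diagonal ** 2  # area of the square built on the diagonal

assert math.isclose(square_on_diagonal, 2 * original_area)
# The required side is side * sqrt(2), an irrational multiple of the unit,
# which is why the boy cannot name it as a whole number of units.
```

The irrationality of the new side length is exactly why the boy's first guesses (doubling or one-and-a-half times the side) fail, and why the diagonal must be pointed to rather than counted.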

The further question of the dialogue is whether or not virtue can be taught. On the one hand, it seems that virtue must be a kind of wisdom, which we usually assume to be one of the acquirable benefits of education. On the other hand, if virtue could be taught, we should be able to identify both those who teach it and those who learn from them, which we cannot easily do in fact. (Meno 96c) (Here Socrates offers a scathing attack on the sophists, who had often claimed that they were effective teachers of virtue.) So it seems that virtue cannot be taught. Plato later came to disagree with his teacher on this point, arguing that genuine knowledge of virtue is attainable through application of appropriate educational methods.

Perhaps our best alternative, Socrates held, is to suppose that virtue is a (divinely bestowed?) true opinion that merely happens to lack the sort of rational justification which would earn it the status of certain knowledge. Whether or not we agree with this rather gloomy conclusion about the unteachability of virtue, the distinction between genuine knowledge and mere true opinion is of the greatest importance. For philosophical knowledge, it is not enough to accept beliefs that happen to be true; we must also have reasons that adequately support them.

Go here to read the rest:
Plato: Immortality and the Forms

Fermentation – Wikipedia, the free encyclopedia

Fermentation is a metabolic process that converts sugar to acids, gases or alcohol. It occurs in yeast and bacteria, but also in oxygen-starved muscle cells, as in the case of lactic acid fermentation. Fermentation is also used more broadly to refer to the bulk growth of microorganisms on a growth medium, often with the goal of producing a specific chemical product. French microbiologist Louis Pasteur is often remembered for his insights into fermentation and its microbial causes. The science of fermentation is known as zymology.

Fermentation takes place in the absence of oxygen (when the electron transport chain is unusable) and becomes the cell's primary means of ATP (energy) production.[1] It turns NADH and pyruvate produced in the glycolysis step into NAD+ and various small molecules depending on the type of fermentation (see examples below). In the presence of O2, NADH and pyruvate are used to generate ATP in respiration. This is called oxidative phosphorylation, and it generates much more ATP than glycolysis alone. For that reason, cells generally benefit from avoiding fermentation when oxygen is available, the exception being obligate anaerobes, which cannot tolerate oxygen.
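The energetic advantage of respiration over fermentation can be made concrete with rough textbook figures; the numbers below are illustrative only (the respiratory yield varies with the organism and the NADH shuttle used, commonly quoted at about 30 to 32 ATP per glucose, against a net 2 from glycolysis alone):

```python
# Rough per-glucose ATP bookkeeping (illustrative textbook figures only).
ATP_FERMENTATION = 2   # net ATP from glycolysis; fermentation adds none
ATP_RESPIRATION = 30   # common modern estimate for full aerobic respiration

advantage = ATP_RESPIRATION / ATP_FERMENTATION
print(f"Respiration yields roughly {advantage:.0f}x more ATP per glucose")
```

This order-of-magnitude gap is why cells that can respire usually do, and why fermenting cells must consume sugar much faster to meet the same energy demand.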

The first step, glycolysis, is common to all fermentation pathways. Its net reaction is:

C6H12O6 + 2 NAD+ + 2 ADP + 2 Pi → 2 CH3COCOO− + 2 NADH + 2 ATP + 2 H2O + 2 H+

Pyruvate is CH3COCOO−. Pi is inorganic phosphate. Two ADP molecules and two Pi are converted to two ATP and two water molecules via substrate-level phosphorylation. Two molecules of NAD+ are also reduced to NADH.[2]
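As a small sanity check on the stoichiometry described in the text, carbon is conserved across glycolysis: one six-carbon glucose yields two three-carbon pyruvate molecules (a sketch of the carbon bookkeeping only; balancing hydrogen and oxygen also involves NAD+/NADH, ADP/ATP, Pi and water):

```python
# Carbon bookkeeping for the net glycolysis reaction:
#   C6H12O6 + 2 NAD+ + 2 ADP + 2 Pi -> 2 CH3COCOO- + 2 NADH + 2 ATP + 2 H2O
glucose_carbons = 6       # glucose is C6H12O6
pyruvate_carbons = 3      # pyruvate, CH3-CO-COO(-), has three carbons
pyruvate_per_glucose = 2  # one glucose splits into two pyruvate

# Carbon is conserved in glycolysis itself; CO2 is released only in later
# steps (e.g., ethanol fermentation or aerobic respiration).
assert pyruvate_per_glucose * pyruvate_carbons == glucose_carbons
print("carbon balanced:", pyruvate_per_glucose * pyruvate_carbons, "=", glucose_carbons)
```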

In oxidative phosphorylation the energy for ATP formation is derived from an electrochemical proton gradient generated across the inner mitochondrial membrane (or, in the case of bacteria, the plasma membrane) via the electron transport chain. Glycolysis, by contrast, uses substrate-level phosphorylation (ATP generated directly at the site of the reaction).

Humans have used fermentation to produce food and beverages since the Neolithic age. For example, fermentation is used for preservation in a process that produces lactic acid, as found in such sour foods as pickled cucumbers, kimchi and yogurt (see fermentation in food processing), as well as for producing alcoholic beverages such as wine (see fermentation in winemaking) and beer. Fermentation can even occur within the stomachs of animals, including humans. Auto-brewery syndrome is a rare medical condition in which the stomach contains brewer's yeast that breaks down starches into ethanol, which then enters the bloodstream.[3]

To many people, fermentation simply means the production of alcohol: grains and fruits are fermented to produce beer and wine. If a food soured, one might say it was 'off' or fermented. Definitions of fermentation range from this informal, general usage to more precise scientific ones.[4]

Fermentation does not necessarily have to be carried out in an anaerobic environment. For example, even in the presence of abundant oxygen, yeast cells greatly prefer fermentation to aerobic respiration, as long as sugars are readily available for consumption (a phenomenon known as the Crabtree effect).[5] The antibiotic activity of hops also inhibits aerobic metabolism in yeast[citation needed].

Fermentation reacts NADH with an endogenous, organic electron acceptor.[1] Usually this is the pyruvate formed from the sugar during the glycolysis step. During fermentation, pyruvate is metabolized to various compounds through several different processes.

Sugars are the most common substrate of fermentation, and typical examples of fermentation products are ethanol, lactic acid, carbon dioxide, and hydrogen gas (H2). However, more exotic compounds can be produced by fermentation, such as butyric acid and acetone. Yeast carries out fermentation in the production of ethanol in beers, wines, and other alcoholic drinks, along with the production of large quantities of carbon dioxide. Fermentation occurs in mammalian muscle during periods of intense exercise where oxygen supply becomes limited, resulting in the creation of lactic acid.[6]
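The pathways named in the surrounding paragraphs differ mainly in what pyruvate (and the NADH that must be re-oxidized) become. A simplified, illustrative map of common fermentation types to their usual textbook products (not exhaustive):

```python
# Simplified map of common fermentation pathways to their net products.
# Each pathway regenerates NAD+ from NADH so glycolysis can continue.
PATHWAYS = {
    "lactic acid fermentation": ["lactic acid"],                # muscle, yogurt, kimchi
    "ethanol fermentation": ["ethanol", "carbon dioxide"],      # yeast in beer and wine
    "butyric acid fermentation": ["butyric acid", "carbon dioxide", "hydrogen gas"],
}

for pathway, products in PATHWAYS.items():
    print(f"{pathway}: NADH re-oxidized to NAD+, yielding {', '.join(products)}")
```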

Read the rest here:
Fermentation - Wikipedia, the free encyclopedia

Integrative & Lifestyle Medicine – Cleveland Clinic

Cleveland Clinic's Center for Integrative & Lifestyle Medicine is dedicated to addressing the increasing demand for integrative healthcare by researching and providing access to practices that address the physical as well as lifestyle, emotional, and spiritual needs of patients.

As the body of evidence for alternative medicine grows, we remain at the forefront, providing the most up-to-date education and practices to patients. Cleveland Clinic's Center for Integrative & Lifestyle Medicine sees more than 5,000 patients per year for a variety of services.

Learn about our wide range of services and treatments including acupuncture, massage and lifestyle management programs.

Meet the Integrative & Lifestyle Medicine team of physicians and specialists.

Women's Wellness Week is a complete program that gives you the physical, nutritional and informational tools you need to live a healthier life.

Disclaimer: Cleveland Clinic does not endorse Young Living Essential Oils Products and has not authorized the use of its name in association with Young Living Essential Oils Products.

Treat someone you love to a gift certificate good for any Cleveland Clinic Center for Integrative & Lifestyle Medicine service even physician consults and holistic psychotherapy.

Read the rest here:
Integrative & Lifestyle Medicine - Cleveland Clinic

National Institute of Neurological Disorders and Stroke …

New Award Creates Stable Funding for Outstanding Neuroscience Investigators

As NINDS Director, my goal is to optimize the progress of basic, translational, and clinical neuroscience research. One issue that slows the pace of discovery is that, rather than directly engaging in research, many principal investigators spend a great deal of their time writing and administering grant proposals. This is a consequence not only of the current constrained budget climate, but also of the fact that NIH grants fund individual projects that are relatively short in duration.

We feel that it is time to free up smart, talented people with innovative ideas to focus their time and effort on doing excellent science. To empower investigators to use their time more productively, NINDS is piloting a new funding mechanism, the Research Program Award (RPA). Rather than funding a single project, an RPA will support an NINDS investigator's overall research program for up to eight years. This initial pilot program aims to fund up to 30 investigators in FY 2016 who have demonstrated strong potential to do high-impact science. The announcement describing this new award was released July 15, for an application deadline of October 6. For further information about the RPA, see the blog post from our Extramural Director, Robert Finkelstein.

Go here to see the original:
National Institute of Neurological Disorders and Stroke ...

body burning – Neurology – MedHelp

When you had your IVF, did you have a reaction to the drugs? I too have had a lot of the same symptoms, especially the on-and-off joint pain, weakness mostly on my left side, eye blurriness and a freakish jitter of the eyes when reading. As someone else posted, I am an avid reader and this is probably the most annoying symptom for me as it doesn't go away. That was until 6 am this morning, when I woke up with the side I was lying on feeling like it was burning up. I changed sides, back, front; whichever side I am lying on burns the most and makes the bed super hot.

I have seen all kinds of doctors, including a neurologist, and have had every test you can think of. I show an autoimmune deficiency, but nothing they can pin down. You stop asking the doctors, since they start thinking you are faking it. Even my psychologist wondered, and I have been seeing her for 8 years, since great cancer (genetic) and a hysterectomy due to fibroids and overly large ovaries (the doctor's term). Being military growing up, and then active duty and dual military, I have lived everywhere, including Guam, Germany, England, Japan and most states. No one has been able to pin it down, but after trial and error my psychologist and I figured out the right combo of drugs for depression and bipolar. Still, heat, extreme cold, excessive stress, or even a minor illness can trigger symptoms. I came off lorazepam and take a small dose of diazepam, and it helps. Then this morning I wake up with this terrible burning.

Back to my original question: I had IVF through the military 20 years ago, after my second ectopic pregnancy. I already had a son born between the two, so staying pregnant wasn't the issue, I just had no tubes. The initial hormone shots sucked big time, but the real trouble was after I got pregnant (I insisted on no more than 3 eggs, against the doctor's persistent recommendations). My ovaries produced 40 eggs, which is almost unheard of. However, from two weeks on (they are very keen to see how it goes, and I can't even tell you how many ultrasounds you get for this unless you have had it done, then you know) my ovaries kept getting larger and larger. I looked nine months pregnant from the beginning and almost died. Apparently I was one of the "25%" this happens to. Although I have beautiful twin daughters, now 19, I have always said that all of those hormones and my reaction to them was going to be a problem later on. I thought the hysterectomy resolved that, but after seeing that you had IVF also, I wonder if the nerve damage and autoimmune deficiency might be related to that. I have never once connected them until now. I will mention it to my doctor today, as the burning is unbearable and I have to see him even if he can't help.

Has any other woman had IVF over the years and is now experiencing nerve and joint issues? I am on three forums for this pain and now the burning. Despite the wonderful outcome, maybe messing with Mother Nature isn't such a good idea.

I realize it has been 3 years since your post, but perhaps someone will have some answers. Doctors don't.

Read the original:
body burning - Neurology - MedHelp

Anti-Aging Medicine – LIFEWORKS WELLNESS CENTER

Anti-aging medicine is the application of medical technology for the early detection, prevention, treatment and reversal of the three age-related Ds: dysfunction, disorders and diseases.

As an extension of preventive health care, anti-aging medicine is the next great model of health care for the new millennium.

According to the American Academy for Anti-Aging Medicine (A4M), the three rules of anti-aging medicine are:

The longer you live, the better your chances are for living even longer. Medical knowledge doubles every 3.5 years, so a time will come when perhaps we will know how to stop aging, put it on hold and even eventually reset the clock mechanism of life itself.

If you have had your cholesterol tested, had a thermogram, or taken bio-identical hormones, you have experienced anti-aging medicine. 90% of all adult illness is due to the degenerative processes of aging. This includes heart disease, most cancers, adult-onset diabetes, stroke, high blood pressure, osteoporosis, osteoarthritis, autoimmune disease, glaucoma, and Alzheimer's.

With early detection and appropriate intervention, most of these diseases can be prevented, cured, or have their downward course reversed.

Anti-Aging Medicine in Tampa Bay

Anyone over the age of 30 or 40 knows that they can't perform the way they used to. This is often true both physically and mentally. It is a rare person who can do more, or do better, at age 50 than they could at 30. But all wish they could.

Excerpt from:
Anti-Aging Medicine - LIFEWORKS WELLNESS CENTER

Bioengineering (B.S.) | Degree Programs | Clemson …

Freshmen who major in engineering at Clemson are initially admitted into our general engineering program, where you'll have a year to explore many different engineering disciplines, meet faculty from each of our engineering departments and discover which major fits your personal interests and talents. On the admissions application, you will apply as a general engineering major.

Once you are in your core bioengineering curriculum, your classes will combine a solid background in engineering with the study of the life sciences. From class to the lab, research is integral to a bioengineering career, and our students are encouraged to get involved in research projects as soon as possible. Classes include the study of EKG simulation, tissue engineering of heart valves, medical technology in the developing world and orthopaedic implants, to name a few.

Bioelectrical Concentration: If you opt to go the bioelectrical route, you will become skilled in inventing, improving and maintaining the machines that allow physicians and technicians to perform procedures with greater accuracy and precision and less invasiveness.

Biomaterials Concentration: If you choose to specialize in biomaterials, you'll study tissue engineering and appliances that can physically improve patient health. Some examples include artificial hips and growing new body parts from patient cells.

Combined Bachelor's/Master's Plan: Jump-start your Master of Science in bioengineering while completing your bachelor's degree. In our dual-degree program, you can apply some graduate credits to both degrees.

More:
Bioengineering (B.S.) | Degree Programs | Clemson ...

Medical Chiropractic & Anti-Aging Treatment | Dr. McCrory MD

Who We Are

We are passionate about serving our patients with the most advanced medical treatments available in Roseville, CA. Dr. McCrory has years of experience and knowledge in his specialized healing treatments including anti-aging, pain management, EWOT, Prolozone Therapy, and Hormone Therapy.

Demonstrating a compassionate and caring manner in all of our treatments and services is our first priority.

Our mission is to make every effort to do whatever is necessary so that everyone we come in contact with has the opportunity to be proactive in their health, establishing common-sense treatment strategies that prevent disease, slow the aging process, and optimize productive living.

At Roseville Medical, we strive to present a program of wellness care for our patients so they may gain and benefit from a healthier lifestyle.

Our advanced and accurate treatments are designed to be an exact match to your wellness needs. We proudly utilize the most sophisticated technologies to find the root cause of your illness or pain.

Dr. McCrory is one of only a few hundred medical professionals in the U.S. who have obtained both chiropractic and medical degrees. We help our patients to regain vigor and well-being to look and feel youthful and healthy.

Exercise With Oxygen Therapy

Roseville Advanced Medical Group is pleased to announce a proven medical technology that promises to dramatically improve people's health and sense of well-being, increase mental and physical stamina, and inhibit the aging process. EWOT (exercise with oxygen therapy) is a means of improving oxygenation of the tissues of the body, increasing energy production in the cells, and thereby enhancing the vitality and health of the individual.

Visit link:
Medical Chiropractic & Anti-Aging Treatment | Dr. McCrory MD


Modern Medicine: Towards Prevention, Cure, Well-being and …

If we accept this, the path before us is very clear.

The first goal of medicine is to see to it that no one has to reach a hospital or a clinic. This is what I mean by prevention. [Technically called primary prevention; not of course ruling out the importance of secondary prevention (early detection and prompt treatment) and tertiary prevention (restoring function and reducing disability).] This should be the very first goal, not the last, which means health promotion/education activities along with clean water, nutritious food, clean and disaster-free habitation, proper sanitation, control of pollution, poverty alleviation, empowerment of the deprived and disadvantaged, and lifestyle modifications. A tall order, which involves multiple agencies not in the control of medicine and its movers. That is one prime reason why it is not top of the agenda, perhaps; and of course because it cuts at the very root of the justification for the medical establishment, and its proliferation. But medicine has as much a treatment orientation as a social perspective. For health is a means to well-being, and health can be achieved for all only when all are mobilized for health and become conscious of what should be its legitimate thrust (Singh and Singh, 2004).

Moreover, prevention, understood as preventing a disease from occurring, also means finding vaccines, and other preventive measures, for all its diseases, not just the infectious. Let this stop sounding ludicrous. It is comforting to note that work on vaccines is underway, especially for diabetes (Phillips et al., 2008; Richardson et al., 2009), hypertension (Mitka, 2007; Ambühl et al., 2007; Phisitkul, 2009), and cancers (especially cervical: Jenkins, 2008; Keam and Harper, 2008; Schwarz, 2008), and has at least been suggested for schizophrenia and other mental disorders (Tomlinson et al., 2009). And, related to this, medicine must research and highlight lifestyle changes which prevent disease, and tackle diseases of poverty and of lifestyle (Singh and Singh, 2008).
Some work in this field is already on, for example, in cancers (Anand et al., 2008), Type 2 diabetes (Misra, 2009), cardiovascular disease (Pischke et al., 2008), and ulcerative colitis (Langhorst et al., 2007); as also on the beneficial effects of a vegetarian diet in lifestyle diseases (Segasothy and Phillips, 1999). Preventive psychiatry also needs a boost, for psychiatry and overall for medicine, for do we not know that psychological distress is at the root of the common medical problems that reach a primary care physician, and complicates the manifestations of other disorders at every level? The complex gene-environment interactions, particularly the interplay of vulnerability and resilience factors within a person's biography, need close scrutiny in individualized preventive psychiatry (Müller-Spahn, 2008), as does reduction of stigma in secondary prevention (Reeder and Pryor, 2008). The role of health psychology and the related field of behavioural medicine, which focus on the interplay among biological dispositions, behaviour, and social context, also needs enthusiastic backing as a means to promoting healthy behaviours and preventing health-damaging ones (Kaplan, 2009). Modern medicine must look closely at, and not pooh-pooh, the claims of alternative and complementary medicine, including yoga, meditation and spirituality, merely because one is put off by their tall-sounding claims and by some charlatans in the group. Rather, it must submit their claims to rigorous scientific and experimental scrutiny. Some recent studies of yoga in general (Lipton, 2008; Bijlani, 2008; Corliss, 2001; Oken et al., 2006; Brown and Gerbarg, 2005; Shapiro et al., 2007; Flegal et al., 2007), and of yoga in cancers (Culos-Reed et al., 2006; Bower et al., 2005; Smith and Pukall, 2009; Danhauer et al., 2009), are promising in this direction. Studies of meditation as an adjunct to modern medicine deserve special mention here.
Some promising leads are in works on longevity and health through yogic meditation (Bushell, 2009), and meditation in general (Bushell and Thiese, 2009); meditative practices for health (Ospina et al., 2007), and their clinical trials (Ospina et al., 2008); Sudarshan kriya in stress, anxiety and depression (Brown and Gerbarg, 2009); Transcendental Meditation and longevity (Alexander et al., 1989); meditation and the slowing of aging (Epel et al., 2009); mindfulness and distress (Jain et al., 2007); and mindfulness and well-being (Shapiro et al., 2008). Spirituality and its various scientific studies need close scrutiny too. Some areas of spirituality which have interested researchers in recent times are positive emotions and spirituality (Vaillant, 2008), its neurobiology (Mohandas, 2008), healing presence (McDonough-Means et al., 2004), spiritual encounter and complementary treatment (Foster, 2006), spirituality and psychiatry (Mohr, 2006), health and spirituality in critical care (Puchalski, 2004), spirituality and critical care holistic nursing (Carpenter et al., 2008), and the difficulty of talking about spirituality in a medical setting (Molzahn and Sheilds, 2008). To promote rigorous scientific scrutiny of claims in complementary and alternative medicine (CAM), laudable efforts are on by relatively new journals in the field like Evidence-Based Complementary and Alternative Medicine (an Oxford journal, since June 2004, http://ecam.oxfordjournals.org), BMC Complementary and Alternative Medicine (published by BioMed Central, since 2001, http://www.biomedcentral.com/bmccomplementalternmed), Alternative Therapies in Health and Medicine (since 1995, the first journal in the field of CAM to be indexed with NLM, http://www.alternative-therapies.com), and the Journal of Alternative and Complementary Medicine (since 1995, official journal of the International Society for Complementary Medicine Research, http://www.liebertpub.com/products/product.aspx?pid=26).
Some notable, relatively recent CAM work in the fields of anxiety and depression (van der Watt et al., 2008), depression in women (Manber et al., 2002), menopausal women (Kronenberg and Fugh-Berman, 2002), sleep disorders in the elderly (Gooneratne, 2008), osteoarthritis (Ernst, 2006), and asthma (Pretorius, 2009) should not go unnoticed. And while all this happens, the preventive and social medicine guys from mainstream medicine need to awaken and clean up their act: to stop being sidelined, and to point out how they matter. And of course, to re-emphasize that prevention is better than cure (Phakathi, 2009, where it is said in relation to child sexual abuse, but is applicable elsewhere too).

The third is to closely study and report on longevity and well-being studies. Well-being implies the presence of (1) positive emotions and the absence of negative ones; (2) mature character traits, including self-directedness, cooperativeness, and self-transcendence; (3) life satisfaction or quality of life; and (4) character strengths and virtues, such as hope, compassion, and courage, all of which are now measurable by scales (Cloninger, 2008). Some relatively recent literature focuses on telomeres and longevity (Haussmann and Mauck, 2008); sex differences in longevity (Franceschi et al., 2000); immunology and longevity (Candore et al., 2006); and psychosocial factors and longevity (Darviri et al., 2009). The paradigm of reorganizational healing (ROH) is an interesting recent development in the field of wellness, behaviour change, holistic practice and healing (Epstein et al., 2009). Anyone over 60 who has never visited a hospital is a precious commodity for research. Anyone who has none of the lifestyle diseases by 60 is similarly precious. All those who are 90 and physically and mentally active, even if diseased, form another very precious group. And centenarians are the most precious group of all to study. It is fascinating to see the breadth of studies on this topic. 
There has been burgeoning research on centenarians in the last decade. Some studies of promise and interest are in the areas of centenarians and healthy aging (Engberg et al., 2009); antioxidants and healthy centenarians (Mecocci et al., 2000); nonagenarians and centenarians in China (Ye et al., 2009); centenarians in Bama (Xiao et al., 1996); quality of life and longevity (Jeune, 2002); Danish centenarians, not necessarily healthy but still autonomous (Andersen-Ranberg et al., 2001), and not necessarily having dementia (Andersen-Ranberg et al., 2006); centenarians and their cognitive functions (Silver et al., 2001); dementia-free centenarians (Perls, 2004a, 2004b); centenarians being different (Perls, 2006); cognitive states of centenarians (Luczywek et al., 2007); successful aging in centenarians: myths and reality (Motta et al., 2005); physical activity and centenarians (Antonini et al., 2008); and centenarians aging gracefully (Willcox et al., 2007).

Read more here:
Modern Medicine: Towards Prevention, Cure, Well-being and ...

Pharmaceutical pharmacogenomics glossary & taxonomy

Pharmacogenomics is often referred to as a "revolution" or "the great new wave" in medicine - a future filled with promise not just for better, safer, and more affordable healthcare (i.e. affordable for both consumers and third-party payers) but also, according to some, greater economic returns for drug makers. While there are in fact a handful of drugs on the market with genotype-based prescribing requirements, such as Herceptin, this next great wave has been slow to arrive. Insight Pharma Reports, Pharmacogenomics: Delivering on the Promise, 2009


ADME: Abbreviation for Absorption, Distribution, Metabolism, Excretion. See also pharmacokinetics, drug disposition. [IUPAC Med Chem] Also referred to as ADME/Tox, ADME/Toxicology, or ADMET.

These key properties of pharmaceutical compounds are tested for as part of lead optimization activities. Related terms: DMPK, pharmacokinetics, predictive ADME, toxicogenomics.
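The "M" and "E" of ADME are commonly summarized by pharmacokinetic parameters such as the elimination rate constant, half-life, and AUC. A minimal illustrative sketch, assuming a one-compartment model with first-order elimination and purely hypothetical values for dose, volume of distribution, and rate constant (not drawn from any cited study):

```python
import math

def concentration(dose_mg, vd_l, k_per_h, t_h):
    """Plasma concentration (mg/L) t hours after an IV bolus,
    one-compartment model with first-order elimination."""
    c0 = dose_mg / vd_l                  # initial concentration C0
    return c0 * math.exp(-k_per_h * t_h)

# Hypothetical values, for illustration only
dose, vd, k = 100.0, 50.0, 0.1           # mg, L, 1/h
half_life = math.log(2) / k              # t1/2 = ln 2 / k  -> ~6.93 h
auc = (dose / vd) / k                    # AUC = C0 / k for an IV bolus
```

In lead optimization, estimates of parameters like these (alongside absorption and distribution measures) are what allow candidate compounds to be ranked and filtered.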

chronopharmacokinetics: Pharmacokinetic parameters are generally assumed to be invariant with the time of day, although circadian variation of drug metabolism and drug response is known. As proposed, chronopharmacokinetics considers the implications of the chronovariability of pharmacokinetic parameters. In order to investigate chronovariation in the rate of disappearance of a substance from the blood, the elimination curve must approximate a linear course until very low blood levels are attained. ... It is concluded that: 1) rhythmicity within elimination curves can only be determined by repetition of the experiment at different times of the diel period; 2) the expectation that a rate constant estimated at one time of the day may be valid for another part of the day carries with it an unknown risk. No pharmacokinetic analysis can be considered definitive unless chronopharmacokinetic variation of parameters is considered. FM Sturtevant, Chronopharmacokinetics of ethanol. I. Review of the literature and theoretical considerations, Chronobiologia 3(3): 237-262, Jul-Sept 1976
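The practical risk of assuming a time-invariant rate constant can be sketched numerically. Assuming, purely for illustration, a cosine-shaped circadian oscillation of a first-order elimination rate constant (mean value, amplitude, and peak time are all hypothetical, not taken from Sturtevant's data):

```python
import math

def k_elim(clock_h, k_mean=0.10, amplitude=0.3, peak_h=15):
    """Hypothetical elimination rate constant (1/h) varying over the
    diel period: a cosine oscillation about the mean, peaking at peak_h."""
    return k_mean * (1 + amplitude * math.cos(2 * math.pi * (clock_h - peak_h) / 24))

def half_life(clock_h):
    """Apparent half-life (h) for a dose given at the stated clock time."""
    return math.log(2) / k_elim(clock_h)

# A rate constant estimated at 15:00 misrepresents elimination at 03:00:
t_half_day = half_life(15)    # ~5.3 h (k at its circadian peak)
t_half_night = half_life(3)   # ~9.9 h (k at its circadian trough)
```

With these made-up numbers the apparent half-life nearly doubles between afternoon and pre-dawn dosing, which is exactly the "unknown risk" the quoted conclusion warns about when a single-time-of-day estimate is extrapolated.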

chronopharmacology: The science dealing with the phenomenon of rhythmicity in living organisms is called chronobiology. The branch dealing with the pharmacologic aspects of chronobiology is termed chronopharmacology, which may be subdivided into chronotherapy, chronopharmacokinetics and chronotoxicity. WA Ritschel, H Forusz, Chronopharmacology: a review of drugs studied, Methods Find Exp Clin Pharmacology 16(1): 57-75, Jan-Feb 1994. Related terms: pharmacogenomics

clinical pharmacology: The branch of pharmacology that deals directly with the effectiveness and safety of drugs in humans. MeSH, 1980

Over the past decades, the scope of clinical pharmacology within the pharmaceutical industry has widened considerably. Key growth has been in the area of translational science and exploratory medicine, where clinical pharmacologists are nowadays the mediator between basic research and establishment of clinical usefulness. This role has led to and is supported by the rapid developments in pharmacokinetic-pharmacodynamic modeling and simulation, a strong focus on biomarkers for early informed decision-making, and the advent of pharmacogenomics into safety and efficacy predictions and evaluations. The ultimate goal - safer, more efficacious drug prescription - is shared with that of today's drive for more personalized medicine. This article reviews the evolution of clinical pharmacology within the industry, the regulatory, clinical and societal drivers for this evolution, and the analogy with the establishment of personalized medicine in clinical practice. Clinical pharmacology, biomarkers and personalized medicine: education please. Koning P, Keirns J. Biomark Med. 2009 Dec;3(6):685-700. http://www.ncbi.nlm.nih.gov/pubmed/20477707
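The pharmacokinetic-pharmacodynamic (PK-PD) modeling mentioned above links drug exposure to response. A minimal sketch of the pharmacodynamic side, assuming a standard sigmoid Emax model with hypothetical parameter values:

```python
def emax_effect(conc, emax=100.0, ec50=2.0, hill=1.0):
    """Sigmoid Emax model: pharmacodynamic effect as a function of drug
    concentration; ec50 is the concentration giving half-maximal effect,
    and the Hill coefficient controls the steepness of the curve."""
    return emax * conc**hill / (ec50**hill + conc**hill)

# By construction, the effect at the EC50 is half of Emax:
# emax_effect(2.0) -> 50.0
```

In a full PK-PD simulation the concentration fed into a model like this would come from a pharmacokinetic model of the dosing regimen; coupling the two is what supports the early, model-informed decision-making described in the passage.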

clinical pharmacometabolomics: The segregation of patient populations using small molecule biomarkers in clinical trials, adverse drug reaction, and drug efficacy evaluation. Phenomenome Discoveries http://www.phenomenome.com/ Broader term: pharmacometabolomics

computational pharmacology: Our ultimate goal is transforming the process of drug design through the use of advanced computational techniques, particularly machine learning and knowledge-based approaches applied to high-throughput molecular biology data. We create novel algorithms for the analysis and interpretation of gene expression arrays, proteomics, metabonomics, and combinatorial chemistry. We also create tools for building, maintaining and applying knowledge-bases of molecular biology, and for knowledge-driven inference from multiple biological data types. Finally, we are developing and applying natural language processing techniques for information extraction from and management of the biomedical literature. The UCHSC Center for Computational Pharmacology, Univ. of Colorado Health Sciences Center, US http://compbio.ucdenver.edu/Hunter_lab/

See the article here:
Pharmaceutical pharmacogenomics glossary & taxonomy