Two people spent 48 hours in nonstop virtual reality – Engadget

Johnson has been challenging the rules of consumer VR from the beginning -- when virtual reality hit the mainstream last year, he spent 24 hours immersed in a mix of Rift, Vive and Gear VR experiences, setting an unofficial record for longest time in virtual reality. This year, he doubled that effort, recruiting Sarah Jones from Coventry University to join him in two days of extreme VR immersion -- breaking for only five minutes each hour to record vlogs and use the facilities.

The experiment was designed to question the arbitrary limits of VR-use time and help expose virtual reality to a wider consumer audience, but it wasn't a PR stunt for any specific headset manufacturer. "In fact, it was quite the opposite," he says. Every company he invited to participate in the project turned him down. "Mostly because they thought we'd die," he joked.

The fears of the likes of Oculus VR and HTC weren't completely unfounded. Johnson didn't just spend two days watching movies and playing games in virtual reality -- he wore VR goggles while driving go-karts, getting tattoos and walking across the wings of an airplane in-flight. "We wanted it to be as physical as possible," he says. "How extreme do you need to get with the physical additions to VR to make it feel real?" It sounds almost like a silly question, but when you're wearing a headset that partially blinds you to your environment, the influence of your mixed reality could have unexpected results.

Johnson and Jones' wind-walking adventure, for instance, was seen through a GearVR's pass-through camera -- but despite the physical exertion of fighting the wind on the wing of a plane, the experience wasn't completely real. "It still didn't feel real to us with what we were seeing," he says, "but the movement -- the buffeting and forcing yourself against the wind, they were the things that physically added the extra dimension." They just couldn't see well enough through the GearVR to get the full experience. Johnson thinks it might have been better if the headset had been displaying a VR dragon ride. "If everything you were seeing felt real, that would all be amazing."

Go-karting fared a little better -- the limited view of the GearVR's pass-through camera gave the drivers' vision a lower framerate and letterboxing but didn't seem to hamper the experience in the same way. "It's amazing that our brains just corrected and we got used to seeing that view," Johnson says. "We were going pretty quickly around the go-karting track, not hitting anything -- though with really reduced visibility."

These spectacle events are novel, but some of the more interesting results came from the smaller experiments. Johnson wore a VR headset to a tattoo parlor to see if the distraction of a false reality could dull the pain of being branded with a nerdy Apple tattoo in the real world. It did.

After briefly removing the headset to measure his pain threshold in the real world, Johnson spent the rest of his tattoo session playing Gunjack. "If the headset off was my 10 benchmark," he said, giving the pain a number, "it came down to like a six or a seven. It really did seem to have some effect." According to his Apple Watch, his heart rate dropped in VR too, averaging 74 beats per minute in the headset versus 103 without.

Living in VR drastically changed mundane everyday life, too. Having a face-to-face conversation with anybody meant logging into Facebook Spaces or another social-VR app, and sleeping was an altogether different kind of experience.

"When you wake up in VR, you just believe everything," he explains. Normally, virtual reality is a conscious choice, but if you wake up in a simulation, surrounded by dinosaurs and spaceships, you don't have time to question your reality as you regain consciousness. "It's kind of like waking up in an unfamiliar hotel room. You may not know where you are or what the timezone is, but you just believe you're in a hotel room. Why would you not?"

Despite breaking every VR health-and-safety guideline imaginable, Johnson and Jones walked away from the experiment relatively unscathed. They learned, at worst, that watching a 360-degree movie in a car is a nauseating experience -- but that doesn't mean their extended time in VR didn't have consequences.

Johnson admits his vision without glasses was slightly more blurry for a few days after the experience, but it was the physical pain that bothered him most. "The bridge of my nose got bruised," he said, "and Sarah's cheeks have kind of permanent red marks on them." If the health and safety warnings were right, it wasn't because of the risk of experiencing altered reality for long periods -- it was because the headsets were never designed to be worn indefinitely. "I think we're just physically glad to be out," he concluded. "If you had done anything for two straight days, you'd just be glad to be out."

Polygraph for pedophiles: how virtual reality is used to assess sex offenders – The Guardian

A virtual reality headset. Patients are shown computer-generated images of naked children and measured for signs of arousal. Photograph: Eric Risberg/AP

In a maximum security mental health facility in Montreal is a cave-like virtual reality vault that's used to show images of child sexual abuse to sex offenders. Patients sit inside the vault with devices placed around their penises to measure signs of arousal as they are shown computer-generated animations of naked children.

"We do develop pornography, but these images and animations are not used for the pleasure of the patient but to assess them," said Patrice Renaud, who heads up the project at the Institut Philippe-Pinel. "It's a bit like using a polygraph but with other measurement techniques."

The system, combined with other psychological assessments, is used to build up a profile of the individual's sexual preferences that can be used by the court to determine the risk they pose to society and by mental health professionals to determine treatment.

Not all child molesters are pedophiles (people who are sexually attracted to children) and not all pedophiles molest children, although the terms are often wrongly used interchangeably. In many cases, those who molest children are situational offenders, which means their offense is outside of their typical sexual preference or behavior.

"You can have someone who molested a child once but is not a pedophile as such -- they may have been intoxicated or have another mental health disorder," said Renaud, who also leads the Cyberpsychology Lab at the University of Quebec in Outaouais. "We need to know if they have a preferred mode of sexual expression."

Renaud uses virtual reality for two reasons: first, because it does not involve images of real people, but digital ones, and second, because the immersive nature of the medium allows researchers to measure something closer to natural behavior.

The vault itself is a small room with screens on all sides, on to which are projected animations of naked children and adults standing in natural settings. The research team can generate synthetic characters in a range of ages and shapes and can adapt features like facial expression, genital size, and eye and hair color to correspond with the patient's victims or sexual fantasies.

The patients sit on a stool inside the chamber wearing stereoscopic glasses, which create the three-dimensional effect on the surrounding walls. The glasses are fitted with eye-tracking technology to ensure they aren't trying to trick the system by avoiding looking at the critical content.

"These guys do not like going through this assessment," said Renaud, pointing out that the results can be shocking for the patient.

"It's not easy for someone to discover he is attracted to violently molesting a kid. He may have been using the internet for some masturbatory activities using non-violent images or videos of children -- which is not a good thing. But being tested in the lab and knowing he is also attracted to violence may be something that's very difficult to understand."

Renaud acknowledges that the use of penile plethysmography, which involves placing a cuff-shaped sensor around the genitals, is controversial. It's not only invasive but there is some disagreement in the scientific community about its reliability in measuring sexual deviancy. Consequently, Renaud's team is exploring a less invasive alternative: electroencephalography. This uses a cap that reads activity in the brain related to erectile response and sexual appetites.

Renaud believes the same cap could be used to track the person's empathy response to expressions of pain, fear or sadness in the virtual child victim. These inhibit the sexual response of non-deviant individuals.

Some deviant individuals can be attracted to signs of emotional distress.

"If we find that the guy is attracted to children and doesn't feel empathy for the fact that the child is in pain, that's good information for predicting behavior," he said.

Renaud and his team assess about 80 patients per year, including pedophiles, rapists and other sexual deviants assigned by the court for assessment.

The lab is under intense scrutiny from ethical committees and the police in Quebec. The computer-generated imagery must be encrypted and stored in a highly secure closed computer network inside the maximum security hospital so that the material doesn't fall into the wrong hands.

However, at a time when virtual reality pornography is on the rise, it's not unreasonable to assume that someone will -- if it hasn't already happened -- create virtual reality child abuse images designed explicitly to arouse rather than diagnose pedophiles.

Thanks to advances in computer graphics, such experiences could be created without ever harming or exploiting children. But even if no children are harmed in the making of such imagery, would society tolerate its creation? Could the content provide an outlet to some pedophiles who don't want to offend in real life? Or would a VR experience normalize behavior and act as a gateway to physical abuse?

Jamie Sivrais, of A Voice For The Innocent, which provides community support to survivors of rape and sexual abuse, said that people have a long history of blaming technology for human problems. He pointed to VHS tapes being used to create child abuse images and predators using internet chat rooms and smartphones to meet and abuse children.

"If the technology exists, there will be people who abuse it," he said.

"I think this is a human problem. The same criticisms of VR could have been (and have been) made about the internet and smartphones, and they are valid criticisms. So as we continue to push the envelope of technology, let's also continue to expand resources for people who are hurt by abuse."

Ethan Edwards, the co-founder of Virtuous Pedophiles, an online support group for people attracted to children but who do not want to molest them, argues virtual reality could help prevent real-life offences.

Edwards believes that, provided the imagery of children is computer-generated and doesn't involve any real victims, it should be legal, as should life-size child sex dolls and erotic stories about children.

"I have a strong civil liberties streak and feel such things should be legal in the absence of very strong evidence they cause harm," he said.

Nick Devin, a pedophile and co-founder of the site, called for thorough scientific research. "The answer may be different for different people. For me, doing these things wouldn't increase or reduce the risk to kids: I'm not going to molest a kid whether I fantasize or not."

It's a view echoed by Canadian forensic psychologist Michael Seto. He believes that VR could provide a safer outlet for individuals with well-developed self-control.

But for others, such as those who are more impulsive, prone to risk-taking, or indifferent about the effects of their actions on others, access to virtual child pornography could have negative effects and perhaps increase their desire for contact with real children.

It's a risk that concerns Renaud, who describes VR child abuse imagery and child-shaped sex robots as "a very bad idea."

"Only a very small portion of pedophiles could use that kind of sexual proxy without having the urge to go outside and get the real stuff," he said.

It's not just child sex abuse experiences that concern Renaud, but also violent first-person sexual experiences, including rape, and even entirely new deviances, like having sex with monsters with three penises and blue skin.

"We don't know what effect these sexual experiences will have on the behavior of children and adults in the future," he said.

Virtual reality is being used to show naked images to paedophiles – Metro

Computer-generated images are used (Picture: Shutterstock)

Suspected paedophiles at a maximum security mental health facility are shown virtual reality images of child abuse and pornography.

The controversial practice is used to determine the individual's arousal when viewing the material, and researchers claim it can predict whether they are threats to the public.

People admitted to the Institut Philippe-Pinel hospital in Montreal, Canada, sit with devices placed on their penises to measure arousal, and wear glasses that simulate virtual reality.

Patients -- rapists, paedophiles and sexual deviants -- are shown computer-generated images of naked children, and an eye-tracking device means they cannot look away.

Patrice Renaud, who leads the project, told The Guardian: "We do develop pornography, but these images and animations are not used for the pleasure of the patient but to assess them."

The project determines the patient's sexual preference, which is then used by a court to rule whether they pose a risk to the public or not.

Mr Renaud said: "You can have someone who molested a child once but is not a paedophile as such -- they may have been intoxicated or have another mental health disorder."

"If we find that the guy is attracted to children and doesn't feel empathy for the fact that the child is in pain, that's good information for predicting behaviour."

The material created for the experiment must be encrypted and stored in a secure computer network, minimising the chance it could spread outside the hospital.

It is also under intense scrutiny from ethical committees and the police.

Virtual reality helps Honeygrow worker-bees acclimate – Philly.com

Some worker-training programs take days to imbue new employees with corporate culture and best practices.

But after just 15 minutes under the spell of a virtual-reality headset and spiffy VR program created by Northern Liberties experiential video shop Klip Collective, new hires at the Philadelphia-based Honeygrow fast-casual dining chain are already feeling the company spirit.

They're connecting with its HG Engine best-practices philosophy. Learning food-prep techniques. Practically tasting the dishes. So they're instantly energized, eager to dive into the work themselves, company executives said.

"Our goal was to provide a consistent yet unique on-boarding and initial training experience for all employees, regardless of geographic location or who the individual performing the training would be," Justin Rosenberg, Honeygrow's founder and CEO, said Wednesday. "Klip has really impressed us with taking our ideas and exceeding our expectations by making them a reality."

Back in the early, local-only days of his salad and stir-fry emporiums -- the first location, on 16th Street between Sansom and Chestnut, opened exactly five years ago this Thursday -- Rosenberg could afford to be very hands-on. He would personally welcome all new employees and immerse them in the ways of Honeygrow, an upscale fast-food alternative obsessed with personalized orders, fresh ingredients, fast turnaround, and hospitable treatment of guests.

But all that's getting harder to do as the privately owned chain expands. Seventeen Honeygrow locations now stretch south to Washington, D.C., and north to Brooklyn. More are coming to Boston, Pittsburgh, Chicago and Manhattan -- "the latter our first, smaller-footprint Minigrow," said Rosenberg. "By the end of the year, we'll be up to 25 locations."

Enter the VR training solution, as executed by Klip Collective. It's an idea (just dubbed "brilliant" by Entrepreneur magazine) that first started brewing when Rosenberg got a Google Cardboard "with my Sunday Times and I thought, 'What can I do with this?'" The answer: a VR experience that allows Rosenberg and team to warm up new trainees virtually, with much better focus than reading a written manual would have, and with more consistency than a local manager would, if having a bad day. The VR experience also is being used for recruitment, to interest potential job applicants. "And it impresses our guests, when they walk in and see employees doing it."

Said Klip Collective co-founder Ricardo Rivera: "When a new hire puts on the VR headset and presses the start button on the remote, Justin materializes in our virtual-3D Honeygrow restaurant to share welcoming remarks and philosophy -- how Honeygrow is all about thinking differently, bringing people together over high-quality, wholesome, simple foods."

"Then we offer an interactive tour of a Honeygrow that gives a good feel for how and why things are done, with a casual video game at the end that's meant to be both fun and instructive," Rivera said.

The chain is no stranger to integrating tech into the operation. As new hires (virtually) discover, Honeygrow locations also feature a custom variation on the classic split-flap railroad-station sign that communicates the news when customer orders are done.

Restaurant touch screens take a page from the Wawa customer ordering system, though Honeygrow dresses its models with special screen savers -- still images and videos of neighborhood locations that are "a love letter to every market we go into," said Jen Dennis, chief brand officer.

In that game component of the VR experience, participants learn by doing how food is best stored on refrigerator shelves for health safety (fish on top, beef below, then pork and chicken on the bottom shelves).

"We're finding this gamification really helps people grasp and retain information," said Dennis.

So more will be built into the next phase, Honeygrow VR 2.0, said Kevin Ritchie, a post-production wizard at Klip Collective's sister company, Monogram. "Given the ever-improving state of the technology, anything you do in VR is a work-in-progress. When we first got started on the project, we thought it would run on Samsung Galaxy smartphones and Gear VR glasses. Then the Google Daydream-ready phones and companion goggles came out and were so much better in terms of screen resolution and processing power. The new Google Pixel phones don't overheat, as was happening with the Galaxys."

How about mixing VR with AR, augmented reality, which would allow trainees to do hands-on food prep with a superimposed timer and graphic arrows pointing them in the right directions? A nice idea, but the tech is not there yet.

For the sake of future-proofing, Klip Collective lights its sets (in this case, the Honeygrow restaurant in Cherry Hill) like a Hollywood film production, shoots VR with an ultra-high definition $55,000 Nokia VR camera, and processes the footage on a server system so powerful it could run an automated car factory.

"If you want to convince VR viewers they're really in the moment, you can't afford to cut corners," said Ritchie.

Published: June 7, 2017 4:29 PM EDT | Updated: June 7, 2017 4:30 PM EDT

Play piano with this virtual reality glove – University of California

Engineers at UC San Diego are using soft robotics technology to make light, flexible gloves that allow users to feel tactile feedback when they interact with virtual reality environments. The researchers used the gloves to realistically simulate the tactile feeling of playing a virtual piano keyboard.

Engineers recently presented their research, which is still at the prototype stage, at the Electronic Imaging, Engineering Reality for Virtual Reality conference in Burlingame, Calif.

Currently, VR user interfaces consist of remote-like devices that vibrate when a user touches a virtual surface or object. "They're not realistic," said Jurgen Schulze, a researcher at the Qualcomm Institute at UC San Diego and one of the paper's senior authors. "You can't touch anything, or feel resistance when you're pushing a button. By contrast, we are trying to make the user feel like they're in the actual environment from a tactile point of view."

Other research teams and industry have worked on gloves as VR interfaces. But these are bulky and made from heavy materials, such as metal. The glove the engineers developed has a soft exoskeleton equipped with soft robotic muscles that make it much lighter and easier to use.

"This is a first prototype, but it is surprisingly effective," said Michael Tolley, a mechanical engineering professor at the Jacobs School of Engineering at UC San Diego and also a senior author.

The system involves three main components: a Leap Motion sensor that detects the position and movement of the user's hands; a custom fluidic control board that controls the glove's movements; and soft robotic components in the glove that individually inflate or deflate to mimic the forces that the user would encounter in the VR environment. One key element in the glove's design is a type of soft robotic component called a McKibben muscle -- essentially latex chambers covered with braided fibers. The muscles respond like springs to apply force when the user moves their fingers; the control board inflates and deflates them. The system interacts with a computer that displays a virtual piano keyboard with a river and trees in the background.
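The sense-then-inflate loop described above can be sketched in a few lines. This is purely illustrative Python -- the function names, pressure constants, and linear mapping are assumptions for the sake of the example, not details of the UC San Diego system:

```python
# Illustrative haptic-glove control step: a hand-tracking sensor reports
# fingertip heights, and each soft muscle is inflated in proportion to
# how far the finger has pressed into a virtual piano key.
# All names and constants here are hypothetical.

VIRTUAL_KEY_TOP = 0.0      # height (cm) at which a virtual key's surface sits
MAX_TRAVEL_CM = 1.0        # full key travel before it bottoms out
MAX_PRESSURE_KPA = 40.0    # muscle pressure at full key depression

def target_pressure(fingertip_height_cm: float) -> float:
    """Map fingertip penetration into the key to a muscle pressure."""
    penetration = VIRTUAL_KEY_TOP - fingertip_height_cm
    if penetration <= 0:
        return 0.0  # no contact: deflate completely
    # Clamp to full travel, then scale linearly into the pressure range.
    fraction = min(penetration / MAX_TRAVEL_CM, 1.0)
    return fraction * MAX_PRESSURE_KPA

def control_step(fingertip_heights):
    """One loop iteration: read sensor values, command one pressure per finger."""
    return [target_pressure(h) for h in fingertip_heights]

pressures = control_step([0.5, -0.2, -1.5])  # cm relative to the key surface
print(pressures)  # finger 1 hovering, finger 2 pressing lightly, finger 3 fully down
```

A real controller would run this mapping continuously against the fluidic board's valves; the point is only that the glove's force feedback reduces to a sensor-to-pressure function per finger.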

Researchers 3-D-printed a mold to make the glove's soft exoskeleton. This will make the devices easier to manufacture and suitable for mass production, they said. Researchers used silicone rubber for the exoskeleton, with Velcro straps embedded at the joints.

Engineers conducted an informal pilot study of 15 users, including two VR interface experts. All tried the demo, which allowed them to play the piano in VR. They all agreed that the gloves increased the immersive experience, describing it as "mesmerizing" and "amazing."

The engineers are working on making the glove cheaper, less bulky and more portable. They also would like to bypass the Leap Motion device altogether to make the system more compact.

"Our final goal is to create a device that provides a richer experience in VR," Tolley said. "But you could imagine it being used for surgery and video games, among other applications."

Tolley is a faculty member in the Contextual Robotics Institute at UC San Diego. Schulze is an adjunct professor in computer science, where he teaches courses on VR.

Study: This virtual reality simulation could reduce fear of death – TNW

If you've ever played a virtual reality game, you're probably used to dying -- at least digitally. But not like this.

Scientists are using VR headsets to create out-of-body experiences that may be able to reduce the fear of death, according to a recently published study. According to Mel Slater, one of the study's authors and a research professor at the University of Barcelona:

My lab has been working for many years on the influence of changing someones body in virtual reality on their attitudes, perceptions, behavior and cognition. For example, placing White people in a Black virtual body reduces their implicit racial bias, while putting adults into a child body changes their perceptions and self-identification.

Here we wanted to see what the effects were of establishing a strong feeling of ownership over a virtual body, and then moving people out of it, so simulating an out-of-body experience. According to the literature, out-of-body experiences are typically associated with changes of attitudes about death, so we wanted to see if this would happen with a virtual out-of-body experience.

The study, published in PLOS One, uses an Oculus Rift headset and a virtual reality simulation known as the "full body ownership illusion." In it, researchers created a virtual human body designed to be the participant's own. Once the participant assimilated to the illusion, the view shifted from first-person to third-person, creating an experience similar to how some describe out-of-body incidents.

So far, the study has tried the simulation on only 32 women: 16 who experienced the out-of-body incident and a control group of 16 who didn't experience the phenomenon.

After the study, participants in the main group reported lower anxiety about death than the control group, although researchers admit the study is still in the preliminary stages. Limited as it may be, it should surprise no one that a virtual reality simulation could help overcome fears -- even the fear of death. It is, after all, being studied in multiple other scientific disciplines as a way to do just that.

A Virtual Out-of-Body Experience Reduces Fear of Death on PLOS

How AI And Machine Learning Are Helping Drive The GE Digital Transformation – Forbes

This is the story of how GE has accomplished this digital transformation by leveraging AI and machine learning fueled by the power of Big Data. The GE transformation is an effort that is still in progress, but ...

AI Plant and Animal Identification Helps Us All Be Citizen Scientists – Smithsonian

Screenshots from the iNaturalist app, which uses "deep learning" to automatically identify what bug -- or fish, bird, or mammal -- you might be looking at.

On a recent trip to the local botanical gardens, I noticed a tall, striking purple flower I'd never noticed before. I tried to Google it, but I didn't know quite what to ask. "Purple flower" brought me pictures of narcissus and freesia, orchids and primrose, gladiolus and morning glory. None of them were the flower I'd seen.

But thanks to artificial intelligence, curious amateur naturalists like me now have better ways to identify the nature around us. Several new sites and apps use AI technology to put names to photographs.

iNaturalist.org is one of these sites. Founded in 2008, it has until now been solely a crowdsourcing site. Users post a picture of a plant or animal, and a community of scientists and naturalists will identify it. Its mission is to connect experts and amateur "citizen scientists," getting people excited about plants and wildlife while using the data gathered to potentially help professional scientists monitor changes in biodiversity or even discover new species.

The crowdsourced model generally works well, says Scott Loarie, iNaturalist's co-director. But there are some limitations. First, it can be much harder to get an identification of your photograph depending on where you live. In California, where Loarie is based, he can get an identification within an hour. That's because a large number of the experts who frequent iNaturalist are based on the West Coast. But someone in, say, rural Thailand may have to wait much longer to receive an ID: The average amount of time it takes to get an identification is 18 days. Another issue: As the site has become more popular, the balance of observers (people posting pictures) to identifiers (people telling you what the pictures are) has become skewed, with far more observers than identifiers. This threatens to overwhelm the volunteer experts.

This month, iNaturalist plans to launch an app that uses AI to identify plants and animals down to the species level. The app takes advantage of so-called deep learning, using artificial neural networks that allow computers to learn as humans do, so their capabilities can advance over time.

"We're hopeful this will engage a whole new group of citizen scientists," Loarie says.

The app is trained by being fed labeled images from iNaturalist's massive database of "research grade" observations -- observations that have been verified by the site's community of experts. Once the model has been trained on enough labeled images, it begins to be able to identify unlabeled images. Currently iNaturalist is able to add a new species to the model every 1.7 hours. The more images uploaded by users and identified by experts, the better.

"The more stuff we get, the more trained up the model will be," Loarie says.

The iNaturalist team wants the model to always be accurate, even if that means not being as precise as possible. Right now the model tries to give a confident response about the animal's genus, then a more cautious response about the species, offering the top 10 possibilities. It currently is correct about the genus 86 percent of the time, and gives the species in its top 10 results 77 percent of the time. These numbers should improve as the model continues to be trained.
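The two-tier answer described above -- a confident genus plus a cautious top-10 species list -- can be illustrated with a small sketch. The function, threshold, and toy probabilities below are hypothetical, not iNaturalist's actual code:

```python
# Illustrative two-tier prediction: sum per-species probabilities into
# genus scores, report the genus only when it clears a confidence
# threshold, and always return the top-k species as cautious suggestions.
from collections import defaultdict

def two_tier_prediction(species_probs, genus_of, genus_threshold=0.8, top_k=10):
    genus_scores = defaultdict(float)
    for species, p in species_probs.items():
        genus_scores[genus_of[species]] += p
    best_genus, genus_p = max(genus_scores.items(), key=lambda kv: kv[1])
    top_species = sorted(species_probs, key=species_probs.get, reverse=True)[:top_k]
    confident_genus = best_genus if genus_p >= genus_threshold else None
    return confident_genus, top_species

# Toy example: three seabird species across two genera.
probs = {"Atlantic puffin": 0.55, "Horned puffin": 0.30, "Razorbill": 0.15}
genus = {"Atlantic puffin": "Fratercula", "Horned puffin": "Fratercula",
         "Razorbill": "Alca"}
print(two_tier_prediction(probs, genus, top_k=3))
# → ('Fratercula', ['Atlantic puffin', 'Horned puffin', 'Razorbill'])
```

This is one plausible way to trade precision for accuracy: even when no single species is certain, the probability mass pooled at the genus level can still support a confident coarse answer.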

Playing around with a demo version, I entered a picture of a puffin perched on a rock. "We're pretty sure this is in the genus Puffins," it said, giving the correct species -- Atlantic puffin -- as the top suggested result. Then I entered a picture of an African clawed frog. "We're pretty sure this is in the genus Western spadefoot toads," it told me, offering African clawed frog as among its top 10 results.

The AI was not confident enough to make a recommendation about a picture of my son, but suggested he might be a northern leopard frog, a garden snail or a gopher snake, among other, non-human creatures. As all of these are spotted, I realized the computer vision was seeing the polka-dot background of my son's highchair and misidentifying it as part of the specimen. So I cropped the picture until only his face was visible and pressed classify. "We're pretty sure this is in the suborder Lizards," the AI responded. Either my baby looks like a lizard or -- the real answer, I presume -- this shows that the model only recognizes what it's been fed. And no one is feeding it pictures of humans, for obvious reasons.

iNaturalist hopes the app will take pressure off its community of experts and allow a larger community of observers to participate, such as groups of schoolchildren. It could also allow for camera trapping -- sending in streams of images from a camera trap, which takes a picture when it's triggered by motion. iNaturalist has discouraged camera trapping, as it floods the site with huge amounts of images that may or may not actually need expert identification (some images will be empty, while others will catch common animals, like squirrels, that the camera's owner could easily identify himself or herself). But with the AI, that wouldn't be a problem. iNaturalist also hopes the new technology will engage a new community of users, including people who might have an interest in nature but wouldn't be willing to wait several days for an identification under the crowdsourced model.

Quick species identification could also be useful in other situations, such as law enforcement.

"Let's say TSA workers open a suitcase and someone's got geckos," says Loarie. "They need to know whether to arrest someone or not."

In this case, the AI could tell the TSA agents what type of gecko they were looking at, which could aid in an investigation.

iNaturalist is not the only site taking advantage of computer vision to engage citizen scientists. Cornell's Merlin Bird ID app uses AI to identify more than 750 North American birds; you just have to answer a few simple questions first, including the size and color of the bird you saw. Pl@ntNet does the same for plants, after you tell it what part of the plant it's looking at (flower, fruit, etc.).

This is all part of a larger wave of interest in using AI to identify images. There are AI programs that can identify objects from drawings (even bad ones). AIs can look at paintings and identify artists and genres. Many experts think computer vision will play a huge role in healthcare, making it easier to identify, for example, skin cancers. Car manufacturers use computer vision to teach cars to identify and avoid hitting pedestrians. A plot point of a recent episode of the comedy Silicon Valley dealt with a computer vision app for identifying food. But since its creator only trained it on hot dogs (training a neural network requires countless hours of human labor), it could only distinguish between hot dogs and "not hot dogs."

This question of human labor is important. Massive databases of correctly labeled images are crucial to training AIs and can be hard to come by. iNaturalist, as a longtime crowdsourced site, already has exactly this kind of database, which is why its model has been advancing so quickly, Loarie says. Other sites and apps have to find their data elsewhere, often from academic images.

"It's still early days, but I guarantee in the next year you're going to see a proliferation of these kinds of apps," Loarie says.

Excerpt from:

AI Plant and Animal Identification Helps Us All Be Citizen Scientists - Smithsonian

How Apple reinvigorated its AI aspirations in under a year – Engadget

Well, technically, it's been three years of R&D, but Apple had a bit of trouble getting out of its own way for the first two. See, back in 2011, when Apple released the first version of Siri, the tech world promptly lost its mind. "Siri is as revolutionary as the Mac," the Harvard Business Review crowed, though CNN found that many people feared the company had unwittingly invented Skynet v1.0. But for as revolutionary as Siri appeared to be at first, its luster quickly wore off once the general public got ahold of it and recognized the system's numerous shortcomings.

Fast forward to 2014. Apple is at the end of its rope with Siri's listening and comprehension issues. The company realizes that minor tweaks to Siri's processes can't fix its underlying problems and that a full reboot is required. So that's exactly what it did. The original Siri relied on hidden Markov models -- a statistical tool used to model time-series data (essentially reconstructing the sequence of states in a system based only on the output data) -- to recognize temporal patterns in handwriting and speech recognition.

The company replaced and supplemented these models with a variety of machine learning techniques, including deep neural networks and "long short-term memory" networks (LSTMs). These neural networks are effectively more generalized versions of the Markov model. However, because they possess memory and can track context -- as opposed to simply learning patterns, as Markov models do -- they're better equipped to understand nuances like grammar and punctuation and to return a result closer to what the user really intended.
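The hidden Markov model idea described above -- recovering the most likely sequence of hidden states from the output data alone -- is classically implemented with the Viterbi algorithm. Here is a minimal sketch with made-up states and probabilities; nothing in it reflects Apple's actual models.

```python
# Toy Viterbi decoder: recover the most likely hidden state sequence
# from observations alone. All probabilities are invented for illustration.

def viterbi(observations, states, start_p, trans_p, emit_p):
    """Return the most probable hidden state path for the observations."""
    # V[t][s] = best probability of any path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    path = {s: [s] for s in states}

    for obs in observations[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Pick the best predecessor state for s at this timestep.
            prob, prev = max(
                (V[-2][p] * trans_p[p][s] * emit_p[s][obs], p) for p in states
            )
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path

    best = max(states, key=lambda s: V[-1][s])
    return path[best]

# Hypothetical two-state model: is the speaker producing a vowel or a consonant?
states = ("vowel", "consonant")
start_p = {"vowel": 0.5, "consonant": 0.5}
trans_p = {
    "vowel": {"vowel": 0.3, "consonant": 0.7},
    "consonant": {"vowel": 0.6, "consonant": 0.4},
}
emit_p = {
    "vowel": {"low_freq": 0.8, "high_freq": 0.2},
    "consonant": {"low_freq": 0.3, "high_freq": 0.7},
}

print(viterbi(["low_freq", "high_freq", "low_freq"], states, start_p, trans_p, emit_p))
# → ['vowel', 'consonant', 'vowel']
```

The contrast with an LSTM is that the Markov model's "memory" is only the single previous state in this table, whereas an LSTM carries a learned context vector across the whole sequence.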

The new system quickly spread beyond Siri. As Steven Levy points out, "You see it when the phone identifies a caller who isn't in your contact list (but who did email you recently). Or when you swipe on your screen to get a shortlist of the apps that you are most likely to open next. Or when you get a reminder of an appointment that you never got around to putting into your calendar."

By the WWDC 2016 keynote, Apple had made some solid advancements in its AI research. "We can tell the difference between the Orioles who are playing in the playoffs and the children who are playing in the park, automatically," Apple senior vice president Craig Federighi told the assembled crowd.

Also during WWDC 2016, the company released its Basic Neural Network Subroutines (BNNS) API, an array of functions enabling third-party developers to construct neural networks for use on devices across the Apple ecosystem.

However, Apple had yet to catch up with the likes of Google and Amazon, both of whom had either already released an AI-powered smart home companion (looking at you, Alexa) or were just about to (Home would be released that November). This is due in part to the fact that Apple faced severe difficulties recruiting and retaining top AI engineering talent because it steadfastly refused to allow its researchers to publish their findings. That's not so surprising coming from a company so famous for its tight-lipped R&D efforts that it once sued a news outlet because a drunk engineer left a prototype phone in a Palo Alto bar.

"Apple is off the scale in terms of secrecy," Richard Zemel, a professor in the computer science department at the University of Toronto, told Bloomberg in 2015. "They're completely out of the loop." The level of secrecy was so severe that new hires to the AI teams were reportedly directed not to announce their new positions on social media.

"There's no way they can just observe and not be part of the community and take advantage of what is going on," Yoshua Bengio, a professor of computer science at the University of Montreal, told Bloomberg. "I believe if they don't change their attitude, they will stay behind."

Luckily for Apple, those attitudes did change, and quickly. After buying Seattle-based machine learning AI startup Turi for around $200 million in August 2016, Apple hired AI expert Russ Salakhutdinov away from Carnegie Mellon University that October. It was his influence that finally pushed Apple's AI out of the shadows and into the light of peer review.

In December 2016, while speaking at the Neural Information Processing Systems conference in Barcelona, Salakhutdinov stunned his audience when he announced that Apple would begin publishing its work, going so far as to display an overhead slide reading, "Can we publish? Yes. Do we engage with academia? Yes."

Later that month, Apple made good on Salakhutdinov's promise, publishing "Learning from Simulated and Unsupervised Images through Adversarial Training". The paper looked at the shortcomings of using simulated objects to train machine vision systems. It showed that while simulated images are easier to train on than photographs, the results don't transfer particularly well to the real world. Apple's solution employed a deep-learning system, known as a Generative Adversarial Network (GAN), that pitted a pair of neural networks against one another in a race to generate images close enough to photorealistic to fool a third "discriminator" network. This way, researchers can exploit the ease of training networks on simulated images without the drop in performance once those systems are out of the lab.

In January 2017, Apple further signaled its seriousness by joining Amazon, Facebook, Google, IBM and Microsoft in the Partnership on AI. This industry group seeks to establish guidelines on ethics, transparency and privacy in the field of AI research while promoting research and cooperation among its members. The following month, Apple drastically expanded its Seattle AI offices, renting a full two floors at Two Union Square and hiring more staff.

"We're trying to find the best people who are excited about AI and machine learning -- excited about research and thinking long term, but also bringing those ideas into products that impact and delight our customers," Apple's director of machine learning Carlos Guestrin told GeekWire.

By March 2017, Apple had hit its stride. Speaking at the EmTech Digital conference in San Francisco, Salakhutdinov laid out the state of AI research, discussing topics ranging from using "attention mechanisms" to better describe the content of photographs to combining curated knowledge sources like Freebase and WordNet with deep-learning algorithms to make AI smarter and more efficient. "How can we incorporate all that prior knowledge into deep-learning?" Salakhutdinov said. "That's a big challenge."

That challenge could soon be a bit easier once Apple finishes developing the Neural Engine chip that it announced this May. Unlike Google devices, which shunt the heavy computational lifting required by AI processes up to the cloud where it is processed on the company's Tensor Processing Units, Apple devices have traditionally split that load between the onboard CPU and GPU.

This Neural Engine will instead handle AI processes as a dedicated standalone component, freeing up valuable processing power for the other two chips. This would not only save battery life by diverting load from the power-hungry GPU, it would also boost the device's onboard AR capabilities and help further advance Siri's intelligence -- potentially exceeding the capabilities of Google's Assistant and Amazon's Alexa.

But even without the added power that a dedicated AI chip can provide, Apple's recent advancements in the field have been impressive to say the least. In the span between two WWDCs, the company managed to release a neural network API, drastically expand its research efforts, poach one of the country's top minds in AI from one of the nation's foremost universities, reverse two years of backwards policy, join the industry's working group as a charter member and finally -- finally -- deliver a Siri assistant that's smarter than a box of rocks. Next year's WWDC is sure to be even more wild.

Image: AFP/Getty (Federighi on stage / network of photos)

Continue reading here:

How Apple reinvigorated its AI aspirations in under a year - Engadget

How AI is transforming customer service – TNW

There will always be a need for a real human presence in customer service, but with the rise of AI comes the glaring reality that many things can be accomplished through the implementation of an AI-powered customer service virtual assistant. As our technology and understanding of machine learning grow, so do the possibilities for services that could benefit from a knowledgeable chatbot. What does this mean for the consumer, and how will this affect the job market in the years to come?

How many times have you been placed on hold, on the phone or through a live chat option, when all you wanted to do was ask a simple question about your account? Now, how many times has that wait taken longer than the simple question you had? While chatbots may never completely replace the human customer service agent, they most certainly are already helping answer simple questions and pointing users in the right direction when needed.

Credit: Unsplash

As virtual assistants become more knowledgeable and easier to implement, more businesses will begin to use them to assist with more advanced questions a customer or interested party may have, meaning (hopefully) quicker answers for the consumer. But just how much of customer service will be taken over by virtual assistants? According to one report from Gartner, it is believed that by the year 2020, 85% of customer relationships will be handled through AI-powered services.

That's a pretty staggering number, but I talked with Diego Ventura of NoHold, a company that provides virtual agents for enterprise-level businesses, and he believes those numbers need to be looked at a bit more closely.

"The statement could end up being true, but with two important provisos: for one, we must consider all aspects of AI, not just Virtual Assistants, and two, we apply the statement to specific sectors and verticals.

"AI is a vast field that includes multiple disciplines like Predictive Analytics, suggestion engines, etc. In this sense you have to just think about companies like Amazon to see how most customer interactions are already handled automatically through some form of AI. Having said this, there are certain sectors of the industry that will always require, at least for the foreseeable future, human intervention. Think of medical, for example, or any company that provides very high-end B2B products or services."

Basically, what Diego is saying is that many aspects of customer service are already handled by AI without our even realizing it, so we can't read that 85% figure as meaning 85% of customer service jobs will be replaced by AI. Still, even if we're not talking about 85% of the jobs involved in customer service, surely some jobs will be completely eliminated by the use of chatbots. So where does that leave us?

It's unfair to look at virtual assistants as the enemy that is taking our precious jobs. Throughout history, technology has made certain jobs obsolete as smarter, more efficient methods are implemented. Look at our manufacturing sector and it will not take long to see that many of the jobs our grandparents and great-grandparents had have been completely eliminated through advancements in machinery and other technologies; the rise of AI is simply another example of us growing as humans.

Credit: Unsplash

While it may take some jobs away, it also opens up the possibility for completely new jobs that have not existed before, chatbot technicians and specialists being but two examples. Couple that with the fact that many of these virtual assistants actually work with the customer service reps to make their jobs easier, and we start seeing that virtual assistant implementation is not as scary as it might seem. Ventura seems to agree:

"I see Virtual Assistants, VAs, for one as a way to primarily improve the customer experience and, two, augment the capabilities of existing employees rather than simply taking their jobs. VAs help users find information more easily. Most VA users are people who were going to the Web to self-serve anyway; we are just making it easier for them to find what they are looking for and, yes, prevent escalations to the call center.

"VAs are also used at the call center to help agents be more successful in answering questions, therefore augmenting their capabilities. Having said all this, there are jobs that will be replaced by automation, but I think it is just part of progress, and hopefully people will see it as an opportunity to find more rewarding opportunities."

I think back to my time at a startup that was located in an old Masonic temple. We were on the 6th floor, and every morning the lobby clerk, James, would put down the crumpled paper he was reading, hobble out from behind his small desk in the middle of the lobby and take us up to our floor on one of those old elevators that required someone to manually push and pull a lever to deliver guests to a certain floor. James was a professional at it; he reminded me of an airplane pilot the way he twisted certain knobs and manipulated the lever to get us to our destination, missing our floor only once in the entire two years I was there.

While James might have been an expert at his craft, technology has all but eliminated that position. When was the last time you had someone manually cart you to a floor in a hotel? When was the last time you thought about it? Were you mad at technology for taking away someone's job?

As humans, we advance; that's what we do. And the rise of AI in the customer service field is just another step in our advancement and should be looked at as such. There might be some growing pains during the process, but we shouldn't let that stop us from growing and extending our knowledge. When we look at the benefits these chatbots can provide to the consumer and the business, it becomes clear that we are moving in the right direction.

Read next: How Marketing Will Change in 2017

See the original post here:

How AI is transforming customer service - TNW

The Inaugural AI for Good Global Summit Is a Milestone but Must Focus More on Risks – Council on Foreign Relations (blog)

The following is a guest post by Kyle Evanoff, research associate for International Economics and U.S. Foreign Policy.

Today through Friday, artificial intelligence (AI) experts are meeting with international leaders in Geneva, Switzerland, for the inaugural AI for Good Global Summit. Organized by the International Telecommunication Union (ITU), a UN agency that specializes in information and communication technologies, and the XPRIZE Foundation, a Silicon Valley nonprofit that awards competitive prizes for solutions addressing some of the world's most difficult problems, the gathering will discuss AI-related issues and promote international dialogue and cooperation on AI innovation.

The summit comes at a critical time and should help increase policymakers' awareness of the possibilities and challenges associated with AI. The downside is that it may encourage undue optimism by giving short shrift to the significant risks that AI poses to international security.

Although many policymakers and citizens are unaware of it, narrow forms of AI are already here. Software programs have long been able to defeat the world's best chess players, and newer ones are succeeding at less-defined tasks, such as composing music, writing news articles, and diagnosing medical conditions. The rate of progress is surprising even tech leaders, and future developments could bring massive increases in economic growth and human well-being, as well as cause widespread socioeconomic upheaval.

This week's forum provides a much-needed opportunity to discuss how AI should be governed at the global level, a topic that has garnered little attention from multilateral institutions like the United Nations. The draft program promises to educate policymakers on multiple AI issues, with sessions ranging from moonshots to ethics, sustainable living, and poverty reduction, among other topics. Participants will include prominent individuals drawn from multilateral institutions, nongovernmental organizations (NGOs), the private sector, and academia.

This inclusivity is typical of the complex governance models that increasingly define and shape global policymaking, with internet governance being a case in point. Increasingly, NGOs, public-private partnerships, industry codes of conduct, and other flexible arrangements have assumed many of the global governance functions once reserved for intergovernmental organizations. The new partnership between ITU and the XPRIZE Foundation suggests that global governance of AI, although in its infancy, is poised to follow this same model.

For all its strengths, however, this multistakeholder approach could afford private sector organizers excessive agenda-setting power. The XPRIZE Foundation, founded by outspoken techno-optimist Peter Diamandis, promotes technological innovation as a means of creating a more abundant future. The summit's mission and agenda hew to this attitude, placing disproportionate emphasis on how AI technologies can overcome problems and paying too little attention to mitigating risks from those same technologies.

This is worrisome, since the risks of AI are numerous and non-trivial. Unrestrained AI innovation could threaten international stability, global security, and possibly even humanity's survival. And, because many of the pertinent technologies have yet to reach maturity, the risks associated with them have received scant attention on the international stage.

One area in which the risk of AI is obvious is electioneering. Since the epochal June 2016 Brexit referendum, state and nonstate actors with varying motivations have used AI to create and/or distribute propaganda via the internet. An Oxford study found that during the recent French presidential election, the proportion of traffic originating from highly automated Twitter accounts doubled between the first and second rounds of voting. Some even attribute Donald J. Trump's victory over Hillary Clinton in the U.S. presidential election to weaponized artificial intelligence spreading misinformation. Automated propaganda may well call the integrity of future elections into question.

Another major AI risk lies in the development and use of lethal autonomous weapons systems (LAWS). After the release of a 2012 Human Rights Watch report, Losing Humanity: The Case Against Killer Robots, the United Nations began considering including restrictions on LAWS in the Convention on Certain Conventional Weapons (CCW). Meanwhile, both China and the United States have made significant headway with their autonomous weapons programs, in what is quickly escalating into an international arms race. Since autonomous weapons might lower the political cost of conflict, they could make war more commonplace and increase death tolls.

A more distant but possibly greater risk is that of artificial general intelligence (AGI). While current AI programs are designed for specific, narrow purposes, future programs may be able to apply their intelligence to a far broader range of applications, much as humans do. An AGI-capable entity, through recursive self-improvement, could give rise to a superintelligence more capable than any human, one that might prove impossible to control and pose an existential threat to humanity, regardless of the intent of its initial programming. Although the AI doomsday scenario is a common science fiction trope, experts consider it to be a legitimate concern.

Given rapid recent advances in AI and the magnitude of potential risks, the time to begin multilateral discussions on international rules is now. AGI may seem far off, but many experts believe that it could become a reality by 2050. This makes the timeline for AGI similar to that of climate change. The stakes, though, could be much higher. Waiting until a crisis has occurred to act could preclude the possibility of action altogether.

Rather than allocating their limited resources to summits promoting AI innovation (a task for which national governments and the private sector are better suited), multilateral institutions should recognize AIs risks and work to mitigate them. Finalizing the inclusion of LAWS in the CCW would constitute an important milestone in this regard. So too would the formal adoption of AI safety principles such as those established at the Beneficial AI 2017 conference, one of the many artificial intelligence summits occurring outside of traditional global governance channels.

Multilateral institutions should also continue working with nontraditional actors to ensure that AI's benefits outweigh its costs. Complex governance arrangements can provide much-needed resources and serve as stopgaps when necessary. But intergovernmental organizations, as well as the national governments that govern them, should be careful not to cede too much agenda-setting power to private organizations. The primary danger of the AI for Good Global Summit is not that it distorts perceptions of AI risk; it is that Silicon Valley will wield greater influence over AI governance with each successive summit. Since technologists often prioritize innovation over risk mitigation, this could undermine global security.

More important still, policymakers should recognize AIs unprecedented transformative power and take a more proactive approach to addressing new technologies. The greatest risk of all is inaction.

Go here to read the rest:

The Inaugural AI for Good Global Summit Is a Milestone but Must Focus More on Risks - Council on Foreign Relations (blog)

An AI Can Now Predict How Much Longer You’ll Live For – Futurism

In Brief: Researchers at the University of Adelaide have developed an AI that can analyze CT scans to predict if a patient will die within five years with 69 percent accuracy. This system could eventually be used to save lives by providing doctors with a way to detect illnesses sooner.

Predicting the Future

While many researchers are looking for ways to use artificial intelligence (AI) to extend human life, scientists at the University of Adelaide created an AI that could help them better understand death. The system they created predicts if a person will die within five years after analyzing CT scans of their organs, and it was able to do so with 69 percent accuracy, a rate comparable to that of trained medical professionals.

The system makes use of the technique of deep learning, and it was tested using images taken from 48 patients, all over the age of 60. It's the first study to combine medical imaging and artificial intelligence, and the results have been published in Scientific Reports.

"Instead of focusing on diagnosing diseases, the automated systems can predict medical outcomes in a way that doctors are not trained to do, by incorporating large volumes of data and detecting subtle patterns," explained lead author Luke Oakden-Rayner in a university press release. This method of analysis can explore the combination of genetic and environmental risks better than genome testing alone, according to the researchers.

While the findings are only preliminary given the small sample size, the next stage will apply the AI to tens of thousands of cases.

While this study does focus on death, its most obvious and exciting consequence is how it could help preserve life. "Our research opens new avenues for the application of artificial intelligence technology in medical image analysis, and could offer new hope for the early detection of serious illness, requiring specific medical interventions," said Oakden-Rayner. Because it encourages more precise treatment based on firmer foundational data, the system has the potential to save many lives and provide patients with less intrusive healthcare.

An added benefit of this AI is its wide array of potential uses. Because medical imaging of internal organs is a fairly routine part of modern healthcare, the data is already plentiful. The system could be used to predict medical outcomes beyond just death, such as the potential for treatment complications, and it could work with any number of images, such as MRIs or X-rays, not just CT scans. Researchers will just need to adjust the AI to their specifications, and they'll be able to obtain predictions quickly and cheaply.

AI systems are becoming more and more prevalent in the healthcare industry. DeepMind is being used to fight blindness in the United Kingdom, and IBM Watson is already as competent as human doctors at detecting cancer. It is in medicine, perhaps more than any other field, that we see AI's huge potential to help the human race.

Read more from the original source:

An AI Can Now Predict How Much Longer You'll Live For - Futurism

Meme-Gene Coevolution – Susan Blackmore

Evolution and Memes: The human brain as a selective imitation device

Susan Blackmore

This article originally appeared in Cybernetics and Systems, Vol 32:1, 225-255, 2001, Taylor and Francis, Philadelphia, PA. Reproduced with permission.

Italian translation I memi e lo sviluppo del cervello, in KOS 211, aprile 2003, pp. 56-64.

German translation Evolution und Meme: Das menschliche Gehirn als selektiver Imitationsapparat , in: Alexander Becker et al. (Hg.): Gene, Meme und Gehirne. Geist und Gesellschaft als Natur, Frankfurt: Suhrkamp 2003 pp 49-89.

Abstract

The meme is an evolutionary replicator, defined as information copied from person to person by imitation. I suggest that taking memes into account may provide a better understanding of human evolution in the following way. Memes appeared in human evolution when our ancestors became capable of imitation. From this time on two replicators, memes and genes, coevolved. Successful memes changed the selective environment, favouring genes for the ability to copy them. I have called this process memetic drive. Meme-gene coevolution produced a big brain that is especially good at copying certain kinds of memes. This is an example of the more general process in which a replicator and its replication machinery evolve together. The human brain has been designed not just for the benefit of human genes, but for the replication of memes. It is a selective imitation device.

Some problems of definition are discussed and suggestions made for future research.

The concept of the meme was first proposed by Dawkins (1976) and since that time has been used in discussions of (among other things) evolutionary theory, human consciousness, religions, myths and mind viruses (e.g. Dennett 1991, 1995, Dawkins 1993, Brodie 1996, Lynch 1996). I believe, however, that the theory of memes has a more fundamental role to play in our understanding of human nature. I suggest that it can give us a new understanding of how and why the human brain evolved, and why humans differ in important ways from all other species. In outline my hypothesis is as follows.

Everything changed in human evolution when imitation first appeared because imitation let loose a new replicator, the meme. Since that time, two replicators have been driving human evolution, not one. This is why humans have such big brains, and why they alone produce and understand grammatical language, sing, dance, wear clothes and have complex cumulative cultures. Unlike other brains, human brains had to solve the problem of choosing which memes to imitate. In other words they have been designed for selective imitation.

This is a strong claim and the purpose of this paper is first to explain and defend it, second to explore the implications of evolution operating on two replicators, and third to suggest how some of the proposals might be tested. One implication is that we have underestimated the importance of imitation.

The new replicator

The essence of all evolutionary processes is that they involve some kind of information that is copied with variation and selection. As Darwin (1859) first pointed out, if you have creatures that vary, and if there is selection so that only some of those creatures survive, and if the survivors pass on to their offspring whatever it was that helped them survive, then those offspring must, on average, be better adapted to the environment in which that selection took place than their parents were. It is the inevitability of this process that makes it such a powerful explanatory tool. If you have the three requisites (variation, selection and heredity) then you must get evolution. This is why Dennett calls the process the evolutionary algorithm. It is a mindless procedure which produces Design out of Chaos without the aid of Mind (Dennett 1995, p 50).
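The three-requisites recipe can be run as a toy program: a population of bit strings is copied with occasional errors (variation plus heredity), and only the half that best matches the "environment" survives each round (selection). The target string, population size and mutation rate below are arbitrary illustrative choices, not anything from the paper.

```python
# Minimal sketch of the evolutionary algorithm: variation, selection, heredity.
import random

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # the "environment" individuals are selected against

def fitness(individual):
    """How well an individual matches the selective environment."""
    return sum(1 for a, b in zip(individual, TARGET) if a == b)

def evolve(pop_size=50, mutation_rate=0.05, generations=200, seed=42):
    rng = random.Random(seed)
    n = len(TARGET)
    # Initial population: pure random variation.
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]

    for _ in range(generations):
        # Selection: only the fitter half survives.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        if fitness(survivors[0]) == n:
            break
        # Heredity with variation: each survivor is copied, with rare errors.
        children = [
            [bit ^ 1 if rng.random() < mutation_rate else bit for bit in parent]
            for parent in survivors
        ]
        population = survivors + children

    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward a perfect score of 8 over the generations
```

Nothing in the loop "knows" the target; adaptation emerges purely from differential copying, which is the mindlessness Dennett's phrase points at.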

This algorithm depends on something being copied, and Dawkins calls this the replicator. A replicator can therefore be defined as any unit of information which is copied with variations or errors, and whose nature influences its own probability of replication (Dawkins 1976). Alternatively we can think of it as information that undergoes the evolutionary algorithm (Dennett 1995) or that is subject to blind variation with selective retention (Campbell 1960), or as an entity that passes on its structure largely intact in successive replications (Hull, 1988).

The most familiar replicator is the gene. In biological systems genes are packaged in complex ways inside larger structures, such as organisms. Dawkins therefore contrasted the genes as replicators with the vehicles that carry them around and influence their survival. Hull prefers the term interactors for those entities that interact as cohesive wholes with their environments and cause replication to be differential (Hull 1988). In either case selection may take place at the level of the organism (and arguably at other levels) but the replicator is the information that is copied reasonably intact through successive replications and is the ultimate beneficiary of the evolutionary process.

Note that the concept of a replicator is not restricted to biology. Whenever there is an evolutionary process (as defined above) then there is a replicator. This is the basic principle of what has come to be known as Universal Darwinism (Dawkins 1976, Plotkin 1993) in which Darwinian principles are applied to all evolving systems. Other candidates for evolving systems with their own replicators include the immune system, neural development, and trial and error learning (e.g. Calvin 1996, Edelman 1989, Plotkin 1993, Skinner 1953).

The new replicator I refer to here is the meme, a term coined in 1976 by Dawkins. His intention was to illustrate the principles of Universal Darwinism by providing a new example of a replicator other than the gene. He argued that whenever people copy skills, habits or behaviours from one person to another by imitation, a new replicator is at work.

"We need a name for the new replicator, a noun that conveys the idea of a unit of cultural transmission, or a unit of imitation. 'Mimeme' comes from a suitable Greek root, but I want a monosyllable that sounds a bit like 'gene'. I hope my classicist friends will forgive me if I abbreviate mimeme to meme. Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches. Just as genes propagate themselves in the gene pool by leaping from body to body via sperms or eggs, so memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation." (Dawkins, 1976, p 192)

Dawkins now explains that he had modest, and entirely negative, intentions for his new term. He wanted to prevent his readers from thinking that the gene was necessarily "the be-all and end-all of evolution which all adaptations could be said to benefit" (Dawkins, 1999, p xvi), and to make it clear that the fundamental unit of natural selection is the replicator: any kind of replicator. Nevertheless, he laid the groundwork for memetics. He likened some memes to parasites infecting a host, especially religions, which he termed "viruses of the mind" (Dawkins, 1993), and he showed how mutually assisting memes will group together into co-adapted meme complexes (or memeplexes), often propagating themselves at the expense of their hosts.

Dennett subsequently used the concept of memes to illustrate the evolutionary algorithm and to discuss personhood and consciousness in terms of memes. He stressed the importance of asking "Cui bono?", that is, "who benefits?". The ultimate beneficiary of an evolutionary process, he argued, is whatever it is that is copied, i.e. the replicator. Everything else that happens, and all the adaptations that come about, are ultimately for the sake of the replicators.

This idea is central to what has come to be known as selfish gene theory, but it is important to carry this insight across when dealing with any new replicator. If memes are truly replicators in their own right then we should expect things to happen in human evolution which are not for the benefit of the genes, nor for the benefit of the people who carry those genes, but for the benefit of the memes which those people have copied. This point is absolutely central to understanding memetics. It is this which divides memetics from closely related theories in sociobiology (Wilson 1975) and evolutionary psychology (e.g. Barkow, Cosmides & Tooby 1992, Pinker 1997). Dawkins complained of his colleagues that "In the last analysis they wish always to go back to biological advantage" (Dawkins 1976, p 193). This is true of theories in evolutionary psychology but also of most of the major theories of gene-culture coevolution. For example, Wilson famously claimed that the genes hold culture "on a leash" (Lumsden & Wilson 1981). More recently he has conceded that the term meme has won against its various competitors, but he still argues that memes (such as myths and social contracts) evolved over the millennia because they conferred a survival advantage on the genes, not simply because of advantages to themselves (Wilson 1998). Other theories, such as the mathematical models of Cavalli-Sforza and Feldman (1981) and Lumsden and Wilson (1981), take inclusive fitness (advantage to genes) as the final arbiter, as does Durham (1991), who argues that organic and cultural selection work on the same criterion and are complementary. Among the few exceptions are Boyd and Richerson's Dual Inheritance model (1985), which includes the concept of cultural fitness, and Deacon's (1997) coevolutionary theory, in which language is likened to a parasitic organism with adaptations that evolved for its own replication, not for that of its host.

With these exceptions, the genes remain the bottom line in most such theories, even though maladaptive traits (that is, maladaptive to the genes) can arise, and may even thrive under some circumstances (Durham 1991, Feldman and Laland 1996). By contrast, if you accept that memes are a true replicator then you must consider the fitness consequences for memes themselves. This could make a big difference, and this is why I say that everything changed in evolution when memes appeared.

When was that? If we define memes as information copied by imitation, then this change happened when imitation appeared. I shall argue that we should define them in just that way, but this will require some justification.

Problems of definition

If we had a universally agreed definition of imitation, we could define memes as that which is imitated (as Dawkins originally did). In that case we could say that, by definition, memes are transmitted whenever imitation occurs and, in terms of evolution, we could say that memes appeared whenever imitation did. Unfortunately there is no such agreement over the definition of either memes or imitation. Indeed there are serious arguments over both definitions. I suggest that we may find a way out of these problems of definition by thinking about imitation in terms of evolutionary processes, and by linking the definitions of memes and imitation together.

In outline my argument is as follows. The whole point of the concept of memes is that the meme is a replicator. Therefore the process by which it is copied must be one that supports the evolutionary algorithm of variation, selection and heredity; in other words, it must produce copies that persist through successive replications and which vary and undergo selection. If imitation is such a process, and if other kinds of learning and social learning are not, then we can usefully tie the two definitions together. We can define imitation as a process of copying that supports an evolutionary process, and define memes as the replicator which is transmitted when this copying occurs.

Note that this is not a circular definition. It depends crucially on an empirical question: is imitation in fact the kind of process that can support a new evolutionary system? If it is, then there must be a replicator involved and we can call that replicator the meme. If not, then this proposal does not make sense. This is therefore the major empirical issue involved, and I shall return to it when I have considered some of the problems with our current definitions.

Defining the meme

The Oxford English Dictionary defines memes as follows: "meme (miːm), n. Biol. (shortened from mimeme, that which is imitated, after GENE n.) An element of a culture that may be considered to be passed on by non-genetic means, esp. imitation." This is clearly built on Dawkins's original conception and is clear as far as it goes. However, there are many other definitions of the meme, both formal and informal, and much argument about which is best. These definitions differ mainly on two key questions: (1) whether memes exist only inside brains or outside of them as well, and (2) the methods by which memes may be transmitted.

The way we define memes is critical, not only for the future development of memetics as a science, but for our understanding of evolutionary processes in both natural and artificial systems. Therefore we need to get the definitions right. What counts as "right", in my view, is a definition that fits the concept of the meme as a replicator taking part in a new evolutionary process. Any definition which strays from this concept loses the whole purpose and power of the idea of the meme; indeed, its whole reason for being. It is against this standard that I judge the various competing definitions, and my conclusion is that memes are both inside and outside of brains, and that they are passed on by imitation. The rest of this section expands on that argument and can be skipped for the purposes of understanding the wider picture.

First there is the question of whether memes should be restricted to information stored inside people's heads (such as ideas, neural patterns, memories or knowledge) or should include information available in behaviours or artefacts (such as speech, gestures, inventions and art, or information in books and computers).

In 1975, Cloak distinguished between the cultural instructions in people's heads (which he called i-culture) and the behaviour, technology or social organisation they produce (which he called m-culture). Dawkins (1976) initially ignored this distinction, using the term meme to apply to behaviours and physical structures in a brain, as well as to memetic information stored in other ways (as in his examples of tunes, ideas and fashions). This is sometimes referred to as Dawkins A (Gatherer 1998). Later (Dawkins B) he decided that "A meme should be regarded as a unit of information residing in a brain (Cloak's i-culture)" (Dawkins 1982, p 109). This implies that the information in the clothes or the tunes does not count as a meme. But later still he says that memes can propagate themselves "from brain to brain, from brain to book, from book to brain, from brain to computer, from computer to computer" (Dawkins, 1986, p 158). Presumably they still count as memes in all these forms of storage, not just when they are in a brain. So this is back to Dawkins A.

Dennett (1991, 1995) treats memes as information undergoing the evolutionary algorithm, whether they are in a brain, a book or some other physical object. He points out that copying any behaviour must entail neural change and that the structure of a meme is likely to be different in any two brains, but he does not confine memes to these neural structures. Durham (1991) also treats memes as information, again regardless of how they are stored. Wilkins defines a meme as "the least unit of sociocultural information relative to a selection process that has favourable or unfavourable selection bias that exceeds its endogenous tendency to change" (Wilkins 1998). This is based on Williams's now classic definition of the gene as "any hereditary information for which there is a favorable or unfavorable selection bias equal to several or many times its rate of endogenous change" (Williams 1966, p 25). What is important here is that the memetic information survives intact long enough to be subject to selection pressures. It does not matter where and how the information resides.

In contrast, Delius (1989) describes memes as "constellations of activated and non-activated synapses within neural memory networks" (p 45) or "arrays of modified synapses" (p 54). Lynch (1991) defines them as memory abstractions or memory items, Grant (1990) as information patterns infecting human minds, and Plotkin as ideas or representations, "the internal end of the knowledge relationship" (Plotkin 1993, p 215), while Wilson defines the natural elements of culture as "the hierarchically arranged components of semantic memory, encoded by discrete neural circuits awaiting identification" (Wilson 1998, p 148). Closer to evolutionary principles, Brodie defines a meme as "a unit of information in a mind whose existence influences events such that more copies of itself get created in other minds" (Brodie 1996, p 32), but this restricts memes to being in minds. Presumably, on all these latter definitions, memes cannot exist in books or buildings, so the books and buildings must be given a different role. This has been done by using further distinctions, usually based on a more or less explicit analogy with genes.

Cloak (1975) explicitly likened his i-culture to the genotype and m-culture to the phenotype. Dennett (1995) also talks about memes and their phenotypic effects, though in a different way. The meme is internal (though not confined to brains), while the way it "affects things in its environment" (p 349) is its phenotype. In an almost complete reversal, Benzon (1996) likens pots, knives, and written words (Cloak's m-culture) to the gene, and ideas, desires and emotions (i-culture) to the phenotype. Gabora (1997) likens the genotype to the mental representation of a meme, and the phenotype to its implementation. Delius (1989), having defined memes as being in the brain, refers to behaviour as the meme's phenotypic expression, while remaining ambiguous about the role of the clothes fashions he discusses. Grant (1990) defines the memotype as the actual information content of a meme, and distinguishes this from its sociotype, or social expression. He explicitly bases his memotype/sociotype distinction on the phenotype/genotype distinction. All these distinctions are slightly different and it is not at all clear which, if any, is better.

The problem is this. If memes worked like genes then we should expect to find close analogies between the two evolutionary systems. But, although both are replicators, they work quite differently and for this reason we should be very cautious of meme-gene analogies. I suggest there is no clean equivalent of the genotype/phenotype distinction in memetics because memes are a relatively new replicator and have not yet created for themselves this highly efficient kind of system. Instead there is a messy system in which information is copied all over the place by many different means.

I previously gave the example of someone inventing a new recipe for pumpkin soup and passing it on to various relatives and friends (Blackmore 1999). The recipe can be passed on by demonstration, by writing it on a piece of paper, by explaining it over the phone, by sending a fax or e-mail, or (with difficulty) by tasting the soup and working out how it might have been cooked. It is easy to think up examples of this kind which make a mockery of drawing analogies with genotypes and phenotypes, because there are so many different copying methods. Most important for the present argument, we must ask ourselves this question: does information about the new soup only count as a meme when it is inside someone's head, or also when it is on a piece of paper, in the behaviour of cooking, or passing down the phone lines? If we answer that memes are only in the head then we must give some other role to these many other forms and, as we have seen, this leads to confusion.

My conclusion is this. The whole point of memes is to see them as information being copied in an evolutionary process (i.e. with variation and selection). Given the complexities of human life, information can be copied in myriad ways. We do a disservice to the basic concept of the meme if we try to restrict it to information residing only inside people's heads, and we land ourselves in all sorts of further confusion. For this reason I agree with Dennett, Wilkins, Durham and Dawkins A, who do not restrict memes to being inside brains. The information in this article counts as memes when it is inside my head or yours, when it is in my computer or on the journal pages, or when it is speeding across the world in wires or bouncing off satellites, because in any of these forms it is potentially available for copying and can therefore take part in an evolutionary process.

We may now turn to the other vexed definitional question: the method by which memes are replicated. The dictionary definition gives a central place to imitation, both in explaining the derivation of the word meme and as the main way in which memes are propagated. This clearly follows Dawkins's original definition, but Dawkins was canny in saying imitation "in the broad sense". Presumably he meant to include many processes which we may not think of as imitation but which depend on it, like direct teaching, verbal instruction, learning by reading and so on. All these require an ability to imitate. At least, learning language requires the ability to imitate sounds, and instructed learning and collaborative learning emerge later in human development than does imitation and arguably build on it (Tomasello, Kruger & Ratner 1993). We may be reluctant to call some of these complex human skills imitation. However, they clearly fit the evolutionary algorithm. Information is copied from person to person. Variation is introduced both by degradation due to failures of human memory and communication, and by the creative recombination of different memes. And selection is imposed by limitations on time, transmission rates, memory and other kinds of storage space. In this paper I am not going to deal with these more complex kinds of replication. Although they raise many interesting questions, they can undoubtedly sustain an evolutionary process and can therefore replicate memes. Instead I want to concentrate on skills at the simpler end of the scale, where it is not so obvious which kinds of learning can and cannot count as replicating memes.

Theories of gene-culture coevolution all differ in the ways their cultural units are supposed to be passed on. Cavalli-Sforza and Feldman's (1981) cultural traits are passed on by imprinting, conditioning, observation, imitation or direct teaching. Durham's (1991) coevolutionary model refers to both imitation and learning. Runciman (1998) refers to memes as instructions affecting phenotype, passed on by both imitation and learning. Laland and Odling-Smee (in press) argue that all forms of social learning are potentially capable of propagating memes. Among meme-theorists, both Brodie (1996) and Ball (1984) include all conditioning, and Gabora (1997) counts all mental representations as memes regardless of how they are acquired.

This should not, I suggest, be just a matter of preference. Rather, we must ask which kinds of learning can and cannot copy information from one individual to another in such a way as to sustain an evolutionary process. For if information is not copied through successive replications, with variation and selection, then there is no new evolutionary process and no need for the concept of the meme as replicator. This is not a familiar way of comparing different types of learning so I will need to review some of the literature and try to extract an answer.

Communication and contagion

Confusion is sometimes caused by the term communication, so I just want to point out that most forms of animal communication (even the most subtle and complex) do not involve the copying of skills or behaviours from one individual to another with variation and selection. For example, when bees dance, information about the location of food is accurately conveyed and the observing bees go off to find it, but the dance itself is not copied or passed on. So this is not copying a meme. Similarly, when vervet monkeys use several different signals to warn conspecifics of different kinds of predator (Cheney and Seyfarth 1990), there is no copying of the behaviour. The behaviour acts as a signal on which the other monkeys act, but they do not copy the signals with variation and selection.

Yawning, coughing or laughter can spread contagiously from one individual to the next, and this may appear to be memetic, but these are behaviours that were already known, or already in the animal's repertoire, and are merely triggered by another animal performing them (Provine 1996). In this type of contagion there is no copying of new behaviours (though note that there are many other kinds of contagion; Levy & Nail, 1993; Whiten & Ham, 1992). Communication of these kinds is therefore not even potentially memetic. Various forms of animal learning may be.

Learning

Learning is commonly divided into individual and social learning. In individual learning (including classical conditioning, operant conditioning, acquisition of motor skills and spatial learning) there is no copying of information from one animal to another. When a rat learns to press a lever for reward, a cat learns where the food is kept, or a child learns how to ride a skateboard, that learning is done for the individual only and cannot be passed on. Arguably such learning involves a replicator being copied and selected within the individual brain (Calvin 1996, Edelman 1989), but it does not involve copying between individuals. These types of learning therefore do not count as memetic transmission.

In social learning a second individual is involved, but in various different roles. Types of social learning include goal emulation, stimulus enhancement, local enhancement, and true imitation. The question I want to ask is which of these can and cannot sustain a new evolutionary process.

In emulation, or goal emulation, the learner observes another individual gaining some reward and therefore tries to obtain it too, using individual learning in the process, and possibly attaining the goal in quite a different way from the first individual (Tomasello 1993). An example is when monkeys, apes or birds observe each other getting food from novel containers but then get it themselves by using a different technique (e.g. Whiten & Custance 1996). This is social learning because two individuals are involved, but the second has only learned a new place to look for food. Nothing is copied from one animal to the other in such a way as to allow for the copying of variations and selective survival of some variants over others. So there is no new evolutionary process and no new replicator.

In stimulus enhancement the attention of the learner is drawn to a particular object or feature of the environment by the behaviour of another individual. This process is thought to account for the spread among British tits of the habit of pecking milk bottle tops to get at the cream underneath, which was first observed in 1921 and spread from village to village (Fisher and Hinde 1949). Although this looks like imitation, it is possible that once one bird had learned the trick others were attracted to the jagged silver tops and they too discovered (by individual learning) that there was cream underneath (Sherry & Galef 1984). If so, the birds had not learned a new skill from each other (they already knew how to peck), but only a new stimulus at which to peck. Similarly the spread of termite fishing among chimpanzees might be accounted for by stimulus enhancement as youngsters follow their elders around and are exposed to the right kind of sticks in proximity to termite nests. They then learn by trial and error how to use the sticks.
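The logical difference between stimulus enhancement and imitation can be made vivid in a toy simulation (an illustration of my own, with invented technique names and numbers, not a model of the actual bird data): the knowledge that bottle tops hide cream spreads through the whole population, but because each bird settles on its pecking technique by individual trial and error, the form of the behaviour is never copied, and there is no lineage of technique variants for selection to act on.

```python
import random
from collections import Counter

random.seed(3)

TECHNIQUES = ["tear", "prise", "hammer", "peel"]  # hypothetical variants

def individual_learning():
    """Trial and error: each bird settles on its own technique independently."""
    return random.choice(TECHNIQUES)

def spread_by_stimulus_enhancement(n_birds):
    """A knowledgeable bird draws others' attention to milk-bottle tops;
    each newcomer then discovers the cream by individual learning."""
    birds = [{"knows_bottles": True, "technique": individual_learning()}]
    for _ in range(n_birds - 1):
        model = random.choice(birds)  # attention drawn by any knowledgeable bird
        birds.append({"knows_bottles": model["knows_bottles"],  # the stimulus spreads
                      "technique": individual_learning()})       # the form does not
    return birds

birds = spread_by_stimulus_enhancement(1000)
print(all(b["knows_bottles"] for b in birds))       # the habit reaches everyone
print(Counter(b["technique"] for b in birds))       # techniques stay at chance
```

The habit spreads, yet technique frequencies remain at chance: nothing about the form was inherited, so no variant can out-reproduce another.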

In local enhancement the learner is drawn to a place or situation by the behaviour of another, as when rabbits learn from each other not to fear the edges of railway lines in spite of the noise of the trains. The spread of sweet-potato washing in Japanese macaques may have been through stimulus or local enhancement as the monkeys followed each other into the water and then discovered that washed food was preferable (Galef 1992).

If this is the right explanation for the spread of these behaviours, we can see that there is no new evolutionary process and no new replicator, for there is nothing that is copied from individual to individual with variation and selection. This means there can be no cumulative selection of more effective variants. Similarly, Boyd and Richerson (in press) argue that this kind of social learning does not allow for cumulative cultural change.

Most of the population-specific behavioural traditions studied appear to be of this kind, including nesting sites, migration routes, songs and tool use, in species such as wolves, elephants, monkeys, monarch butterflies, and many kinds of birds (Bonner 1980). For example, oyster catchers use two different methods for opening mussels according to local tradition, but the two methods do not compete in the same population; in other words, there is no differential selection of variants within a given population. Tomasello, Kruger and Ratner (1993) argue that many chimpanzee traditions are also of this type. Although the behaviours are learned population-specific traditions, they are not cultural in the human sense of that term, because they are not learned by all or even most of the members of the group, they are learned very slowly and with wide individual variation, and, most telling, they do not show an accumulation of modifications over generations. That is, they do not show the cultural "ratchet effect", precluding the possibility of humanlike cultural traditions that have histories.

There may be exceptions to this. Whiten et al. (1999) have studied a wide variety of chimpanzee behaviours and have found limited evidence that such competition between variants does occur within the same group. For example, individuals in the same group use two different methods for catching ants on sticks, and several ways of dealing with ectoparasites while grooming. However, they suggest that these require true imitation for their perpetuation.

Imitation

True imitation is more restrictively defined, although there is still no firm agreement about the definition (see Zentall 1996, Whiten 1999). Thorndike (1898) originally defined imitation as "learning to do an act from seeing it done". This means that one animal must acquire a novel behaviour from another, so ruling out the kinds of contagion noted above. Whiten and Ham (1992), whose definition is widely used, define imitation as learning some part of the form of a behaviour from another individual. Similarly, Heyes (1993) distinguishes true imitation (learning something about the form of behaviour through observing others) from social learning (learning about the environment through observing others), thus ruling out stimulus and local enhancement.

True imitation is much rarer than individual learning and other forms of social learning. Humans are extremely good at imitation, starting almost from birth, and taking pleasure in doing it. Meltzoff, who has studied imitation in infants for more than twenty years, calls humans the "consummate imitative generalist" (Meltzoff, 1996), although some of the earliest behaviours he studies, such as tongue protrusion, might arguably be called contagion rather than true imitation. Just how rare imitation is remains an open question. There is no doubt that some song birds learn their songs by imitation, and that dolphins are capable of imitating sounds as well as actions (Bauer & Johnson, 1994; Reiss & McCowan, 1993). There is evidence of imitation in the grey parrot and harbour seals. However, there is much dispute over the abilities of non-human primates and other mammals such as rats and elephants (see Byrne & Russon 1998; Heyes & Galef 1996; Tomasello, Kruger & Ratner 1993; Whiten 1999).

Many experiments have been done on imitation and, although they have not directly addressed the question of whether a new replicator is involved, they may help towards an answer. For example, some studies have tried to find out how much of the form of a behaviour is copied by different animals and by children. In the two-action method, a demonstrator uses one of two possible methods for achieving a goal (such as opening a specially designed container), while the learner is observed to see which method is used (Whiten et al. 1996; Zentall 1996). If a different method is used, the animal may be using goal emulation, but if the same method is copied then true imitation is involved. Evidence of true imitation has been claimed using this method in budgerigars, pigeons and rats, as well as in enculturated chimpanzees and children (Heyes and Galef 1996). Capuchin monkeys have recently been found to show limited ability to copy the demonstrated method (Custance, Whiten & Fredman 1999).

Other studies explore whether learners can copy a sequence of actions and their hierarchical structure (Whiten 1999). Byrne and Russon (1998) distinguish action level imitation (in which a sequence of actions is copied in detail) from program level imitation (in which the subroutine structure and hierarchical layout of a behavioural program is copied). They argue that other great apes may be capable of program level imitation, although humans have a much greater hierarchical depth. Such studies are important for understanding imitation, but they do not directly address the questions at issue here; that is, does the imitation entail an evolutionary process? Is there a new replicator involved?

To answer this we need new kinds of research directed at finding out whether a new evolutionary process is involved when imitation, or other kinds of social learning, take place. This might take two forms. First there is the question of copying fidelity. As we have seen, a replicator is defined as an entity that passes on its structure largely intact in successive replications. So we need to ask whether the behaviour or information is passed on largely intact through several replications. For example, in the wild, is there evidence of tool use, grooming techniques or other socially learned behaviours being passed on through a series of individuals, rather than several animals learning from one individual but never passing the skill on again? In experimental situations one animal could observe another, and then act as model for a third, and so on (as in the game of Chinese whispers, or telephone). We might not expect copying fidelity to be very high, but unless the skill is recognisably passed on through more than one replication, we do not have a new replicator; that is, there is no need for the concept of the meme.
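The transmission-chain design can itself be sketched in simulation. The following Python toy (my own illustration; the action tokens and error rate are arbitrary assumptions) passes a "skill", encoded as a sequence of actions, down a chain of learners, each of whom miscopies occasionally, and then measures how much of the original structure survives at different chain lengths.

```python
import random

random.seed(1)

ACTIONS = ["poke", "twist", "pull", "tap", "lift"]      # hypothetical repertoire
SKILL = ["poke", "twist", "pull", "pull", "lift"]        # the demonstrator's form

def observe_and_copy(skill, error_rate=0.1):
    """One learner copies the model's action sequence, with occasional errors."""
    return [random.choice(ACTIONS) if random.random() < error_rate else act
            for act in skill]

def transmission_chain(skill, chain_length, error_rate=0.1):
    """Pass the skill down a chain of learners (Chinese whispers / telephone)."""
    for _ in range(chain_length):
        skill = observe_and_copy(skill, error_rate)
    return skill

def similarity(a, b):
    """Fraction of action slots still matching the original behaviour."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Mean fidelity after chains of different lengths, over many trials.
for n in (1, 5, 20):
    trials = [similarity(SKILL, transmission_chain(SKILL, n)) for _ in range(500)]
    print(n, round(sum(trials) / len(trials), 2))
```

Fidelity decays with chain length: a single replication preserves most of the structure, while a long chain degrades it toward chance, which is exactly why surviving "more than one replication" is the empirical test that matters.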

Second, is there variation and selection? The examples given by Whiten et al. (1999) suggest that there can be. We might look for other examples where skills are passed to several individuals, these individuals differ in the precise way they carry out the skill, and some variants are more frequently or reliably passed on again. For this is the basis of cumulative culture. Experiments could be designed to detect the same process occurring in artificial situations. Such studies would enable us to say just which processes, in which species, are capable of sustaining an evolutionary process with a new replicator. Only when this is found can we usefully apply the concept of the meme.
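A differential-selection version of the same kind of toy (again purely illustrative; the two variants and their transmission probabilities are invented) shows what changes when variants of a skill differ in how reliably they are re-transmitted: the more transmissible variant comes to dominate the population, which is the differential selection that cumulative culture requires.

```python
import random

random.seed(7)

# Two variants of a skill; variant "B" is assumed (for illustration only)
# to be more reliably re-transmitted than variant "A".
TRANSMISSIBILITY = {"A": 0.3, "B": 0.6}

def generation(population):
    """Each learner watches a random model and adopts its variant with a
    probability set by the variant itself (differential transmission)."""
    new_population = []
    for _ in range(len(population)):
        model = random.choice(population)
        if random.random() < TRANSMISSIBILITY[model]:
            new_population.append(model)                 # variant successfully copied
        else:
            new_population.append(random.choice("AB"))   # otherwise pick either at random
    return new_population

# Start with the two variants equally common.
pop = ["A"] * 100 + ["B"] * 100
for _ in range(100):
    pop = generation(pop)

print("share of B:", pop.count("B") / len(pop))
```

Under these assumed numbers the population settles at well over half variant B; if the two variants were transmitted equally reliably, the frequencies would simply drift.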

If such studies were done and it turned out that, by and large, what we have chosen to call imitation can sustain cumulative evolution while other kinds of social learning cannot, then we could easily tie the definitions of memes and imitation together so that what counts as a meme is anything passed on by imitation, and wherever you have imitation you have a meme.

In the absence of such research we may not be justified in taking this step, and some people may feel that it would not do justice to our present understanding of imitation. Nevertheless, for the purposes of this paper at least, that is what I propose. The advantage is that it allows me to use one word, imitation, to describe a process by which memes are transmitted. If you prefer, for "imitation" read "a kind of social learning which is capable of sustaining an evolutionary process with a new replicator".

This allows me to draw the following conclusion. Imitation is restricted to very few species and humans appear to be alone in being able to imitate a very wide range of sounds and behaviours. This capacity for widespread generalised imitation must have arisen at some time in our evolutionary history. When it did so, a new replicator was created and the process of memetic evolution began. This, I suggest, was a crucial turning point in human evolution. I now want to explore the consequences of this transition and some of the coevolutionary processes that may have occurred once human evolution was driven by two replicators rather than one. One consequence, I suggest, was a rapid increase in brain size.

The big human brain

Humans have abilities that seem out of line with our supposed evolutionary past as hunter-gatherers, such as music and art, science and mathematics, playing chess and arguing about our evolutionary origins. As Cronin puts it, we have a brain "surplus to requirements, surplus to adaptive needs" (Cronin, 1991, p 355). This problem led Wallace to argue, against Darwin, that humans alone have a God-given intellectual and spiritual nature (see Cronin 1991). Williams (1966) also struggled with the problem of man's "cerebral hypertrophy", unwilling to accept that advanced mental capacities have ever been directly favoured by selection or that geniuses leave more children.

Humans have an encephalisation quotient of about 3 relative to other primates. That is, our brains are roughly three times as large when adjusted for body weight (Jerison 1973). The increase probably began about 2.5 million years ago in the australopithecines, and was completed about 100,000 years ago, by which time all living hominids had brains about the same size as ours (Leakey, 1994; Wills, 1993). Not only is the brain much bigger than it was, but it appears to have been drastically reorganised during what is, in evolutionary terms, a relatively short time (Deacon 1997). The correlates of brain size and structure have been studied in many species and are complex and not well understood (Harvey & Krebs 1990). Nevertheless, the human brain stands out. The problem is serious because of the very high cost (in energy terms) of both producing a large brain during development and of running it in the adult, as well as the dangers entailed in giving birth. Pinker asks, "Why would evolution ever have selected for sheer bigness of brain, that bulbous, metabolically greedy organ? Any selection on brain size itself would surely have favored the pinhead" (1994, p 363).

Early theories to explain the big brain focused on hunting and foraging skills, but predictions have not generally held up and more recent theories have emphasised the complexity and demands of the social environment (Barton & Dunbar 1997). Chimpanzees live in complex social groups and it seems likely that our common ancestors did too. Making and breaking alliances, remembering who is who to maintain reciprocal altruism, and outwitting others, all require complex and fast decision making and good memory. The Machiavellian Hypothesis emphasises the importance of deception and scheming in social life and suggests that much of human intelligence has social origins (Byrne & Whiten 1988; Whiten & Byrne 1997). Other theories emphasise the role of language (Deacon 1997, Dunbar 1996).

There are three main differences between this theory and previous ones. First, this theory entails a definite turning point: the advent of true imitation, which created a new replicator. On the one hand this distinguishes it from theories of continuous change, such as those based on improving hunting or gathering skills, or on the importance of social skills and Machiavellian intelligence. On the other hand it is distinct from those which propose a different turning point, such as Donald's (1991) three-stage coevolutionary model or Deacon's (1997) suggestion that the turning point was when our ancestors crossed the "Symbolic Threshold".

Second, both Donald and Deacon emphasise the importance of symbolism or mental representations in human evolution. Other theories also assume that what makes human culture so special is its symbolic nature. This emphasis on symbolism and representation is unnecessary in the theory proposed here. Whether behaviours acquired by imitation (i.e. memes) can be said to represent or symbolise anything is entirely irrelevant to their role as replicators. All that matters is whether they are replicated or not.

Third, the theory has no place for the leash metaphor of sociobiology, or for the assumption, common to almost all versions of gene-culture coevolution, that the ultimate arbiter is inclusive fitness (i.e. benefit to genes). In this theory there are two replicators, and the relationships between them can be cooperative, competitive, or anything in between. Most important is that memes compete with other memes and produce memetic evolution, the results of which then affect the selection of genes. On this theory we can only understand the factors affecting gene selection when we understand their interaction with memetic selection.

In outline the theory is this. The turning point in hominid evolution was when our ancestors began to imitate each other, releasing a new replicator, the meme. Memes then changed the environment in which genes were selected, and the direction of change was determined by the outcome of memetic selection. Among the many consequences of this change was that the human brain and vocal tract were restructured to make them better at replicating the successful memes.

The origins of imitation

We do not know when and how imitation originated. In one way it is easy to see why natural selection would have favoured social learning. It is a way of stealing the products of someone else's learning, i.e. avoiding the costs and risks associated with individual learning, though at the risk of acquiring outdated or inappropriate skills. Mathematical modelling has shown that this is worthwhile if the environment is variable but does not change too fast (Richerson and Boyd 1992). Similar analyses have been used in economics to compare the value of costly individual decision making against cheap imitation (Conlisk 1980).
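A toy simulation can make this trade-off concrete. This is not the Richerson and Boyd model itself; the population size, payoffs and learning cost below are invented for illustration. Individual learners always acquire the currently correct behaviour but pay a cost, while social learners cheaply copy a random member of the previous generation and so risk inheriting an outdated skill.

```python
import random

def simulate(change_prob, social_frac, generations=2000, n=500,
             benefit=1.0, learn_cost=0.4, seed=1):
    """Average payoff of social vs. individual learners in a changing world."""
    rng = random.Random(seed)
    env = 0                       # the "correct" behaviour for the current world
    prev = [env] * n              # behaviours carried by the previous generation
    soc_payoff = ind_payoff = 0.0
    n_soc = int(n * social_frac)
    for _ in range(generations):
        if rng.random() < change_prob:
            env += 1              # environment shifts; old skills become obsolete
        new = []
        for i in range(n):
            if i < n_soc:         # social learner: copy someone at random, for free
                b = rng.choice(prev)
                soc_payoff += benefit if b == env else 0.0
            else:                 # individual learner: relearn correctly, at a cost
                b = env
                ind_payoff += benefit - learn_cost
            new.append(b)
        prev = new
    return (soc_payoff / (n_soc * generations),
            ind_payoff / ((n - n_soc) * generations))
```

With a slowly changing environment (e.g. `change_prob=0.01`) the copiers' average payoff beats the individual learners'; with `change_prob=0.5` copying falls behind, matching the modelling result cited above.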

As we have seen, other forms of social learning are fairly widespread, but true imitation occurs in only a few species. Moore (1996) compares imitation in parrots, great apes and dolphins and concludes that they are not homologous and that imitation must have evolved independently at least three times. In birds imitation probably evolved out of song mimicry, but in humans it did not. We can only speculate about what the precursors to human imitation may have been, but likely candidates include general intelligence and problem solving ability, the beginnings of a theory of mind or perspective taking, reciprocal altruism (which often involves strategies like tit-for-tat that entail copying what the other person does), and the ability to map observed actions onto one's own.

The latter sounds very difficult to achieve, involving transforming the visual input of a seen action from one perspective into the motor instructions for performing a similar action oneself. However, mirror neurons in monkey premotor cortex appear to belong to a system that does just this. The same neurons fire both when the monkey performs a goal-directed action itself and when it sees another monkey perform the same action, though Gallese and Goldman (1998) believe this system evolved for predicting the goals and future actions of others, rather than for imitation. Given that mirror neurons occur in monkeys, it seems likely that our ancestors would have had them, making the transition to true imitation more likely.

We also do not know when that transition occurred. The first obvious signs of imitation are the stone tools made by Homo habilis about 2.5 million years ago, although their form did not change very much for a further million years. It seems likely that less durable tools were made before then; possibly carrying baskets, slings, wooden tools and so on. Even before that our ancestors may have imitated ways of carrying food, catching game or other behaviours. By the time these copied behaviours were widespread the stage was set for memes to start driving genes. I shall take a simple example and try to explain how the process might work.

Memetic drive

Let us imagine that a new skill begins to spread by imitation. This might be, for example, a new way of making a basket to carry food. The innovation arose from a previous basket type, and because the new basket holds slightly more fruit it is preferable. Other people start copying it and the behaviour and the artefact both spread. Note that I have deliberately chosen a simple meme (or small memeplex) to illustrate the principle; that is the baskets and the skills entailed in making them. In practice there would be complex interactions with other memes but I want to begin simply.

Now anyone who does not have access to the new type of basket is at a survival disadvantage. A way to get the baskets is to imitate other people who can make them, and therefore good imitators are at an advantage (genetically). This means that the ability to imitate will spread. If we assume that imitation is a difficult skill (as indeed it seems to be) and requires a slightly larger brain, then this process alone can already produce an increase in brain size. This first step really amounts to no more than saying that imitation was selected for because it provides a survival advantage, and once the products of imitation spread, then imitation itself becomes ever more necessary for survival. This argument is a version of the Baldwin effect (1896) which applies to any kind of learning: once some individuals become able to learn something, those who cannot are disadvantaged and genes for the ability to learn therefore spread. So this is not specifically a memetic argument.

However, the presence of memes changes the pressures on genes in new ways. The reason is that memes are also replicators undergoing selection and as soon as there are sufficient memes around to set up memetic competition, then meme-gene coevolution begins. Let us suppose that there are a dozen different basket types around that compete with each other. Now it is important for any individual to choose the right basket to copy, but which is that? Since both genes and memes are involved we need to look at the question from both points of view.

From the genes' point of view the right decision is the basket that increases inclusive fitness, i.e. the decision that improves the survival chances of all the genes of the person making the choice. This will probably be the biggest, strongest, or easiest basket to make. People who copy this basket will gather more food, and ultimately be more likely to pass on the genes that were involved in helping them imitate that particular basket. In this way the genes, at least to some extent, track changes in the memes.

From the memes' point of view the right decision is the one that benefits the basket memes themselves. These memes spread whenever they get the chance, and their chances are affected by the imitation skills, the perceptual systems and the memory capacities (among other things) of the people who do the copying. Now, let us suppose that the genetic tracking has produced people who tend to imitate the biggest baskets, because over a sufficiently long period of time larger artefacts were associated with higher biological success. This now allows for the memetic evolution of all sorts of new baskets that exploit that tendency; especially baskets that look big. They need not actually be big, or well made, or very good at doing their job, but as long as they trigger the genetically acquired tendency to copy big baskets they will do well, regardless of their consequences for inclusive fitness. The same argument would apply if the tendency was to copy flashy-looking baskets, solid baskets, or whatever. So baskets that exploit the current copying tendencies spread at the expense of those that do not.

This memetic evolution now changes the situation for the genes, which have, as it were, been cheated and are no longer effectively tracking the memetic change. Now the biological survivors will be the people who copy whatever it is about the current baskets that actually predicts biological success. This might be some other feature, such as the materials used, the strength, or the kind of handle, and so the process goes on. This process is not quite the same as traditional gene-culture coevolution or the Baldwin effect. The baskets are not just aspects of culture that have appeared by accident and may or may not be maladaptive for the genes of their carriers. They are evolving systems in their own right, with replicators whose selfish interests play a role in the outcome.
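The basket story can be caricatured in a few lines of code. This is a sketch under invented assumptions, not a model from the literature: each basket meme carries a "looks big" trait that triggers the hosts' copying bias and an independent "utility" trait that is what actually matters for survival. Because hosts copy on looks alone, memetic selection drives the looks trait towards its maximum while utility merely drifts.

```python
import random

def evolve_baskets(generations=200, pop=300, seed=2):
    """Tournament-style copying of baskets, biased purely by apparent size."""
    rng = random.Random(seed)
    # each basket meme is a pair: (looks_big, utility), both starting moderate
    memes = [(rng.uniform(0.4, 0.6), rng.uniform(0.4, 0.6)) for _ in range(pop)]
    for _ in range(generations):
        new = []
        for _ in range(pop):
            # a host compares three baskets and copies the biggest-looking one
            candidates = rng.sample(memes, 3)
            looks, util = max(candidates, key=lambda m: m[0])
            # copying is imperfect: small mutations creep into both traits
            new.append((min(1.0, max(0.0, looks + rng.gauss(0, 0.02))),
                        min(1.0, max(0.0, util + rng.gauss(0, 0.02)))))
        memes = new
    mean = lambda xs: sum(xs) / len(xs)
    return mean([m[0] for m in memes]), mean([m[1] for m in memes])
```

After a couple of hundred generations the mean "looks big" value sits near the ceiling while mean utility has merely wandered: baskets that exploit the copying bias have spread regardless of their consequences for inclusive fitness.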

I have deliberately chosen a rather trivial example to make the process clear; the effects are far more contentious, as we shall see, when they concern the copying of language, or of seriously detrimental activities.

Whom to imitate

Another strategy for genes might be to constrain who, rather than what, is copied. For example, a good strategy would be to copy the biologically successful. People who tended, other things being equal, to copy those of their acquaintances who had the most food, the best dwelling space, or the most children would, by and large, copy the memes that contributed to that success and so be more likely to succeed themselves. If there was genetic variation such that some people more often copied their biologically successful neighbours, then their genes would spread and the strategy "copy the most successful" would, genetically, spread through the population. In this situation (as I have suggested above) success is largely a matter of being able to acquire the currently important memes. So this strategy amounts to copying the best imitators. I shall call these people "meme fountains", a term suggested by Dennett (1998) to refer to those who are especially good at imitation and who therefore provide a plentiful source of memes, both old memes they have copied and new memes they have invented by building on, or combining, the old.

Now we can look again from the memes' point of view. Any memes that got into the repertoire of a meme fountain would thrive regardless of their biological effect. The meme fountain acquires all the most useful tools, hunting skills and fire-making abilities, and his genes do well. However, his outstanding imitation ability means that he copies and adapts all sorts of other memes as well. These might include rain dances, fancy clothes, body decoration, burial rites or any number of other habits that may not contribute to his genetic fitness. Since many of his neighbours have the genetically in-built tendency to copy him, these memes will spread just as well as the ones that actually aid survival.

Whole memetic lineages of body decoration or dancing might evolve from such a starting point. Taking dancing as an example, people will copy various competing dances, and some dances will be copied more often than others. This memetic success may depend on who is copied, but also on features of the dances themselves, such as memorability, visibility and interest: features that in turn depend on the visual systems and memories of the people doing the imitating. As new dances spread to many people, they open up new niches for further variations on dancing to evolve. Any of these memes that get their hosts to spend lots of time dancing will do better, and so, if there is no check on the process, people will find themselves dancing more and more.

Switching back to the genes' point of view, the problem is that dancing is costly in terms of time and energy. Dancing cannot now be un-evolved, but its further evolution will necessarily be constrained. Someone who could better discriminate between the useful memes and the energy-wasting memes would leave more descendants than someone who could not. So the pressure is on to make more and more refined discriminations about what and whom to imitate. And crucially, the discriminations that have to be made depend upon the past history of memetic as well as genetic evolution. If dancing had never evolved there would be no need for genes that selectively screened out too much dance-imitation. Since it did, there is. This is the crux of the process I have called memetic driving. The past history of memetic evolution affects the direction that genes must take to maximise their own survival.

We now have a coevolutionary process between two quite different replicators that are closely bound together. To maximise their success the genes need to build brains that are capable of selectively copying the most useful memes, while not copying the useless, costly or harmful ones. To maximise their success the memes must exploit the brain's copying machinery in any way they can, regardless of the effects on the genes. The result is a mass of evolving memes, some of which have thrived because they are useful to the genes and some of which have thrived in spite of the fact that they are not, and a brain that is designed to do the job of selecting which memes are copied and which are not. This is the big human brain. Its function is selective imitation and its design is the product of a long history of meme-gene coevolution.

Whom to mate with

There is another twist to this argument: sexual selection for the ability to imitate. In general it will benefit females to mate with successful males, and, in this imagined human past, successful males are those who are best at imitating the currently important memes. Sexual selection might therefore amplify the effects of memetic drive, and a runaway process of sexual selection could then take off.

For example, let us suppose that at some particular time the most successful males were the meme fountains. Their biological success depended on their ability to copy the best tools or firemaking skills, but their general imitation ability also meant they wore the most flamboyant clothes, painted the most detailed paintings, or hummed the favourite tunes. In this situation mating with a good painter would be advantageous. Females who chose good painters would begin to increase in the population and this in turn would give the good painters another advantage, quite separate from their original biological advantage. That is, with female choice now favouring good painters, the offspring of good painters would be more likely to be chosen by females and so have offspring themselves. This is the crux of runaway sexual selection and we can see how it might have built on prior memetic evolution.

Miller (1998, 1999) has proposed that artistic ability and creativity have been sexually selected as courtship displays to attract women, and has provided many examples, citing evidence that musicians and artists are predominantly male and at their most productive during young adulthood. However, there are differences between his theory and the one proposed here. He does not explain how or why the process might have begun, whereas on this theory the conditions were created by the advent of imitation and hence of memetic evolution. Also, on his theory the songs, dances or books act as displays in sexual selection, but the competition between them is not an important part of the process. On the theory proposed here, memes compete with each other to be copied by both males and females, and the outcome of that competition determines the direction taken both by the evolution of the memes and of the brains that copy them.

Whether this process has occurred or not is an empirical question. But note that I have sometimes been misunderstood as basing my entire argument on sexual selection of good imitators (Aunger, in press). In fact the more fundamental process of memetic drive might operate with or without the additional effects of sexual selection.

The coevolution of replicators with their replication machinery

Memetic driving of brain design can be seen as an example of a more general evolutionary process. That is, the coevolution of a replicator along with the machinery for its replication. The mechanism is straightforward. As an example, imagine a chemical soup in which different replicators occur, some together with coenzymes or other replicating machinery, and some without. Those which produce the most numerous and long lived copies of themselves will swamp out the rest, and if this depends on being associated with better copying machinery then both the replicator and the machinery will thrive.
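The chemical-soup scenario can be sketched as a toy model (arbitrary fidelity values and mutation rate; this is an illustration, not a chemistry simulation): replicators are paired with copying machinery of varying fidelity, a copy survives only when its machinery reproduces it successfully, and resampling the soup to a fixed size lets lineages carrying better machinery take over.

```python
import random

def soup(generations=300, pop=400, seed=3):
    """Mean copying fidelity in a soup of replicator/machinery pairs."""
    rng = random.Random(seed)
    fidelities = [rng.uniform(0.1, 0.9) for _ in range(pop)]
    for _ in range(generations):
        offspring = []
        for f in fidelities:
            # a copy survives only if the machinery reproduces it accurately
            if rng.random() < f:
                # the machinery itself is copied with slight variation
                offspring.append(min(0.999, max(0.001, f + rng.gauss(0, 0.01))))
        # finite resources: resample the soup back to a constant size
        fidelities = [rng.choice(offspring) for _ in range(pop)]
    return sum(fidelities) / pop
```

Starting from a mean fidelity of about 0.5, the soup quickly becomes dominated by high-fidelity machinery: the replicator and its copying apparatus improve together.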

Something like this presumably happened on earth long before RNA and DNA all but eliminated any competitors (Maynard Smith & Szathmáry 1995). DNA's cellular copying machinery is now so accurate and reliable that we tend to forget it must have evolved from something simpler. Memes have not had this long history behind them. The new replicator is, as Dawkins (1976, p 192) puts it, "still drifting clumsily about in its primeval soup", the soup of human culture. Nevertheless we see the same general process happening as we may assume once happened with genes. That is, memes and the machinery for copying them are improving together.

The big brain is just the first step. There have been many others. In each case, high quality memes outperform lower quality memes and their predominance favours the survival of the machinery that copies them. This focuses our attention on the question of what constitutes high quality memes. Dawkins (1976) suggested fidelity, fecundity and longevity.

This is the basis for my argument about the origins of language (Blackmore 1999, in press). In outline it is this. Language is a good way of creating memes with high fecundity and fidelity. Sound carries better than visual stimuli to several people at once. Sounds digitised into words can be copied with higher fidelity than continuously varying sounds. Sounds using word order open up more niches for memes to occupy and so on. In a community of people copying sounds from each other memetic evolution will ensure that the higher quality sounds survive. Memetic driving then favours brains and voices that are best at copying those memes. This is why our brains and bodies became adapted for producing language. On this theory the function of language ability is not primarily biological but memetic. The copying machinery evolved along with the memes it copies.

Originally posted here:

Meme-Gene Coevolution - Susan Blackmore

Can A Human Be Frozen And Brought Back To Life? – Zidbits

Science

Published on February 21, 2011

We see it all the time in movies. A person gets frozen or put in cryosleep and then unfrozen at a later date, with no aging or other ill effects.

Sometimes this happens on purpose, as with someone with an incurable disease hoping a cure will exist in the future, and sometimes by accident, as with someone frozen in a glacier.

The science behind it does exist, and the practice is called cryonics. It's a technique used to store a person's body at an extremely low temperature with the hope of one day reviving them. This technique is being performed today, but the technology behind it is still in its infancy.

Someone preserved this way is said to be in cryonic suspension. The hope is that, if someone has died from a disease or condition that is currently incurable, they can be frozen and then revived in the future when a cure has been discovered.

It's currently illegal to perform cryonic suspension on someone who is still alive. Those who wish to be cryogenically frozen must first be pronounced legally dead, which means their heart has stopped beating. Though, if they're dead, how can they ever be revived?

According to companies who perform the procedure, "legally dead" is not the same as "totally dead." Total death, they claim, is the point at which all brain function ceases. The difference, they say, rests on the fact that some cellular brain function remains even after the heart has stopped beating. Cryonics preserves some of that cell function so that, at least theoretically, the person can be brought back to life at a later date.

After your heart stops beating and you are pronounced legally dead, the company you signed with takes over. An emergency response team from the facility immediately gets to work. They stabilize your body by supplying your brain with enough oxygen and blood to preserve minimal function until you can be transported to the suspension facility. Your body is packed in ice and injected with an anticoagulant to prevent your blood from clotting during the trip. A medical team is on standby awaiting the arrival of your body at the cryonics facility.

After you reach the cryonics facility, the actual freezing can begin. So why not simply lower your body straight into a vat of liquid nitrogen?

They could, and while you'd certainly be frozen, most of the cells in your body would shatter and die.

As water freezes, it expands. Since cells are made up mostly of water, freezing expands the stuff inside, rupturing the cell membranes and killing the cells. The cryonics companies therefore need to remove and/or replace this water. They replace it with something called a cryoprotectant, much like the antifreeze in an automobile. This glycerol-based mixture protects your organ tissues by hindering the formation of ice crystals. This process is called vitrification and allows cells to live in a sort of suspended animation.

After vitrification, your body is cooled with dry ice until it reaches -202 degrees Fahrenheit. After this pre-cooling, it's finally time to insert your body into an individual container, which is then placed into a metal tank filled with liquid nitrogen. This cools the body down to around -320 degrees Fahrenheit.
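For readers who think in Celsius, the two cooling stages just described convert as follows (a trivial sketch; the helper function is ours, not from the article):

```python
def f_to_c(fahrenheit):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (fahrenheit - 32) * 5.0 / 9.0

print(round(f_to_c(-202)))  # dry-ice pre-cooling stage: -130 C
print(round(f_to_c(-320)))  # liquid-nitrogen storage: -196 C
```

The second figure is no accident: roughly -196 degrees Celsius is the boiling point of liquid nitrogen, so it is the natural floor for a nitrogen-cooled tank.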

The procedure isn't cheap. It can cost up to $200,000 to have your whole body preserved. For the more frugal optimist, a mere $60,000 will preserve just your brain, an option known as neurosuspension. The hope is that future technology will allow the rest of the body to be cloned or regenerated.

Many critics say the companies that perform cryonics are simply ripping off customers with the dream of immortality, and that they won't deliver. It doesn't help that the scientists who perform cryonics say they haven't successfully revived anyone, and don't expect to be able to do so anytime soon. The largest hurdle is that, if the warming process isn't done at exactly the right speed and temperature, the cells could form ice crystals and shatter.

Despite the fact that no human placed in cryonic suspension has yet been revived, some living organisms can be, and have been, brought back from a dead or near-dead state. CPR and defibrillators bring accident and heart-attack victims back from the brink every day.

Neurosurgeons often cool patients' bodies so they can operate on aneurysms without damaging or rupturing the nearby blood vessels. Human embryos that are frozen in fertility clinics, defrosted and implanted in a mother's uterus grow into perfectly normal human beings. Some frogs and other amphibians have a protein manufactured by their cells that acts as a natural antifreeze, which can protect them if they're frozen completely solid.

Cryobiologists are hopeful that nanotechnology will make revival possible someday. Nanotechnology could use microscopic machines to manipulate single atoms to build or repair virtually anything, including human cells and tissues. The hope is that one day nanotechnology will repair not only the cellular damage caused by the freezing process, but also the damage caused by aging and disease.

Some cryobiologists have predicted that the first cryonic revival might occur as early as the year 2045.


All of those antioxidant supplements are a huge con – INSIDER

Antioxidants may not live up to all the hype. (Flickr/Ano Lobb)

The INSIDER Summary:

Food and supplement companies make it seem like antioxidants are little warriors that start vanquishing diseases in your body as soon as you ingest them. It's easy to assume that consuming more of them must be better than consuming less.

But science shows loading up on antioxidants may not be as beneficial as you'd think; some research suggests it can even cause harm. Here's what you need to know.

First, a quick primer on how antioxidants work:

Blueberries are a source of dietary antioxidants. (Flickr/mystuart)

Antioxidants have the power to stop free radicals, highly reactive chemicals that tear through the body, damaging cells and possibly playing a role in the development of diseases like cancer. Free radicals are an inescapable fact of life: The body makes them as a natural byproduct of digesting food, and it also makes them in response to pollution or radiation exposure.

"Antioxidants" is the catchall name given to the hundreds, probably thousands, of chemicals that can quench destructive free radicals. The body makes a lot of its own antioxidants, but we can also get them from our diet. Some antioxidants are also vitamins (vitamins A, C, and E, to be specific) but most others aren't.

When it comes to antioxidants, more is not always better.

Vitamin E supplement pills. (Flickr/John Liu)

A few decades ago, scientists began to understand that free radical damage might play a role in conditions like heart disease, cancer, vision loss, and more, according to the Harvard School of Public Health. So they decided to study what would happen if they gave people large doses of antioxidants in supplement form.

The results have been largely disappointing.

In 1985, for instance, American researchers recruited 18,000 people at high risk for lung cancer and had some of them take vitamin A supplements. But the study was halted almost two years early because participants taking the supplements were developing lung cancer at a higher rate than participants taking a placebo.

Antioxidant supplements aren't always beneficial. (Shutterstock)

Newer research hasn't been much more promising. A 2007 review found that taking antioxidant supplements (beta carotene, vitamin A, or vitamin E) could increase mortality (yes, that's the fancy scientific term for death). And while some trials have found a benefit to antioxidant supplementation, most simply haven't.

"The supplement trials have really failed," Christopher Gardner, PhD, professor of Medicine at Stanford Prevention Research Center and member of the True Health Initiative (THI), told INSIDER.

The antioxidant "scores" on food packages don't mean much, either.

You've probably come across tons of foods with claims about antioxidants on the label.

The test that companies use to make such claims is called the Oxygen Radical Absorbance Capacity, or ORAC. The problem is that it's done in a test tube, not in humans. And just because a food has lots of antioxidant power in a test tube, Gardner explained, doesn't mean it's going to translate to a tangible health benefit in your body.

Food companies like to boast about antioxidant content. (Flickr/Ty Konzak)

"Even though [there's an antioxidant] in a food, you would have to absorb it without breaking it down," Gardner said. "Then it would have to be delivered to some part of your body that needs it. Then it would have to be the case that you didn't have enough to begin with, so this [antioxidant] made up for your deficiency. And then the last thing is, how would you measure that it did something?"

It's really tough to prove that the antioxidants in your morning goji berries, for example, are the reason you do or don't get heart disease 50 years from now.

Antioxidant content isn't the only reason you should buy a food. (Flickr/Mike Mozart)

Because of all this, the USDA decided to shut down its online ORAC database back in 2012, writing that ORAC values were "routinely misused" by food and supplement companies.

This doesn't mean products that list ORAC scores are necessarily bad for you. On the contrary, foods with high ORAC scores are often very nutritious choices, cardiologist Joel Kahn, MD, another THI member, told INSIDER.

But you shouldn't let antioxidant-based marketing claims sway your food decisions. Don't spend more on a certain type of berry solely because it has a high ORAC score or the word "antioxidants" plastered all over the package. Just buy whatever berries you want to eat.

One thing is clear: Foods that contain lots of antioxidants are good for your health.

Fruits and vegetables are the way to go. (Flickr/Jason Paris)

Most health authorities agree: Antioxidant supplements aren't worth your money, but antioxidant-rich foods definitely are.

"Antioxidant-rich foods probably sound familiar because we've been telling you to eat those for a really long time," Gardner said.

Fruits, vegetables, whole grains: these foods are all rich in antioxidants, but they also have healthful fiber and essential nutrients your body needs. Plus, a robust body of evidence says that they're beneficial for long-term health.

"People should get the majority of their antioxidants from brightly colored fresh fruits and vegetables," Kahn said. "There's no doubt eating fruits and vegetables is a dose-related way to improve your health."


Everything You Need to Know About Eating Activated Charcoal – Eater

If you've taken a peek through Instagram recently, one thing is clear: Black food is everywhere. Perhaps a goth response to the ubiquity of unicorn lattes and rainbow bagels, dyeing foods a deep, inky black has become one of the year's biggest food trends. Activated charcoal, the ingredient that creates this super-black hue, has made its way into coconut ash ice cream, detoxifying lemonades, pizza crusts, and boozy cocktails that are as black as your cold, dark soul.

Activated charcoal, also known as activated carbon or coconut ash, has long been a staple in hospitals, where it is used to prevent poisons and lethal overdoses of drugs from being absorbed by the body. It's a potent detoxifier, which has also helped activated charcoal attract an ardent following among the crunchy juice-cleanse types, who claim that the supplement (usually taken in pill form, though the powder can be mixed into a glass of water) can do everything from preventing hangovers to mitigating the side effects of food poisoning.

The idea of charcoal as a detoxifier isn't going away anytime soon, but consumers are now more interested in charcoal-tinted ice cream and pizza because it makes for excellent Instagram fodder. The black ice cream from shops like Morgenstern's in New York City and Los Angeles' Little Damage has been posted to social media thousands of times and has inspired countless copycats at ice cream shops across the country. This time, the craze isn't necessarily attributed to activated charcoal's purported health benefits. Instead, the appeal is directly attributed to the fact that black-hued dishes are relatively rare and unique and also happen to look really, really cool.

Still, as the trend has grown, a number of articles have raised concerns about whether or not activated charcoal is safe to consume. There's been a little bit of fearmongering regarding the ingredient, like pieces at Self and BoingBoing that warn people to definitely avoid foods dyed black with activated charcoal because they're not safe.

As always, the truth lies somewhere in the middle, between the natural health evangelists and the complete skeptics. If consumed in excessive amounts, activated charcoal can cause some adverse health effects, but it definitely isn't as dangerous as some might believe.

While technically made of the same material as the charcoal briquettes in your barbecue, activated charcoal is a decidedly different thing. Food-grade activated charcoal is most frequently produced by heating coconut shells to extremely high temperatures until they are carbonized, or completely burned up. The resulting ash is then processed with steam or hot air at equally high temperatures to produce a microporous structure.

This process dramatically increases the surface area of the charcoal, which partly explains why it is such a powerful detoxifier. "You can imagine activated charcoal as a sponge with its many tiny pores," writes Discover Magazine's Eunice Liu. "In fact, it is these little pores that endow the activated charcoal with its powerful adsorption properties," referring to the process by which atoms or molecules from a gas, liquid, or dissolved solid bind onto a surface.

Before it hit mainstream food culture, activated charcoal was a popular ingredient for detox enthusiasts. Added to juice cleanses and cayenne pepper lemonades, the powdered charcoal has been touted by natural health advocates for its anti-aging benefits, as a way to lose weight and lower cholesterol, draw poisonous spider venom out of wounds, and minimize gastrointestinal distress. Long before that, even, it was used by Ayurvedic and Eastern medicine practitioners to whiten teeth and cleanse toxic mold spores from the body.

Pretty much the only reason to add activated charcoal to ice cream or pizza crust is to produce that rich, Instagram-worthy black color. In terms of flavor, activated charcoal doesn't really bring much to the mix, which is why Morgenstern's added coconut and burnt honey vanilla flavors to its black ice cream when it was introduced last year. Little Damage offers a rotating selection of flavors, like almond, dyed with activated charcoal.

The inspiration for Little Damage's black ice cream came after owner Jenny Damage noticed activated charcoal in a number of juice shops across Los Angeles, and found that it was a really good way to produce a pure, super-black color. "Black is not an easy color to achieve when you're mixing white ice cream with it," Damage says. "I first saw it in charcoal lemonades, and I thought that was fun. The ingredient itself didn't have too much of a taste, so it was a really good base for us to rotate our flavors, using that as our iconic color."

At Prohibition Creamery in Austin, Texas, owner Laura Aidan first whipped up a batch of black ice cream as a Halloween special last year, but it's been so popular that it's made its way back to her constantly rotating menu a few times since. On a weekly basis, she gets requests from people via Instagram, Facebook, and email for the black ice cream, which was originally intended to be a one-time-only offering.

When she decided to do a black ice cream, Aidan originally thought she might use squid ink, which is used to dye Italian pastas, or maybe black sesame seeds. Ultimately, though, activated charcoal was the best option. "Activated charcoal was totally the best fit. I was familiar with it as a health food supplement, but I had never put it in ice cream before," Aidan says. "It adds just a slight bit of crunch, a really fine little crunch to the texture, but for the most part it was amazing how smoothly the charcoal mixed into the ice cream."

Activated charcoal is really good at adsorption, or soaking up all the molecules in its path, but it isn't so good at picking out what's toxic and what isn't. When a person consumes activated charcoal in ice cream, the charcoal sucks up the calcium, potassium, and other vitamins that would be found in the milk. This prevents the stomach lining from absorbing those nutrients, which means that the body eliminates them as waste alongside the charcoal. In extreme cases, this can result in malnutrition.

For people who take prescription medications every day, activated charcoal may pose an even bigger concern. "Activated charcoal is given to people who take too much medication because charcoal is so absorbent and can counteract an overdose," gastroenterologist Patricia Raymond, M.D., told Women's Health. "But if you're drinking it and you also are on any meds, even birth control pills, the charcoal is likely to absorb the drugs. So you risk having them become ineffective." According to Drugs.com, that warning applies to more than 200 drugs, ranging from the ibuprofen you take to fend off a headache to albuterol, used to stop asthma attacks. As such, most companies that sell the product as a supplement recommend waiting at least two hours between taking activated charcoal and other prescription drugs.

It's especially concerning for people who use hormonal contraceptives, as consuming activated charcoal within just a few hours of taking the pill can reduce its efficacy. In a January interview with Imbibe, Bittermens founder Avery Glasser joked that he was going to make an activated charcoal cocktail called "See Ya In Nine Months," referring to its potential to produce an unplanned pregnancy. It was a nod to the ethical dilemma at hand: Should bartenders really be serving these drinks to unwitting patrons, and if they do, should they come with a warning?

The science is somewhat mixed on the health benefits of activated charcoal, but as with most other detox products, most scientists are skeptical. There is little hard evidence that consuming activated charcoal actually does anything to detoxify the body or improve liver function, but that hasn't stopped natural health enthusiasts from consuming it, much like turmeric lattes or juice cleanses. Perhaps not surprisingly, natural lifestyle maven Gwyneth Paltrow is an ardent activated charcoal proponent.

"Activated charcoal is amazing," says Elissa Goodman, a Los Angeles-based holistic nutritionist who's developed cleanse plans for celebrities like Kate Hudson. "I have used it for myself, my children use it, and we always travel with it. It's powerful, potent stuff that is able to trap toxins and chemicals in the body and help flush them out so that they're not absorbed. I think our bodies are really toxic."

For Goodman and her now college-aged kids, activated charcoal is mostly used as a hangover cure. She also packs it when traveling to places where she's concerned that the water may make her sick, and believes that it can be effective in helping remove toxic mold spores (which are prevalent in the laundry rooms and bathrooms of many homes and apartments) from the body. "We all have digestive issues, and charcoal can alleviate gas and bloating, which is usually produced by some kind of fermentation in our guts," she says. "We inhale spores of toxic molds. In places where water is crappy, tap water can be toxic and have chemicals. A lot of people don't have filtration systems in their homes, so it's great to use."

Still, despite Goodman's obsession with eliminating toxins, she doesn't see activated charcoal as the kind of thing that should be eaten every day. "Everything in moderation. We get onto these crazes and run with them, even if it's potentially not that great for us in the long run," she says. "I don't think it's good to eat or drink it all the time. When you're feeling bad, it's great to use. When you're healthy and normal, you don't need it." Goodman also knows that activated charcoal can interfere with the absorption of medications and other supplements, which is why she recommends taking it first thing in the morning.

In small quantities, activated charcoal is perfectly safe to consume, even if the purported health benefits are scientifically dubious. In the black ice cream at Prohibition Creamery, only a few ounces (by weight) of activated charcoal go into an 18-gallon batch of ice cream, which means that each scoop only contains a tiny amount. But because it's hard to judge exactly how and when your body will process the charcoal, it's still a good idea to wait a few hours after taking prescription medications like birth control before eating that charcoal pizza crust.

"The amount that goes into each serving isn't great enough to make a huge difference when you're talking about ice cream," says Damage. "You'd have to consume a huge amount. Of course, I don't know every medicine each and every person is taking, so if you're on medication, people should consult with their doctors before trying our ice cream."

It's also important to remember that activated charcoal isn't the only common ingredient used in restaurants that can interfere with medications. Grapefruit juice is known to increase the absorption of some drugs, including statins used to regulate cholesterol, HIV protease inhibitors, and over-the-counter cough syrup; those who take these medications are encouraged to avoid drinking grapefruit juice within two hours of downing their pills.

A natural compound called tyramine, found in aged cheeses, cured meats, and certain wines, can also be deadly for people using monoamine oxidase inhibitors, or MAOIs, to treat depression and personality disorders. (Fun fact: In The Silence of the Lambs, when Anthony Hopkins, starring as diabolical cannibal Hannibal Lecter, tells FBI agent Clarice Starling that he ate a census worker's liver with fava beans and a nice Chianti, that particular assortment of foods, all high in tyramine, provides a subtle clue that Lecter is off his medications. As Mental Floss notes, it's a combination that would otherwise have killed him.)

Still, despite the fact that activated charcoal is harmless in small quantities, it's probably not a good idea to eat (or drink) it every single day. Over time, activated charcoal will adsorb crucial nutrients away from the body, which could eventually lead to malnutrition. Kim Kardashian might keep her fridge stocked with activated charcoal lemonades, but regular consumption comes with some less-than-glamorous side effects, like constipation, dehydration, and some very metal black-tinted poop.

Ultimately, it's unlikely that consuming ice cream or pizza dyed black with activated charcoal every once in a while is going to result in any serious health complications. It might still be a good idea to treat this trendy ingredient much like the ice cream it is stirred into: as an occasional splurge instead of a diet staple.

Amy McCarthy is the editor of Eater Dallas and Eater Houston. Editor: Erin DeJesus


FDA Budget Cuts and Increased User Fees Are Bad for America – Morning Consult

The first 100 days of life under President Donald Trump have been a tumultuous time. While policy proposals from his White House are generally light on details, consumer advocates can use the president's budget proposal as a window into his priorities. And while the continuing resolution that passed last week funded the federal government through September at or near previous levels, President Trump's call for a "good shutdown" in September reveals his ultimate agenda. His own plan, the grandiosely titled "America First: A Budget Blueprint to Make America Great Again," includes $54 billion in cuts to a variety of government programs, many of which would decimate key U.S. consumer protections.

With the president's ill-advised tweets stealing thunder from the need for sound policy decisions, it's easy to forget that consumer protections, and the agencies that enforce them, exist for a very good reason. For example, the official mission of the FDA is to protect the public health "by assuring the safety, efficacy and security of human and veterinary drugs, biological products, medical devices, our nation's food supply, cosmetics and products that emit radiation." Even in this day and age, when government seems to bear the blame for everything, protecting our safety and health remains a good use of taxpayer dollars.

The White House is proposing a major increase in FDA user fees, which are collected every time a new drug or medical device is submitted for review. While increasing these fees might help offset costs, it's unrealistic to think they can make up for the total cuts.

The nearly 15,000-person FDA plays a regular and visible role in our lives, setting policy for review and labeling of foods and drugs in kitchen pantries and medicine cabinets across the country. Its mission is vital to keeping our food and prescription drug supply safe and secure.

The internet has generally made products of all kinds cheaper and more available to consumers, with widespread benefits. On the downside, it has provided a platform for bad actors to prey on unsuspecting customers seeking to stretch their ever-dwindling incomes.

The global market for dietary supplements was $82 billion as of 2013. Many of these supplements containing untested ingredients are sold directly to customers online. Surprisingly, these products can go into the market without any safety, purity or quality testing by the FDA. Margaret Hamburg, FDA commissioner from 2009 to 2015, echoed many when she observed that there is widespread concern about where these products are coming from and what's in them. Consumer Reports found that an estimated 23,000 people annually end up in the emergency room after taking supplements. To fulfill its agency mission, the FDA must be empowered and funded to oversee product integrity for dietary supplements.

The FDA, even at former funding levels, is stretched too thin enforcing laws currently on the books. Unapproved drug products to treat glaucoma and asthma, as well as cosmetic treatments such as illicit Botox, have been seized by FDA enforcement officials. However, the FDA is limited, by statute and funding, in monitoring the vast sea of questionable counterfeit drugs and dietary supplement products online.

Low penalties, combined with the already inadequate FDA budget, don't incentivize the kind of prosecutions needed to stop the flow of illicit products. According to a recent Washington Legal Foundation brief, a number of statutory gaps in the penalty provisions of the Food, Drug and Cosmetic Act (FDCA) pose a risk to American consumers, including unequal treatment of counterfeiting and diversion, as well as differing penalties for certain types of diversion. Combined with chronic underfunding of the agency tasked with enforcement, these statutory gaps hinder regulators and prosecutors from stopping the bad actors.

It's likely that the user fees proposed by the Trump budget would go to fund the expedited approval of new drugs, which, while commendable if done safely, would jeopardize the agency's other duties. President Trump needs to understand the true impact of his proposed budget cuts. If he doesn't, Congress must step up and fill the leadership void.

Ken McEldowney is executive director of Consumer Action, a nonprofit organization working for social change since 1971.


The Ugly: Post #3 on the NNSA’s FY2018 Budget Request – All Things Nuclear

On Tuesday, May 23, the Trump administration released its Fiscal Year 2018 (FY2018) budget request. I am doing a three-part analysis of the National Nuclear Security Administration's budget. That agency, a part of the Department of Energy, is responsible for developing and maintaining US nuclear weapons. Previously we focused on The Good and The Bad, and today we have The Ugly.

The NNSA's FY2018 budget request includes what might seem to be a relatively innocuous statement:

In February 2017, DOD and NNSA representatives agreed to use the term IW1 rather than W78/88-1 LEP to reflect that IW1 replaces capability rather than extending the life of current stockpile systems.

In other words, rather than extending the life of the W78 and W88 warheads via a life extension program (or LEP), the NNSA will develop the IW1 to replace those warheads.

To my mind, that is an admission that the IW1 (short for Interoperable Warhead One) is a new nuclear weapon, as UCS has been saying for quite some time.

The Obama administration was loath to admit as much, arguing that the proposed system, which combines a primary based on one from an existing warhead with a secondary from another warhead, was not a new warhead. That reluctance stemmed from the administration's declaration in its 2010 Nuclear Posture Review (NPR) that the United States would not develop new nuclear warheads or pursue new military capabilities or new missions for nuclear weapons. Declaring the IW1 a new warhead would break that pledge.

That semantic sleight of hand by the Obama team was somewhat ugly: the IW1 is a new warhead. (For a lot more detail on the IW1 and the misguided 3+2 plan of which it is part, see our report Bad Math on New Nuclear Weapons.)

However, what might be coming from the Trump administration is truly ugly.

The fact that the FY2018 NNSA budget admits the IW1 is a new warhead may be a signal that the Trump team, which is doing its own NPR, will eliminate the Obama pledge not to develop new weapons or pursue new military capabilities and missions.

That change would send a clear message to the rest of the world that the United States believes it needs new types of nuclear weapons and new nuclear capabilities for its security. This would further damage the Nuclear Non-Proliferation Treaty (NPT), which is already fraying because the weapon states are not living up to their commitment to eliminate their nuclear weapons. Deep frustration on the part of the non-nuclear weapon states has led to the current negotiations on a treaty to ban nuclear weapons. New US weapons could also damage our efforts to halt North Korea's nuclear program and undermine the agreement with Iran that has massively reduced their program to produce fissile materials for nuclear weapons.

Moreover, a likely corollary of withdrawing that pledge would be to pursue a new type of nuclear weapon, or a new capability. Some options have already been suggested.

Those options are contrary to US security interests. Nuclear weapons are the only threat to the survival of the United States. Given that, and because there will not be a winner in a nuclear war, the US goal must be to reduce the role that these weapons play in security policy until they no longer are a threat to our survival. Continuing to invest in new types of nuclear weapons convinces the rest of the world that the United States will never give up its nuclear weapons, and encourages other nuclear-weapon states to respond in ways that will continue to threaten the United States.

Make no mistake, the United States already has incredibly powerful and reliable nuclear weapons that would deter any nuclear attack on it or its allies, and it will for the foreseeable future.

So the idea that the United States should pursue new types of weapons? That is truly ugly.


Cormorant, Griffon upgrade projects get new lift – Vertical Magazine (press release)

In the weeks before Canada's largest defense and security tradeshow, the Minister of National Defence and a Senate committee gave military helicopter manufacturers, many of whom have seen a sales slump in recent years, reason for optimism.

Midlife upgrade programs for both the CH-146 Griffon transport and tactical helicopter and the CH-149 Cormorant search-and-rescue helicopter have been on the Royal Canadian Air Force (RCAF) project list for several years, but neither have had funding approved to launch into project definition.

In an address on May 3 foreshadowing this week's defense policy review announcement, Minister Harjit Sajjan described the dismal state of military spending and flagged both helicopters as part of a growing list of unfunded equipment and technical capabilities urgently required for the armed forces to meet domestic and international operational demands.

A week later the Senate Standing Committee on National Security and Defence also raised both helicopter projects in a report outlining a plan to reinvest in the military, recommending a Griffon replacement program be prioritized and that the government move forward with a proposal to expand the Cormorant fleet by upgrading the 14 CH-149 aircraft and converting seven VH-71 airframes currently in storage to the same operational capability.

While the RCAF has outlined a limited life-extension project for the CH-146 that would upgrade avionics and some communications systems, it has also assessed whether it might be better to invest in a new platform, bringing the tactical aviation capability on par with the CH-147F Chinook.

The prospect of a new helicopter acquisition program was clearly welcomed by Airbus Defence & Space. Romain Trapp, president of Airbus Helicopters in Canada, led off the company's corporate press briefing at CANSEC on June 1, highlighting the capability of the H145M as an option for the Griffon replacement.

With the rapid introduction of new technologies in its aircraft, Trapp said Airbus' eventual offering would depend on when a request for proposals is issued. But the company has been pushing for an accelerated program, he said, and has provided the RCAF with a recent white paper and customer analysis as well as cost projections.

"We made the business case by showing [the Air Force] that simply by going to a new platform, the Canadian taxpayers would save more than $1 billion 10 years from now," he said.

"Today our current proposal is the H145M, which is a proven platform," he added, noting that the multirole aircraft is ideally suited for the Canadian tactical reconnaissance utility helicopter requirements.

The U.S. Army ordered the UH-72A Lakota, a variant of the H145M, in 2006 as its light utility helicopter and currently operates a fleet of 400. The aircraft is also in service with German special forces, possibly a key consideration in a Canadian procurement given that 427 Special Operations Aviation Squadron also operates the Griffon.

"All deliveries were done on time, on budget, on quality," said Trapp.

Airbus is now investing heavily in autonomous flight technologies and "will soon develop fully autonomous versions of some of our helicopters," he added. "This will allow us to respond to the emerging needs of our defense customers all over the world."

For Leonardo Helicopters (formerly AgustaWestland), increased activity around a Cormorant midlife upgrade program was reason enough to put the band back together. Days before CANSEC, the company announced the reassembly of Team Cormorant, the industry partnership of Leonardo, IMP Aerospace, CAE, Rockwell Collins Canada and GE Canada that delivered the CH-149 in 2000.

Team Cormorant is proposing a modernization project based on the Norwegian All-Weather Search and Rescue Helicopter (NAWSARH) program, which selected the AW101 in 2013 to replace its fleet of Sea King aircraft and is expecting delivery of the first helicopter later this year. The CH-149 is a variant of the AW101 medium-lift helicopter now in service with over a dozen militaries.

The team is also proposing to expand the Cormorant fleet from 14 to 21 aircraft by converting seven VH-71 airframes, airworthy variants of the AW101, that were acquired from the U.S. government in 2011 for spare parts, to the same configuration. The additional aircraft would allow the air force to return the Cormorant to 424 Transport and Rescue Squadron at 8 Wing Trenton, Ontario, which currently operates a fleet of Griffon helicopters.

Leonardo has argued that, with an average of over 5,000 hours on the airframes, all of which are around 16 years of age, and growing concerns about parts obsolescence, an immediate update is required if the RCAF wants to meet its service life target of 2040.

The upgrades would include new cockpit displays, avionics, digital automatic flight control system, aircraft management system, electro-optical surveillance system, and weather radar as well as a new 3,000 horsepower CT7-8E engine.

Leonardo is also offering a new Obstacle Proximity LiDAR System that would provide directional audio and visual warning when the helicopter blades get too close to obstacles, and mobile phone detection technology that would effectively turn the aircraft into a mobile phone cell and allow its onboard system to identify and track a mobile phone within a 25-mile range.

The Cormorant fleet had problems with availability in the early years of the program, but John Ponsonby, managing director of Leonardo Helicopters, said dispatch availability is over 98 percent with the current fleet. "We continue to support IMP and we provide the level of support expected by the customer."

The Air Force has been supportive of the VH-71 conversion proposal, but RCAF commander LGen Mike Hood told Vertical in an interview last November that repair and maintenance costs of the extant fleet would need to be reduced before the air force could move ahead with the plan.

"I believe once we get there, the conditions will be set for me to drive forward with a Cormorant midlife update and I want to see the VH-71s included in that," he said. "But until such time as they can deliver on what the department has asked in the way of reducing cost, I'm a little stuck."

Ponsonby acknowledged the issue and said large strides have been made in recent years to reduce the cost of ownership. "We have committed to a significant program of cost reduction and we have delivered a significant percentage of cost reduction already. We are focused on providing best value, we are taking action, and that action is delivering results."

As part of its options analysis, the Air Force had considered the possibility of replacing the CH-149, but an upgrade program now appears to be the preferred option. Ponsonby believes it's the correct decision.

"Our argument is that we can insert the capabilities you are looking for, and the reliability and cost of ownership are reduced," he said. "You have used this platform for 18 years, it has done absolutely great service, there is nothing better on the market, so a [midlife upgrade] does make sense."


Some JSTARS aircraft could fly into 2034 – Flightglobal

The US Air Force will move ahead with its existing JSTARS recapitalisation strategy, even as a recent report indicates some aircraft in the fleet could fly longer.

In March, the service completed a fuselage widespread fatigue study to determine the service life of individual JSTARS aircraft.

Based on data provided by Boeing, which manufactured the original 707-300 airframe, the programme office determined that the service life of the fuselage is several years longer than previously expected, according to a document obtained by FlightGlobal.

The service will not conduct a service life extension programme (SLEP) on the existing JSTARS fleet, the document states.

The E-8C fleet, which is composed of 16 individual aircraft with varying maintenance issues and track records, was set to phase out from Fiscal 2017 through 2022. But the study's results extended the service life projections from FY2023 through FY2034.

The USAF did not detail how many aircraft in the fleet will be available through 2034. Boeing plans to complete additional studies to assess remaining structural areas, such as the wings.

Still, the USAF does not plan to change its JSTARS recapitalisation strategy given current aircraft availability.

The USAF anticipates a contract award for a new JSTARS platform in FY2018 and plans to reach initial operational capability by the last quarter of FY2024. Due to ongoing delays with maintenance at Northrop Grumman's sustainment facility in Lake Charles, Louisiana, aircraft availability remains low, with 42% of aircraft in the depot today.

"Aircraft availability continues to decrease and sustainment costs are unsupportable," the document states. These two factors were the catalyst for initiating the JSTARS recapitalisation programme.

Unlike the air force's EC-130H Compass Call cross-deck effort, which will move old mission systems onto a new platform, the JSTARS recapitalisation is meant to overhaul the entire weapon system, USAF chief of staff Gen David Goldfein told reporters following a 6 June Congressional hearing. The USAF examines extending aircraft service life through rigorous testing, which helps the service identify items that will likely break and should be funded in the future, Goldfein says.

"We only fund against what we predict, and then you've seen in the past all of a sudden a part on an F-15C comes out and we haven't manufactured that in the last five or 10 years," he says. "So the reality is, we have to look at how we extend the weapon system, but it does not change the strategy at all about how we recapitalize to get into a new aircraft."
