Astronomy C12 - 2014-12-02: The Big Bang; Origin of Elements
Astronomy C12, 001 - Fall 2014 The Planets - Geoffrey W. Marcy All Rights Reserved.
By: UCBerkeley
Original post:
Astronomy C12 - 2014-12-02: The Big Bang; Origin of Elements - Video
03.12.2014 - (idw) Julius-Maximilians-Universität Würzburg
The astrophysicists Thomas Bretz and Daniela Dorner have developed a novel camera technology which for the first time allows sources of cosmic gamma radiation to be observed without interruption, even when the moon is shining brightly. The scientists are now receiving an award in recognition of their work. The Würzburg astrophysics department is thrilled: On 27 November, the Deutsche Physikalische Gesellschaft (DPG) announced that the 7,500 euro Gustav Hertz Award would be shared by scientist Daniela Dorner (35) and her colleague Thomas Bretz (40), who spent several years researching at the University of Würzburg. The awards will be presented in Berlin in March 2015.
The DPG commends the laureates' "original and seminal impetus" to astroparticle physics with their contributions to enhancing Cherenkov telescopes. Bretz and Dorner achieved the success during their work for FACT (First Geiger-Mode Avalanche Photodiode Cherenkov Telescope), a joint project of Germany and Switzerland, which also involves scientists from TU Dortmund, ETH Zurich and the University of Geneva.
The challenge
So far, Cherenkov telescopes for observing cosmic gamma radiation have been based on detecting single photons by means of so-called photomultiplier tubes. Requiring high voltage in the kilovolt range, these photosensors are difficult to operate with outdoor telescopes. Moreover, they overload in the presence of bright moonlight and have to be switched off, which regularly results in data gaps.
But non-stop observation is crucial, particularly for variable astronomical sources. Active galactic nuclei in particular exhibit extreme variations in brightness, observations of which are crucial for understanding the physical processes taking place in the vicinity of black holes. Making progress in this field called for highly sensitive photosensors that require little electricity and no high-voltage supply while featuring nanosecond time resolution.
Putting an unusual idea into practice
To overcome this challenge, the scientists used an idea of the late physicist Eckhart Lorenz of the Max Planck Institute for Physics in Munich, namely to develop a camera with silicon-based semiconducting photosensors for a Cherenkov telescope. This idea seemed unsuitable at first and many experts advised against it.
But Thomas Bretz and Daniela Dorner nevertheless dared to take the FACT collaboration one step further: They designed a camera, which was built at ETH Zurich, and installed it in a telescope at the Roque de los Muchachos Observatory on the Canary Island of La Palma, 2,200 metres above sea level.
The laureates' feat
More:
Stephen Hawking: 'AI could spell end of the human race'
Subscribe to BBC News HERE http://bit.ly/1rbfUog Professor Stephen Hawking has told the BBC that artificial intelligence could spell the end for the human race. In an interview after the launch...
By: BBC News
Read the original here:
Stephen Hawking: 'AI could spell end of the human race' - Video
Stephen Hawking's Warning To Humanity
In a new interview with BBC, Stephen Hawking warns us about why he fears artificial intelligence could be the demise of humanity. Buy some awesomeness for yourself! http://www.forhumanpeoples.co.
By: SourceFed
Read more here:
On Artificial Intelligence, Stephen Hawking, And Racism
On Artificial Intelligence, Stephen Hawking, And Racism. Stephen Hawking recently...
By: zennie62
Read the original:
On Artificial Intelligence, Stephen Hawking, And Racism - Video
By Tanya Lewis
Stephen Hawking recently began using a speech synthesizer system that uses artificial intelligence to predict words he might use.(Flickr/NASA HQ PHOTO.)
The eminent British physicist Stephen Hawking warns that the development of intelligent machines could pose a major threat to humanity.
"The development of full artificial intelligence (AI) could spell the end of the human race," Hawking told the BBC.
The famed scientist's warnings about AI came in response to a question about his new voice system. Hawking has a form of the progressive neurological disease called amyotrophic lateral sclerosis (ALS or Lou Gehrig's disease), and uses a voice synthesizer to communicate. Recently, he has been using a new system that employs artificial intelligence. Developed in part by the British company Swiftkey, the new system learns how Hawking thinks and suggests words he might want to use next, according to the BBC. [Super-Intelligent Machines: 7 Robotic Futures]
Humanity's biggest threat?
Fears about developing intelligent machines go back centuries. More recent pop culture is rife with depictions of machines taking over, from the computer HAL in Stanley Kubrick's "2001: A Space Odyssey" to Arnold Schwarzenegger's character in "The Terminator" films.
Inventor and futurist Ray Kurzweil, director of engineering at Google, refers to the point in time when machine intelligence surpasses human intelligence as "the singularity," which he predicts could come as early as 2045. Other experts say such a day is a long way off.
It's not the first time Hawking has warned about the potential dangers of artificial intelligence. In April, Hawking penned an op-ed for The Huffington Post with well-known physicists Max Tegmark and Frank Wilczek of MIT, and computer scientist Stuart Russell of the University of California, Berkeley, forecasting that the creation of AI will be "the biggest event in human history." Unfortunately, it may also be the last, the scientists wrote.
And they're not alone: billionaire entrepreneur Elon Musk called artificial intelligence "our biggest existential threat." The CEO of the spaceflight company SpaceX and the electric car company Tesla Motors told an audience at MIT that humanity needs to be "very careful" with AI, and he called for national and international oversight of the field.
More here:
Stephen Hawking: Artificial intelligence could end human ...
Excerpt from:
Stephen Hawking has warned that artificial intelligence could one day "spell the end of the human race."
Speaking to the BBC, the eminent theoretical physicist said the artificial intelligence developed so far has been useful but expressed fears of creating something that far exceeded human abilities.
"It would take off on its own, and re-design itself at an ever increasing rate," Hawking said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
Hawking, who has the motor neuron disease ALS, spoke using a new system developed by Intel and Swiftkey. Their technology, already in use in a smartphone keyboard app, learns how the professor thinks and then proposes words he might want to use next.
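Neither Intel nor SwiftKey has published the details of Hawking's system, but the core idea of suggesting likely next words from a user's own past text can be sketched with a toy bigram model. All class and method names below are illustrative, not SwiftKey's actual API:

```python
from collections import Counter, defaultdict

class BigramPredictor:
    """Toy next-word predictor: counts adjacent word pairs in a user's
    past text and suggests the most frequent followers of the last word."""

    def __init__(self):
        self.followers = defaultdict(Counter)

    def train(self, text):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.followers[prev][nxt] += 1

    def suggest(self, last_word, k=3):
        # Most common words seen after `last_word`, best first.
        return [w for w, _ in self.followers[last_word.lower()].most_common(k)]

predictor = BigramPredictor()
predictor.train("the universe is expanding and the universe is vast")
print(predictor.suggest("universe"))  # ['is']
print(predictor.suggest("the"))       # ['universe']
```

A production system like SwiftKey's uses far richer models, but the principle is the same: personalization comes from training on the user's own writing.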
"I expect it will speed up my writing considerably," he said.
Hawking praised the "primitive forms" of artificial intelligence already in use today, though he eschewed drawing a connection to the machine learning that is required for the predictive capabilities of his speaking device.
Hawking's comments were similar to those made recently by SpaceX and Tesla founder Elon Musk, who called AI a threat to humanity.
"With artificial intelligence, we are summoning the demon," Musk said during an October centennial celebration of the MIT Aeronautics and Astronautics Department. Musk had earlier sent a tweet saying that AI is "potentially more dangerous than nukes."
More broadly, Hawking told the BBC that he saw plenty of benefits from the Internet, but cautioned that it, too, had a dark side.
He called the Internet a "command center for criminals and terrorists," adding, "More must be done by the Internet companies to counter the threat, but the difficulty is to do this without sacrificing freedom and privacy."
See the article here:
Stephen Hawking warns artificial intelligence could be threat to human race
Barely a month after Elon Musk called artificial intelligence a threat to humanity, another voice, a much bigger voice in the scientific world, warned that the technology could end mankind.
Stephen Hawking, the renowned physicist, cosmologist and author, in an interview with the BBC this week, said "the development of full artificial intelligence could spell the end of the human race."
The BBC noted that Hawking said the state of artificial intelligence (AI) today holds no threat, but he is concerned about scientists in the future creating technology that can surpass humans in terms of both intelligence and physical strength.
"It would take off on its own, and re-design itself at an ever-increasing rate," Hawking said. "Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."
Hawking's comments closely follow those made by high-tech entrepreneur Musk, who raised controversy in late October when he warned an audience at MIT about the dangers behind AI research.
"I think we should be very careful about artificial intelligence," said Musk, CEO of electric car maker Tesla Motors, and CEO and co-founder of the commercial space flight company SpaceX. "If I were to guess at what our biggest existential threat is, it's probably that... With artificial intelligence, we are summoning the demon. In all those stories with the guy with the pentagram and the holy water, and he's sure he can control the demon. It doesn't work out."
Musk, who tweeted this past summer that AI is "potentially more dangerous than nukes," also told the MIT audience that the industry needs national and international oversight.
Musk's comments raised discussion about the state of artificial intelligence, which today is more about robotic vacuum cleaners than Terminator-like robots that shoot people and take over the world.
Yaser Abu-Mostafa, professor of electrical engineering and computer science at the California Institute of Technology, said he was a little surprised that AI is getting so much negative attention since the fearful talk hasn't been preceded by the creation of a new, potentially scary technology.
Read the original:
December 3, 2014 in Nation/World
Associated Press
LONDON - Physicist Stephen Hawking has warned that the rise of artificial intelligence could see the human race become extinct.
In an interview with the BBC, the scientist said that while primitive forms of artificial intelligence have proved useful, if the technology is developed to a level that can surpass humans, it could spell the end of the human race.
He said that advanced artificial intelligence would take off on its own, and redesign itself at an ever increasing rate.
Human biological evolution would not be able to compete and would be superseded, he said in the interview Tuesday.
View original post here:
Stephen Hawking: Artificial intelligence could end mankind - Wed, 03 Dec 2014 PST
It would rapidly become all-powerful, and we are as capable of understanding what a machine like that could do as a worm is of comprehending Stephen Hawking's immense intellect.
How close are we to that basic thinking machine? Simple artificial intelligence is already being harnessed to design electrical circuits that we don't fully understand. Some antenna designs produced by genetic algorithms, for example, work better than those conceived by humans, and we aren't always sure why, because they're too complex.
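The mechanism behind those evolved antenna designs can be sketched in a few lines. The fitness function below is a made-up stand-in (matching a target bit string); a real antenna design would instead be scored by an electromagnetic simulator:

```python
import random

random.seed(0)

TARGET = [1] * 20  # stand-in "ideal design"; real fitness would come
                   # from simulating each candidate antenna's performance

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Flip each bit with small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Splice two parent designs at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # keep the fittest designs (elitism)
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(20)]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # typically 20, a perfect match under this toy fitness
```

The unsettling point the article makes falls out of this structure: selection only rewards the score, so the winning design can work well for reasons no human ever articulated.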
Combine this software intelligence with robot bodies and a malevolent motivation and you have a gory science fiction film. But because every aspect of our lives is controlled by computers, such a super-intelligence wouldn't need arms and legs to make life unpleasant.
You can argue that we could do AI experiments on computers isolated from sensitive systems, but we don't seem to be able to keep human hackers in check, so why assume we can outwit thinking machines? You can also argue that AI may prove to be friendly, but if they treat us the way that we treat less intelligent creatures, then we're in a world of trouble.
There were fears that the first atomic bomb tests could ignite the atmosphere, burning alive everyone on Earth: man, woman and child. Some believed that the Large Hadron Collider would create a black hole when first booted up which would consume the Earth. We got away with it, thanks to the fact that both suggestions were hysterical nonsense. But what's to say that one day we won't attempt an experiment which actually does have apocalyptic results?
A decade ago they seemed like distant sci-fi to most people, but we're all familiar with 3D printers now: you can buy them on Amazon. Next-day delivery.
We're also creating 3D printers which can replicate themselves by printing the component parts for a second machine.
Imagine a machine capable of doing this which is not only microscopically small, but nanoscopically small. So small that it can stack atoms together to make molecules. This could lead to all sorts of advances in manufacturing and medicine: inject a few thousand into a patient and they'll dissolve a tumour into harmless saline. Millions could float in your car's engine oil, replacing worn metal on vital components and removing the need for human maintenance.
But what if we get it wrong? A single typo in the source code and, instead of removing a cancerous lump in a patient, these medi-bots could begin to indiscriminately churn out copies of themselves until the patient is converted into a pile of billions of the machines. Then the hospital, too, and the city it's in. Finally the whole planet.
This is the grey goo scenario. There would be no way to stop it.
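The alarming part of the scenario is plain arithmetic: unchecked self-replication is exponential, so planetary scale arrives after surprisingly few generations. The figure for atoms in the Earth below is a rough order-of-magnitude assumption, used only for illustration:

```python
# Each replicator builds one copy per generation, so the count doubles.
# 1.3e50 is a rough order-of-magnitude estimate of atoms in the Earth.
bots = 1
generations = 0
ATOMS_IN_EARTH = 1.3e50

while bots < ATOMS_IN_EARTH:
    bots *= 2
    generations += 1

print(generations)  # 167 -- fewer than 200 doublings exceed the planet's atoms
```

At, say, one copy every fifteen minutes (again an assumption), 167 doublings would take under two days.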
View post:
Zeonics Systech Defence Aerospace Engineers Private Limited
We are engaged in manufacturing, exporting, importing and supplying a comprehensive assortment of Power Transformer, Power Generator, Ignition System, Electronic Components, High Voltage ...
By: Business Video
Read more from the original source:
Zeonics Systech Defence & Aerospace Engineers Private Limited - Video
Cyrus Co. Genetic Engineering Episode 1(E.G.C.)
This facility was funded by Coronia Corp. No hate mail.
By: E.G.C. Cyrus and Friends
See the original post:
Cyrus Co. Genetic Engineering Episode 1(E.G.C.) - Video
By Isaac Fletcher, contributing writer, Food Online
J.R. Simplot's Innate potato may provide potential health benefits through genetic engineering, but uncertainty over long-term risks and the degree of benefits raises some concerns
The U.S. Department of Agriculture (USDA) has recently approved for commercial planting a potato that has been genetically engineered to reduce the amount of a potentially harmful chemical that appears in French fries and potato chips. When potatoes are fried, a chemical called acrylamide, which is suspected of causing cancer, is produced. The genetic engineering involves altering the potato's DNA so that when the potato is fried, the amount of acrylamide produced is reduced. Additionally, the genetically-engineered potato is resistant to bruising. This will help potato growers and processors lower the instances of damage during shipping and storage, leading to fewer occurrences of lost value and unusable product. The potatoes have been developed by the J.R. Simplot Company of Boise, Idaho, a major supplier of McDonald's frozen French fries.
Rather than solely providing benefit to farmers and producers, the potato is among a new wave of genetically-engineered crops designed to provide benefits to consumers. However, with many consumers calling into question the safety of genetically-modified foods, the new potato may face some challenges in winning over consumer approval. Such consumer concerns raise questions about whether the potatoes will be used by various food companies and restaurant chains.
In the 1990s, genetically-modified potatoes were introduced by Monsanto in an effort to provide resistance against the Colorado potato beetle. However, the market crumbled when major buyers of potatoes instructed suppliers not to grow them due to fears over consumer resistance. The new potato from Simplot, though, has some advantages that may help it weather the tide of consumer uncertainty.
First of all, the potato aims to provide potential health benefits to consumers rather than just cost savings to suppliers and producers. Furthermore, Simplot is a well-established power in the potato industry and has likely been laying the foundation for product acceptance among its customers. The other strength of Simplot's potato is that, unlike many other genetically-engineered crops, it does not contain genes from any other species; instead, it contains fragments of potato DNA that serve to mute four of the potato's own genes involved in the production of particular enzymes. For this reason, Simplot has chosen to call its product the Innate potato, an innocuous name that may help win over consumer acceptance. Haven Baker, head of potato development at Simplot, explains, "We are trying to use genes from the potato plant back into the potato plant. We believe there's some more comfort in that."
However, that is not to say that the Innate potato will not face roadblocks along the way. There are some questions over the long-term effects of this kind of engineering and, according to Doug Gurian-Sherman, a plant pathologist and senior scientist at the Center for Food Safety, much about RNA interference, the technique used to mute the genes, is not fully understood. Gurian-Sherman argues, "We think this is a really premature approval of a technology that is not being adequately regulated." Additionally, the benefits of reducing acrylamide levels by 50 to 75 percent are still unclear.
See the original post:
USDA Gives Genetically-Engineered Potatoes The Thumbs Up
PUBLIC RELEASE DATE:
2-Dec-2014
Contact: Kathryn Ryan kryan@liebertpub.com 914-740-2100 Mary Ann Liebert, Inc./Genetic Engineering News @LiebertOnline
New Rochelle, NY, December 2, 2014--A study of active duty U.S. Marines who suffered a recent or previous concussion(s) examined whether persistent post-concussive symptoms (PPCS) and lingering effects on cognitive function are due to concussion-related brain trauma or emotional distress. The results are different for a recent concussion compared to a history of multiple concussions, according to the study published in Journal of Neurotrauma, a peer-reviewed journal from Mary Ann Liebert, Inc., publishers. The article is available Open Access on the Journal of Neurotrauma website at http://online.liebertpub.com/doi/full/10.1089/neu.2014.3363.
James Spira, U.S. Department of Veterans Affairs and University of Hawaii (Honolulu, HI), Corinna Lathan, AnthroTronix, Inc. (Silver Spring, MD), Joseph Bleiberg, Walter Reed National Military Medical Center (Bethesda, MD), and Jack Tsao, U.S. Navy Bureau of Medicine and Surgery (Falls Church, VA), assessed the effects of concussion on persistent symptoms, independent of deployment history, combat exposure, and symptoms of post-traumatic stress disorder and depression. They compare the results for persons with a recent concussion, or who had ever had a concussion, to those who had more than one lifetime concussion in the article "The Impact of Multiple Concussions on Emotional Distress, Post-Concussive Symptoms, and Neurocognitive Functioning in Active Duty United States Marines Independent of Combat Exposure or Emotional Distress".
John T. Povlishock, PhD, Editor-in-Chief of Journal of Neurotrauma and Professor, Medical College of Virginia Campus of Virginia Commonwealth University, Richmond, notes that "This study by Spira and colleagues represents an important contribution to our understanding of the negative impact of multiple concussions in a relatively large military population sustaining both deployment and non-deployment related trauma. The consistent observation that multiple concussive injuries are associated with worse emotional and post-concussive symptoms is an extremely important finding that must guide our evaluation of individuals, in both the military and civilian settings, who have sustained multiple concussive injuries. While the authors acknowledge some limitations of the current work and the need for future research to follow a similar cohort in terms of the time course and causality of the symptoms associated with concussion, overall this well done study adds significantly to our increased understanding of the adverse consequences of repetitive concussive/mild traumatic brain injury."
###
About the Journal
Journal of Neurotrauma is an authoritative peer-reviewed journal published 24 times per year in print and online that focuses on the latest advances in the clinical and laboratory investigation of traumatic brain and spinal cord injury. Emphasis is on the basic pathobiology of injury to the nervous system, and the papers and reviews evaluate preclinical and clinical trials targeted at improving the early management and long-term care and recovery of patients with traumatic brain injury. Journal of Neurotrauma is the official journal of the National Neurotrauma Society and the International Neurotrauma Society. Complete tables of content and a sample issue may be viewed on the Journal of Neurotrauma website at http://www.liebertpub.com/neu.
About the Publisher
More here:
Do concussions have lingering cognitive, physical, and emotional effects?
Jack Crawford (character)
Jack Crawford is a fictional character who appears in the Hannibal Lecter series of books by Thomas Harris, in which Crawford is the Agent-in-Charge of the B...
By: Audiopedia
The rest is here:
Jack Crawford (character) - Video
Will you be taking a brain-scan for your next job interview? Jim Heskett explores the emerging world of neuromanagement and what it means for both organizations and employees. What do YOU think?
For years, behavioral scientists have been telling us that they have a great deal to contribute to decision theory and management. Their work most applicable to business, however, was often overshadowed by that of economists. But as the assumptions of rational behavior and "perfect information" that formed the basis of much of the work in economics concerning markets came into question, behavioral science not based on those assumptions gained ascendance.
At first, the contributions from behavioral science were based on laboratory tests, too many of them involving handy college students. They helped describe biases (at least among those being tested). For example, we learned that people tend to devalue long-term returns in relation to short-term gains. They tend not to buy and sell according to self-set rules.
A person willing to pay up to $200 for a ticket to a sporting event is not, once he owns it, willing to sell it at any price above $200, counter to what economists would predict. Behavioral science regards it as perfectly reasonable behavior, explained by what behavioral scientists call the "endowment effect." It is one of many behaviors that help explain why markets are not always "rational," why they may not be a reflection of perfect information, why people buy high and sell low.
A recent study of "midlife northeast American adults" raises questions about whether we are entering the next stage in what might be termed an era of neuromanagement. In it, a group of researchers claim to have found that brain structure and the density of cells in the right posterior parietal cortex are associated with willingness to take risks. They found that participants with higher gray matter volume in this region exhibited less risk aversion. The results "identify what might be considered the first stable biomarker for financial risk-attitude," according to the authors.
The study is a distant cousin to those that have located the side of the brain associated with creativity and the portion of the brain that is stimulated, for example, by gambling or music. Assuming: (1) there will be more research efforts combining the results of brain scans with behavioral exercises, and (2) findings are proven to be more valid than, say, those associated with phrenology, it raises some interesting questions about the future.
Is it possible that some organizations selecting and hiring talent may, in the future, require a brain scan, just as some require psychological testing today? Is hiring on the basis of brain structure much different than hiring, for example, on the basis of height or other characteristics required to perform certain jobs? Or does it raise too many ethical questions? For example, who will own the data? How will it be used? How would we apply the results?
Are we entering the next stage in an age of neuromanagement? What will it look like? What do you think?
See the original post:
Are we entering an era of Neuromanagement?
We've all used behavioral interview questions: questions that ask job candidates to recount a past experience so we can assess their likely future performance. In theory, behavioral interview questions should work just fine (because past behavior is usually a decent predictor of future behavior).
But most interviewers ask behavioral questions in a way that gives away the correct answer and thus ruins the questions' effectiveness.
Here are some pretty typical behavioral interview questions:
You probably noticed that all of these questions ask the candidate to recount a time when they successfully did something. The candidate is asked about times they adapted to a difficult situation, balanced competing priorities, made their job more interesting and successfully persuaded someone. And that leads us to the flaw in these questions.
The flaw in behavioral interview questions
These behavioral interview questions make very clear that the candidate is supposed to share a success story about adapting, balancing, persuading, etc. No candidate in their right mind would answer these questions by saying, "I'm terrible at persuading people, and my boss is a jerk who never listens to me anyway." Or, "I'm constantly overwhelmed by competing priorities, and I can't live like that."
These questions give away the right answers, cuing candidates to share success stories and avoid examples of failure. But how are interviewers supposed to tell good from bad candidates if everyone shares only success stories? Wouldn't you rather change the question so that candidates feel free to tell you about all the times they couldn't balance competing priorities? Or failed to persuade people? Or couldn't adapt to a difficult situation?
Let's take the question "Tell me about a time when you were bored on the job and what you did to make the job more interesting." Because the question gives away the correct answer (talk about going from bored to interested), anyone who answers is going to say something like "here's what I did to make the job more interesting, and I grew professionally, and I was so enriched," etc.
But now, imagine that you tweaked the question to not divulge the answer and you asked, "Could you tell me about a time when you were bored on the job?" Because you're not giving away the correct answer, you're going to hear a wide range of responses.
Some candidates (people who are problem bringers in their current job) are going to say things like "OMG, that job was sooo boring and I couldn't wait to quit" and "I was bored, but hey, I needed the money." Answers like that are a great gift because they immediately tell you not to hire that candidate. And those answers make your job as interviewer much easier because they help you weed out the weaker candidates.
See more here:
The Hidden Flaw In Behavioral Interview Questions
A crew aboard the space shuttle Endeavour successfully repaired the Hubble telescope in December of 1993.
At their 360-mile-high rendezvous, Endeavour's crew pulled the telescope onto a platform in the space shuttle's open cargo bay.
There, they attached new stabilizing gyroscopes necessary to guide the telescope, replaced its solar panels and gave it a new primary camera.
The wide-field planetary camera was responsible for about half of the Hubble's observations.
It had been sending unfocused images to Earth due to a flaw in the telescope's primary mirror.
Hubble, which had been launched into space in 1990, had only been able to transmit bright images within 4 billion light-years rather than the optimal 10 to 15 billion.
The flaw had limited the observations of astronomers investigating theories of an expanding universe and the existence of black holes.
Read more here:
This Day in History - December 2, 1993 - Hubble Repair Shuttle Mission Launched