NASA's Kepler Spots Thousands Of Extreme 'White Light' Stellar Flares

New data from NASA's Kepler space telescope is giving astronomers a glimpse of potentially catastrophic flaring in a solar-type star roughly 300 light-years away.

The observations detail some of the largest flaring events ever detected from a fully mature G spectral-type star, known for now by its Kepler Input Catalog number, KIC 11551430. Flaring from the star is several thousand times stronger than the Carrington Event, a September 1859 solar superflare, and hundreds of times stronger than most of our Sun's X-class flares (the most powerful solar flares yet classified).

"We are counting thousands of white light flares from KIC 11551430 in a range from 10 to 10,000 times bigger than the biggest flares produced by our own Sun," Rachel Osten, an astronomer at the Space Telescope Science Institute and the team leader on the Kepler survey of this star, told Forbes.

"When you count and plot these really energetic stellar flares," said Osten, "you expect to have more and more energetic flares happening less and less frequently." The fact that we see a limit on the flare energies for these stars, Osten says, "sort of confirms" that these flares get their energy from starspots, or magnetic fields poking through the stellar surface.
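The statistics Osten describes can be sketched as a power-law frequency distribution with an upper energy cutoff set by the finite magnetic energy available in starspots. The following is an illustrative simulation only, with arbitrary numbers rather than the survey's actual fit:

```python
import random

# Illustrative sketch: flare energies drawn from a power law dN/dE ~ E^-alpha,
# truncated at a maximum energy (the "limit on the flare energies" in the text).
# alpha = 2 and the cutoff value are assumptions for demonstration.
ALPHA = 2.0
E_MIN = 1.0        # energies in units of a large solar flare
E_MAX = 10_000.0   # upper limit imposed by finite starspot energy

def sample_flare_energy(rng):
    """Inverse-transform sampling of a truncated power law (alpha != 1)."""
    u = rng.random()
    a = 1.0 - ALPHA
    return (E_MIN**a + u * (E_MAX**a - E_MIN**a)) ** (1.0 / a)

rng = random.Random(42)
energies = [sample_flare_energy(rng) for _ in range(100_000)]

# More energetic flares happen less and less frequently:
for threshold in (1, 10, 100, 1000):
    n = sum(e >= threshold for e in energies)
    print(f"flares with E >= {threshold:>4}: {n}")
```

Counting the simulated flares above increasing thresholds reproduces the qualitative behavior in the quote: each factor-of-ten step in energy cuts the number of flares by roughly a factor of ten, and nothing appears above the cutoff.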

A major solar eruption is shown in progress October 28, 2003. (Photo by Solar & Heliospheric Observatory/NASA via Getty Images)

In the mid-19th century, X-ray measurements of the Carrington Event weren't yet available. But because the superflare was associated with spectacular auroras on Earth, Osten says the event was likely coupled with a coronal mass ejection (or CME), a magnetized plasma cloud streaming high-energy accelerated particles at thousands of kilometers per second.

Osten says our own Sun might still be capable of producing something slightly larger than the Carrington Event, which, at the time, sent the new technology of the telegraph into a tailspin.

But in its 4.5 billion year history, has the Sun ever produced a flare 10,000 times larger than the Carrington Event?

"Almost certainly, yes," said Osten. "During its first hundred million years, the Sun was very active."

Osten says a close binary stellar companion, in which two stars gravitationally interact, might explain why KIC 11551430, located in the bright constellation of Cygnus, is so active. She says that when two stars are that close, tidal forces couple their rotation and orbital periods. As a result, a star with a close binary companion will rotate much faster than it would as a single star.

Read more here:

NASA's Kepler Spots Thousands Of Extreme 'White Light' Stellar Flares

YTPMVMAD Artificial Intelligence Slam Tasmanian x Mega Tasmanian – Video


YTPMVMAD Artificial Intelligence Slam Tasmanian x Mega Tasmanian
BGM: Artificial Intelligence Bomb. Time taken: 8 hours 26 minutes. Here is Sergio Llovera's challenge; it was worth doing, even though I was getting tired by the seventh hour of progress...

By: Mega TasmanianR-3TZ_49

Read the rest here:

YTPMVMAD Artificial Intelligence Slam Tasmanian x Mega Tasmanian - Video

Apple co-founder on artificial intelligence: The future is scary and very bad for people

The "Super Rich Technologists Making Dire Predictions About Artificial Intelligence" club gained another fear-mongering member this week: Apple co-founder Steve Wozniak.

In an interview with the Australian Financial Review, Wozniak joined original club members Bill Gates, Stephen Hawking and Elon Musk by making his own casually apocalyptic warning about machines superseding the human race.

"Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people," Wozniak said. "If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently."

[Bill Gates on dangers of artificial intelligence: I don't understand why some people are not concerned]

Doling out paralyzing chunks of fear like gumdrops to sweet-toothed children on Halloween, Woz continued: "Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don't know about that. But when I got that thinking in my head about if I'm going to be treated in the future as a pet to these smart machines, well, I'm going to treat my own pet dog really nice."

Seriously? Should we even get up tomorrow morning, or just order pizza, log onto Netflix and wait until we find ourselves looking through the bars of a dog crate? Help me out here, man!

Wozniak's warning seemed to follow the exact same story arc as Season 1, Episode 2 of Adult Swim's "Rick and Morty." Not accusing him of apocalyptic plagiarism or anything; just noting.

For what it's worth, Wozniak did outline a scenario by which super-machines will be stopped in their human-enslaving tracks. Citing Moore's law -- "the pattern whereby computer processing speeds double every two years" -- Wozniak pointed out that the silicon transistors that allow processing speeds to increase as they shrink will eventually reach the size of an atom, according to the Financial Review.

"Any smaller than that, and scientists will need to figure out how to manipulate subatomic particles, a field commonly referred to as quantum computing, which has not yet been cracked," Quartz notes.
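The limit Wozniak cites is simple arithmetic: if feature sizes halve on a fixed cadence, they reach atomic scale within a couple of decades. A rough back-of-the-envelope sketch, assuming a hypothetical 14 nm process in 2015 and a halving every two years (one loose reading of Moore's law; real process scaling is slower and messier):

```python
# Illustrative arithmetic only: halve the transistor feature size every
# two years and ask when it reaches the rough ~0.2 nm scale of a
# silicon atom. Starting point and cadence are assumptions, not data.
feature_nm = 14.0
year = 2015
ATOM_NM = 0.2  # approximate diameter of a silicon atom

while feature_nm > ATOM_NM:
    feature_nm /= 2.0
    year += 2

print(f"atomic scale reached around {year} (~{feature_nm:.3f} nm)")
```

Under these toy assumptions the atomic limit arrives around the late 2020s, which is why the passage treats it as a near-term ceiling rather than a distant one.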

Wozniak's predictions represent a bit of a turnaround, the Financial Review pointed out. While he previously rejected the predictions of futurists such as the pill-popping Ray Kurzweil, who argued that super machines will outpace human intelligence within several decades, Wozniak told the Financial Review that he came around after he realized the prognostication was coming true.

Read the original here:

Apple co-founder on artificial intelligence: The future is scary and very bad for people

Innovations: Elon Musk, Neil deGrasse Tyson laugh about artificial intelligence turning the human race into its pet …

Elon Musk has already ignited a debate over the dangers of artificial intelligence. The chief executive of Tesla and SpaceX has called it humanity's greatest threat, and something even more dangerous than nuclear weapons.

Musk publicly hasn't offered a lot of detail about why he's concerned, and what could go wrong. That changed in an interview with scientist Neil deGrasse Tyson, posted Sunday.

Musk's fears lie with a subset of artificial intelligence called superintelligence. It's defined by Nick Bostrom, author of the highly cited book Superintelligence, as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest.

Musk isn't worried about simpler forms of artificial intelligence, such as a driverless car or a smart air-conditioning unit. The danger is when a machine can rapidly educate itself, as Musk explained:

"If there was a very deep digital superintelligence that was created that could go into rapid recursive self-improvement in a non-algorithmic way, it could reprogram itself to be smarter and iterate very quickly, and do that 24 hours a day on millions of computers, well..."

"Then that's all she wrote," interjected Tyson with a chuckle.

"That's all she wrote," Musk answered. "I mean, we won't be like a pet Labrador if we're lucky."

"A pet Lab," laughed Tyson.

"I have a pet Labrador, by the way," Musk said.

"We'll be their pets," Tyson said.

Read the rest here:

Innovations: Elon Musk, Neil deGrasse Tyson laugh about artificial intelligence turning the human race into its pet ...

Artificial intelligence systems more apt to fail than to destroy

by David Stauth

The most realistic risks posed by artificial intelligence are basic mistakes, breakdowns and cyberattacks, an expert in the field says, more so than machines that become super-powerful, run amok and try to destroy the human race.

Thomas Dietterich, president of the Association for the Advancement of Artificial Intelligence and a distinguished professor of computer science at Oregon State University, said that the recent contribution of $10 million by Elon Musk to the Future of Life Institute will help support some important and needed efforts to ensure AI safety.

But the real risks may not be as dramatic as some people visualize, he said.

"For a long time the risks of artificial intelligence have mostly been discussed in a few small, academic circles, and now they are getting some long-overdue attention," Dietterich said. "That attention, and funding to support it, is a very important step."

Dietterich's perspective on the problems with AI, however, is a little more pedestrian than most: not so much that it will overwhelm humanity, but that, like most complex engineered systems, it may not always work.

"We're now talking about doing some pretty difficult and exciting things with AI, such as automobiles that drive themselves, or robots that can effect rescues or operate weapons," Dietterich said. "These are high-stakes tasks that will depend on enormously complex algorithms.

"The biggest risk is that those algorithms may not always work," he added. "We need to be conscious of this risk and create systems that can still function safely even when the AI components commit errors."

Dietterich said he considers machines becoming self-aware and trying to exterminate humans to be more science fiction than scientific fact. But to the extent that computer systems are given increasingly dangerous tasks, and asked to learn from and interpret their experiences, he says they may simply make mistakes.

"Computer systems can already beat humans at chess, but that doesn't mean they can't make a wrong move," he said. "They can reason, but that doesn't mean they always get the right answer. And they may be powerful, but that's not the same thing as saying they will develop superpowers."

Read more:

Artificial intelligence systems more apt to fail than to destroy

The AI Resurgence: Why Now?

Artificial Intelligence (AI) has been enjoying a major resurgence in recent months, and for some seasoned professionals who have been in the AI industry since the 1980s, it feels like déjà vu all over again.

AI, being a loosely defined collection of techniques inspired by natural intelligence, does have a mystic aspect to it. After all, we culturally assign positive value to all things smart, and so we naturally expect any system imbued with AI to be good, or it is not AI. When AI works, it is only doing what it is supposed to do, no matter how complex the algorithm used to enable it; but when it fails to work, even if what was asked of it is impractical or out of scope, it is often not considered intelligent anymore. Just think of your personal assistant.

For these reasons, AI has typically gone through cycles of promise, leading to investment, and then under-delivery, due to the expectation problem noted above, which has inevitably led to a tapering off of the funding.

This time, however, the scale and scope of this surge in attention to AI is much larger than before. During the latter half of 2014, there was an injection of nearly half a billion dollars into the AI industry.

What are the drivers behind this?

For starters, the speed, availability, and sheer scale of the infrastructure have enabled bolder algorithms to tackle more ambitious problems. Not only is the hardware faster, sometimes augmented by specialized arrays of processors (e.g., GPUs), it is also available in the shape of cloud services. What used to be run in specialized labs with access to supercomputers can now be deployed to the cloud at a fraction of the cost and much more easily. This has democratized access to the hardware platforms needed to run AI, enabling a proliferation of start-ups.

Furthermore, new emerging open source technologies, such as Hadoop, allow speedier development of scaled AI technologies applied to large and distributed data sets.

A combination of other events has helped AI gain the critical mass necessary for it to become the center of attention for technology investment. Larger players are investing heavily in various AI technologies. These investments go beyond simple R&D extensions of existing products, and are often quite strategic in nature. Take, for example, IBM's scale of investment in Watson, or Google's investments in driverless cars, Deep Learning (i.e., DeepMind), and even quantum computing, which promises to significantly improve the efficiency of machine learning algorithms.

On top of this, there's much more widespread awareness of AI in the general population, thanks in no small part to the advent and relative success of natural language mobile personal assistants. Incidentally, the fact that Siri can be funny sometimes, which ironically is relatively simple to implement, does add to the impression that it is truly intelligent.

But there's more substance to this resurgence than the impression of intelligence that Siri's jocularity gives its users. The recent advances in machine learning are truly groundbreaking. Artificial neural networks (deep learning computer systems that mimic the human brain) are now scaled to several tens of hidden layers, increasing their abstraction power. They can be trained on tens of thousands of cores, speeding up the process of developing learning models that generalize. Other mainstream classification approaches, such as Random Forest classification, have been scaled to run on very large numbers of compute nodes, enabling the tackling of ever more ambitious problems on larger and larger data sets (e.g., Wise.io).
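The "hidden layers" mentioned above are, mechanically, nothing more than repeated matrix-multiply-plus-nonlinearity steps stacked on top of one another; depth comes from adding more such steps. A toy forward pass in plain Python (random, untrained weights chosen purely for illustration; real deep learning systems use many more layers and units, and train the weights on thousands of cores):

```python
import math
import random

def relu(x):
    # The nonlinearity between layers; without it, stacked layers
    # would collapse into a single linear transformation.
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    """One fully connected layer: y = W.x + b."""
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

def make_layer(rng, n_in, n_out):
    # Random Gaussian weights, scaled by 1/sqrt(n_in) (a common heuristic).
    return ([[rng.gauss(0, 1 / math.sqrt(n_in)) for _ in range(n_in)]
             for _ in range(n_out)],
            [0.0] * n_out)

rng = random.Random(0)
sizes = [4, 8, 8, 8, 2]  # input, three hidden layers, output
layers = [make_layer(rng, a, b) for a, b in zip(sizes, sizes[1:])]

x = [0.5, -1.0, 0.25, 2.0]
for i, (w, b) in enumerate(layers):
    x = dense(x, w, b)
    if i < len(layers) - 1:  # no nonlinearity after the final layer
        x = relu(x)

print("output:", x)
```

Each hidden layer re-represents the previous layer's output, which is the sense in which adding layers increases "abstraction power"; the engineering breakthrough of the last few years is less in this arithmetic than in training such stacks at scale.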

More here:

The AI Resurgence: Why Now?

Vector Aerospace Contracted to Perform Major Service Inspections

Langley, BC - Vector Aerospace (www.vectoraerospace.com), a global independent provider of aviation maintenance, repair and overhaul (MRO) services, is pleased to announce that it has been selected to perform a 7,500-hour major inspection (G check) and 12-year inspection on one of the Los Angeles County Sheriff's Department's three AS332L1 Super Puma helicopters. This inspection will also include post-maintenance return-to-service activities and a 10-year calendar fuel hose replacement.

Vector Aerospace is globally recognized as an AS332 C, L and L1 MRO provider, certified by Airbus Helicopters to perform all inspections up to and including the G check, along with avionics and structural MRO including full wiring repairs and extensive primary and secondary composite repairs. Vector Aerospace also offers a variety of custom Supplemental Type Certificates (STCs) for the Super Puma, as well as a pay-by-the-hour lease program suitable for operators who have immediate operational requirements that can be fulfilled by leasing an AS332L.

"With extensive experience in performing inspections and maintenance on Super Puma helicopters, I am confident that the Los Angeles County Sheriff's Department will be pleased with our quality standards and top-notch customer support delivered throughout this contract," said Chris McDowell, Vice President, Sales and Marketing at HS-NA.

Vector Aerospace holds approvals from some of the world's leading turbine engine, airframe and avionics OEMs. Powerplants supported include a wide range of turboshafts, turboprops and turbofans from General Electric, Honeywell, Pratt & Whitney Canada, Rolls-Royce and Turbomeca. Vector Aerospace also provides support for a wide range of airframes from Airbus Helicopters, Bell, Boeing and Sikorsky. Its capabilities include major inspections and dynamic component overhaul, full-service avionics capability including aircraft rewiring, as well as mission equipment installation and glass cockpit upgrades.

About the Los Angeles County Sheriff's Department:

The Los Angeles County Sheriff's Department is the largest sheriff's department in the world. It is divided into the four main operations of Custody Operations, Patrol Operations, Countywide Services, and Administrative & Professional Standards. Within the four operations there are thirteen additional subgroups, called divisions, each headed by a Division Chief. Within these divisions are bureaus and specialized units which provide specific services to the county, the county's residents, and other county, state and federal agencies.

Their Core Values are: Courage, Compassion, Professionalism, Accountability, and Respect. "With integrity, compassion, and courage, we serve our communities, protecting life and property, being diligent and professional in our acts and deeds, holding ourselves and each other accountable for our actions at all times, while respecting the dignity and rights of all. Earning the Public Trust Every Day!"

For more information about the Los Angeles County Sheriff's Department, visit their website at http://www.lasd.org

About Vector Aerospace:

Vector Aerospace is a global provider of aviation MRO services. Through facilities in Canada, the United States, the United Kingdom, France, Australia, South Africa and Kenya, Vector Aerospace provides services to commercial and military customers for gas turbine engines, components and helicopter airframes. Vector Aerospace's customer-focused team includes over 2,700 motivated employees.

See the original post:

Vector Aerospace Contracted to Perform Major Service Inspections

Match Day 2015 University of Rochester School of Medicine and Dentistry – Video


Match Day 2015 University of Rochester School of Medicine and Dentistry
One of the biggest annual events in medical schools across the country is Match Day. At 12:00 p.m. EST, medical students across the country discover where they will spend the next few years of...

By: URMCPR

See the original post:
Match Day 2015 University of Rochester School of Medicine and Dentistry - Video