WWAY INVESTIGATES: Are beaches worth the money?

BRUNSWICK COUNTY, NC (WWAY) -- Surf, sand and sun are big draws for southeastern North Carolina. In fact, our beaches keep much of our economy afloat, but they take a lot of maintenance.

Beach nourishment, dredging inlets, and building structures all help prevent erosion and preserve our coastline. They, however, take time, manpower and, most importantly, money. So, is it all worth it, and what is the best method?

It all depends. We take a closer look in a WWAY investigation: Fighting Mother Nature.

Sandbags are the only defense David and Vonecille Litz have been able to put up against the waves pounding their home in Oak Island.

"It's wiped the concrete walkways down, and washed all that sand away," David Litz said.

Vonecille Litz said she and her husband have spent almost $50,000 on the sandbags. They had to put up the barrier when Lockwood's Folly Inlet started eroding the beach outside their home a year ago.

Spencer Rogers, a coastal engineer and geologist with North Carolina Sea Grant, was out at the Litzes' home Thursday. He helps homeowners deal with these kinds of issues. He said Lockwood's Folly Inlet, like many others, oscillates back and forth and has highly variable shorelines.

"What we're seeing here is something that occurred in the exact same area back in 1979 or so," Rogers said.

Back then, according to Rogers and the Litzes, some of the threatened houses were moved farther away from the beach. The Litzes said they actually used to own one of those houses. It now sits across the street from the one they're losing to the ocean.

"You wonder sometimes why would these be buildable lots when you knew, in so many years, that it was going to erode again," Vonecille Litz said.

Russian satellite support, wave search move Green Bank toward independence

While the staff of the National Radio Astronomy Observatory at Green Bank can detect no sign of the National Science Foundation changing its plan to drop the Pocahontas County research facility's primary research tool, the Green Bank Telescope, from its portfolio of fully funded astronomy sites by 2017, new rays of hope for the facility's future can be seen on the horizon.

"I am optimistic," said the observatory's business manager, Mike Holstine. "We've been a part of the West Virginia landscape for almost 60 years, and I think that in some shape or form, we'll be here for a number of years more."

Holstine's optimism has developed despite the fact that the NSF opted in 2012 to divest itself of the observatory's crown jewel, the $100 million Robert C. Byrd Green Bank Telescope, along with the Very Long Baseline Array (a network of 10 linked radio telescopes headquartered in New Mexico) and several other smaller telescopes. Facing a dwindling federal budget for astronomy, the NSF chose to focus on the funding of new projects, such as the Atacama Large Millimeter/sub-millimeter Array (ALMA) in Chile, even though the West Virginia telescope was far from being outdated or idle.

Completed in 2000, the 450-foot-tall, 16 million-pound GBT is the world's largest fully steerable telescope, capable of precisely directing its 2.3-acre light-collecting surface to all but the southernmost 15 percent of the celestial sphere. Known for its wide range of observational wavelengths and its high resolution, the GBT is used by scientists to search the universe for the building blocks of life by detecting gases in distant galaxies and interstellar molecules. Considered one of the best pulsar telescopes in the world, the GBT is used by astronomers around the world to clock the millisecond flashes coming from spinning neutron stars. Current pulsar research made possible by the huge radio telescope is helping an international consortium of scientists search for evidence of gravitational waves, whose existence was first postulated in Albert Einstein's theory of general relativity.

Not all GBT-assisted discoveries take place in deep space. In January, the telescope produced detailed images of a 70-meter moon orbiting an asteroid measuring 300 meters across, as the objects hurtled within 745,000 miles of the Earth.

Each year, the GBT provides researchers about 6,500 hours of observation time. That won't necessarily end when the 2017 divestiture date arrives, according to Holstine.

"The NSF has said that if we can come up with half the cost of operating the GBT, they would continue to fund us at something almost up to but less than the remaining 50 percent," Holstine said. "Right now, we're operating at about 30 percent from external funding. So far, we're doing pretty good."

Currently, West Virginia University is contributing about $500,000 annually to GBT operations, which now cost $6 million to $7 million annually. Other clients include the Russian space agency, which in 2013 retrofitted Green Bank's 1965-vintage 43-meter telescope to serve as one of two Earth stations for the agency's orbiting RadioAstron satellite, the most distant element of an Earth-to-space-spanning radio telescope. When the orbiting radio telescope passes out of view from its Moscow Earth station, observations are downloaded to the Green Bank dish.

"We started working with RadioAstron three years ago," said Holstine. "They've been a great partner." In addition to paying for time on the 43-meter scope, RadioAstron uses the GBT as part of a linked array of radio telescopes, called an interferometer, to get high-resolution data, Holstine said. When the satellite is linked to the GBT and other land-based radio telescopes in the system, a virtual radio telescope with a diameter of up to 220,000 miles is formed. RadioAstron is used to study quasars, cosmic masers and interstellar space in unprecedented detail.
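
The resolving power that a 220,000-mile baseline buys follows a standard rule of thumb: an interferometer's angular resolution scales roughly as observing wavelength divided by baseline. As an illustration only (the 1.8 cm wavelength below is a typical centimetre-band value assumed for the example, not an official RadioAstron figure), the arithmetic works out to resolution on the order of microarcseconds:

```python
# Rule-of-thumb interferometer resolution: wavelength / baseline.
# Wavelength here is an assumed, typical centimetre-band value.

import math

MILE_M = 1609.344                      # metres per mile
baseline_m = 220_000 * MILE_M          # ~220,000-mile virtual dish diameter
wavelength_m = 0.018                   # 1.8 cm observing wavelength (assumed)

resolution_rad = wavelength_m / baseline_m
resolution_microarcsec = math.degrees(resolution_rad) * 3600 * 1e6

print(f"Virtual dish diameter: {baseline_m/1000:,.0f} km")
print(f"Angular resolution: ~{resolution_microarcsec:.0f} microarcseconds")
```

For comparison, that is thousands of times finer than the Hubble Space Telescope's optical resolution, which is why such an array can study quasars and masers "in unprecedented detail."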

"Their contract runs out in June, but they've indicated they have every intention of extending it," Holstine said. How long RadioAstron and the Green Bank observatory will remain partners, Holstine said, depends on the state of Russian affairs.

Lowell curator and FALA teacher to fly on airborne observatory

Samantha Thompson of Lowell Observatory and Rich Krueger of the Flagstaff Arts & Leadership Academy (FALA) have been selected for the SOFIA Airborne Astronomy Ambassadors program. Later this year, they will take flight alongside scientists on NASA's flying observatory.

The Thompson/Krueger team was just one of 14 chosen from a highly competitive, nationwide field of educators. Each team of ambassadors will work with a professional astronomer to experience airborne astronomical research first-hand. Afterward, the educators share what they learned with their classrooms and local communities.

The goal of the Airborne Astronomy Ambassadors program is to increase scientific literacy. Each year, a new group of educator teams is chosen to fulfill this mission. Dana Backman, SOFIA Outreach Manager, said, "There are two components to the applications. The educators have to talk about how they are going to improve the science and STEM education within their institutions, plus they have to have a public engagement plan, a way to somehow leverage their experience into the local neighborhood."

Because of the limited number of opportunities to fly, many more applications come in than can be accepted. In evaluating the projects, Backman said, "We want to see something that's innovative. All the ones that were selected had some interesting twists."

In the case of Thompson and Krueger, one unique aspect involved exhibits the team will build. Thompson said, "We will create one exhibit here at Lowell and one that travels around to STEM fairs, the Festival of Science, schools and elsewhere."

Because these displays will be shown at both informal (Lowell) and formal (schools) education sites, they will reach a wide range of audiences. Plus, Krueger's students will gain valuable firsthand experience. Krueger said, "When we take the exhibit to Wheeler Park and classrooms, my students will go and help teach the concepts in the exhibits."

The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a modified 747SP jetliner equipped with a 100-inch telescope. Flying at altitudes between 39,000 and 45,000 feet, the craft collects data from the infrared portion of the electromagnetic spectrum. One of the instruments on SOFIA is the High Speed Imaging Photometer for Occultation (HIPO), a device built by astronomer Ted Dunham and his engineering team at Lowell Observatory. Lowell director Jeffrey Hall said, "Lowell Observatory has long been involved scientifically with SOFIA, so it's very appropriate to have one of our staff members take part in the ambassador program."

Thompson is curator at Lowell Observatory. She manages the observatory's collection of historical artifacts and designs educational exhibits. She said, "My goal is to make astronomy, and generally science, technology, and math, accessible to all. The SOFIA Ambassador program allows me to do this, and I am very grateful to have this rare opportunity that few people get."

Krueger teaches primarily high school science classes at FALA, a public charter school. He said, "I teach from the phenomena basis, where I give the kids a phenomenon and ask them, 'How do we understand that?' Flying on SOFIA and participating in a research project gives me the tools to do so."

-- Sun staff reports

See the solar eclipse from Ipswich Waterfront on March 20

19:45 05 March 2015

James Marston

(L to R) David Murton, Paul Whiting and Bill Barton in front of the main telescope at the Orwell Astronomical Society. Photograph: Simon Parker

Archant

Astronomy enthusiasts will be gathering outside Isaacs on March 20 to view the celestial event - and you are invited to join them.

A solar eclipse occurs when the moon passes between the sun and earth.

Each year there are between two and five solar eclipses.

The Sun's corona extends millions of kilometres into space.

Looking at the sun during an eclipse is dangerous and specialist equipment must be used.

The March 2015 eclipse's longest duration of totality will be 2 minutes and 46 seconds off the northern coast of the Faroe Islands.

An explosive quartet

IMAGE: This image shows the huge galaxy cluster MACS J1149+2223, whose light took over 5 billion years to reach us.

Credit: NASA, ESA, S. Rodney (Johns Hopkins University, USA) and the FrontierSN team; T. Treu (University of California Los Angeles, USA), P. Kelly (University of California Berkeley, USA) and the GLASS...

Astronomers using the NASA/ESA Hubble Space Telescope have, for the first time, spotted four images of a distant exploding star. The images are arranged in a cross-shaped pattern by the powerful gravity of a foreground galaxy embedded in a massive cluster of galaxies. The supernova discovery paper will appear on 6 March 2015 in a special issue of Science celebrating the centenary of Albert Einstein's theory of general relativity.

Whilst looking closely at a massive elliptical galaxy and its associated galaxy cluster MACS J1149+2223 -- whose light took over 5 billion years to reach us -- astronomers have spotted a strange and rare sight. The huge mass of the galaxy and the cluster is bending the light from a much more distant supernova behind them and creating four separate images of it. The light has been magnified and distorted due to gravitational lensing [1] and as a result the images are arranged around the elliptical galaxy in a formation known as an Einstein cross.

Although astronomers have discovered dozens of multiply imaged galaxies and quasars, they have never before seen multiple images of a stellar explosion.

"It really threw me for a loop when I spotted the four images surrounding the galaxy -- it was a complete surprise," said Patrick Kelly of the University of California Berkeley, USA, a member of the Grism Lens Amplified Survey from Space (GLASS) collaboration and lead author on the supernova discovery paper. He discovered the supernova during a routine search of the GLASS team's data, finding what the GLASS group and the Frontier Fields Supernova team have been searching for since 2013 [2]. The teams are now working together to analyse the images of the supernova, whose light took over 9 billion years to reach us [3].

"The supernova appears about 20 times brighter than its natural brightness," explains the paper's co-author Jens Hjorth from the Dark Cosmology Centre, Denmark. "This is due to the combined effects of two overlapping lenses. The massive galaxy cluster focuses the supernova light along at least three separate paths, and then when one of those light paths happens to be precisely aligned with a single elliptical galaxy within the cluster, a secondary lensing effect occurs." The dark matter associated with the elliptical galaxy bends and refocuses the light into four more paths, generating the rare Einstein cross pattern the team observed.

This unique observation will help astronomers refine their estimates of the amount and distribution of dark matter in the lensing galaxy and cluster. There is more dark matter in the Universe than visible matter, but it is extremely elusive and is only known to exist via its gravitational effects on the visible Universe, so the lensing effects of a galaxy or galaxy cluster are a big clue to the amount of dark matter it contains.

When the four supernova images fade away as the explosion dies down, astronomers will have a rare chance to catch a rerun of the explosion. The supernova images do not arrive at the Earth at the same time because, for each image produced, the light takes a different route. Each route has a different layout of matter -- both dark and visible -- along its path. This causes bends in the road, and so for some routes the light takes longer to reach us than for others. Astronomers can use their model of how much dark matter is in the cluster, and where it is, to predict when the next image will appear, as well as using the time delays they observe to make the mass models even more accurate [4].
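
The scale involved can be sketched with a back-of-the-envelope calculation. The fractional path difference below is hypothetical, not a measured value for this supernova; it is chosen only to show how a minuscule difference between light paths, accumulated over roughly 9 billion years of travel, produces arrival-time gaps of days or weeks:

```python
# Back-of-the-envelope illustration of lensing time delays.
# The fractional path difference is a hypothetical round number.

DAYS_PER_YEAR = 365.25
total_path_ly = 9.3e9            # light travelled over ~9 billion years (article)
fractional_difference = 1e-11    # hypothetical: one path longer by 1 part in 100 billion

delay_years = total_path_ly * fractional_difference
delay_days = delay_years * DAYS_PER_YEAR
print(f"A path longer by {fractional_difference:.0e} of the total "
      f"delays arrival by roughly {delay_days:.0f} days")
```

Even one part in a hundred billion of extra path length is enough to stagger the images by about a month, which is consistent with Hubble catching them "within a few days or weeks of each other."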

"The four supernova images captured by Hubble appeared within a few days or weeks of each other and we found them after they had appeared," explains Steve Rodney of Johns Hopkins University, USA, leader of the Frontier Fields Supernova team. "But we think the supernova may have appeared in a single image some 20 years ago elsewhere in the cluster field, and, even more excitingly, it is expected to reappear once more in the next one to five years -- and at that time we hope to catch it in action."

CNET Update – ‘Chappie’ stirs up questions about artificial intelligence – Video


http://cnet.co/1zZ6mix Some of the biggest names in tech have warned about the dangers of creating AI, and machines that can think are at the center of Sony's upcoming film. CNET's Bridget...

By: CNET

The Attention to Detail of Cryengine’s Artificial Intelligence Sets a New Precedent – Video


The groundbreaking video game series Crysis is known for its flawless design and rewarding gameplay, not to mention its bar-raising visuals. But little known is that, behind the scenes, the...

By: kintustis

Artificial Intelligence App Review – Create The Most Advanced, Sophisticated Trading Platform Ever! – Video


Artificial Intelligence App: http://doiop.com/enigmacodef Free Bonus: http://www.toplaunchreview.com/free-bonus/ The Artificial Intelligence APP Review: Artificial Intelligence APP is a single...

By: TopLaunchReview

Embrace AI, don't fear it: Expert

As man-made robots get smarter, will they eventually outpace man?

A few of the world's smartest technology leaders certainly think so. In recent days, they've taken to sounding the alarm bell about the potential dangers of Artificial Intelligence (AI).

Tesla CEO Elon Musk called AI "our biggest existential threat" while British scientist Stephen Hawking said AI could "spell the end of the human race." In January, Microsoft co-founder Bill Gates sided with Musk, adding, "[I] don't understand why some people are not concerned."

Yet on the other side of the argument are people like Microsoft co-founder Paul Allen. In 2013, he founded the Allen Institute for Artificial Intelligence in Seattle, whose mission is to advance the study of AI. The man who heads the organization thinks the fears are overblown.

"Robots are not coming to get you," said Allen Institute CEO Oren Etzioni. In an interview with CNBC, he said: "We quite simply have to separate science from science fiction."

Etzioni said Elon Musk and others may be missing the distinction between intelligence and autonomy. One implies streamlined computer functions, while the other means machines think and operate independently.

Etzioni offered two Artificial Intelligence examples. In 1997, IBM's Deep Blue chess computer beat then world champion Garry Kasparov. In 2011, IBM's Watson supercomputer beat two champions on the game show "Jeopardy."

"These are highly targeted savants," said Etzioni. "They say Watson didn't even know it won. And Deep Blue will not play another chess game unless you push a button."

Etzioni said that the machines "have no free will, they have no autonomy. They're no more likely to do damage than your calculator is likely to do its own calculations."

Facebook AI Director Yann LeCun on His Quest to Unleash Deep Learning and Make Machines Smarter

Artificial intelligence has gone through some dismal periods, which those in the field gloomily refer to as AI winters. This is not one of those times; in fact, AI is so hot right now that tech giants like Google, Facebook, Apple, Baidu, and Microsoft are battling for the leading minds in the field. The current excitement about AI stems, in great part, from groundbreaking advances involving what are known as convolutional neural networks. This machine learning technique promises dramatic improvements in things like computer vision, speech recognition, and natural language processing. You probably have heard of it by its more layperson-friendly name: Deep Learning.
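
At its core, the convolution in a convolutional network is just a small filter slid along the input to produce a feature map. A minimal one-dimensional sketch (illustrative only; in a real network the filter weights are learned from data rather than fixed by hand):

```python
# The core operation of a convolutional layer: slide a small filter
# across the input, taking a weighted sum at each position.

def convolve1d(signal, kernel):
    """Valid-mode 1-D convolution (strictly, cross-correlation, as in convnets)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A difference filter responds where neighbouring values change,
# the 1-D analogue of the edge detectors convnets learn for vision:
print(convolve1d([0, 0, 1, 1, 1, 0], [-1, 1]))  # -> [0, 1, 0, 0, -1]
```

A "deep" network stacks many such filtering stages, with each stage's feature maps feeding the next, which is what lets the system build up from edges to shapes to objects.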

Few people have been more closely associated with Deep Learning than Yann LeCun, 54. Working as a Bell Labs researcher during the late 1980s, LeCun developed the convolutional network technique and showed how it could be used to significantly improve handwriting recognition; many of the checks written in the United States are now processed with his approach. Between the mid-1990s and the late 2000s, when neural networks had fallen out of favor, LeCun was one of a handful of scientists who persevered with them. He became a professor at New York University in 2003, and has since spearheaded many other Deep Learning advances.

More recently, Deep Learning and its related fields grew to become one of the most active areas in computer research. Which is one reason that at the end of 2013, LeCun was appointed head of the newly-created Artificial Intelligence Research Lab at Facebook, though he continues with his NYU duties.

LeCun was born in France, and retains from his native country a sense of the importance of the role of the public intellectual. He writes and speaks frequently in his technical areas, of course, but is also not afraid to opine outside his field, including about current events.

IEEE Spectrum contributor Lee Gomes spoke with LeCun at his Facebook office in New York City. The following has been edited and condensed for clarity.

Yann LeCun on...

IEEE Spectrum: We read about Deep Learning in the news a lot these days. What's your least favorite definition of the term that you see in these stories?

Yann LeCun: My least favorite description is, "It works just like the brain." I don't like people saying this because, while Deep Learning gets an inspiration from biology, it's very, very far from what the brain actually does. And describing it like the brain gives a bit of the aura of magic to it, which is dangerous. It leads to hype; people claim things that are not true. AI has gone through a number of AI winters because people claimed things they couldn't deliver.

Spectrum: So if you were a reporter covering a Deep Learning announcement, and had just eight words to describe it, which is usually all a newspaper reporter might get, what would you say?

LeCun: I need to think about this. [Long pause.] I think it would be "machines that learn to represent the world." That's eight words. Perhaps another way to put it would be "end-to-end machine learning." Wait, it's only five words and I need to kind of unpack this. [Pause.] It's the idea that every component, every stage in a learning machine can be trained.

Who's set to make money from the coming artificial intelligence boom?

Artificial intelligence is about to take off in a big way.

According to a new report by Goldman Sachs, AI is defined as "any intelligence exhibited by machines or software." That can mean machines that learn and improve their operations over time, or that make sense of huge amounts of disparate data.

Though it's been almost 60 years since we first heard of the term AI, Goldman believes that we are "on the cusp of a period of more rapid growth in its use and applications."

The reasons? Cheaper sensors leading to a flood of new data, and rapid improvements in technology that allows computers to understand so-called "unstructured" data like conversations and pictures.

Other industry insiders are confident that AI will continue to evolve at a much higher rate while affecting wage growth in many industries. Ray Kurzweil, the director of engineering at Google, believes that human-level AI is coming by 2029.

So who are the players going to be?

First, several big tech companies have been storing up patents related to the field.

IBM is the leader, with about 500 patents related to artificial intelligence. IBM's super-computer Watson is an example of the shift to AI, as it entered the healthcare sector in 2013 and helped lower the error rate in cancer diagnoses by physicians.

Other big patent players in the space include Microsoft, Google, and SAP.

Building AI to Play by Any Rules

Computer algorithms capable of playing the perfect game of checkers or Texas Hold'em poker have achieved success so far by efficiently calculating the best strategies in advance. But some computer scientists want to create a different form of artificial intelligence that can play any new game without the benefit of prior knowledge or strategies. The software would face opponents after having only read the game's rulebook. An AI that can adapt well enough to play new games without prior knowledge could also potentially do well in adapting to the rules of society in areas such as corporate law or government regulations.

This idea of general game-playing AI has gotten a big boost from the International General Game Playing Competition, a US $10,000 challenge that has been held as an annual event since 2005. The AI competitors must analyze the unfamiliar game at hand, say, some variant of chess, within a start clock time of 5 or 10 minutes. Then they each have a play clock of just one minute to make their move within each turn of play. It's a challenge that requires a very different approach to AI than the specialized algorithms that exhaustively analyze almost every possible play over days or weeks.

"This raises the question of where is the intelligence in artificial intelligence," says Michael Genesereth, a computer scientist at Stanford University. "Is it in the program which is following a recipe, or is it in the programmer who invented the recipe and understands the rules for playing the games?"

Genesereth recently presented the latest advances in general game-playing AI at the 29th Association for the Advancement of Artificial Intelligence conference, held from 25-30 January in Austin, Texas. The latest champions of the General Game Playing competition represent the third generation of AI to have emerged since the first competition in 2005.

But the idea of general game-playing AI goes all the way back to the original 1958 vision of John McCarthy, the computer scientist who coined the term artificial intelligence. McCarthy envisioned an "advice taker" AI that didn't need to rely upon a programmer's step-by-step recipe to tackle new scenarios, but instead could adapt its behavior based on statements about its environment and goals. To paraphrase science fiction writer Robert Heinlein, such AI could behave more like an adaptable human who can write a sonnet, balance accounts, build a wall, and set a bone, rather than just perform a single task like a specialized insect.

Computer scientists use competitions based on games such as tic-tac-toe and chess as benchmarks of their progress. But general game-playing AI would not likely compete with specialized algorithms to find the best solutions to the ancient game of Go or heads-up no-limit Texas Hold'em. Those specialized algorithms are programmed to crunch all the information sets about possible moves made by game opponents at each stage of play. Such an exhaustive approach often requires intensive supercomputing resources.

By comparison, general game-playing AI can easily learn to play new games on its own by doing the equivalent of translating a game's rulebook into Game Description Language, a computer programming language it can understand. That means general game-playing AI can rely upon just one page of rules to learn games involving thousands of information states; chess, for instance, can be described through just four pages of such rules.
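
The idea of handing a player the rules as data, rather than baking one game into the program, can be sketched in a few lines. The snippet below is an illustration in Python, not actual Game Description Language: the "rulebook" functions describe tic-tac-toe, while the player itself knows nothing about the game beyond the four rule functions it is handed, so the same player would work for any game described the same way:

```python
# Sketch of general game playing: a player that uses only the rules
# it is handed (here, Python functions standing in for a GDL file).

import random

# --- "Rulebook" for tic-tac-toe, supplied to the player as data ---
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def initial_state():
    return tuple([None] * 9), "X"          # empty board, X to move

def legal_moves(state):
    board, _ = state
    return [i for i, cell in enumerate(board) if cell is None]

def next_state(state, move):
    board, player = state
    new_board = list(board)
    new_board[move] = player
    return tuple(new_board), ("O" if player == "X" else "X")

def terminal_value(state):
    board, _ = state
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]                # winner's mark
    return "draw" if all(cell is not None for cell in board) else None

# --- Generic player: works for ANY game exposing these four functions ---
def random_playout(rules):
    state = rules["initial"]()
    while rules["terminal"](state) is None:
        state = rules["next"](state, random.choice(rules["legal"](state)))
    return rules["terminal"](state)

rules = {"initial": initial_state, "legal": legal_moves,
         "next": next_state, "terminal": terminal_value}
print("Playout result:", random_playout(rules))   # "X", "O", or "draw"
```

Competition entrants go far beyond random playouts, of course, but the division of labor is the same: the game-specific knowledge lives entirely in the rule description, never in the player.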

Most examples of AI tend to fall in the category of specialized algorithms following preprogrammed instructions. Some AI uses the popular approach known as machine learning to slowly adapt to new scenarios; they are, in a sense, virtual newborns that know nothing and must learn everything for themselves. General game-playing AI provides an alternative approach, incorporating existing knowledge rather than having to learn everything on its own.

"I think there needs to be some balance between a machine that knows nothing to start and learns about the world," Genesereth says, "and a machine that is told everything about human knowledge and starts from there."

The first generation of general game-playing AI focused on maximizing the moves available to itself and limiting the moves available to opponents. Such an approach had only limited success; computer programs still struggled to beat humans during the first Carbon versus Silicon competition held alongside the General Game Playing competition in 2005. Since that time, humans have never again beaten their silicon counterparts.

Even with Watson brains, robots need practical skills

What do you get if you take a cute humanoid robot and pair it with one of the world's best artificial intelligence programs? A smarter humanoid with few practical skills.

Japanese mobile carrier SoftBank introduced a talking household machine called Pepper last year, and it's now hooking up with Watson, IBM's artificial intelligence platform.

The prospect of the AI, a winner on the Jeopardy! quiz show, becoming embodied in the doe-eyed droid may be frightening or enchanting depending on your perspective. But it's unlikely to make the robot any more useful, given the machine's physical limitations.

SoftBank and IBM are planning to use Watson with Pepper, but they haven't specified how. The quiz show champ is a cloud-based platform that can be used to sift through large volumes of data and answer questions posed in natural language. IBM has billed cognitive computing as a nimble form of AI that will be able to look at and use unstructured data. Watson has been deployed in the U.S. to help doctors identify treatment options for cancer patients.

SoftBank is rolling out Watson in Japan and wants to apply it to fields such as education, banking, retail and medicine. In banking, for instance, Watson could be used to answer questions about asset management, according to SoftBank.

With a constant high-speed Internet connection, Pepper could channel Watson and become something more sophisticated than it is now: a machine that sings songs, helps sell mobile phones and appears in ads for canned coffee with actor Tommy Lee Jones. Like Honda's Asimo robot, Pepper is a corporate ambassador.

At a demo held last September, Pepper was linked to Watson when it answered a moderator's question, "What weapon was invented by Ernest Swinton and used in 1916?" Pepper seemed to think for a moment before saying, "Tank."

Even though Pepper didn't respond with a question, it was another version of Watson's Jeopardy! routine, one that evoked super-intelligent robots of science fiction such as C-3PO or Data. But few people will be willing to shell out ¥198,000 (US$1,653), Pepper's planned sale price, for a quiz master that can't do much else.

Constrained by its hardware limitations, Pepper can't cook or clean, as developer Aldebaran Robotics of France readily admits at the top of a promotional webpage. Its rubber-tipped fingers could probably grasp clothing, an Aldebaran researcher speculated via email, but it clearly wasn't designed to carry things. Pepper is an "empathy robot."

SoftBank CEO Masayoshi Son unveiled Pepper in a dramatic press event as the world's first robot that can read human emotions. It can read facial expressions, tone of voice and body language to better communicate with people.
