Did Indigenous people use rock formations for astronomy? (SBS Radio)
SBS World News, 22 MAY 2013, 8:56 PM Researchers have found stone arrangements in New South Wales that may have functioned as ancient compasses. Dr Duane Hamacher is a lecturer at the ...

By: Duane Hamacher


Keystone Science School astronomy program offers children, adults a chance to learn

Normally, Play-Doh is simply a ball of brightly colored clay. But in the hands of volunteers and children at the Keystone Science School (KSS), it becomes much more: a link to the greater mysteries of the universe, and a science lesson on top of that.

Now entering its fourth year, the astronomy program reaches into fourth-grade classrooms throughout Summit County, offering students one hour of classroom learning, with the opportunity to follow up with an evening of more hands-on science activities, plus a glimpse of the heavens through the science school's telescope. They can also invite their parents.

COMBINING SCIENCE AND FUN

While many of the children who participate in the KSS programs and summer camps come from the Front Range and other schools, programs like astronomy for fourth-graders and ecology for third-graders reach out specifically to Summit County students.

The topics chosen for each grade level correspond with state standards. Fourth grade studies astronomy, which lends itself to unique opportunities with the science school.

First, instructors from KSS go into fourth-grade classrooms to teach a one-hour lesson on the science behind daytime and nighttime and the changing of the seasons.

"It really depends on how the school has put together their curriculum as to whether it's a review, whether it's a new thing, whether we're adding on to what they've already learned," said Daniel Van Horn, staff and curriculum manager for school programs at KSS.

The follow-up to the classroom lesson is an optional astronomy family night at the science school. On six different nights over the next two months, staff and volunteers will gather at KSS to present hands-on activities and experiences for the students and their families. Each night is dedicated to students from a different elementary school. There is no cost.

Getting there is easy, too. Vans from the science school gather at the elementary school parking lot to drive the students and their families to the Keystone facility, and then back again when the night is over.

The night consists of a variety of hands-on experiences for the students on astronomy-related topics, from the size of the different planets (that's where the Play-Doh comes in) to star constellations, moon phases and, of course, the chance to look at the real thing through the school's 14-inch reflecting telescope.


Amherst College Astronomy Professor Detects Record-Breaking Black Hole Outburst


AMHERST, Mass. (Newswise) Last September, after years of watching, a team of scientists led by Amherst College astronomy professor Daryl Haggard observed and recorded the largest-ever X-ray flare from the supermassive black hole at the center of the Milky Way. The astronomical event, detected by NASA's Chandra X-ray Observatory, puts the scientific community one step closer to understanding the nature and behavior of supermassive black holes.

Haggard and her colleagues discussed the flare today at a press conference during this year's meeting of the American Astronomical Society in Seattle.

Supermassive black holes are the largest of black holes, and every large galaxy has one. The one at the center of our galaxy, the Milky Way, is called Sagittarius A* (Sgr A* for short), and scientists estimate that it contains about four and a half million times the mass of our Sun.

Scientists working with Chandra have observed Sgr A* repeatedly since the telescope was launched into space in 1999. Haggard and fellow astronomers were originally using Chandra to see if Sgr A* would consume parts of a cloud of gas, known as G2.

"Unfortunately, the G2 gas cloud didn't produce the fireworks we were hoping for when it got close to Sgr A*," she said. "However, nature often surprises us, and we saw something else that was really exciting."

Haggard and her team detected an X-ray outburst last September that was 400 times brighter than the usual X-ray output from Sgr A*. This megaflare was nearly three times brighter than the previous record holder that was seen in early 2012. A second enormous X-ray flare, 200 times brighter than Sgr A* in its quiet state, was observed with Chandra on October 20, 2014.

Haggard and her team have two main ideas about what could be causing Sgr A* to erupt in this extreme way. One hypothesis is that the gravity of the supermassive black hole has torn apart a couple of asteroids that wandered too close. The debris from such a tidal disruption would become very hot and produce X-rays before disappearing forever across the black hole's point of no return (called the event horizon).

"If an asteroid was torn apart, it would go around the black hole for a couple of hours, like water circling an open drain, before falling in," said colleague and co-principal investigator Fred Baganoff of the Massachusetts Institute of Technology in Cambridge, MA. "That's just how long we saw the brightest X-ray flare last, so that is an intriguing clue for us to consider."


Artificial Intelligence and Manufacturing (Part One)


American manufacturing has come a long way in automating factories with robots and computers. Over the last 40 years, palletizer systems and industrial robots have replaced humans in many back-breaking and repetitive jobs. Millions of clerical jobs have also been replaced by computers and software.

These advances in technology, along with movies like The Terminator and Star Wars, have led to a lot of speculation about how far artificial intelligence can be developed. In The Terminator, someone builds a microprocessor so advanced that it makes machines self-aware; the machines then connect to all the computers on the internet and trigger an atomic war. The suggestion is that microprocessors can become so sophisticated that they can think like humans.

In 1965, Dr. Herbert Simon, one of the founders of artificial intelligence (AI), said, "Machines will be capable, within 20 years, of doing any work a man can do." Marvin Minsky, another AI pioneer from MIT, said, "Within a generation the problem of creating artificial intelligence will be substantially solved." Moshe Vardi, a computer scientist at Rice University in Houston, said, "Everything that humans can do, machines can do."

Professors at universities and computer scientists also add to the excitement by promoting artificial intelligence's futuristic potential as they try to get their share of federal grant money. The big question is: when will computers be able to emulate humans and become self-aware and intelligent?


No need to panic: artificial intelligence has yet to create a doomsday machine

By Tony Prescott, The Conversation

The possibility that advanced artificial intelligence (AI) might one day turn against its human creators has been repeatedly raised of late. Renowned physicist Stephen Hawking, for instance, surprised by the ability of his newly-upgraded speech synthesis system to anticipate what he was trying to say, has suggested that, in the future, AI could surpass human intelligence and ultimately bring about the end of humankind.

Hawking is not alone in worrying about superintelligent AI. A growing number of futurologists, philosophers and AI researchers have expressed concerns that artificial intelligence could leave humans outsmarted and outmanoeuvred. My view is that this is unlikely, as humans will always use an improved AI to improve themselves. A malevolent AI would have to outwit not only raw human brainpower but the combination of humans and whatever loyal AI-tech we are able to command, a combination that will best either one alone.

There are many examples already: Clive Thompson, in his book Smarter Than You Think describes how in world championship chess, where AIs surpassed human grandmasters some time ago, the best chess players in the world are not humans or AIs working alone, but human-computer teams.

While I don't believe that surpassing raw (unaided) human intelligence will be the trigger for an apocalypse, it does provide an interesting benchmark. Unfortunately, there is no agreement on how we would know when this point has been reached.

Beyond the Turing Test

An established benchmark for AI is the Turing Test, developed from a thought experiment described by the late, great mathematician and AI pioneer Alan Turing. Turing's practical solution to the question: "Can a machine think?" was an imitation game, where the challenge is for a machine to converse on any topic sufficiently convincingly that a human cannot tell whether they are communicating with man or machine.

In 1991 the inventor Hugh Loebner instituted an annual competition, the Loebner Prize, to create an AI (or what we would now call a chatbot) that could pass Turing's test. One of the judges at this year's competition, Ian Hocking, reported in his blog that if the competition entrants represent our best shot at human-like intelligence, then success is still decades away; AI can so far match only the tip of the human intelligence iceberg.

I'm not overly impressed, either, by the University of Reading's recent claim to have matched the conversational capability of a 13-year-old Ukrainian boy speaking English. Imitating child-like intelligence, and the linguistic capacity of a non-native speaker, falls well short of meeting the full Turing Test requirements.

Indeed, AI systems equipped with pattern-matching algorithms, rather than language understanding, have been able to superficially emulate human conversation for decades. For instance, in the 1960s the Eliza program was able to give a passable impression of a psychotherapist. Eliza showed that you can fool some people some of the time, but the fact that Loebner's US$25,000 prize has never been won demonstrates that, performed correctly, the Turing Test is a demanding measure of human-level intelligence.
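Eliza-style pattern matching can be sketched in a few lines. The rules below are illustrative stand-ins, not Weizenbaum's original script; the point is simply that a small list of regular expressions, reflecting the user's own words back, produces superficially plausible conversation with no language understanding at all.

```python
import re

# A few illustrative Eliza-style rules: a regex pattern paired with a
# response template that echoes the user's captured words back at them.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(utterance):
    """Return the first matching template, filled with the captured text."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT

print(respond("I am feeling anxious"))  # How long have you been feeling anxious?
```

Because nothing here models meaning, the same machinery is trivially fooled in reverse: any input that happens to fit a pattern gets a confident-sounding reply.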


AI still can't recognise these simple pictures

Look at these black and yellow bars and tell me what you see. Not much, right? Ask state-of-the-art artificial intelligence the same question, however, and it will tell you they're a school bus. It will be over 99 percent certain of this assessment. And it will be totally wrong.

Computers are getting truly, freakishly good at identifying what they're looking at. They can't look at this picture and tell you it's a chihuahua wearing a sombrero, but they can say that it's a dog wearing a hat with a wide brim. A new paper, however, directs our attention to one place where these super-smart algorithms are totally stupid. It details how researchers were able to fool cutting-edge deep neural networks using simple, randomly generated imagery. Over and over, the algorithms looked at abstract jumbles of shapes and thought they were seeing parrots, ping pong paddles, bagels, and butterflies.

The findings force us to acknowledge a somewhat obvious but hugely important fact: Computer vision and human vision are nothing alike. And yet, since computer vision increasingly relies on neural networks that teach themselves to see, we're not sure precisely how it differs from our own. As Jeff Clune, one of the researchers who conducted the study, puts it, when it comes to AI, "we can get the results without knowing how we're getting those results."

Evolving Images to Fool AI

One way to find out how these self-trained algorithms get their smarts is to find places where they are dumb. In this case, Clune, along with PhD students Anh Nguyen and Jason Yosinski, set out to see if leading image-recognising neural networks were susceptible to false positives. We know that a computer brain can recognise a koala bear. But could you get it to call something else a koala bear?

To find out, the group generated random imagery using evolutionary algorithms. Essentially, they bred highly effective visual bait. A program would produce an image, and then mutate it slightly. Both the copy and the original were shown to an "off the shelf" neural network trained on ImageNet, a data set of 1.3 million images that has become a go-to resource for training computer vision AI. If the copy was recognised as something -- anything -- in the algorithm's repertoire with more certainty than the original, the researchers would keep it, and repeat the process. Otherwise, they'd go back a step and try again. "Instead of survival of the fittest, it's survival of the prettiest," says Clune. Or, more accurately, survival of the most recognisable to a computer as an African Gray Parrot.
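The keep-if-more-confident loop described above is a simple hill climber. The sketch below mimics it with a made-up scoring function standing in for the ImageNet-trained network, which is far too heavy to inline here; `confidence`, `mutate`, and all the constants are illustrative assumptions, not the researchers' actual code.

```python
import random

random.seed(0)
IMG_SIZE = 16  # a tiny 16x16 grayscale "image" as a flat list of floats in [0, 1]

def confidence(image):
    """Hypothetical classifier confidence for one target class.
    (The real experiments queried a deep neural network instead.)
    Pretend the network fires on bright pixels in the top half."""
    top = image[: len(image) // 2]
    return sum(top) / len(top)

def mutate(image, rate=0.05):
    """Copy the image and randomly perturb a fraction of its pixels."""
    child = list(image)
    for i in range(len(child)):
        if random.random() < rate:
            child[i] = min(1.0, max(0.0, child[i] + random.uniform(-0.3, 0.3)))
    return child

# The loop from the article: keep the mutant only when the classifier
# is *more* confident about it than about the current image.
image = [random.random() for _ in range(IMG_SIZE * IMG_SIZE)]
for _ in range(2000):
    child = mutate(image)
    if confidence(child) > confidence(image):
        image = child

print(round(confidence(image), 3))  # climbs well above the random start of ~0.5
```

Even with this toy scorer, the final image looks like noise to a human while scoring very high for the "class" -- the same mismatch the paper exploits against real networks.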

Eventually, this technique produced dozens of images that were recognised by the neural network with over 99 percent confidence. To you, they won't seem like much. A series of wavy blue and orange lines. A mandala of ovals. Those alternating stripes of yellow and black. But to the AI, they were obvious matches: Starfish. Remote control. School bus.

Peering Inside the Black Box

In some cases, you can start to understand how the AI was fooled. Squint your eyes, and a school bus can look like alternating bands of yellow and black. Similarly, you could see how the randomly generated image that triggered "monarch" would resemble butterfly wings, or how the one that was recognised as "ski mask" does look like an exaggerated human face.

But it gets more complicated. The researchers also found that the AI could routinely be fooled by images of pure static. Using a slightly different evolutionary technique, they generated another set of images. These all look exactly alike -- which is to say, like nothing at all, save maybe a broken TV set. And yet, state-of-the-art neural networks pegged them, with upward of 99 percent certainty, as centipedes, cheetahs, and peacocks.

To Clune, the findings suggest that neural networks develop a variety of visual cues that help them identify objects. These cues might seem familiar to humans, as in the case of the school bus, or they might not. The results with the static-y images suggest that, at least sometimes, these cues can be very granular. Perhaps in training, the network notices that a string of "green pixel, green pixel, purple pixel, green pixel" is common among images of peacocks. When the images generated by Clune and his team happen on that same string, they trigger a "peacock" identification. The researchers were also able to elicit an identification of "lizard" with abstract images that looked nothing alike, suggesting that the networks come up with a handful of these cues for each object, any one of which can be enough to trigger a confident identification.

The fact that we're cooking up elaborate schemes to trick these algorithms points to a broader truth about artificial intelligence today: Even when it works, we don't always know how it works. "These models have become very big and very complicated and they're learning on their own," says Clune, who heads the Evolving Artificial Intelligence Laboratory at the University of Wyoming. "There's millions of neurons and they're all doing their own thing. And we don't have a lot of understanding about how they're accomplishing these amazing feats."


PPG Ergonomic Scrapers Improve Aerospace Sealant Removal

SYLMAR, Calif., Jan. 6, 2015 - PPG Industries (NYSE: PPG) has expanded its line of SEMCO sealant removal tools with ergonomically designed scrapers that improve the efficiency of aerospace sealant removal and reduce operator fatigue.

According to Dillon Desai, PPG global market development manager for packaging, 10 scrapers have been designed for use by hand for standard sealant removal as well as more difficult and specialty applications such as removal of sealant around fasteners, fillet seals and other hard-to-reach areas.

"With these new ergonomic Semco tools, aircraft workers and maintenance personnel can select the scraper that best meets their needs to make sealant removal easier," Desai said. "Semco packaging and application systems aid the aerospace industry with efficient, cost-effective and easy-to-use packaging for mixing, dispensing and applying chemicals. With these new scrapers, we broaden our offerings for sealant and adhesive removal to provide a more complete resource."

Five tools are made of glass-filled nylon for more difficult operations, while five tools designed for standard sealant removal by hand are made of CELCON acetal copolymer (POM). "The tools are designed for more rigorous operations and will last longer than those made with more traditional materials," Desai said. "CELCON POM is often preferred by aircraft manufacturers because it offers the end-user more control when removing sealant and will not scratch or score aluminum substrates."

The tools come in widths from 1/4 inch to 3 1/2 inches, with eight scrapers having straight ends, one an angled end and one a pointed-tip design. An added benefit is that the tools can be sharpened for continued use.

New Semco aerospace sealant removal tools are available through PPG application support centers.

To learn more about Semco packaging and application systems, visit http://www.semcopackaging.com or call the PPG Aerospace customer service line (in North America) at 1-800-AEROMIX (237-6649).

PPG Aerospace is the aerospace products and services business of PPG Industries. PPG Aerospace PRC-DeSoto is the leading global producer of aerospace sealants, coatings, and packaging and application systems. PPG Aerospace Transparencies is the world's largest supplier of aircraft windshields, windows and canopies. For more information, visit http://www.ppgaerospace.com.

PPG: BRINGING INNOVATION TO THE SURFACE.(TM)

PPG Industries' vision is to continue to be the world's leading coatings and specialty materials company. Through leadership in innovation, sustainability and color, PPG helps customers in industrial, transportation, consumer products, and construction markets and aftermarkets to enhance more surfaces in more ways than does any other company. Founded in 1883, PPG has global headquarters in Pittsburgh and operates in nearly 70 countries around the world. Reported net sales in 2013 were $15.1 billion. PPG shares are traded on the New York Stock Exchange (symbol: PPG). For more information, visit http://www.ppg.com and follow @PPGIndustries on Twitter.


Senior appoints rival Cobham's David Squires as new chief executive

Charles Berry, Senior chairman, said: "David clearly has the depth and breadth of experience, and the personality and drive to lead Senior through the next stage of its development.

"I am confident that Senior will continue to make strong progress under his leadership.

"His 25 years of experience in the global aerospace and defence industry, working in both developing and larger organisations, will be a significant benefit to Senior as it grows and increases its profile with its customers."

Mr Berry also paid tribute to Mr Rollins, who in August announced his plan to retire, with the chairman pointing to the chief executive's work in transforming the FTSE 250-listed business into an international operation with 7,500 staff in 14 countries.

Last year Senior reported annual pre-tax profits of £83.8m on revenues of £775m, which have almost doubled under his stewardship.

Mr Berry said: "Mark has made an enormous contribution to Senior... including 15 years on the board, initially as group finance director and then, for the past seven years, as group chief executive.

"In this time, Senior has been transformed from a lowly-rated industrial conglomerate to a quality global engineering business, focused on the aerospace, defence, land vehicle and energy markets.

"The group's financial performance and market capitalisation have increased significantly during this period, and Mark leaves Senior in good health, with encouraging prospects for continued, long-term growth."

Mr Squires will be paid a basic salary of £460,000 and will be compensated for the value of benefits he forfeits on leaving Cobham. Details of his bonus and pension arrangements will be given in Senior's annual report.

The news came as another company in the sector, Meggitt, announced it had acquired Precision Engine Controls Corporation (PECC) from America's United Technologies Corporation for $44.2m in cash. PECC makes fuel metering valves and actuators for small gas turbines, which are mainly used in the oil and gas and power generation industries.
