State hopes portable toilets will help Old Orchard Beach’s poop problem – WGME

by Bill Trotter, BDN Staff/WGME

The Maine Department of Environmental Protection is working with town officials to place portable toilets along Old Orchard Beach after local residents complained last week about beach-goers relieving themselves outside instead of using public bathrooms. (Troy R. Bennett | BDN)

OLD ORCHARD BEACH (BDN/WGME) -- The state hopes bringing in portable toilets will help clean up Old Orchard Beach.

According to the Bangor Daily News, officials are planning to install portable toilets in the town after locals complained that tourists were going to the bathroom in the dunes and ocean.

There are public bathrooms centrally located on West Grand Avenue, but the beach stretches more than three miles in either direction.

One woman reports seeing 10 to 15 people a day defecating near her property.

Earlier this week, Town Manager Larry Mead urged people to contact local police if they saw anyone relieving themselves outside.

Keri Kaczor, coordinator for Maine Healthy Beaches, said that people peeing or pooping on beaches is pretty gross, but her group is more concerned with water draining from land into the sea.

Most of Maine Healthy Beaches' monitoring efforts, which occur at dozens of sites along the coast between Kittery and Mount Desert Island, are at places where rivers, streams and storm drains empty into the ocean.

Harmful bacteria levels are found most often at these kinds of locations, though the source of the bacteria is usually unknown, she said. The biggest cause of beach closings and advisories is stormwater runoff, according to the Natural Resources Defense Council, which used to publish an annual report on pollution at beaches nationwide.

At Old Orchard Beach on July 25, there was an enterococci bacteria count of 185 per 100 ml of water, which is over the maximum acceptable level of 103, but the following day it was back down near zero, she said. Enterococci is commonly found in feces and can make people sick.

The reading of 185 on July 25 was the first time in the past year that a sample taken at Old Orchard Beach had exceeded a count of 103.

"Whatever it was, it was very temporary," Kaczor said of the bacteria level.

Still, she added, human feces is so much more pathogenic than that of a beaver or a bird, for example, and conditions can result in bacteria blooming more than once from a single deposit.

"They can re-grow and persist," she said.

Large, wide beaches with heavy exposure to coastal ocean currents, such as Old Orchard Beach, usually rid themselves fairly quickly of harmful bacteria because of sun exposure, which can kill the bacteria, or of the tide, which often washes it away, Kaczor said. Bacteria has been known to linger longer at beaches that are more sheltered from the open ocean.

It's also a relatively easy problem to address if there is property where public bathrooms can be built. At Higgins Beach in Scarborough, for example, the public bathrooms were built about two blocks away from the water because nothing closer was available.

But money for building bathrooms, or even for water quality sampling efforts, often is not easy to come by.

"Everyone is squeezed for budgets," Kaczor said.

The Sun’s core rotates faster than its surface – Astronomy Magazine

The Sun is the closest star to Earth at a mere 93 million miles (150 million kilometers) away. Despite the fact that you can feel its heat on your skin and its disk appears as large as the Full Moon in the sky, the Sun still largely remains an enigma. For all that astronomers have learned about our planet's nearby host star, many questions and uncertainties remain. Now, though, one of those questions has been answered: the rotation rate of the Sun's core.

An international team of astronomers using the Global Oscillations at Low Frequencies (GOLF) instrument aboard the ESA/NASA Solar and Heliospheric Observatory (SOHO) satellite has accurately measured the speed at which the Sun's core rotates for the first time. That rotation rate is once per week. Their results, including the novel method they developed to measure this rotation rate, are published in Astronomy & Astrophysics.

The SOHO satellite has been studying the Sun for more than two decades. Its GOLF instrument records oscillations, which are wavelike changes in the gases of the Sun's atmosphere that reveal information about its inner structure. GOLF records these changes at the level of the Sun's surface every 10 seconds; solar astronomers then look at the signals over time to infer details about the activity deeper within the star. Studying the Sun this way is similar to studying the way waves caused by earthquakes propagate through the Earth, which tells scientists about the structure that lies beneath our feet.
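The inference step described here is, at its core, time-series analysis: take a long, evenly sampled record of surface motion and pull out oscillation frequencies whose values depend on conditions deep inside the Sun. The following is a rough, illustrative sketch only (not the GOLF pipeline), showing how a dominant oscillation frequency can be recovered from a synthetic signal sampled at a 10-second cadence:

```python
# Illustrative only: recover the frequency of a synthetic oscillation
# from a time series sampled at GOLF's 10-second cadence.
import numpy as np

dt = 10.0                                  # seconds between samples
t = np.arange(0, 3 * 24 * 3600, dt)        # three days of observations
f_true = 3.2e-3                            # Hz, a typical "5-minute" p-mode
signal = np.sin(2 * np.pi * f_true * t) + 0.5 * np.random.randn(t.size)

spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)      # Hz

peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin
print(f"recovered frequency: {peak * 1e3:.2f} mHz")  # ~3.2 mHz expected
```

The real analysis is far more involved (many overlapping modes, decades of data, rotational frequency splittings), but the basic move of turning a long surface record into frequencies is the same.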

To study the Sun's core, the team examined an aspect of the oscillations visible at the Sun's surface that reflects the time it takes for waves to travel through the center of the Sun. They found that the thermonuclear core rotates once per week, which is nearly four times faster than the rotation rate of the middle and outer layers of the star.

Using this new information, astronomers will be able to refine their models of the Sun's current and past behavior, as well as determine more accurately its composition and the structure of its layers and magnetic field.

Ron Olowin, Saint Mary’s physics, astronomy professor, dies – East Bay Times

MORAGA -- Dr. Ronald Olowin, a well-known Saint Mary's College School of Science professor and astrophysicist and member of the faculty there since 1987, died Saturday at age 72.

A Lafayette resident, Olowin, a devout Catholic, often gave lectures on the complex relationships between politics, religion, science and art, starting from the era of the historical Christ.

He was a member of the executive committee for the Inspiration of Astronomical Phenomenon consortium, which holds international meetings to explore the influence of astronomical phenomena on art, literature, myth, religion and history.

With a National Science Foundation grant and a major donation from Saint Mary's alumnus and Bay Area dentist Louis Geissberger, Olowin was central to establishing the Norma Geissberger Observatory, where a research telescope was installed in 2004 on Observatory Hill near campus.

He also worked to deepen the college's relationship with Moraga and the other Lamorinda cities, giving talks about the intersection of science and religion at his church, St. Perpetua Catholic Community in Lafayette, at Berkeley's Graduate Theological Union, and at installments of the Science Cafe series of discussions at the Lafayette Library.

A native of Pennsylvania, Olowin studied geophysics and astronomy at the University of British Columbia and worked at an observatory in South Africa before coming to Saint Mary's 30 years ago.

Funeral services will be at 10:30 a.m. Saturday, Aug. 26, at St. Perpetua Church; a vigil service will be held the night before, Friday, Aug. 25, at 7 p.m. in the Saint Mary's College Chapel. Donations may be made in Ron's honor to the Ron and Mary Olowin Memorial Gift, for the support of the Christian Brothers educational ministry in Myanmar under Brother Ling John. Send such donations to the De La Salle Institute (DLSI), 4401 Redwood Road, Napa CA 94558.

Donations may also be made to the St. Perpetua Church Capital Campaign, for the purpose of building a new social hall for the parish. Those contributions can be sent to St. Perpetua Church, 3454 Hamlin Road, Lafayette CA 94549.

UC Irvine astronomers make a hundred million discoveries black … – OCRegister

IRVINE -- Astronomers from UC Irvine have concluded that as many as 100 million black holes exist in our galaxy, a tally far higher than previously believed and shocking even to the three researchers.

The findings, while surprising, don't figure to affect life on Earth or offer any immediate change in thinking about how the universe was formed and continues to evolve.

"It's not like we need to go out and buy black hole insurance soon; not like we're in danger of being sucked into a black hole any more than we were before," said James Bullock, UCI's chairman and professor of physics and astronomy and one of the researchers on the project.

"There are wondrous giant objects lurking out there," Bullock said of black holes. "It's just amazing that the universe was created out of things like this."

The trio's discovery grows out of the 2015 detection of evidence showing that two black holes, each the size of 30 suns, collided to form a single one some 1.3 billion years ago. In response to that ground-breaking news, which confirmed a key part of Einstein's theory of relativity, the three UCI astronomers launched their study to determine how many black holes there are in the Milky Way.

In Monday interviews, Bullock, Manoj Kaplinghat, a physics and astronomy professor, and Oliver Elbert, who in July earned a Ph.D. from UCI, said they couldn't even venture guesses on how many black holes existed.

The three co-authored a research paper on their findings that appears in the current issue of Monthly Notices of the Royal Astronomical Society.

For more than 18 months, the trio researched existing data about galaxies and stars, and crunched mathematical formulas, creating their cosmic inventory.

"It was an amazing exercise we were able to do, given what we know and what we don't know, to come up with a number that was pretty accurate," Bullock said.

A black hole is a star that has burnt off all of its fuel and collapsed. The gravity in a black hole typically becomes so strong that light can't escape, rendering the space where the star once existed invisible.
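The "light can't escape" condition corresponds to a characteristic size, the Schwarzschild radius, r_s = 2GM/c^2. As a small back-of-the-envelope sketch (standard textbook physics, not a calculation from the UCI paper), the 30- and 50-solar-mass objects discussed in this story would be only tens of kilometers across:

```python
# Schwarzschild radius r_s = 2GM/c^2 for black holes of a few tens of solar masses.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

def schwarzschild_radius_km(solar_masses):
    return 2 * G * solar_masses * M_sun / c**2 / 1e3

for m in (30, 50):
    print(f"{m} solar masses -> r_s ~ {schwarzschild_radius_km(m):.0f} km")
# 30 solar masses -> ~89 km; 50 solar masses -> ~148 km
```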

In addition to counting our galaxy's black holes, the trio set out to shed light on the elusive phenomenon of black holes colliding. The collision tracked in 2015 was the first time the phenomenon had been proven.

Of the 100 million black holes, Bullock, Elbert and Kaplinghat determined that 10 million are each the size of 50 suns. Some could collide in the near future.

"Right now, we know these black hole mergers are occurring in some broad swath of the sky, but not with much precision," Bullock said. "There's hope we'll be able to triangulate their positions in the sky (and) begin to figure out what galaxy they're merging in." In the next decade or so, Elbert said, astronomers should figure out the answer.

"It's definitely something I will be keeping my eye on, as we get more information coming in from future detection," he said. "I plan on coming back to this, revisiting some of our predictions."

Said Bullock: "The fun thing about research is when you find out one answer, it always opens up three or four other questions."

Giant, Extremely Bright Storm System Spotted in Neptune’s Atmosphere – Sci-News.com

An unusually bright storm system nearly the size of Earth has been spotted in the atmosphere of Neptune, baffling researchers because it is located near the ice giant's equator.

This image from the Keck Telescope shows an extremely bright, nearly circular storm system near Neptune's equator. Image credit: Ned Molter & Imke de Pater, University of California, Berkeley / C. Alvarez, W. M. Keck Observatory.

This giant storm system is unusually bright and is about 5,600 miles (9,000 km) in length, or one-third the size of Neptune's radius, spanning at least 30 degrees in both latitude and longitude.

The storm was discovered earlier this year by Ned Molter, a graduate student at the University of California, Berkeley, in images taken by an optical/infrared telescope at W. M. Keck Observatory on Maunakea, Hawaii.

"Seeing a storm this bright at such a low latitude is extremely surprising. Normally, this area is really quiet and we only see bright clouds in the mid-latitude bands, so to have such an enormous cloud sitting right at the equator is spectacular," Molter said.

He observed the system getting much brighter between June 26 and July 2.

"Historically, very bright clouds have occasionally been seen on Neptune, but usually at latitudes closer to the poles, around 15 to 60 degrees north or south," said Professor Imke de Pater, also from the University of California, Berkeley.

"Never before has a cloud been seen at, or so close to, the equator, nor has one ever been this bright."

At first, the scientists thought the storm system was the same Northern Cloud Complex seen by the NASA/ESA Hubble Space Telescope in 1994, after the iconic Great Dark Spot, imaged by NASA's Voyager 2 in 1989, had disappeared.

"But measurements of its locale do not match, signaling that this cloud complex is different from the one Hubble first saw more than two decades ago," Professor de Pater said.

Images of Neptune (upper row June 26, lower row July 2) revealed an extremely bright storm system near Neptune's equator (labeled "cloud complex" in the upper figure). Image credit: Ned Molter & Imke de Pater, University of California, Berkeley / C. Alvarez, W. M. Keck Observatory.

A massive, high-pressure, dark vortex system anchored deep in Neptune's atmosphere may be what's causing the colossal cloud cover.

As gases rise up in a vortex, they cool down. When its temperature drops below the condensation temperature of a condensable gas, that gas condenses out and forms clouds, just like water on Earth. On Neptune we expect methane clouds to form.

As with every planet, winds in Neptune's atmosphere vary drastically with latitude, so if there is a big bright cloud system that spans many latitudes, something must hold it together, such as a dark vortex. Otherwise, the clouds would shear apart.

"This big vortex is sitting in a region where the air, overall, is subsiding rather than rising. Moreover, a long-lasting vortex right at the equator would be hard to explain physically," Professor de Pater said.

If it is not tied to a vortex, the system may be a huge convective cloud, similar to those seen occasionally on other planets like the huge storm on Saturn that was detected in 2010.

Although one would also then expect the storm to have smeared out considerably over a week's time.

"This shows that there are extremely drastic changes in the dynamics of Neptune's atmosphere, and perhaps this is a seasonal weather event that may happen every few decades or so," Professor de Pater said.

Villanova astronomy professor gears up for solar eclipse – Main Line

Radnor >> On Monday, Aug. 21, a full solar eclipse, when the moon blocks the view of the sun, will sweep across the United States. Residents in the Philadelphia area will be treated to a 75 percent partial eclipse.

Among those who are eagerly anticipating the eclipse is Villanova University Astronomy Professor Edward Guinan, who plans to travel to Nebraska, where he has relatives, to see the full solar eclipse. This will mark the fifth total solar eclipse that the veteran professor, who has taught at VU for 47 years, has observed.

"The really big deal is to see a total one," said Guinan, who is known internationally as an astrophysics expert in solar activity and flares. Over the years, he's journeyed to North Carolina (1970), Turkey (2006), Romania (1999) and Nova Scotia (1972), the latter eclipse made famous in the Carly Simon song "You're So Vain." He recalled that eclipse as being a particularly beautiful event that he watched from a beach.

"This one is convenient," said Guinan. "It goes right across the U.S. A lot of my students haven't seen any (eclipses)."

"The total eclipse is just amazing," said Guinan. "It gets pretty dark. The stars come out. Birds roost. It only lasts a few minutes. Of course, you can get clouded out and then it just gets dark."

With satellites now circling the planet, eclipses are not as scientifically valuable as they were in earlier times, but the phenomenon still yields scientific information. In the past, the 1919 solar eclipse confirmed the general theory of relativity posited by Albert Einstein, that mass bends space and time. Also, during an 1868 eclipse, scientists discovered the then-unknown element helium.

While in Nebraska, Guinan plans to fly drones to look at the shadow created by the eclipse and also take part with a nationwide group of scientists taking pictures of the magnetic streamers in the sun's corona that are visible during the roughly 2-minute totality as the eclipse travels across the United States. During this time, bright stars and planets like Mercury, Mars and Jupiter can be seen. During previous eclipses, the solar corona has been shown to be 2 to 3 million degrees. Since light from the corona is about 1 millionth as bright as the surface of the sun, it can only be seen from Earth during a total eclipse.

Guinan emphasized how dangerous it is to look directly at the sun during the eclipse and warned people against doing so. People can see the eclipse safely with special glasses or by using a pinhole in a piece of cardboard to project the sun's image onto paper. It is also safe to use solar telescopes or solar binoculars, or other instruments with approved solar filters. Otherwise, staring at the sun to view the eclipse can cause blindness, he said.

Astronomers have been able to predict eclipses since ancient times and people told various myths to explain the phenomenon, such as that a monster was trying to eat the sun. In ancient China, a ruler once ordered two astronomers to be beheaded when they failed to predict an eclipse. It was the ancient Greeks who figured out the mathematics involved when the earth, moon and sun line up on the same plane causing an eclipse, he said.

If the weather cooperates on Aug. 21, the eclipse should be visible between 1:20 p.m. and 4 p.m. At 2:44 p.m., 75 percent of the sun will be blocked by the moon. Meanwhile, Villanova's Department of Astrophysics and Planetary Science will host a Solar Eclipse Open House for University faculty, students and staff.

Franklin TV showcases teenager, snackologist – Milford Daily News

By Scott Calzolaio, Daily News Staff

FRANKLIN - "Okay, sound check. Everybody go around and count to 10," said cameraman and video editor Chris Flynn on set at Franklin TV headquarters.

Each member of the "Life of Reilly and Friends" crew and their special guests did their sound check before diving into the interviews they had scheduled. Host and self-declared doctor of kidology Reilly DeForge, 13, prepped himself by joking around with his friends and practicing his formal introduction to the show.

The "Life of Reilly and Friends" is a public access TV show as well as a YouTube channel. On his show, Reilly keeps the company of his friend Tyler Afonso, 13-year-old doctor of snackology, and the minions, Reillys sister, Lily DeForge, 11, and her friend Meghan Norton, 11. Together, they explore themes such as books, charities, and other organizations. On their debut episode, Reilly interviewed the author the Diary of a Wimpy Kid series, Jeff Kinney.

"The audience in mind is small children," Reilly said. "Older people are welcome to watch, but it's not geared towards them because the idea is to interest the younger kids."

A typical episode runs anywhere from 20 minutes to an hour and can be viewed by searching "Life of Reilly in Franklin" on Google or YouTube.

The show features a regular snack-time segment in between interviews, hosted by snackologist Afonso.

"A lot of the snacks are actually really good," Reilly said. "I look at the recipes and I'm like eh, I don't know, but it's always really good."

Chef Afonso, who mostly uses YouTube as his cookbook, said his recipes are always a mixture of things.

"My recipes are mostly just random stuff," said the Gordon Ramsay fan. "And now I'm the star of the show, everybody likes me better," he laughed.

"Sure they do," Reilly's sister said sarcastically, also laughing.

During one segment, they made slime using melted gummy bears, adding red food dye to fit the episode's blood drive theme. The messy, delicious concoction was a highlight for the crew.

"It was sticky and messy and disgusting, but it was so good," said Lily.

When asked what her favorite part of being on the show was, Lily replied, "Eating, that's pretty much all I do," and laughed.

Lily is a taste-tester for the snack segment, as well as a judge in other segments. She does her fair share of interviewing as well, when Reilly gets stuck, she said.

With his eyes on the prize, Reilly said that the goal is to build a good standing to get into college.

"I think personally that it's great college credit," he said. "I've been thinking about that for a while now. I'm trying to do everything I can to get into college."

Reilly said that his mom, Tracy, had sparked the idea to make a show.

"She was thinking originally a YouTube channel, which we do have in addition to the show," Reilly said.

He said that he already knew he was good with people from a young age, and that he is using the show as a learning experience.

"I noticed I'm already doing it, but I've always found that public speaking and talking with people comes very naturally to me," he said. "I think this experience of interviewing and getting interviewed has given me a whole new respect for talking in general. A lot of the greatest people out there are amazingly eloquent speakers."

A Jimmy Fallon fan and science lover at heart, the outgoing teenager hopes to someday find a job in that field while still maintaining his vivid personality.

"I love the sciences. I love physics, I'm particularly into astrophysics and molecular physics, the way that atoms work and the way the universe in general works," he said. "We did a whole experiments episode and it did not work out great. There were big studio lights and it was a glow-in-the-dark experiment, and it did not work out well."

Not discouraged, he said he would attempt to incorporate science into his program as much as he could.

Reilly's mom said that she's very grateful Franklin TV has allowed her children this opportunity.

"I'm in the media business and have been for some time," she said. "I know that local stations struggle to get good content. You know that yourself as you're clicking through, you say 'Oh, there's the town hall meeting again.'"

Tracy said that the station records, produces, and broadcasts the show for free, and that it has been a learning experience to work in local TV.

"This is our second season," she said. "We did a lot of learning our first season. Like, what does it take? How many episodes can we get in? So, we've done a lot of learning and we're hoping to pick it up a little bit."

Tracy said that her son has the personality it takes for TV, and that she hopes to see him incorporate what he has learned from this experience in his future endeavors.

"He loves to talk to people, and he's very charming. So, he definitely can put it on," she said.

Physics Students to Study Solar Eclipse in Wyoming – Colorado College News

"Give me six hours to chop down a tree and I will spend the first four sharpening the axe," Abraham Lincoln is quoted as saying.

The Colorado College Physics Department's variation on that might well be "Give me a 2-minute solar eclipse and I will spend weeks beforehand preparing for it."

CC Physics Professor Shane Burns is working with a group of students in advance of the Aug. 21 eclipse, the first total solar eclipse visible in the contiguous U.S. in nearly four decades.

Colorado College students Ben Pitta '18, Maddie Lucey '18, Jake Kohler '18, and Nick Merritt '19, along with Physics Department Technical Director Jeff Steele, will depart for a remote area outside Lander, Wyoming, about a week before the eclipse, where they will attempt to reaffirm one of the findings of Einstein's general theory of relativity: that is, that the path of light is bent by massive objects.

They will use a CCD (charge-coupled device) camera and an 8-inch aperture telescope in an effort to measure the curve of light from stars during the eclipse. Burns says the effect is small, less than two arc-seconds, or approximately the width of a human hair viewed at 10 meters.
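For scale, the general-relativistic deflection for a ray grazing the Sun's limb is theta = 4GM/(c^2 R), which works out to about 1.75 arc-seconds, consistent with the "less than two arc-seconds" figure Burns cites. A quick back-of-the-envelope check (the standard formula, not the team's analysis code):

```python
# Light deflection at the solar limb: theta = 4GM/(c^2 R), in arc-seconds.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg
R_sun = 6.957e8      # m

theta_rad = 4 * G * M_sun / (c**2 * R_sun)
theta_arcsec = math.degrees(theta_rad) * 3600
print(f"deflection at the limb ~ {theta_arcsec:.2f} arc-seconds")   # ~1.75

# The "hair at 10 meters" comparison: a ~0.1 mm wide hair seen from 10 m.
hair_arcsec = math.degrees(1e-4 / 10) * 3600
print(f"hair at 10 m ~ {hair_arcsec:.1f} arc-seconds")               # ~2.1
```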

In 1915 Einstein predicted that this bending should be measurable during a solar eclipse; the effect was first measured by Arthur Eddington during an eclipse on May 29, 1919.

In preparation for the event, Pitta, a physics major with a concentration in astrophysics, is developing sun-tracking software to determine the exact patch of sky they will be imaging and which stars will be visible during the narrow timeframe allotted by the eclipse. The difficulty is compounded by the fact that the stars will be behind the sun, and thus very faint.

Burns and Pitta have been testing the CCD and telescope system, developing software to process the images, and finalizing an observation plan.

"The position of the sun during the eclipse depends on where you are on Earth," Burns says. Pitta's program will determine the position of stars behind the sun and the exposure times they should use so that the stars can be imaged accurately during the crucial 2 minutes and 20 seconds of the eclipse. Pitta is conducting trial runs to make sure the camera is pointed at the precise location in the sky, the exposure times are correct, and the procedures are well established. "We want to have everything in place so that we can effectively take observations," Pitta says. Because the window of observation is so short, they will have to be efficient and well-practiced.
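As a hedged illustration of the kind of calculation such sun-tracking software has to perform (this is not Pitta's code, and the site coordinates and time below are rough assumptions, not the team's values), the astropy library can return the Sun's sky position and altitude for an observer near Lander, Wyoming, around mid-eclipse:

```python
# Approximate Sun position for an observer near Lander, Wyoming, around totality.
# Illustrative only; the coordinates and time are assumptions for the example.
import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, get_sun
from astropy.time import Time

site = EarthLocation(lat=42.83 * u.deg, lon=-108.73 * u.deg, height=1600 * u.m)
t = Time("2017-08-21 17:39:00")           # UTC, roughly mid-eclipse at this site

sun = get_sun(t)                          # geocentric RA/Dec of the Sun
altaz = sun.transform_to(AltAz(obstime=t, location=site))

print(f"RA  = {sun.ra.deg:.2f} deg, Dec = {sun.dec.deg:.2f} deg")
print(f"Alt = {altaz.alt.deg:.1f} deg, Az = {altaz.az.deg:.1f} deg")
```

Knowing the Sun's coordinates tells the team which patch of sky, and therefore which background stars, will sit behind and around the eclipsed disk.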

Einstein theorized that as light rays from stars pass near the surface of the sun during an eclipse, they should be bent. "This makes the stars appear to shift relative to their positions when their rays don't pass near the sun," Burns says. "In order to measure the effect, one compares the positions of stars whose rays graze the surface of the sun to the positions of the same stars when the sun is in a different position."

"Of course," he adds, "you have to wait for an eclipse to see stars near the sun."

Despite the drills and trial runs, Burns says that realistically, things can go awry. The students might not be able to access the remote area they hope to. Other glitches may arise at the last minute, with the students needing to redo the calculations on the spot.

"There's not much time to figure it out on the fly," says Burns.

"There are always problems that arise, and you need to find the path to take to solve them," Pitta says. "That's physics in a nutshell."

The team will be conducting the observations and imaging on their own, as Burns will be presenting a (sold out) program for CC alumni near Grand Teton National Park during the eclipse.

To measure the bend of light, the students will image the same patch of sky six months later, and if successful, should be able to detect a subtle shift of stars. If so, the experiment will bear out Einstein's prediction that light does not always travel in a perfectly straight line. While traveling through space-time and nearing the warp induced by an object's gravitational field, light should curve, but not by much.

"It's going to be a difficult project," Burns says. "I give the odds for success about 50 percent, but the students will learn a lot." If successful, Burns and the students plan to publish the results in a scientific journal.

Educational app released ahead of highly anticipated solar eclipse – Phys.Org

August 8, 2017, by John Michael Baglione

The Center for Astrophysics' new app, Eclipse 2017, comes with several features for preparing for, learning about, and watching the eclipse that will travel across the United States on Aug. 21. Credit: Harvard-Smithsonian Center for Astrophysics

Thousands of years ago, human beings reacted to solar eclipses with dismay, flooding the streets with pots and pans and a cacophony of banging and shouting to scare away whatever had blotted out the sun.

When a total solar eclipse crosses the United States on Aug. 21, people will once again take to the streets with a great deal of anxiety, but most will be concerned primarily with getting a good view.

With solar safety glasses available at every counter and an expected 27 million Americans traveling to the path of totality (the nearly 3,000-mile-long arc from the coast near Salem, Ore., to Charleston, S.C., in which a view of the total eclipse is possible), it is clear that eclipse fever has swept the country. Seeing an opportunity to educate and inspire a new wave of astronomers, the Harvard-Smithsonian Center for Astrophysics (CfA) has released a smartphone app, Eclipse 2017, available on iOS and Android.

"We haven't had an eclipse cross the United States like this in nearly 100 years," says CfA spokesperson Tyler Jump. "Because it's such a rare and exciting event, we wanted to create an interactive guide that everyone could enjoy. Even if you're not in the path of totality, our app allows you to calculate exactly how much of an eclipse you'll be able to see and get a preview with our eclipse simulation. It's also a great opportunity to highlight some of Smithsonian Astrophysical Observatory's (SAO) solar research. SAO was founded in large part to study the sun, and we've been doing so now for more than a century."

The free app comes with a host of resources for the amateur astronomer. A comprehensive viewing guide offers a crash course in the science behind eclipses and instructions on how to safely observe the celestial phenomenon. Videos from the Solar Dynamics Observatory show the sun in different wavelengths, revealing the many layers of solar activity. Users can also access an interactive eclipse map, which gives lunar transit times and simulated views for any location in the United States.

In Cambridge, a partial eclipse covering most of the sun will be visible in the afternoon from about 1:30 to 4. But along the path of totality, for up to 2 minutes, viewers will enjoy one of astronomy's most extraordinary sights: the sun's ethereal corona. Normally invisible due to the amount of light emanating from the sun's surface, the "crown" of magnetized plasma reaches temperatures over a million degrees Kelvin (nearly 2 million Fahrenheit) and is best known as the site of the sun's awesome and violent flares.

Monday's will be the first total eclipse to be visible from the United States since 1991, and the first to be visible from every state since 1918. Though this is the first total eclipse to cross the United States in nearly 40 years, there is a total eclipse visible from Earth about every 18 months.

For those who will not have a chance to view the eclipse with their own eyes, the app will provide a live stream of the eclipse as it travels across the country. But if it has to be the real thing, in April 2024 the United States is due for a total eclipse that will travel from Texas to Maine. A lucky stretch of land along the Illinois, Missouri, and Kentucky borders will see two total eclipses in just seven years.

This story is published courtesy of the Harvard Gazette, Harvard University's official newspaper. For additional university news, visit Harvard.edu.

The Real Threat of Artificial Intelligence – The New York Times

Unlike the Industrial Revolution and the computer revolution, the A.I. revolution is not taking certain jobs (artisans, personal assistants who use paper and typewriters) and replacing them with other jobs (assembly-line workers, personal assistants conversant with computers). Instead, it is poised to bring about a wide-scale decimation of jobs, mostly lower-paying jobs, but some higher-paying ones, too.

This transformation will result in enormous profits for the companies that develop A.I., as well as for the companies that adopt it. Imagine how much money a company like Uber would make if it used only robot drivers. Imagine the profits if Apple could manufacture its products without human labor. Imagine the gains to a loan company that could issue 30 million loans a year with virtually no human involvement. (As it happens, my venture capital firm has invested in just such a loan company.)

We are thus facing two developments that do not sit easily together: enormous wealth concentrated in relatively few hands and enormous numbers of people out of work. What is to be done?

Part of the answer will involve educating or retraining people in tasks A.I. tools aren't good at. Artificial intelligence is poorly suited for jobs involving creativity, planning and cross-domain thinking, for example, the work of a trial lawyer. But these skills are typically required by high-paying jobs that may be hard to retrain displaced workers to do. More promising are lower-paying jobs involving the people skills that A.I. lacks: social workers, bartenders, concierges, professions requiring nuanced human interaction. But here, too, there is a problem: How many bartenders does a society really need?

The solution to the problem of mass unemployment, I suspect, will involve service jobs of love. These are jobs that A.I. cannot do, that society needs and that give people a sense of purpose. Examples include accompanying an older person to visit a doctor, mentoring at an orphanage and serving as a sponsor at Alcoholics Anonymous or, potentially soon, Virtual Reality Anonymous (for those addicted to their parallel lives in computer-generated simulations). The volunteer service jobs of today, in other words, may turn into the real jobs of the future.

Other volunteer jobs may be higher-paying and professional, such as compassionate medical service providers who serve as the human interface for A.I. programs that diagnose cancer. In all cases, people will be able to choose to work fewer hours than they do now.

Who will pay for these jobs? Here is where the enormous wealth concentrated in relatively few hands comes in. It strikes me as unavoidable that large chunks of the money created by A.I. will have to be transferred to those whose jobs have been displaced. This seems feasible only through Keynesian policies of increased government spending, presumably raised through taxation on wealthy companies.

As for what form that social welfare would take, I would argue for a conditional universal basic income: welfare offered to those who have a financial need, on the condition they either show an effort to receive training that would make them employable or commit to a certain number of hours of "service of love" voluntarism.

To fund this, tax rates will have to be high. The government will not only have to subsidize most people's lives and work; it will also have to compensate for the loss of individual tax revenue previously collected from employed individuals.

This leads to the final and perhaps most consequential challenge of A.I. The Keynesian approach I have sketched out may be feasible in the United States and China, which will have enough successful A.I. businesses to fund welfare initiatives via taxes. But what about other countries?

They face two insurmountable problems. First, most of the money being made from artificial intelligence will go to the United States and China. A.I. is an industry in which strength begets strength: The more data you have, the better your product; the better your product, the more data you can collect; the more data you can collect, the more talent you can attract; the more talent you can attract, the better your product. It's a virtuous circle, and the United States and China have already amassed the talent, market share and data to set it in motion.

For example, the Chinese speech-recognition company iFlytek and several Chinese face-recognition companies such as Megvii and SenseTime have become industry leaders, as measured by market capitalization. The United States is spearheading the development of autonomous vehicles, led by companies like Google, Tesla and Uber. As for the consumer internet market, seven American or Chinese companies (Google, Facebook, Microsoft, Amazon, Baidu, Alibaba and Tencent) are making extensive use of A.I. and expanding operations to other countries, essentially owning those A.I. markets. It seems American businesses will dominate in developed markets and some developing markets, while Chinese companies will win in most developing markets.

The other challenge for many countries that are not China or the United States is that their populations are increasing, especially in the developing world. While a large, growing population can be an economic asset (as in China and India in recent decades), in the age of A.I. it will be an economic liability because it will comprise mostly displaced workers, not productive ones.

So if most countries will not be able to tax ultra-profitable A.I. companies to subsidize their workers, what options will they have? I foresee only one: Unless they wish to plunge their people into poverty, they will be forced to negotiate with whichever country supplies most of their A.I. software (China or the United States) to essentially become that country's economic dependent, taking in welfare subsidies in exchange for letting the parent nation's A.I. companies continue to profit from the dependent country's users. Such economic arrangements would reshape today's geopolitical alliances.

One way or another, we are going to have to start thinking about how to minimize the looming A.I.-fueled gap between the haves and the have-nots, both within and between nations. Or to put the matter more optimistically: A.I. is presenting us with an opportunity to rethink economic inequality on a global scale. These challenges are too far-ranging in their effects for any nation to isolate itself from the rest of the world.

Kai-Fu Lee is the chairman and chief executive of Sinovation Ventures, a venture capital firm, and the president of its Artificial Intelligence Institute.

Follow The New York Times Opinion section on Facebook and Twitter (@NYTopinion), and sign up for the Opinion Today newsletter.

A version of this op-ed appears in print on June 25, 2017, on Page SR4 of the New York edition with the headline: The Real Threat of Artificial Intelligence.

How Google is making music with artificial intelligence – Science Magazine

A musician improvises alongside A.I. Duet, software developed in part by Google's Magenta

google

By Matthew Hutson, Aug. 8, 2017, 3:40 PM

Can computers be creative? That's a question bordering on the philosophical, but artificial intelligence (AI) can certainly make music and artwork that people find pleasing. Last year, Google launched Magenta, a research project aimed at pushing the limits of what AI can do in the arts. Science spoke with Douglas Eck, the team's lead in San Francisco, California, about the past, present, and future of creative AI. This interview has been edited for brevity and clarity.

Q: How does Magenta compose music?

A: Learning is the key. We're not spending any effort on classical AI approaches, which build intelligence using rules. We've tried lots of different machine-learning techniques, including recurrent neural networks, convolutional neural networks, variational methods, adversarial training methods, and reinforcement learning. Explaining all of those buzzwords is too much for a short answer. What I can say is that they're all different techniques for learning by example to generate something new.

Q: What examples does Magenta learn from?

A: We trained the NSynth algorithm, which uses neural networks to synthesize new sounds, on notes generated by different instruments. The SketchRNN algorithm was trained on millions of drawings from our Quick, Draw! game. Our most recent music algorithm, Performance RNN, was trained on classical piano performances captured on a modern player piano [listen below]. I'd like musicians to be able to easily train models on their own musical creations, then have fun with the resulting music, further improving it.

Q: How has computer composition changed over the years?

A: Currently the focus is on algorithms which learn by example, i.e., machine learning, instead of using hard-coded rules. I also think there's been increased focus on using computers as assistants for human creativity rather than as a replacement technology, such as our work and Sony's Daddy's Car [a computer-composed song inspired by The Beatles and fleshed out by a human producer].

Q: Do the results of computer-generated music ever surprise you?

A: Yeah. All the time. I was really surprised at how expressive the short compositions were from Ian Simon and Sageev Oore's recent Performance RNN algorithm. Because they trained on real performances captured in MIDI on Disklavier pianos, their model was able to generate sequences with realistic timing and dynamics.
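As a rough sketch of the underlying idea (this is not Magenta's code, and the vocabulary and layer sizes below are arbitrary assumptions), an event-sequence model of this kind can be expressed in a few lines of PyTorch: an LSTM learns to predict the next musical event (note-on, note-off, time-shift, velocity change), and new performances are generated by sampling one event at a time.

```python
# Minimal sketch of a Performance-RNN-style event model; illustrative only.
import torch
import torch.nn as nn

VOCAB_SIZE = 388  # assumed event vocabulary (note-ons, note-offs, time-shifts, velocities)

class EventRNN(nn.Module):
    def __init__(self, vocab=VOCAB_SIZE, embed=256, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, tokens, state=None):
        x = self.embed(tokens)           # (batch, time, embed)
        out, state = self.lstm(x, state)
        return self.head(out), state     # logits over the next event

# Generation loop: sample one event at a time, feeding each choice back in.
# The model here is untrained, so the output is random; training on real
# MIDI performances is what gives the sampled sequences musical structure.
model = EventRNN()
token = torch.zeros(1, 1, dtype=torch.long)   # arbitrary start event
state = None
events = []
for _ in range(100):
    logits, state = model(token, state)
    probs = torch.softmax(logits[:, -1], dim=-1)
    token = torch.multinomial(probs, 1)
    events.append(token.item())
```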

Q: What else is Magenta doing?

A: We did a summer internship around joke telling, but we didn't generate any funny jokes. We're also working on image generation and drawing generation [see example below]. In the future, I'd like to look more at areas related to design. Can we provide tools for architects or web page creators?

Magenta software can learn artistic styles from human paintings and apply them to new images.

Fred Bertsch

Q: How do you respond to art that you know comes from a computer?

A: When I was on the computer science faculty at the University of Montreal [in Canada], I heard some computer music by a music faculty member, Jean Piché. He'd written a program that could generate music somewhat like that of the jazz pianist Keith Jarrett. It wasn't nearly as engaging as the real Keith Jarrett! But I still really enjoyed it, because programming the algorithm is itself a creative act. I think knowing Jean and attributing this cool program to him made me much more responsive than I would have been otherwise.

Q: If abilities once thought to be uniquely human can be aped by an algorithm, should we think differently about them?

A: I think differently about chess now that machines can play it well. But I don't see that chess-playing computers have devalued the game. People still love to play! And computers have become great tools for learning chess. Furthermore, I think it's interesting to compare and contrast how chess masters approach the game versus how computers solve the problem: visualization and experience versus brute-force search, for example.

Q: How might people and machines collaborate to be more creative?

A: I think it's an iterative process. Every new technology that made a difference in art took some time to figure out. I love to think of Magenta like an electric guitar. Rickenbacker and Gibson electrified guitars with the purpose of being loud enough to compete with other instruments onstage. Jimi Hendrix and Joni Mitchell and Marc Ribot and St. Vincent and a thousand other guitarists who pushed the envelope on how this instrument can be played were all using the instrument the wrong way, some said: retuning, distorting, bending strings, playing upside-down, using effects pedals, etc. No matter how fast machine learning advances in terms of generative models, artists will work faster to push the boundaries of what's possible there, too.

TechCrunch Disrupt SF 2017 is all in on artificial intelligence and machine learning – TechCrunch

As fields of research, machine learning and artificial intelligence both date back to the 1950s. More than half a century later, the disciplines have graduated from the theoretical to practical, real-world applications. We'll have some of the top minds in both categories to discuss the latest advances and future of AI and ML on stage at Disrupt San Francisco in just over a month.

We'll be joined on stage by Brian Krzanich of Intel, John Giannandrea of Google, Sebastian Thrun of Udacity and Andrew Ng of Baidu to outline the various ways these cutting-edge technologies are already impacting our lives, from simple smart assistants to self-driving cars. It's a broad range of speakers, which is good news, because we've got a lot of ground to cover in some of the industry's most exciting advances.

John (JG) Giannandrea, SVP Engineering at Google: Giannandrea joined Google in 2010, when the company acquired his startup Metaweb Technologies, a move that formed the basis for the search giant's Knowledge Graph technology. Last year, Google appointed Giannandrea the head of search, the latest indication of the company's deep interest in machine learning and AI. Teaching machines to be smarter is a longtime passion for the executive, who told Fortune in a 2016 interview that "computers are remarkably dumb." Giannandrea will discuss the work he's doing at Google to fix exactly that.

Sebastian Thrun, Founder, Udacity: Prior to founding the online educational service Udacity, Sebastian Thrun headed up Google X, helping make artificial intelligence a foundational key for the company's moonshot products. The topic has been a longtime passion for the CMU computer science grad; in fact, he now teaches a course on the subject at Udacity. The introductory Artificial Intelligence for Robotics class takes students through the basics of AI and the ways in which the technology is helping pave the way for his other key passion, self-driving cars.

Andrew Ng, Former Chief Scientist at Baidu: Earlier this year, Andrew Ng stepped down from his role as the head of Baidu's AI Group. In a post for Medium announcing the move, the executive reconfirmed his commitment to the space, noting that AI will also now change nearly every major industry: healthcare, transportation, entertainment, manufacturing. After Baidu, Ng has shifted his focus toward harnessing artificial intelligence for the benefit of larger society, beyond just a single company, targeting a broad range of industries from healthcare to conversational computing.

Brian Krzanich, CEO Intel: When Brian Krzanich took over as Intel CEO in 2013, the company was reeling from an inability to adapt from desktop computing to mobile devices. Under his watch, he's shifted much of Intel's resources to forward-thinking technologies, from 5G networks and cloud computing to drones and self-driving cars. Artificial Intelligence and Machine Learning are at the heart of much of Intel's forward-looking plans, as the company works to stay on the bleeding edge of technology breakthroughs.

Alongside this main-stage panel, we'll also have an Off The Record session on AI with some of the top minds in the field, which will only be available to attendees at Disrupt. Plus, there are plenty of startups in Startup Alley this year that are focusing in on machine learning.

We're incredibly excited to be joined by so many top names, and hope you'll be there as well. Early bird general admission tickets are still available for what's shaping up to be another blockbuster Disrupt.

Artificial Intelligence might not destroy us after all – New York Post

Elon Musk famously equated artificial intelligence with "summoning the demon" and sounds the alarm that AI is advancing faster than anyone realizes, posing an existential threat to humanity. Stephen Hawking has warned that AI could take off and leave the human race, limited by evolution's slow pace, in the dust. Bill Gates counts himself in the camp concerned about super intelligence. And, although Mark Zuckerberg is dismissive about AI's potential threat, Facebook recently shut down an AI engine after reportedly discovering that it had created a new language humans can't understand.

Concerns about AI are entirely logical if all that exists is physical matter. If so, it'd be inevitable that AI designed by our intelligence but built on a better platform than biochemistry would exceed human capabilities that arise by chance.

In fact, in a purely physical world, fully realized AI should be recognized as the appropriate outcome of natural selection; we humans should benefit from it while we can. After all, sooner or later, humanity will cease to exist, whether from the sun running out or something more mundane, including AI-driven extinction. Until then, wouldn't it be better to maximize human flourishing with the help of AI rather than forgoing its benefits in hopes of extending humanity's end date?

As possible as all this might seem, in actuality, what we know about the human mind strongly suggests that full AI will not happen. Physical matter alone is not capable of producing whole, subjective experiences, such as watching a sunset while listening to seagulls, and the mechanisms proposed to address the known shortfalls of matter vs. mind, such as emergent properties, are inadequate and falsifiable. Therefore, it is highly probable that we have immaterial minds.

Granted, forms of AI are already achieving impressive results. These use brute force, huge and fast memory, rules-based automation, and layers of pattern matching to perform their extraordinary feats. But this processing is not aware, perceiving, feeling, cognition. The processing doesn't go beyond its intended activities even if the outcomes are unpredictable. Technology based on this level of AI will often be quite remarkable and definitely must be managed well to avoid dangerous repercussions. However, in and of itself, this AI cannot lead to a true replication of the human mind.

Full AI, that is, artificial intelligence capable of matching and perhaps exceeding the human mind, cannot be achieved unless we discover, via material means, the basis for the existence of immaterial minds, and then learn how to confer that on machines. In philosophy, the underlying issue is known as the qualia problem. Our awareness of external objects and colors; our self-consciousness; our conceptual understanding of time; our experiences of transcendence, whether simple awe in front of beauty or mathematical truth; or our mystical states, all clearly point to something that is qualitatively different from the material world. Anyone with a decent understanding of physics, computer science and the human mind ought to be able to know this, especially those most concerned about AI's possibilities.

That those who fear AI don't see its limitations indicates that even the best minds fall victim to their biases. We should be cautious about believing that exceptional achievements in some areas translate to exceptional understanding in others. For too many, including some in the media, the mantra "Question everything" applies only within certain boundaries. They never question methodological naturalism, the belief that there is nothing that exists outside the material world, which blinds them to other possibilities. Even with what seems like more open-minded thinking, some people seem to suffer from a lack of imagination or will. For example, Peter Thiel believes that the human mind and computers are deeply different, yet doesn't acknowledge that this implies the mind comprises more than physical matter. Thomas Nagel believes that consciousness could not have arisen via materialistic evolution, yet explicitly limits the implications of that because he doesn't want God to exist.

Realizing that we have immaterial minds, i.e., genuine souls, is far more important than just speculating on AI's future. Without immaterial minds, there is no sustainable basis for believing in human exceptionalism. When human life is viewed only through a materialistic lens, it gets valued based on utility. No wonder the young "nones," young Americans who don't identify with a religion, think their lives are meaningless and some begin to despair. It is time to understand that evolution is not a strictly material process but one in which the immaterial mind plays a major role in human, and probably all sentient creatures', adaptation and selection.

Deep down, we all know we're more than biological robots. That's why almost everyone rebels against materialism's implications. We don't act as though we believe everything is ultimately meaningless.

We're spiritual creatures, here by intent, living in a world where the supernatural is the norm; each and every moment of our lives is our souls in action. Immaterial ideas shape the material world and give it true meaning, not the other way around.

In the end, the greatest threat that humans face is a failure to recognize what we really are.

If we're lucky, what people learn in the pursuit of full AI will lead us to the rediscovery of the human soul, where it comes from, and the important understanding that goes along with that.

AI Vs. Bioterrorism: Artificial Intelligence Trained to Detect Anthrax by Scientists – Newsweek

South Korean scientists have been able to train artificial intelligence to detect anthrax at high speed, potentially dealing a blow to bioterrorism.

Hidden in letters, the biological agent killed five Americans and infected 17 more in the year following the 9/11 attacks, and the threat of a biological attack remains a top concern of Western security services as radicals such as the Islamic State militant group (ISIS) seek new ways to attack the West.

Researchers from the Korea Advanced Institute of Science and Technology have now created an algorithm that is able to study bacterial spores and quickly identify the biological agent, according to a paper published last week in the journal Science Advances.

Training AI to identify the bacteria using microscopic images could cut detection time drastically, from about a day to mere seconds, and the method is accurate 95 percent of the time.

Anthrax infects the body when spores enter it, mostly through inhalation, then multiply and spread, causing an illness that can be fatal. Skin infections of anthrax are less deadly.

Spores from the Sterne strain of anthrax bacteria (Bacillus anthracis) are pictured in this handout scanning electron micrograph obtained by Reuters May 28, 2015. Reuters/Center for Disease Control/Handout

"This study showed that holographic imaging and deep learning can identify anthrax in a few seconds," YongKeun Paul Park, associate professor of physics at the Korea Advanced Institute of Science and Technology, told the IEEE Spectrum blog.

"Conventional approaches such as bacterial culture or gene sequencing would take several hours to a day," he added.
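The published method relies on a deep neural network trained on holographic images of spores. As a rough, generic illustration of that kind of image classifier, and not the KAIST team's actual model, a minimal sketch in PyTorch might look like the following; the layer sizes, the 64x64 input and the class labels are all assumptions made for the example.

# Minimal sketch of a deep-learning spore classifier, loosely in the spirit of
# the approach described above. The architecture, 64x64 input size and class
# labels are illustrative assumptions, not the KAIST team's published model.
import torch
import torch.nn as nn

class SporeClassifier(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)        # extract spatial features from the image
        x = x.flatten(start_dim=1)  # flatten for the linear classifier
        return self.classifier(x)   # raw scores for [other spore, anthrax]

# A random 64x64 single-channel tensor stands in for a real holographic image.
model = SporeClassifier()
image = torch.randn(1, 1, 64, 64)
print(torch.softmax(model(image), dim=1))

In practice such a network would first be trained on labeled spore images; the point of the sketch is only that, once trained, classification is a single fast forward pass, which is why detection can drop from hours to seconds.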

Park is working with the South Korean agency responsible for developing the country's defense capabilities amid fears that North Korea may plan a biological attack against its archenemy across their shared border.

North Korea's regime is no stranger to chemical agents. South Korea has accused operatives linked to Pyongyang of responsibility for the assassination of North Korean leader Kim Jong Un's half brother, Kim Jong Nam, using a VX agent at Malaysia's Kuala Lumpur International Airport in February.

Anthrax infection has a death rate of 80 percent, so detection of the bacteria is crucial.

If anthrax were spread far and wide in an attack, thousands of those contaminated could die. Western security services therefore fear that hostile parties, such as ISIS sympathizers or regimes such as North Korea, will attempt to develop the capability for a mass-casualty attack.

The researchers say the AI innovation could bring advances elsewhere, too, including the potential to detect other bacteria, such as those that cause food poisoning and kill more than a quarter of a million people every year.

Follow this link:

AI Vs. Bioterrorism: Artificial Intelligence Trained to Detect Anthrax by Scientists - Newsweek

The Race to Cyberdefense, Artificial Intelligence and the Quantum Computer – Government Technology

I've been following cybersecurity startups and hackers for years, and I suddenly discovered why hackers are always ahead of the rest of us: they have a better business model funding them in their proof-of-concept (POC) stage of development.

To even begin protecting ourselves from their well-funded advances and attacks, cyberdefense and artificial intelligence (AI) technologies must be funded at the same level in the POC stage.

Today, however, traditional investors not only want your technology running, they also need assurances that you already have a revenue stream, which stifles potential new technology discovery at the POC level. And in some industries, this is dangerous.

Consider the fast-paced world of cybersecurity, in which companies seeking traditional funding must promote their product's technical capabilities so people will invest. This promotion and disclosure of their technology, however, gives hackers a road map to the new cyberdefense technologies and a window of time in which to learn how to exploit them.

This same road map exists for technologies covered in detail when standards groups, universities, governments and private labs publish white papers, documents that essentially assist hackers by giving them advance notice of cyberdefense techniques.

In addition, some hackers receive immediate funding through nation-states that coordinate cyberwarfare like a traditional military, and others are involved in organized secret groups that fund the use of ransomware and DDoS attacks. These hackers get immediate funding and then throw their technology on the Internet for POC discovery.

One project that strongly makes the case for rapidly funding cyberdefense technologies in an effort to keep up with hackers is the U.S. Department of Homeland Security's (DHS) $5.7 billion EINSTEIN cyberdefense system, which was deemed obsolete upon its deployment for failing to detect 94 percent of security vulnerabilities. As this situation illustrates, the traditional method of funding cyberdefense, which takes years of bureaucratic analysis and vendor contracts, does not work in the fast-moving world of cyberdefense technology. After the EINSTEIN project failure, DHS decided to conduct an assessment; it is currently working to understand whether it is making the right investments for dealing with the ever-changing cyberenvironment.

But DHS also faces other roadblocks, as even the large technology companies and contractors with which it does business have their own bureaucracies and investments that ultimately deter the department from getting the best in cyberdefense technologies. And once universities, standards groups, regulation and funding approvals are added to these processes, you're pretty much assured to be headed for another disaster.

But DHS doesn't need to develop these technologies itself. The department needs to support public- and private-sector POCs to rapidly mature and deploy new cyberdefense technologies. This suggestion is supported by what other countries are successfully doing, including our adversaries.

The same two things that have motivated mankind throughout history, immediate power and money, are now motivating hackers, while cyberdefense technologies take years to be deployed. So I'll say it again: The motivational and funding model of cyberdefense technologies must change. The key to successful cyberdefense technology development is making it as aggressive as the hackers that attack it. And this needs to be done at the conceptual POC level.

The concern in cyberdefense (and really all AI) is the race to the quantum computer.

Quantum computer technologies can't be hacked, and in theory their processing power can break all current encryption. The computational physics behind quantum computing also offers remarkable capabilities that will drastically change all current AI and cyberdefense technologies. This is a winner-takes-all technology that offers absolute security along with capabilities that we can now only imagine.

The most recent funding source for hackers is Bitcoin, which uses the decentralized and secure blockchain technology. It has even been used to support POC funding in what is called an Initial Coin Offering (ICO), the intent of which is to crowdfund early startup companies at the development or POC level by bypassing traditional and lengthy funding avenues. Because this type of startup seed offering has been clouded with scams, it is now in regulatory limbo.

Some states have passed laws that make it difficult to legally present and offer an ICO. While the U.S. seems to be pushing ICO regulation, other countries are still deciding what to do. But like them or not, ICOs offer first-time startups an avenue of fast-track funding at the concept level, where engineers and scientists can jump on newer technologies by focusing seed money on testing their concepts. Bogging ICOs down with regulatory laws will both slow legitimate POC innovation in the U.S. and give other countries a competitive edge.

Another barrier to cyberdefense POC funding is the size and technological control of a handful of tech companies. Google, Facebook, Amazon, Microsoft and Apple have become enormous concentrations of wealth and data, drawing the attention of economists and academics who warn they're growing too powerful. Now as big as major American cities, these companies are mega centers of both money and technology. They are so large and control so much of the market that many are beginning to view them as in violation of the Sherman Antitrust Act. So how can small startups compete with these tech giants and potentially fund POCs in areas such as cyberdefense and AI? By aligning with giant companies in industries that have the most need for cyberdefense and AI technologies: critical infrastructure.

The industries that are most vulnerable and could cause the most devastation if hacked are those involved in critical infrastructure. These large industries have the resources to fund cyberdefense technologies at the concept level and they would obtain superior cyberdefense technologies in doing so.

Cyberattacks on critical infrastructure could devastate entire national economies, so these systems must be protected by the most advanced cyberdefense. Quantum computing and artificial intelligence will bring game-changing technology in both cyberdefense and the new intellectual property deriving from quantum sciences. Entering these new technologies at the POC level is like being a Microsoft or Google years ago. Funding the development of these new technologies in cyberdefense and AI is needed soon, but what about today?

Future quantum computer capabilities will also demand immediate short-term fixes in current cyberdefense and AI. New quantum-ready compressed encryption and cyberdefense deep learning AI must be funded and tested now at the concept level. The power grid, oil and gas, and even existing telecoms are perfect targets for this funding and development. Investing today would offer current cyberdefense and business intelligence protection while creating new profit centers in the licensing and sale of these leading-edge technologies. This is true for many other industries, all differing in their approach and requiring specialized cyberdefense capabilities and new intelligence gathering that will shape their future.

So we must find creative ways of rapidly funding cyberdefense technologies at the conceptual level. If this is what hackers do and it's why they're always one step ahead, shouldn't we work to surpass them?

Read more:

The Race to Cyberdefense, Artificial Intelligence and the Quantum Computer - Government Technology

When artificial intelligence goes wrong – Livemint

Even as artificial intelligence and machine learning continue to break new ground, there is enough evidence to indicate how easy it is for bias to creep into even the most advanced algorithms. Photo: iStockphoto

Bengaluru: Last year, for the first time ever, an international beauty contest was judged by machines. Thousands of people from across the world submitted their photos to Beauty.AI, hoping that their faces would be selected by an advanced algorithm free of human biases, in the process accurately defining what constitutes human beauty.

In preparation, the algorithm had studied hundreds of images of past beauty contests, training itself to recognize human beauty based on the winners. But what was supposed to be a breakthrough moment that would showcase the potential of modern self-learning, artificially intelligent algorithms rapidly turned into an embarrassment for the creators of Beauty.AI, as the algorithm picked the winners solely on the basis of skin colour.

"The algorithm made a fairly non-trivial correlation between skin colour and beauty. A classic example of bias creeping into an algorithm," says Nisheeth K. Vishnoi, an associate professor at the School of Computer and Communication Sciences at the Switzerland-based École Polytechnique Fédérale de Lausanne (EPFL). He specializes in issues related to algorithmic bias.

A widely cited piece titled "Machine Bias" from US-based investigative journalism organization ProPublica in 2016 highlighted another disturbing case.

It cited an incident involving a black teenager named Brisha Borden who was arrested for riding an unlocked bicycle she found on the road. The police estimated the value of the item was about $80.

In a separate incident, a 41-year-old Caucasian man named Vernon Prater was arrested for shoplifting goods worth roughly the same amount. Unlike Borden, Prater had a prior criminal record and had already served prison time.

Yet, when Borden and Prater were brought for sentencing, a self-learning program determined Borden was more likely to commit future crimes than Prater, exhibiting the sort of racial bias computers were not supposed to have. Two years later, it was proved wrong when Prater was charged with another crime, while Borden's record remained clean.

And who can forget Tay, the infamous racist chatbot that Microsoft Corp. developed last year?

Even as artificial intelligence and machine learning continue to break new ground, there is enough evidence to indicate how easy it is for bias to creep into even the most advanced algorithms. Given the extent to which these algorithms are capable of building deeply personal profiles about us from relatively trivial information, the impact that this can have on personal privacy is significant.

This issue caught the attention of the US government, which in October 2016 published a comprehensive report titled "Preparing for the Future of Artificial Intelligence," turning the spotlight on the issue of algorithmic bias. It raised concerns about how machine-learning algorithms can discriminate against people or sets of people based on the personal profiles they develop of all of us.

"If a machine learning model is used to screen job applicants, and if the data used to train the model reflects past decisions that are biased, the result could be to perpetuate past bias. For example, looking for candidates who resemble past hires may bias a system toward hiring more people like those already on a team, rather than considering the best candidates across the full diversity of potential applicants," the report says.

"The difficulty of understanding machine learning results is at odds with the common misconception that complex algorithms always do what their designers choose to have them do, and therefore that bias will creep into an algorithm if and only if its developers themselves suffer from conscious or unconscious bias. It is certainly true that a technology developer who wants to produce a biased algorithm can do so, and that unconscious bias may cause practitioners to apply insufficient effort to preventing bias," it says.
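The report's hiring example can be made concrete with a small synthetic experiment. The sketch below is an illustration rather than anything from the report: the data, the "skill" and "group" features and the size of the historical penalty are all invented, but it shows how a model fitted to biased past decisions scores otherwise identical candidates differently.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic illustration only: past hiring decisions reward skill but also
# penalise group B, and a model trained on those decisions learns the penalty.
rng = np.random.default_rng(0)
n = 2000
skill = rng.normal(size=n)              # the genuinely relevant feature
group = rng.integers(0, 2, size=n)      # 0 = group A, 1 = group B
hired = (skill - 1.0 * group + rng.normal(scale=0.5, size=n)) > 0

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two new applicants with identical skill, differing only in group membership.
applicants = np.array([[1.0, 0.0], [1.0, 1.0]])
print(model.predict_proba(applicants)[:, 1])  # group B gets a markedly lower score

Nothing in the code says "discriminate"; the bias arrives entirely through the historical labels, which is exactly the mechanism the report warns about.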

Over the years, social media platforms have been using similar self-learning algorithms to personalize their services, offering content better suited to the preferences of their users, based solely on their past behaviour on the site in terms of what they liked or the links they clicked on.

"What you are seeing on platforms such as Google or Facebook is extreme personalization, which is basically when the algorithm realizes that you prefer one option over another. Maybe you have a slight bias towards (US President Donald) Trump versus Hillary (Clinton) or (Prime Minister Narendra) Modi versus other opponents; that's when you get to see more and more articles which are confirming your bias. The trouble is that as you see more and more such articles, it actually influences your views," says EPFL's Vishnoi.
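A toy simulation makes the feedback loop Vishnoi describes easy to see. This is not how Google or Facebook actually rank content; the two topics, the click probabilities and the 5 percent reinforcement step are assumptions chosen purely for illustration.

import random

# Toy filter bubble: every click slightly boosts a topic's weight, and the feed
# samples topics in proportion to those weights, so a small initial preference
# snowballs over time. The topics and the simulated reader are invented.
random.seed(1)
weights = {"candidate_A": 1.0, "candidate_B": 1.0}

def pick_article(w):
    topics, vals = zip(*w.items())
    return random.choices(topics, weights=vals, k=1)[0]

def reader_clicks(topic):
    # Simulated reader with only a slight initial preference for candidate_A.
    return random.random() < (0.6 if topic == "candidate_A" else 0.4)

for _ in range(500):
    shown = pick_article(weights)
    if reader_clicks(shown):
        weights[shown] *= 1.05  # reinforce whatever was clicked

print(weights)  # candidate_A's weight ends up far larger: the feed has narrowed

After a few hundred rounds the feed is dominated by the initially preferred topic, which is the "more and more articles confirming your bias" effect in miniature.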

"The opinions of human beings are malleable. The US election is a great example of how algorithmic bots were used to influence some of these very important historical events of mankind," he adds, referring to the impact of fake news on recent global events.

Experts, however, believe that these algorithms are rarely the product of malice. "It's just a product of careless algorithm design," says Elisa Celis, a senior researcher who works with Vishnoi at EPFL.

How does one detect bias in an algorithm? It bears mentioning that machine-learning algorithms and neural networks are designed to function without human involvement. "Even the most skilled data scientist has no way to predict how his algorithms will process the data provided to them," said Mint columnist and lawyer Rahul Matthan in a recent research paper on the issue of data privacy published by the Takshashila Institute, titled "Beyond consent: A new paradigm for data protection."

One solution is black-box testing, which determines whether an algorithm is working as effectively as it should without peering into its internal structure. "In a black-box audit, the actual algorithms of the data controllers are not reviewed. Instead, the audit compares the input algorithm to the resulting output to verify that the algorithm is in fact performing in a privacy-preserving manner. This mechanism is designed to strike a balance between the auditability of the algorithm on the one hand and the need to preserve proprietary advantage of the data controller on the other. Data controllers should be mandated to make themselves and their algorithms accessible for a black box audit," says Matthan, who is also a fellow with Takshashila's technology and policy research programme.
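In code, the essence of such an audit is that the auditor touches the model only through its prediction interface and compares inputs with outputs. The sketch below is a minimal, generic version of that idea, not Matthan's proposed mechanism: the protected attribute, the stand-in model and the 80 percent disparity threshold are all assumptions for illustration.

import numpy as np

def blackbox_audit(predict, inputs, protected, threshold=0.8):
    # The model is queried only through predict(); its internals are never read.
    outputs = np.asarray(predict(inputs))
    rates = {int(g): float(outputs[protected == g].mean()) for g in np.unique(protected)}
    low, high = min(rates.values()), max(rates.values())
    flagged = high > 0 and (low / high) < threshold  # crude disparity check
    return rates, flagged

# Stand-in "controller" model that quietly favours group 0 (column 1 is the group).
rng = np.random.default_rng(2)
data = np.column_stack([rng.normal(size=1000), rng.integers(0, 2, size=1000)])
groups = data[:, 1].astype(int)
stand_in_model = lambda x: ((x[:, 0] + 0.8 * (x[:, 1] == 0)) > 0.5).astype(int)

rates, flagged = blackbox_audit(stand_in_model, data, groups)
print(rates, "disparity flagged:", flagged)

The auditor never sees how the stand-in model works; the skew shows up purely in the relationship between inputs and outputs, which is the point of the black-box approach.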

He suggests the creation of a class of technically skilled personnel, or learned intermediaries, whose sole job will be to protect data rights. Learned intermediaries will be technical personnel trained to evaluate the output of machine-learning algorithms and detect bias on the margins, as well as legitimate auditors who must conduct periodic reviews of the data algorithms with the objective of making them stronger and more privacy protective. They should be capable of indicating appropriate remedial measures if they detect bias in an algorithm. "For instance, a learned intermediary can introduce an appropriate amount of noise into the processing so that any bias caused over time due to a set pattern is fuzzed out," Matthan explains.
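The "appropriate amount of noise" idea is closely related to randomized response and differential-privacy-style perturbation. The snippet below is only a sketch of that general idea, not a mechanism Matthan specifies; the 10 percent flip probability is an arbitrary assumption.

import random

def fuzz_decisions(decisions, flip_prob=0.1, seed=3):
    # With a small probability, flip each automated decision so that a rigid,
    # potentially biased pattern cannot fully determine outcomes. The flip
    # probability here is illustrative, not a recommended value.
    rng = random.Random(seed)
    return [(not d) if rng.random() < flip_prob else d for d in decisions]

raw = [True, True, False, True, False, False, True, True]
print(fuzz_decisions(raw))

Choosing how much noise to add is the hard part: too little leaves the biased pattern intact, too much degrades the algorithm's usefulness, which is precisely why Matthan argues for skilled human intermediaries rather than a fixed rule.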

That said, there remain significant challenges in removing bias from an algorithm once it is discovered.

"If you are talking about removing biases from algorithms and developing appropriate solutions, this is an area that is still largely in the hands of academia and removed from the broader industry. It will take time for the industry to adopt these solutions on a larger scale," says Animesh Mukherjee, an associate professor at the Indian Institute of Technology, Kharagpur, who specializes in areas such as natural language processing and complex algorithms.

This is the first in a four-part series. The next part will focus on consent as the basis of privacy protection.

A nine-judge Constitution bench of the Supreme Court is currently deliberating whether or not Indian citizens have the right to privacy. At the same time, the government has appointed a committee under the chairmanship of retired Supreme Court judge B.N. Srikrishna to formulate a data protection law for the country. Against this backdrop, a new discussion paper from the Takshashila Institute has proposed a model of privacy particularly suited for a data-intense world. Over the course of this week we will take a deeper look at that model and why we need a new paradigm for privacy. In that context, we examine the increasing reliance on software to make decisions for us, assuming that dispassionate algorithms will ensure a level of fairness that we are denied because of human frailties. But algorithms have their own shortcomings, and those can pose a serious threat to our personal privacy.

Excerpt from:

When artificial intelligence goes wrong - Livemint

Artificial intelligence is inevitable. Will you embrace or resist it in your practice? – Indiana Lawyer

Growing up, Kightlinger and Gray LLP attorney Adam Ira can recall members of his family, many of whom were factory workers, expressing concern about the prospect of automated machines taking their jobs. Now, Ira said, similar concerns are creeping into his work as a lawyer, as the rise of artificial intelligence in the practice of law has begun automating legal tasks previously performed by humans.

As the number of available AI products grows, attorneys have begun to gravitate toward tools that enable them to do their work quickly and more efficiently. Artificial intelligence can come in multiple forms, legal tech experts say, from simple document automation to more complex intelligence using algorithms to predict legal outcomes.

In recent months, several new AI products have been introduced with the promise of automating the mundane tasks of being a lawyer, leaving attorneys with more time to focus on the complex legal questions raised by their clients.

For example, Seattle-based TurboPatent Corp. launched an AI tool in mid-July known as RoboReview. Through RoboReview, patent attorneys can upload a patent application into the AI software, which then scans the document and assesses it for similarities to previous patent applications and uses the level of similarity to predict patent eligibility. RoboReview can also make other predictions about the patent process, such as how long the process might take or what actions the U.S. Patent and Trademark Office may take with the application, said Dave Billmaier, TurboPatent vice president of product marketing.
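TurboPatent has not published how RoboReview scores similarity, so the snippet below is only a generic illustration of the underlying idea of comparing a new application's text against prior applications. The mini-corpus, the new application text and the 0.5 "high overlap" threshold are invented for the example, and TF-IDF with cosine similarity is just one simple way to do such a comparison.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented mini-corpus of prior applications and one new application to screen.
prior_applications = [
    "A rotary blade assembly for unmanned aerial vehicles with folding hinges.",
    "A lithium battery enclosure providing thermal insulation for drones.",
]
new_application = "A folding hinge mechanism for rotary blades on unmanned aircraft."

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(prior_applications + [new_application])
n_prior = len(prior_applications)
scores = cosine_similarity(matrix[n_prior], matrix[:n_prior]).ravel()

print(scores)  # similarity of the new application to each prior one
print("high overlap with prior art" if scores.max() > 0.5 else "low overlap with prior art")

A production tool would work over millions of filings and layer prediction models for examiner behaviour and pendency on top, but the core comparison step can be this simple.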

Shortly after RoboReview went public, Amy Wan, attorney and founder and CEO of Bootstrap Legal, introduced an AI product that automates the process of drafting legal paperwork for people trying to raise capital for a real estate project of $2 million or less. As a former real estate securities attorney, Wan said she witnessed firsthand how inefficient the process of drafting such documents could be, especially considering that much of the work involved routine tasks such as copying and pasting from previous documents.

With Wan's AI product, users answer questions about their real estate project, and the software uses those answers to develop the necessary legal documents, which are returned to the user within 48 hours. Such technology expedites drafting the documents, a process she said could otherwise take 20 to 25 hours to complete, while also cutting the costs associated with raising real estate capital. Wan said her company and AI product are based on the principle that cost considerations should not prevent people from accessing legal services.

Saving time and cutting costs are AI advantages that serve as the key selling points for legal tech developers, as clients have come to expect their attorneys to use modern technology to perform efficient work at the lowest possible cost, said Jason Houdek, a patent and intellectual property attorney with Taft Stettinius & Hollister LLP. Though RoboReview is new, Houdek said he has been using similar AI tools to determine patent eligibility, ensure application quality and predict patent examiner behavior for several years.

Similarly, Haley Altman, CEO of Indianapolis-based Doxly Inc., said most legal tech entrepreneurs like her are trying to develop AI tools that take large sets of data or documents and extrapolate the relevant information lawyers are looking for, thus reducing the amount of time they spend combing through documents. The Doxly software, which is designed to automate the legal transaction process, uses AI to mimic a transactional attorney's natural workflow, making the software feel natural, she said.

Job security?

Despite these benefits, some attorneys are concerned that continued use of AI in the practice of law could put them out of a job. Further, Jim Billmaier, TurboPatent CEO, said the old guard of attorneys, those who have been in practice for many years, can be inclined to resist artificial intelligence tools because they go against traditional practice.

There may be some legitimacy to those concerns, the attorneys and legal tech experts said. For example, attorneys at large firms that still employ the typical billable-hour model could see a drop in their hours as a result of AI products, said Dan Carroll with vrsus LLC, a rebranded version of legal tech company CasePacer. The vrsus technology utilizes AI to enable attorneys at plaintiffs firms to reach outcomes for their clients as quickly as possible, rather than focusing on how many hours they are able to bill, Carroll said.

Similarly, certain practice areas that are more transactional in nature, such as bankruptcy or tax law, might be more susceptible to automation, Ira said.

But such automation is now inevitable, as further AI development is a matter of when, not if, Houdek said. Jim Billmaier agreed and noted that attorneys who are resistant to AI advancements will find themselves underperforming if they choose not to take advantage of tools that increase efficiency.

While technological advancements might be inevitable, they do not have to be uncontrollable, said Ira Smith, vrsus chief strategy officer. Few attorneys fully understand the nuances of what makes AI work, Smith said, yet few tech developers, such as IBM, understand the nuances of practicing law.

As a result, attorneys and legal tech companies should focus less on how new artificial intelligence products might change their work and instead try to mold whatever AI tools are currently on the market to improve the product of their work, Smith said. He encouraged attorneys to be product agnostic and focus less on the technological platform and more on technologys possible benefits.

"Why would it matter whether (IBM's) Watson is utilizing my data as long as I can take that and serve it back to my clients?" Smith said.

Human advantage

Even as legal tech and other companies offer new and ever more advanced AI products, attorneys said the human mind will always be needed in the practice of law.

For example, even if a computer becomes intelligent enough to draw up contracts on its own, lawyers will still need to review and finalize them, Altman said. Ira agreed and noted that use of AI can create ethical issues, as attorneys must ensure the automated documents they produce reflect accurate and competent work.

Further, the power of persuasion is a trait that is uniquely human, and one that is critical to the practice of law, Ira said. Though an intelligent computer might be able to cobble together a legal argument one day, an advancement he thinks is still at least 10 to 15 years off, it could never speak to a judge or a jury in a manner meant to persuade and effectively advocate on behalf of a client, Ira said.

Similarly, judges will always be needed to use their minds and legal training to decide the outcome of cases, Houdek said, and human juries will always be needed to decide cases.

Though some human jobs or billable hours might decrease as a result of advancements in artificial intelligence, the legal tech experts said AI is more of a benefit than a threat because it allows legal professionals to use their minds and training for the creative work that comes with being an attorney.

"AI technology isn't taking their jobs," Altman said. "The whole point of it is to enable them to do the work that they really want to be focusing on."

Read the rest here:

Artificial intelligence is inevitable. Will you embrace or resist it in your practice? - Indiana Lawyer

Customs charges Pacific Aerospace for alleged unlawful exports to North Korea – Waikato Times

THOMAS MANCH

Last updated 14:42, August 9 2017

Pacific Aerospace chief executive Damian Camp with a P-750, the type of plane spotted at a North Korean airshow in October 2016. (Photo: Supplied)

New Zealand Customs has charged a Hamilton-based aircraft manufacturer for the export of aircraft parts to North Korea.

Customs confirmed last week it was investigating Pacific Aerospace for potentially breaching United Nations sanctions after a plane was sighted at a North Korean airshow in September 2016.

On Tuesday, Customs confirmed charges have been laid against the company for three breaches of United Nations sanctions under New Zealand law, and one charge under the Customs and Excise Act 1996.

A New Zealand-made Pacific Aerospace P-750 XSTOL was spotted at North Korea's first airshow in October 2016. (Video: YouTube)

The charges relate to the export of aircraft parts, and an alleged "erroneous declaration" about parts inside an exported aircraft.

Pacific Aerospace chief executive Damian Camp declined to comment while the company reviewed the charges.

North Korean leader Kim Jong-Un continues to progress nuclear weapons development, drawing tougher sanctions from UN member states. (Photo: KCNA)

Previously, Camp expressed surprise when one of the company's P-750 XSTOL planes was spotted at the Wonsan Air Festival in North Korea in September 2016.

A UN Security Council report from February includes a chain of emails that suggest the company knew one of its planes was in North Korea, and planned to provide parts and engineering training.

The emails, from January 2016, show Pacific Aerospace and its Chinese partner were planning to provide a replacement flap motor, tools and training to fix a problem with the aircraft.

The direct or indirect supply of aircraft, related parts and aerospace training to North Korea is a violation of UN Security Council Resolution 1718.

Pacific Aerospace is based at Hamilton Airport. (Photo: Tom Lee/Stuff)

The 2006 resolution was agreed on by UN member states in response to North Korea testing a nuclear weapon.

Under the United Nations sanctions imposed against North Korea in 2006, a company which breached a UN-mandated ban could be fined up to $100,000.

A company can be fined up to $5000 for making an erroneous declaration under the Customs and Excise Act.

The P-750 XSTOL, called the "Swiss army knife of an aircraft" by its maker, is able to take off on short runways, ascend quickly, and carry heavy loads. (Photo: Tom Lee/Stuff)

The Ministry of Foreign Affairs and Trade (MFAT) declined to comment, but in a previous statement said it expects New Zealand companies to abide by the letter and spirit of UN sanctions.

-Stuff

More here:

Customs charges Pacific Aerospace for alleged unlawful exports to North Korea - Waikato Times

Is This Aerospace Stock With 158% Growth Set To Take Off? – Investor’s Business Daily

Defense and aerospace giants Lockheed Martin (LMT), Northrop Grumman (NOC) and Raytheon (RTN) have already shot past their buy points, but fastener distributor and logistics services leader KLX (KLXI) is trying to launch its own breakout move.

Driven by Donald Trump's plan to boost defense spending and the rising tensions with North Korea, China and Russia, the Aerospace/Defense industry group has moved up the rankings to No. 35 out of 197, up from No. 87 six weeks ago.

KLX and Northrop are tied for the No. 9 spot among their industry peers with a 91 Composite Rating, ahead of Lockheed (No. 12) and Raytheon (No. 13).

After several quarters of declining earnings and sales growth, KLX has bounced back with three consecutive quarters of triple-digit earnings growth, including a 158% rise in fiscal Q1, ended April 30. (Note that KLX's big 689% EPS gain in fiscal Q4 was based on a comparison to a year-over-year quarter that had no earnings.)

Sales growth rose from 7% to 17% in the most recent report.

Analysts expect Q2 earnings to rise 55% when the company reports Aug. 23.

KLX is a global leader in aerospace fasteners, consumables and supply-chain management services for commercial airliners, business jets and defense original-equipment manufacturers (OEMs).

Through KLX Energy Services, the company also provides technical services and rental equipment to the oil and gas exploration and production industry.

KLX was spun off from B/E Aerospace and had its IPO in 2014.

Institutional demand for the stock is reflected in a B Accumulation/Distribution Rating, 1.4 up/down volume rating and five quarters of rising fund ownership.

Heading into Tuesday's session, the stock had been trading within a tight price range for several weeks, which can be a sign a stock is like a tightly coiled spring getting ready to jump higher.

That seemed to be the case for KLX as it sprang higher Tuesday to temporarily clear a 53.23 entry in a flat base. But the stock gave back much of its earlier gains, closing about 1% higher but below the buy point. Volume was more than double its daily average.

Keep in mind that the company is set to report earnings soon, and that can swing the stock sharply higher or lower.

Continue reading here:

Is This Aerospace Stock With 158% Growth Set To Take Off? - Investor's Business Daily

South Bay Aerospace Industry Alliance Launched to Support Local Aerospace Sector – Los Angeles Business Journal

Los Angeles Air Force Base in El Segundo. (Courtesy photo)

Business groups in the South Bay have launched the South Bay Aerospace Industry Alliance to support the local aerospace sector and bolster efforts to prevent the closure of Los Angeles Air Force Base.

The South Bay Association of Chambers of Commerce, which comprises 18 business groups representing communities from Los Angeles International Airport to Long Beach, formed the alliance last week, in part as a response to repeated threats to close the air force base.

About 6,000 administrators and engineers at the Los Angeles Air Force Base in an El Segundo office campus oversee billions of dollars in defense contracts to local aerospace firms. The base has been put forward as a target for closure or consolidation to other bases several times over the past 25 years.

"When there are efforts in Washington, D.C., that threaten the LAAFB's presence in El Segundo, the Alliance will most definitely do all it can with its broad group of community partners to help keep the base here," said Michael Jackson, a transportation project management consultant and former local aerospace executive who chairs the new alliance.

Jackson said the alliance's 11-member executive committee will meet monthly at the offices of the Redondo Beach Chamber of Commerce to consider a broad array of issues impacting the local aerospace industry, such as federal defense funding for fighter jets or potential relocation of aerospace company operations outside of the region.

Public policy and energy reporter Howard Fine can be reached at hfine@labusinessjournal.com. Follow him on Twitter @howardafine.

Original post:

South Bay Aerospace Industry Alliance Launched to Support Local Aerospace Sector - Los Angeles Business Journal