The Physics of Interstellar Travel: Explorations in …

To one day, reach the stars.

When discussing the possibility of interstellar travel, there is something called the "giggle factor." Some scientists tend to scoff at the idea of interstellar travel because of the enormous distances that separate the stars. According to Special Relativity (1905), no usable information can travel faster than light locally, and hence it would take centuries to millennia for an extra-terrestrial civilization to travel between the stars. Even the familiar stars we see at night are about 50 to 100 light years from us, and our galaxy is 100,000 light years across. The nearest galaxy is 2 million light years from us. The critics say that the universe is simply too big for interstellar travel to be practical.

Similarly, investigations into UFOs that may originate from another planet are sometimes the third rail of someone's scientific career. There is no funding for anyone seriously looking at unidentified objects in space, and one's reputation may suffer if one pursues an interest in these unorthodox matters. In addition, perhaps 99% of all sightings of UFOs can be dismissed as being caused by familiar phenomena, such as the planet Venus, swamp gas (which can glow in the dark under certain conditions), meteors, satellites, weather balloons, even radar echoes that bounce off mountains. (What is disturbing to a physicist, however, is the remaining 1% of these sightings, which are multiple sightings made by multiple methods of observation. Some of the most intriguing sightings have been made by seasoned pilots and passengers aboard airline flights, which have also been tracked by radar and videotaped. Sightings like this are harder to dismiss.)

But to an astronomer, the existence of intelligent life in the universe is a compelling idea by itself: extra-terrestrial beings may exist around other stars who are centuries to millennia more advanced than we are. Within the Milky Way galaxy alone, there are over 100 billion stars, and there are an uncountable number of galaxies in the universe. About half of the stars we see in the heavens are double stars, probably making them unsuitable for intelligent life, but the remaining half probably have solar systems somewhat similar to ours. Although none of the over 100 extra-solar planets so far discovered in deep space resemble ours, it is inevitable, many scientists believe, that one day we will discover small, earth-like planets which have liquid water (the universal solvent which made possible the first DNA perhaps 3.5 billion years ago in the oceans). The discovery of earth-like planets may take place within 20 years, when NASA intends to launch a space interferometry satellite into orbit which may be sensitive enough to detect small planets orbiting other stars.

So far, we see no hard evidence of signals from extra-terrestrial civilizations from any earth-like planet. The SETI project (the search for extra-terrestrial intelligence) has yet to produce any reproducible evidence of intelligent life in the universe from such earth-like planets, but the matter still deserves serious scientific analysis. The key is to reanalyze the objection to faster-than-light travel.

A critical look at this issue must necessarily embrace two new observations. First, Special Relativity itself was superseded by Einstein's own more powerful General Relativity (1915), in which faster-than-light travel is possible under certain rare conditions. The principal difficulty is amassing enough energy of a certain type to break the light barrier. Second, one must therefore analyze extra-terrestrial civilizations on the basis of their total energy output and the laws of thermodynamics. In this respect, one must analyze civilizations which are perhaps thousands to millions of years ahead of ours.

The first realistic attempt to analyze extra-terrestrial civilizations from the point of view of the laws of physics and thermodynamics was made by the Russian astrophysicist Nikolai Kardashev. He ranked possible civilizations on the basis of total energy output, a quantity which can be measured and used as a guide to explore the dynamics of advanced civilizations:

Type I: this civilization harnesses the energy output of an entire planet.

Type II: this civilization harnesses the energy output of a star, and generates about 10 billion times the energy output of a Type I civilization.

Type III: this civilization harnesses the energy output of a galaxy, or about 10 billion times the energy output of a Type II civilization.

A Type I civilization would be able to manipulate truly planetary energies. They might, for example, control or modify their weather. They would have the power to manipulate planetary phenomena, such as hurricanes, which can release the energy of hundreds of hydrogen bombs. Perhaps volcanoes or even earthquakes may be altered by such a civilization.

A Type II civilization may resemble the Federation of Planets seen on the TV program Star Trek (which is capable of igniting stars and has colonized a tiny fraction of the nearby stars in the galaxy). A Type II civilization might be able to manipulate the power of solar flares.

A Type III civilization may resemble the Borg, or perhaps the Empire found in the Star Wars saga. They have colonized the galaxy itself, extracting energy from hundreds of billions of stars.

By contrast, we are a Type 0 civilization, which extracts its energy from dead plants (oil and coal). Growing at the average rate of about 3% per year, however, one may calculate that our own civilization may attain Type I status in about 100-200 years, Type II status in a few thousand years, and Type III status in about 100,000 to a million years. These time scales are insignificant when compared with the age of the universe itself.
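
As a rough illustration of the arithmetic behind these estimates, here is a minimal sketch in Python. The present-day power figure and the Kardashev thresholds below are order-of-magnitude assumptions (in the spirit of Carl Sagan's common interpolation of the scale), not numbers from the article, and the results shift considerably with the assumed growth rate.

```python
import math

def years_to_reach(target_watts, current_watts, annual_growth):
    """Years of compound growth needed for energy use to rise from
    current_watts to target_watts at the given annual rate."""
    return math.log(target_watts / current_watts) / math.log(1 + annual_growth)

CURRENT = 1e13   # rough present-day human power use, in watts (assumption)
TYPE_I = 1e16    # planetary scale (definitions vary by an order of magnitude)
TYPE_II = 1e26   # stellar scale: about 10 billion times Type I

print(round(years_to_reach(TYPE_I, CURRENT, 0.03)))   # ~230 years
print(round(years_to_reach(TYPE_II, CURRENT, 0.03)))  # ~1000 years
```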

On this scale, one may now rank the different propulsion systems available to different types of civilizations:

Type 0: chemical rockets and ion engines

Type I: ram-jet fusion engines and laser or photonic sails

Type II: anti-matter drives and fleets of self-replicating von Neumann probes

Type III: Planck-energy propulsion, such as wormholes and warped space

Propulsion systems may be ranked by two quantities: their specific impulse, and the final velocity of travel they allow. Strictly speaking, thrust multiplied by the time over which the thrust acts is the total impulse; specific impulse is the impulse delivered per unit weight of propellant, and is proportional to the exhaust velocity. At present, almost all our rockets are based on chemical reactions. Chemical rockets have the smallest specific impulse, a few hundred seconds at best. Their thrust may be measured in millions of pounds, but they burn for only a few minutes, so the final velocities they reach are quite small.
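
To see what that implies, here is a minimal sketch using the Tsiolkovsky rocket equation, which links specific impulse and mass ratio to the velocity a rocket can reach. The numbers are typical textbook values, not figures from the article.

```python
import math

G0 = 9.81  # standard gravity, m/s^2; converts Isp in seconds to exhaust velocity

def delta_v(isp_seconds, initial_mass, final_mass):
    """Tsiolkovsky rocket equation: ideal velocity gain in m/s."""
    return isp_seconds * G0 * math.log(initial_mass / final_mass)

# A good chemical stage: Isp ~ 450 s, with 90% of its mass as propellant.
v = delta_v(450, 100.0, 10.0)
print(v / 1000, "km/s")  # ~10 km/s, i.e. about 0.003% of the speed of light
```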

NASA is experimenting today with ion engines, which have a much larger specific impulse, since they can operate for months, but have an extremely low thrust. For example, an ion engine which ejects cesium ions may have the thrust of a few ounces, but in deep space they may reach great velocities over a period of time since they can operate continuously. They make up in time what they lose in thrust. Eventually, long-haul missions between planets may be conducted by ion engines.
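
To make the thrust-versus-time trade-off concrete, here is a small sketch with made-up but plausible numbers (a few ounces of thrust on a half-tonne probe, firing for six months); the figures are illustrative assumptions, not data from the article.

```python
thrust_n = 0.8          # "a few ounces" of force is roughly 0.8 newtons
probe_mass_kg = 500.0   # assumed small deep-space probe
burn_seconds = 6 * 30 * 24 * 3600  # about six months of continuous thrust

acceleration = thrust_n / probe_mass_kg       # ~1.6e-3 m/s^2
final_velocity = acceleration * burn_seconds  # ignores propellant mass loss
print(final_velocity / 1000, "km/s")          # ~25 km/s, far beyond a chemical stage
```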

For a Type I civilization, one can envision newer types of technologies emerging. Ram-jet fusion engines have an even larger specific impulse, operating for years by consuming the free hydrogen found in deep space. However, it may take decades before fusion power is harnessed commercially on earth, and the proton-proton fusion process of a ram-jet fusion engine may take even longer to develop, perhaps a century or more. Laser or photonic engines, in which laser beams push against a gigantic sail, may have even larger specific impulses, since the power source stays at home. One can envision huge laser batteries placed on the moon which generate large laser beams which then push a laser sail in outer space. This technology, which depends on operating large bases on the moon, is probably many centuries away.
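
The physics here is radiation pressure: a perfectly reflecting sail feels a force of 2P/c for beam power P. As a hedged sketch (the 100-gigawatt beam is an assumed figure, not one from the article), the resulting thrust is surprisingly modest:

```python
C = 3.0e8  # speed of light, m/s

def sail_thrust(beam_power_watts, reflectivity=1.0):
    """Radiation-pressure force on a sail; a perfect mirror feels 2P/c."""
    return (1 + reflectivity) * beam_power_watts / C

print(sail_thrust(1e11), "N")  # a 100 GW beam pushes with only ~670 newtons
```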

For a Type II civilization, a new form of propulsion is possible: the anti-matter drive. Matter-anti-matter collisions provide a 100% efficient way to extract energy from matter. However, anti-matter is an exotic form of matter which is extremely expensive to produce. The atom smasher at CERN, outside Geneva, is barely able to make tiny samples of anti-hydrogen gas (anti-electrons circling around anti-protons). It may take many centuries to millennia to bring down the cost so that it can be used for space flight.
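
The "100% efficient" claim is just E = mc^2 applied to the annihilating pair. A one-line check (the one-gram figure is an arbitrary example, not from the article):

```python
C = 3.0e8  # speed of light, m/s

def annihilation_energy(antimatter_kg):
    """E = mc^2 for the antimatter plus an equal mass of ordinary matter."""
    return 2 * antimatter_kg * C**2

print(annihilation_energy(0.001))  # one gram of antimatter: ~1.8e14 joules
```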

Given the astronomical number of possible planets in the galaxy, a Type II civilization may try a more realistic approach than conventional rockets and use nanotechnology to build tiny, self-replicating robot probes which can proliferate through the galaxy in much the same way that a microscopic virus can self-replicate and colonize a human body within a week. Such a civilization might send tiny robot von Neumann probes to distant moons, where they would build large factories to reproduce millions of copies of themselves. These copies would then take off to land on other distant moons and start the process all over again. Such a von Neumann probe need only be the size of a bread-box, using sophisticated nanotechnology to make atomic-sized circuitry and computers. The probes may then sit on distant moons, waiting for a primitive Type 0 civilization to mature into a Type I civilization, which would then be interesting to them. (There is the small but distinct possibility that one such probe was left on our own moon billions of years ago by a passing space-faring civilization. This, in fact, is the premise of the movie 2001, perhaps the most realistic portrayal of contact with extra-terrestrial intelligence.)
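
Self-replication is powerful because coverage grows geometrically with each generation. A minimal sketch (the replication factor per factory is an arbitrary assumption):

```python
import math

STARS_IN_GALAXY = 1e11     # order of magnitude
COPIES_PER_FACTORY = 1000  # assumed copies built at each stop

generations = math.log(STARS_IN_GALAXY) / math.log(COPIES_PER_FACTORY)
print(math.ceil(generations))  # only ~4 generations to reach 10^11 systems
```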

The problem, as one can see, is that none of these engines can exceed the speed of light. Hence, Type 0, I, and II civilizations probably can send probes or colonies only to within a few hundred light years of their home planet. Even with von Neumann probes, the best that a Type II civilization can achieve is to create a large sphere of billions of self-replicating probes expanding just below the speed of light. To break the light barrier, one must utilize General Relativity and the quantum theory. This requires energies which are available only to a very advanced Type II civilization or, more likely, a Type III civilization.

Special Relativity states that no usable information can travel locally faster than light. One may go faster than light, therefore, if one uses the possibility of globally warping space and time, i.e. General Relativity. In other words, in such a rocket, a passenger who is watching the motion of passing stars would say he is going slower than light. But once the rocket arrives at its destination and clocks are compared, it appears as if the rocket went faster than light because it warped space and time globally, either by taking a shortcut, or by stretching and contracting space.

There are at least two ways in which General Relativity may yield faster-than-light travel. The first is via wormholes, or multiply connected Riemann surfaces, which may give us a shortcut across space and time. One possible geometry for such a wormhole is to assemble stellar amounts of energy in a spinning ring (creating a Kerr black hole). Centrifugal force prevents the spinning ring from collapsing. Anyone passing through the ring would not be ripped apart, but would wind up in an entirely different part of the universe. This resembles the Looking Glass of Alice, with the rim of the Looking Glass being the black hole, and the mirror being the wormhole. Another method might be to tease apart a wormhole from the quantum foam which physicists believe makes up the fabric of space and time at the Planck length (10^-33 centimeters). The main difficulties are these:

a) one version requires enormous amounts of positive energy, e.g. a black hole. Positive-energy wormholes have an event horizon and hence give us only a one-way trip. One would need two black holes (one for the original trip, and one for the return trip) to make interstellar travel practical. Most likely only a Type III civilization would be able to harness this much power.

b) wormholes may be unstable, both classically and quantum mechanically. They may close up as soon as you try to enter them, or radiation effects may soar as you enter them, killing you.

c) one version requires vast amounts of negative energy. Negative energy does exist (in the form of the Casimir effect), but huge quantities of negative energy will be beyond our technology, perhaps for millennia. The advantage of negative-energy wormholes is that they do not have event horizons and hence are more easily traversable.

d) another version requires large amounts of negative matter. Unfortunately, negative matter has never been seen in nature (it would fall up, rather than down). Any negative matter on the earth would have fallen up billions of years ago, making the earth devoid of any negative matter.

The second possibility is to use large amounts of energy to continuously stretch space and time, contracting the space in front of you and expanding the space behind you. Since only empty space is contracting or expanding, one may exceed the speed of light in this fashion. (Space itself can expand faster than light; the Big Bang, for example, expanded much faster than the speed of light.) The problem with this approach, again, is that vast amounts of energy are required, making it feasible only for a Type III civilization. Energy scales for all these proposals are on the order of the Planck energy (10^19 billion electron volts, about a quadrillion times larger than the energy of our most powerful atom smasher).
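
Both Planck-scale numbers quoted in this article follow directly from the fundamental constants, so they are easy to check. A short sketch using standard CODATA values:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electron volt

planck_length_m = math.sqrt(HBAR * G / C**3)
planck_energy_j = math.sqrt(HBAR * C**5 / G)

print(planck_length_m * 100, "cm")        # ~1.6e-33 cm, as quoted above
print(planck_energy_j / EV / 1e9, "GeV")  # ~1.2e19 GeV, i.e. 10^19 GeV
```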

Lastly, there is the fundamental physics problem of whether topology change is possible within General Relativity (which would also make possible time machines, or closed time-like curves). General Relativity allows for closed time-like curves and wormholes (often called Einstein-Rosen bridges), but it unfortunately breaks down at the large energies found at the center of black holes or the instant of Creation. For these extreme energy domains, quantum effects will dominate over classical gravitational effects, and one must go to a unified field theory of quantum gravity.

At present, the most promising (and only) candidate for a theory of everything, including quantum gravity, is superstring theory or M-theory. It is the only theory in which quantum forces may be combined with gravity to yield finite results. No other theory can make this claim. With only mild assumptions, one may show that the theory allows for particles arranged much like the configuration found in the current Standard Model of sub-atomic physics. Because the theory is defined in 10- or 11-dimensional hyperspace, it introduces a new cosmological picture: that our universe is a bubble or membrane floating in a much larger multiverse or megaverse of bubble-universes.

Unfortunately, although black hole solutions have been found in string theory, the theory is not yet developed enough to answer basic questions about wormholes and their stability. Within the next few years, or perhaps within a decade, many physicists believe that string theory will mature to the point where it can answer these fundamental questions about space and time. The problem is well-defined. Unfortunately, even though the leading scientists on the planet are working on the theory, no one on earth is smart enough to solve the superstring equations.

Most scientists doubt interstellar travel because the light barrier is so difficult to break. However, to go faster than light, one must go beyond Special Relativity to General Relativity and the quantum theory. Therefore, one cannot rule out interstellar travel if an advanced civilization can attain enough energy to destabilize space and time. Perhaps only a Type III civilization can harness the Planck energy, the energy at which space and time become unstable. Various proposals have been given to exceed the light barrier (including wormholes and stretched or warped space) but all of them require energies found only in Type III galactic civilizations. On a mathematical level, ultimately, we must wait for a fully quantum mechanical theory of gravity (such as superstring theory) to answer these fundamental questions, such as whether wormholes can be created and whether they are stable enough to allow for interstellar travel.


BBC commissions documentary about commercial space travel fronted by Brian Cox – Radio Times

The BBC is making a documentary about commercial space travel featuring Richard Branson's Virgin Galactic space programme and co-produced by his son Sam's company.

"Quest for Space" is the working title of the new documentary, which is fronted by Brian Cox and co-produced by Sundog Pictures, the production company run by Sam Branson. Sam Branson, who is chairman of Sundog, is also a friend of Cox.

The BBC has denied suggestions that the commission represents a conflict of interest and insisted that the programme will focus on space exploration and space mining generally and would not be a plug for Branson's company.

A spokeswoman said that it also promised to profile the work of other bodies including NASA, SpaceX, Deep Space Industries and Blue Origin, the aerospace manufacturer and spaceflight company founded by Amazon tycoon Jeff Bezos.

Richard Branson is understood to have been filmed by the producers and is expected to feature in the programme, which will air either at the end of this year or early next year. Bezos will also feature.

The commission is also said by sources to be a big deal for Sundog Pictures, which has been suspended by the BBC from any commissions following its documentary Reggie Yates: Hidden Australia, which aired on BBC3 at the beginning of this year.

The Corporation is completing an investigation into an alleged breach of editorial standards in a section of the programme, where an Aboriginal wake was allegedly filmed as if it were a party scene. Sundog has been suspended from future commissions until the matter is formally resolved.

The commercial space programme show was commissioned before the suspension, which is why it has been allowed to go ahead. But RadioTimes.com understands that co-producers Voltage TV Productions have been given editorial responsibility for delivery of the programme because of the suspension.

RadioTimes.com also understands that the BBC is also keeping an eye on this commission generally in order to ensure it is journalistically rigorous.


spotlight – NYCAviation

Founded by Paul G. Allen in 2011, Stratolaunch is the latest endeavor that aims to make space travel a possibility for consumers. With an eye on Low Earth Orbit (LEO), Stratolaunch seeks to enable advancements in science, technology and research from space. Stratolaunch was designed by Burt Rutan and built by Scaled Composites.

The aircraft is the largest in the world, with a wider wingspan than that of Howard Hughes' Spruce Goose. The six-engined Stratolaunch's size and statistics are staggering: the official press release notes it has a wingspan measuring 385 ft.; by comparison, a professional football field spans only 360 ft. The aircraft is 238 ft. from nose to tail and stands 50 ft. tall from the ground to the top of the vertical tail. The massive wingspan is nearly 50% wider than that of the Airbus A380.
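
As a quick sanity check of that "nearly 50% wider" claim (the A380 wingspan of roughly 262 ft., about 79.75 m, is a published Airbus spec, not a figure from this article):

```python
stratolaunch_ft = 385.0
a380_ft = 262.0  # Airbus A380 wingspan, ~79.75 m

print(f"{(stratolaunch_ft / a380_ft - 1) * 100:.0f}% wider")  # ~47% wider
```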

The carrier craft is notably powered by six engines. According to Wikipedia, the carrier plane will be powered by six Pratt & Whitney PW4000 jet engines in the 205–296 kN (46,000–66,500 lbf) thrust range, sourced from two used 747-400s that were cannibalized for engines, avionics, flight deck, landing gear and other proven systems to reduce initial development costs. The carrier is designed to have a range of 2,200 km (1,200 nmi) when flying an air-launch mission.

Stratolaunch reached a new milestone on May 31, 2017. Rolling out of the hangar, it exited the aircraft construction phase to begin the first steps in testing the new aircraft. First up will be testing of the fueling system. Enthusiasts were able to watch feeds from several news sources as the craft was revealed to the public.

The plan for the coming months is many rounds of ground and flight testing. These tests will be based at Mojave Air & Space Port, Stratolaunch's home airport. The ultimate goal of testing is to ensure the safety of crew and future passengers. Stratolaunch Systems Corporation's goal is to send its first launch into LEO in 2019.



Is this massive airplane the future of space travel? One billionaire thinks so. – SOFREP (press release) (subscription)

By Alex Hollings, 06.04.2017

The first time I ever saw a C-130 in person was as it came crashing down on a small air strip near Palm Springs, California in what was, at the time, the most brutal landing I had ever seen. I had been in the Fleet Marine Force for only a few weeks, and my wife and I had only just arrived in Twentynine Palms when I got scooped up as part of a security detail for the funeral of President Gerald Ford. It was a trip I wasn't able to warn her I was being taken on, and that would leave my new wife alone throughout New Year's in a state she had never even visited before.

Despite the concerns I had about my wife not knowing where I was (I didn't have a cell phone at the time because I was poor and it was a long time ago), all of my trepidation vanished as I watched the biggest airplane I thought I'd ever seen belly flop onto the tarmac. I'd find out later that it was full of military personnel from various branches coming in for the funeral, and what I thought was a crash was actually Marine pilots making sure their passengers knew how tough Marines and their birds can be. But even that early revelation into our relationship with other services was drowned out in my mind by the sheer scale of the aircraft. With its 133-foot wingspan, the C-130 is a big, bad plane, deserving of every bit of my awe, and it's because of my memories of that flight line detail all those years ago that I'm able to grasp the scale of a new aircraft that was unveiled on Wednesday, the Stratolaunch.






Approaching the World of Collaboration Singularity – CommsTrader

Originally a concept sketched on a napkin by David Tucker and Richard Platt in 1993, the first IP PBX in the world changed the IP network platform. Soon, convergence emerged as a crucial trend, as connecting people through real-time voice became essential for many businesses. As convergence became more popular, companies like Cisco responded accordingly, integrating voicemail, call centre solutions, IM, conferencing, video, and more into a single platform for enterprise communication.

One of the key elements of Cisco's success has been its teeming ecosystem of collaboration solutions and developers. Rich APIs for call management, automation, interoperability, and compliance mean in-house developers and ISVs can mainline business intelligence straight into the communications infrastructure, connecting the IoT, processes, systems, and people.

During its most recent attempt to bring the business world into its network, Cisco Spark has taken the IP collaboration stack out of the server and into the cloud, bringing together consistent chat, WebEx video conferencing, screen sharing, HD video and voice, and a range of unique hardware endpoints into a unique user experience. Cisco Spark's end-to-end encryption solution blurs the lines between WAN and LAN with UC infrastructure interoperability, adaptive bandwidth usage, and reliability-at-scale solutions.

Perhaps two of the most innovative solutions for developers and end-users have been the launch of the Cisco Spark video SDK and the Cisco Spark Board room-conferencing system. Bringing in fantastic reviews in the world of business equipment, Cisco's Spark Board has changed the environment for many enterprises as a hugely capable solution for UC.

Using the Cisco Spark Board, customers can connect to the Spark cloud effortlessly by simply plugging into their network. They then have access to the complete enterprise conference room, with stunning touchscreen collaboration and video conferencing that connects seamlessly to both PC and mobile devices.

Additionally, the Cisco Spark Video SDK complements Cisco Spark's ability to offer omnipresent video collaboration, giving developers more power to embed their Spark collaboration needs into existing applications. The Spark SDK currently supports Swift/iOS apps, and browser apps through WebRTC and JavaScript, with Android to follow soon. The system offers self-contained widgets and frameworks that let coders transform mobile apps and web pages into high-performance, secure tools for collaboration.

When you combine Cisco's existing Spark messaging APIs with these new solutions, business IM can begin to encounter all the possibilities of chatbots linked to automation and IT systems, delivering a fully pervasive cloud collaboration solution for any agile enterprise.
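
As a minimal sketch of what that chatbot wiring can look like: at the time of writing, Spark exposed a simple REST endpoint for posting messages. The environment variable, helper function, and room ID below are illustrative assumptions, not code from Cisco's documentation.

```python
import os
import requests

# A bot access token issued via the Cisco Spark developer portal; the
# environment variable name here is our own choice, not a Cisco convention.
TOKEN = os.environ["SPARK_BOT_TOKEN"]

def post_message(room_id: str, text: str) -> dict:
    """Post a plain-text message into a Spark room on the bot's behalf."""
    resp = requests.post(
        "https://api.ciscospark.com/v1/messages",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"roomId": room_id, "text": text},
    )
    resp.raise_for_status()
    return resp.json()

# Wired into an IT system, e.g. so build results land in a team's room:
# post_message("<room-id>", "Build #412 passed")
```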

In a world that's increasingly focused on bringing all of its solutions for unified communications into the same space, Cisco's latest developments, including the acquisition of the AI company MindMeld, show its devotion to collaboration singularity. Although there are few details on what the future might look like for Cisco, the term "cognitive collaboration" has been mentioned. It's thrilling to think about how convergence, integration, and networks will continue to transform communications in businesses all over again.


Oregon Ducks’ Deajah Stevens makes a fast ascension in American … – The Register-Guard

Nowhere is Deajah Stevens' sudden rise to stardom more apparent than in the increased popularity of her Instagram account.

Last year at this time she had 700 followers. Today she has more than 34,000.

Fame, it seems, has caught up to the Oregon junior.

On the track, however, only a few of her competitors have had the same luck.

After quietly arriving at Oregon 18 months ago and slogging through the early months of her transition to big-time collegiate track, Stevens went from an intriguing prospect at the start of the 2016 outdoor season to a world-class sprinter by summer's end.

And that was just the beginning.

"It's been a pretty amazing ascent to this point," Oregon associate head coach Curtis Taylor said.

After not winning an individual conference or national title last season, Stevens surprised many, including herself, with a second-place finish in the women's 200 meters at the U.S. Olympic Track & Field Trials in July, nearly catching Tori Bowie at the line for the title. She then made the Olympic final and finished seventh at the Summer Games in Rio de Janeiro.

Stevens carried that momentum into this season, and the results have been impressive.

She won Pac-12 titles in the 100 and 200 last month and goes into the NCAA Outdoor Track & Field Championship meet this week at Hayward Field as the form-chart favorite in both races.

Stevens owns the fourth-fastest times in the world this season at each distance, having run 11 seconds in the 100 this season and 22.09 in the 200. Both times are personal bests, and her 200 time is also an Oregon school record.

"I'm a way stronger version of myself than I was last year at this time," Stevens said.

And much more popular too.

Born to run

Memorial Field in Mount Vernon, N.Y., is undergoing a rebirth after years of neglect forced its closure.

The crumbling brick stadium once hosted concerts by James Brown and the Jackson 5, as well as several decades' worth of high school and semi-pro sporting events for teams throughout Westchester County.

It's also where Stevens learned to run as a youngster growing up in the suburb just north of the Bronx.

To hide the blight of the decaying stadium as it awaits renovation, the city recently built a wall that doubles as a mural dedicated to Mount Vernon Legends. First on the list of honorees was Stevens, who is pictured running in her Olympic uniform with an American flag waving behind her.

"It's so nice," Stevens gushed. "When my friends drive by it back home they send me pictures of it. It's really funny."

Mount Vernon Mayor Richard Thomas dedicated the mural in January by declaring Stevens "a girl who knew she could run, but did not know how far she could go until she set foot on the track here at Memorial Field. When she discovered her potential she ran all the way from Mount Vernon to Rio."

It wasn't the first time her hometown had honored Stevens.

In September, just a few weeks after her return from Rio, Stevens was treated to a parade through the streets of Mount Vernon, riding in a car with her mom, sister and the mayor.

If Stevens didn't comprehend before how popular she had become since making the Olympic team, she quickly learned that weekend.

"I've gotten so much love from people on social media and from people here in Oregon, but when I got home, it was really shocking to me how much people were supportive and happy for me," Stevens said. "Just meeting adults and kids and people my own age who were like, 'Oh my God, it's you!' It's still uncomfortable to me, but I'm getting better."

Mount Vernon is also where Stevens spent her first year out of high school.

Denied entry into South Carolina, her first school of choice, after it was discovered she was one credit short, Stevens sat out a year before enrolling at College of the Sequoias in Visalia, Calif.

She went on to become the California junior college champion in the 200 and 400 in 2015. One year later, she was at Oregon, joining a women's sprint team that included 2015 World Outdoor Championships qualifier Jasmine Todd, future 2016 Olympian and two-time NCAA champion Ariana Washington, and Hannah Cunliffe, who would sweep the Pac-12 100 and 200 later that season.

"We knew she was talented, the extent of which I don't think we really knew until we started doing speed testing on her," Taylor said. "We found out she has very high levels of speed and power, and if we did it right we could kind of convert all those qualities into a really good sprinter."

Flashes of brilliance

Stevens showed flashes toward the end of the collegiate season last year, finishing second in the 200 and third in the 100 at the Pac-12 Championship meet in Seattle. She was second in the 200 at the NCAA meet as well, behind Washington.

Then came the Olympic Trials.

Stevens entered the final as a longshot in a race that also included Bowie, Allyson Felix and Jenna Prandini.

"They're peers to me now, but it's funny, when I went into the Trials I was so nervous and I felt so young," Stevens said. "It was just super crazy to me. Allyson Felix to this day is someone who is so inspiring to me but now she's also a competitor to me, and that's crazy. That's mind blowing.

"It flipped when I made the finals. It dawned on me that I have to stop looking at it like, I guess, starstruck. I had to snap out of it quick."

Bowie led the whole race but it was Stevens who was hot on her trail after coming off the curve. She ran the last 10 meters with a look of disbelief on her face as Prandini was forced to edge out Felix in a photo-finish for third.

While Stevens has said she stunned herself by making the Olympic team (the original goal had been Tokyo 2020), her coach could see it coming.

"Surprised probably isn't the word," Taylor said. "A lot of these kids out here have the ability, it's just a matter of if they're going to put it together or not. So whenever they do really well, you're not surprised because you know they have the capability in them. It's just exciting to see them start to realize what their capabilities are."

Sustaining success is a different challenge, and one Stevens took seriously coming into the 2017 season. Now it's time for the payoff.

The NCAA meet won't come without its challenges. LSU junior Aleia Hobbs is the NCAA leader in the 100 at 10.85 and UNLV junior Destiny Smith-Barnett has recorded a wind-aided 10.97. Washington, the defending national champ, is also among the collegiate leaders at 11.06.

In the 200, Stevens is the NCAA leader and also a contender to qualify for the 2017 World Outdoor Championships in London during the U.S. Track & Field Championships in Sacramento later this month.

"I'm not going to worry about USAs while I'm at nationals because those are just two different meets," Stevens said. "I'm trying to get through nationals healthy and I'm trying to train up until USAs."

There's also another decision looming for Stevens, who will have every chance to turn pro and forgo her senior season at Oregon.

"I'm going to finish and keep my mind clear through nationals," she said. "I need to stay in this space for now and I'm trying not to think about it. I really have to talk to Curtis after nationals and see what my best options are. I'm not completely opposed to coming back to school, and I'm not completely opposed to going pro. I just don't know."

Whatever decision Stevens makes, history suggests more fame will follow.

Follow Chris on Twitter @chansen_RG.



Towards an evidence-based Ascension Island Ocean Sanctuary – National Geographic

Written by Dr. Judith Brown, Director of Conservation and Fisheries, Ascension Island government

The U.K. government has taken a proactive approach to marine conservation, committing to marine protection around both its own shores and those of its overseas territories in what is known as its Blue Belt commitment. Whilst an honorable statement, now comes the hard work of determining what form that protection should take and how best to deliver marine reserves that achieve notable conservation benefits for important marine biodiversity. On Ascension Island, a small but dedicated team of marine scientists has been working hard over the last few years to gather baseline data on inshore fisheries and biodiversity, but their work is expanding to cover all of the 200 nautical mile maritime zone, the waters that the Ascension Island government is responsible for managing. Gathering data in this larger and less accessible area is much harder and comes at great cost, requiring a wider team of experts. Fortunately funding has been made available, not least by the U.K. government, to allow these scientific knowledge gaps to be addressed, alongside financing for patrolling the waters in search of illegal vessels. Two external grants, awarded by the EU BEST Initiative and the U.K. government's Darwin Initiative, combined with this ground-breaking National Geographic project, have enabled a dedicated trip to study the practically unknown seamounts that lie within Ascension's waters. These areas were provisionally selected to fall within the zone closed to commercial fishing, but research is badly needed to establish whether they really are the biological hotspots we presume, and therefore whether they should be included in the final Ascension Island Ocean Sanctuary.

This current National Geographic Pristine Seas expedition has come at a critical time, bringing together a core team of scientific experts from a diverse range of disciplines, from those who study the bottom of the food chain, the plankton, to the unique benthic communities, to the top-level predators, the sharks. This biological research, combined with the oceanographic data and the seabed mapping information, allows the team to study the entire ecosystem, an opportunity very rarely brought together in one expedition. The research addresses the key priorities in the Ascension Island government's scientific roadmap, a detailed plan of the information needed to allow management decisions to be made on scientific evidence. Whilst the data still needs processing, we can already see that the trip has been an enormous success, and we have witnessed what special habitats the Ascension seamounts are. New records (and very likely new marine species) have been discovered here, and bioacoustic data have identified high levels of marine species abundance over the seamounts. Sharks are of particular interest due to their susceptibility as by-catch in commercial longline fisheries, and here we have gathered unique footage of not just a diverse range of shark species but also evidence of a high abundance of silky sharks. In larger numbers, sharks are often less cautious about approaching baited hooks, so where they are most abundant they are also most susceptible to being caught, and a single longline could have a potentially devastating impact on the population found around a seamount. Bigeye and yellowfin tuna have also been seen and tagged during this project to investigate how long they stay around these undersea features and to understand the importance of the seamounts to these species. All of this data, once reviewed and processed, will allow us to understand the seamount ecosystems and their wider importance within Ascension waters. Already, however, we can see they are sufficiently unique and rich in life to make them key candidates to fall within the Ascension Island Marine Protected Area.

Personally, this voyage has been a fantastic opportunity: the chance to work with such enthusiastic and knowledgeable scientists alongside the hardworking and helpful crew of the two vessels has been such a positive experience. The logistics of making such an expedition happen go unnoticed behind the scenes, but they were by no means an insignificant feat. The Ascension Island Government Conservation team extends a huge thank you to National Geographic for making this trip possible, and to the scientists from the British Antarctic Survey, University of Windsor, University of Western Australia and University of Exeter, not to mention the amazing assistance from the captains and crew of the RRS James Clark Ross and Extractor.

The Pristine Seas team is currently conducting an expedition to the remote island of Ascension, in partnership with the Ascension Island Conservation Department, the British Antarctic Survey, the Royal Society for the Protection of Birds, and The Blue Marine Foundation.



Froedtert, Ascension post rate increases – BizTimes.com (Milwaukee)

Froedtert Hospital recently announced it will implement a rate adjustment that will increase gross patient revenue by 4.9 percent beginning July 1 to compensate for below-cost reimbursement from government programs and the increased cost of providing care.

That's compared to its last increase of 5 percent, which was implemented on July 1, 2016.


Hospitals are required to report rate increases to the state and community if the increase is greater than the consumer price index, which is 2.5 percent.

The average price increase at Wisconsin hospitals in 2017 is 4.07 percent as of May 18, according to the Wisconsin Hospital Association. In 2016, the average price increase was 3.73 percent.

Health care systems typically raise rates between 2 and 10 percent at the beginning of the system's fiscal year. Earlier this year, Aurora Health Care and Children's Hospital of Wisconsin raised their rates 4.5 percent and 4 percent, respectively.

Meanwhile, Ascension, which acquired Wheaton Franciscan Healthcare in 2016, recently announced a rate adjustment that will increase gross patient revenue by 3 percent at Wheaton Franciscan Healthcare-St. Francis in Milwaukee, Wheaton Franciscan Healthcare-St. Joseph in Milwaukee, Elmbrook Memorial in Brookfield and Midwest Orthopedic Specialty Hospital in Franklin. It also announced a rate increase of 3.3 percent at Wheaton Franciscan Healthcare-Franklin.

In recent public notices announcing the rate increases, both Froedtert and Ascension said it is necessary to raise prices to keep pace with the increasing costs of providing care amid below-cost reimbursement rates from government programs and other payers.

Rate increases apply only to charges, the amount a hospital bills for a patient's care, rather than being a price increase on the actual care. The amount collected by a hospital for each service is almost always less than the amount billed.

The new prices typically have little impact on the average patient with commercial insurance because all major health care providers have contracts with insurance companies that use the hospital rates as a starting point for negotiations.

Rates don't apply to Medicare or Medicaid patients. Both programs pay hospitals a set rate based on the patient's diagnosis and the procedure.


Renovations, new altar dedicated at Church of the Ascension – The Catholic Review

Archbishop William E. Lori consecrates a new altar at Church of the Ascension in Halethorpe May 28. (Christopher Gunty/CR Staff)


HALETHORPE – "We've got a consecrated altar now," Father John Williamson exclaimed to a parishioner greeting him in the narthex of the Church of the Ascension after a Mass May 28 in which the altar was blessed and the new sanctuary was dedicated.

Archbishop William E. Lori remarked in his homily that a lot of things had changed at the church since he last celebrated Mass at the parish.

The $200,000 renovations were accomplished in about nine weeks between January and Palm Sunday, said Father Williamson, pastor of Ascension and St. Augustine (Elkridge) parishes.

A new altar and ambo were installed, with carved-wood fronts. The altar depicts the Last Supper and the ambo, from which the readings are proclaimed, depicts the Ascension of the Lord, which was also the reading of the day for the dedication Mass, celebrated on the parish's patronal feast.

The orange carpet throughout the church was replaced with tile. The Stations of the Cross and statues of Mary and Joseph are the same as before, but their backgrounds were redone in blue to match the new color of the wall behind the altar and tabernacle. The tabernacle, which originally came from the Good Shepherd Sisters' convent, was restored as part of the 2017 renovations.

Archbishop William E. Lori greets parishioners at the May 28 blessing of the renovated sanctuary at Church of the Ascension in Halethorpe. (Christopher Gunty/CR Staff)

The corpus for the crucifix was retained, but it is now on a new cross. New candlesticks and new sanctuary furnishings rounded out the interior renovations. Outside, the old bells remain in the bell tower, but they are now accompanied by a new carillon system that pealed both before and after the Mass.

In his homily, Archbishop Lori noted that the Risen Lord is present in all the sacraments, especially the Eucharist.

"That's why it's so wonderful that we're consecrating a new altar in this church today," the archbishop said. "For it is upon this altar that the one sacrifice of Christ is offered day after day, upon this altar that bread and wine become the body and blood of the risen Lord, and from this altar that we receive the strength of Christ in the Holy Spirit to be the Lord's disciples and witnesses in the midst of our daily lives."

Catholics are called to make the Gospel known through their words and actions and to undergo what Pope Francis calls a missionary conversion so that they can share the joy of the Gospel, not just as individuals, but as a community of faith, he said.

"In other words, our parishes too must undergo a missionary conversion as we seek to reconnect those who, for whatever reason, are unconnected with the Lord and with his body, the church," Archbishop Lori said.

As part of the dedication, the archbishop placed a stone in the altar containing a relic of St. Genarius, which had been part of past altars at the parish, providing continuity with the past, Father Williamson said. The altar also contains relics of Saints Mary Euphrasia, John Vianney, Maria Goretti and Pius X that came from the Good Shepherd convent.

Father Williamson said the parishioners have been overjoyed with the results of the renovations. "Everyone has been very happy," he said.

He said the number of weddings scheduled has doubled now that the sanctuary looks better, which is a good sign that more young people are eager to be married in the church.

As of July 1, the parishes of St. Augustine and Ascension will be formally merged into a single parish, the Catholic Community of Ascension and St. Augustine. Father Williamson has been pastor of Ascension since 2008 and also became pastor of St. Augustine in 2011. Since that time, the parishes have been working together.

The merged parish will be one of the pilot pastorates in the parish planning process that was announced in May. With 2,200 registered families at St. Augustine and 840 families at Ascension, the combined parish will be one of the largest in the archdiocese.


A reply to Wait But Why on machine superintelligence

Tim Urban of the wonderful Wait But Why blog recently wrote two posts on machine superintelligence: The Road to Superintelligence and Our Immortality or Extinction. These posts are probably now among the most-read introductions to the topic since Ray Kurzweil's 2006 book.

In general I agree with Tim's posts, but I think lots of details in his summary of the topic deserve to be corrected or clarified. Below, I'll quote passages from his two posts, roughly in the order they appear, and then give my own brief reactions. Some of my comments are fairly nit-picky but I decided to share them anyway; perhaps my most important clarification comes at the end.

The average rate of advancement between 1985 and 2015 was higher than the rate between 1955 and 1985 because the former was a more advanced world so much more change happened in the most recent 30 years than in the prior 30.

Readers should know this claim is heavily debated, and its truth depends on what Tim means by "rate of advancement." If he's talking about the rate of progress in information technology, the claim might be true. But it might be false for most other areas of technology, for example energy and transportation technology. Cowen, Thiel, Gordon, and Huebner argue that technological innovation more generally has slowed. Meanwhile, Alexander, Smart, Gilder, and others critique some of those arguments.

Anyway, most of what Tim says in these posts doesn't depend much on the outcome of these debates.

Artificial Narrow Intelligence is machine intelligence that equals or exceeds human intelligence or efficiency at a specific thing.

Well, that's the goal. But lots of current ANI systems don't yet equal human capability or efficiency at their given task. To pick an easy example from game-playing AIs: chess computers reliably beat humans, and Go computers don't (but they will soon).

Each new ANI innovation quietly adds another brick onto the road to AGI and ASI.

I know Tim is speaking loosely, but I should note that many ANI innovations (probably most, depending on how you count) won't end up contributing to progress toward AGI. Many ANI methods will end up being dead ends after some initial success, and their performance on the target task will be superseded by other methods. That's how the history of AI has worked so far, and how it will likely continue to work.

the human brain is the most complex object in the known universe.

Well, not really. For example, the brain of an African elephant has 3× as many neurons.

Hard things like calculus, financial market strategy, and language translation are mind-numbingly easy for a computer, while easy things like vision, motion, movement, and perception are insanely hard for it.

Yes, Moravec's paradox is roughly true, but I wouldn't say that getting AI systems to perform well in asset trading or language translation has been "mind-numbingly easy." E.g., machine translation is useful for getting the gist of a foreign-language text, but billions of dollars of effort still hasn't produced a machine translation system as good as a mid-level human translator, and I expect this will remain true for at least another 10 years.

One thing that definitely needs to happen for AGI to be a possibility is an increase in the power of computer hardware. If an AI system is going to be as intelligent as the brain, it'll need to equal the brain's raw computing capacity.

Because computing power is increasing so rapidly, we probably will have more computing power than the human brain (speaking loosely) before we know how to build AGI, but I just want to flag that this isn't conceptually necessary. In principle, an AGI design could be very different from the brain's design, just like a plane isn't designed much like a bird. Depending on the efficiency of the AGI design, it might be able to surpass human-level performance in all relevant domains using much less computing power than the human brain does, especially since evolution is a very dumb designer.

So, we don't necessarily need human brain-ish amounts of computing power to build AGI, but the more computing power we have available, the dumber (less efficient) our AGI design can afford to be.

One way to express this capacity is in the total calculations per second (cps) the brain could manage

Just an aside: TEPS is probably another good metric to think about.

The science world is working hard on reverse engineering the brain to figure out how evolution made such a rad thing; optimistic estimates say we can do this by 2030.

I suspect that approximately zero neuroscientists think we can reverse-engineer the brain to the degree being discussed in this paragraph by 2030. To get a sense of current and near-future progress in reverse-engineering the brain, see The Future of the Brain (2014).

One example of computer architecture that mimics the brain is the artificial neural network.

This probably isn't a good example of the kind of brain-inspired insights we'd need to build AGI. Artificial neural networks arguably go back to the 1940s, and they mimic the brain only in the most basic sense. TD learning would be a more specific example, except in that case computer scientists were using the algorithm before we discovered the brain also uses it.

[We have] just recently been able to emulate a 1mm-long flatworm brain

No we haven't.

The human brain contains 100 billion [neurons].

Good news! Thanks to a new technique we now have a more precise estimate: 86 billion neurons.

If that makes [whole brain emulation] seem like a hopeless project, remember the power of exponential progress: now that we've conquered the tiny worm brain, an ant might happen before too long, followed by a mouse, and suddenly this will seem much more plausible.

Because computing power advances so quickly, it probably won't be the limiting factor on brain emulation technology. Scanning resolution and neuroscience knowledge are likely to lag far behind computing power: see chapter 2 of Superintelligence.

most of our current models for getting to AGI involve the AI getting there by self-improvement.

They do? Says who?

I think the path from AGI to superintelligence is mostly or entirely about self-improvement, but the path from current AI systems to AGI is mostly about human engineering work, probably until relatively shortly before the leading AI project reaches a level of capability worth calling AGI.

the median year on a survey of hundreds of scientists about when they believed we'd be more likely than not to have reached AGI was 2040

That's the number you get when you combine the estimates from several different recent surveys, including surveys of people who were mostly not AI scientists. If you stick to the survey of the top-cited living AI scientists (the one called TOP100 here), the median estimate for 50% probability of AGI is 2050. (Not a big difference, though.)

many of the thinkers in this field think it's likely that the progression from AGI to ASI [will happen] very quickly

True, but it should be noted this is still a minority position, as one can see in Tim's 2nd post, or in section 3.3 of the source paper.

90 minutes after that, the AI has become an ASI, 170,000 times more intelligent than a human.

Remember that lots of knowledge and intelligence comes from interacting with the world, not just from running computational processes more quickly or efficiently. Sometimes learning requires that you wait on some slow natural process to unfold. (In this context, even a 1-second experimental test is slow.)

So the median participant thinks its more likely than not that well have AGI 25 years from now.

Again, I think it's better to use the numbers for the TOP100 survey from that paper, rather than the combined numbers.

Due to something called cognitive biases, we have a hard time believing something is real until we see proof.

There are dozens of cognitive biases, so this is about as informative as saying "due to something called psychology, we…"

The specific cognitive bias Tim seems to be discussing in this paragraph is the availability heuristic, or maybe the absurdity heuristic. Also see Cognitive Biases Potentially Affecting Judgment of Global Risks.

[Kurzweil is] well-known for his bold predictions and has a pretty good record of having them come true

The linked article says Ray Kurzweil's predictions are right 86% of the time. That statistic is from a self-assessment Kurzweil published in 2010. Not surprisingly, when independent parties try to grade the accuracy of Kurzweil's predictions, they arrive at a much lower accuracy score: see page 21 of this paper.

How good is this compared to other futurists? Unfortunately, we have no idea. The problem is that nobody else has bothered to write down so many specific technological forecasts over the course of multiple decades. So, give Kurzweil credit for daring to make lots of predictions.

My own vague guess is that Kurzweil's track record is actually pretty impressive, but not as impressive as his own self-assessment suggests.

Kurzweil predicts that we'll get [advanced nanotech] by the 2020s.

I'm not sure which Kurzweil prediction about nanotech Tim is referring to, because the associated footnote points to a page of The Singularity is Near that isn't about nanotech. But if he's talking about advanced Drexlerian nanotech, then I suspect approximately zero nanotechnologists would agree with this forecast.

I expected [Kurzweil's] critics to be saying, "Obviously that stuff can't happen," but instead they were saying things like, "Yes, all of that can happen if we safely transition to ASI, but that's the hard part." Bostrom, one of the most prominent voices warning us about the dangers of AI, still acknowledges

Yeah, but Bostrom and Kurzweil are both famous futurists. There are plenty of non-futurist critics of Kurzweil who would say "Obviously that stuff can't happen." I happen to agree with Kurzweil and Bostrom about the radical goods within reach of a human-aligned superintelligence, but let's not forget that most AI scientists, and most PhD-carrying members of society in general, probably would say "Obviously that stuff can't happen" in response to Kurzweil.

The people on Anxious Avenue aren't in Panicked Prairie or Hopeless Hills (both of which are regions on the far left of the chart), but they're nervous and they're tense.

Actually, the people Tim is talking about here are often more pessimistic about societal outcomes than Tim is suggesting. Many of them are, roughly speaking, 65%-85% confident that machine superintelligence will lead to human extinction, and that it's only in a small minority of possible worlds that humanity rises to the challenge and gets a machine superintelligence robustly aligned with humane values.

Of course, it's also true that many of the people who write about the importance of AGI risk mitigation are more optimistic than the range shown in Tim's graph of Anxious Avenue. For example, one researcher I know thinks it's maybe 65% likely we get really good outcomes from machine superintelligence. But he notes that a ~35% chance of human friggin' extinction is totally worth trying to mitigate as much as we can, including by funding hundreds of smart scientists to study potential solutions decades in advance of the worst-case scenarios, like we already do with regard to global warming, a much smaller problem. (Global warming is a big problem on a normal person's scale of things to worry about, but even climate scientists don't think it's capable of causing human extinction in the next couple centuries.)

Or, as Stuart Russell, author of the leading AI textbook, likes to put it: "If a superior alien civilization sent us a message saying, 'We'll arrive in a few decades,' would we just reply, 'OK, call us when you get here, we'll leave the lights on'? Probably not, but this is more or less what is happening with AI. Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes."1

[In the movies] AI becomes as or more intelligent than humans, then decides to turn against us and take over. Here's what I need you to be clear on for the rest of this post: None of the people warning us about AI are talking about this. Evil is a human concept, and applying human concepts to non-human things is called anthropomorphizing.

Thank you. Jesus Christ, I am tired of clearing up that very basic confusion, even for many AI scientists.

Turry started off as Friendly AI, but at some point, she turned Unfriendly, causing the greatest possible negative impact on our species.

Just FYI, at MIRI we've started to move away from the "Friendly AI" language recently, since people think "Oh, like C-3PO?" MIRI's recent papers use phrases like "superintelligence alignment" instead.

In any case, my real comment here is that the quoted sentence above doesn't use the terms "Friendly" or "Unfriendly" the way they've been used traditionally. In the usual parlance, a Friendly AI doesn't turn Unfriendly. If it becomes Unfriendly at some point, then it was always an Unfriendly AI, it just wasn't powerful enough yet to be a harm to you.

Tim does sorta fix this much later in the same post when he writes: "So Turry didn't turn against us or switch from Friendly AI to Unfriendly AI; she just kept doing her thing as she became more and more advanced."

When we're talking about ASI, the same concept applies: it would become superintelligent, but it would be no more human than your laptop is.

Well, this depends on how the AI is designed. If the ASI is an uploaded human, it'll be pretty similar to a human in lots of ways. If it's not an uploaded human, it could still be purposely designed to be human-like in many different ways. But mimicking human psychology in any kind of detail almost certainly isn't the quickest way to AGI/ASI (just like mimicking bird flight in lots of detail wasn't how we built planes), so practically speaking, yes, the first AGI(s) will likely be very alien from our perspective.

What motivates an AI system? The answer is simple: its motivation is whatever we programmed its motivation to be. AI systems are given goals by their creators: your GPS's goal is to give you the most efficient driving directions; Watson's goal is to answer questions accurately. And fulfilling those goals as well as possible is their motivation.

Some AI programs today are goal-driven, but most are not. Siri isn't trying to maximize some goal like "be useful to the user of this iPhone" or anything like that. It just has a long list of rules about what kind of output to provide in response to different kinds of commands and questions. Various sub-components of Siri might be sorta goal-oriented (e.g. there's an evaluation function trying to pick the most likely accurate transcription of your spoken words), but the system as a whole isn't goal-oriented. (Or at least, this is how it seems to work. Apple hasn't shown me Siri's source code.)

As AI systems become more autonomous, giving them goals becomes more important, because you can't feasibly specify how the AI should react in every possible arrangement of the environment; instead, you need to give it goals and let it do its own on-the-fly planning for how it's going to achieve those goals in unexpected environmental conditions.

The programming for a Predator drone doesn't include a list of instructions to follow for every possible combination of takeoff points, destinations, and wind conditions, because that list would be impossibly long. Rather, the operator gives the Predator drone a goal destination and the drone figures out how to get there on its own.
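To make the goal-versus-instructions distinction concrete, here is a toy sketch of the goal-directed pattern: the operator supplies only a destination, and the agent searches out its own route. The grid world and every name in it are my own illustration, not anything from a real drone system.

```python
# A toy goal-directed planner: given only a goal cell, the agent finds
# its own route with breadth-first search (illustrative only).
from collections import deque

def plan_route(grid, start, goal):
    """Return a list of cells from start to goal, or None if unreachable."""
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        (r, c), path = frontier.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in visited):
                visited.add((nr, nc))
                frontier.append(((nr, nc), path + [(nr, nc)]))
    return None

# 0 = open, 1 = obstacle; the planner routes around the wall on its own.
world = [[0, 0, 0],
         [1, 1, 0],
         [0, 0, 0]]
print(plan_route(world, start=(0, 0), goal=(2, 0)))
```

The point is that nothing in the code enumerates specific situations; change the obstacles and the same agent plans a different route.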

when [Turry] wasn't yet that smart, doing her best to achieve her final goal meant simple instrumental goals like learning to scan handwriting samples more quickly. She caused no harm to humans and was, by definition, Friendly AI.

Again, I'll mention that's not how the term has traditionally been used, but whatever.

But there are all kinds of governments, companies, militaries, science labs, and black market organizations working on all kinds of AI. Many of them are trying to build AI that can improve on its own

This isn't true, unless by "AI that can improve on its own" you just mean machine learning. Almost nobody in AI is working on the kind of recursive self-improvement you'd need to get an intelligence explosion. Lots of people are working on systems that could eventually provide some piece of the foundational architecture for a self-improving AGI, but almost nobody is working directly on the recursive self-improvement problem right now, because it's too far beyond current capabilities.2

because many techniques to build innovative AI systems don't require a large amount of capital, development can take place in the nooks and crannies of society, unmonitored.

True, it's much harder to monitor potential AGI projects than it is to track uranium enrichment facilities. But you can at least track AI research talent. Right now it doesn't take a ton of work to identify a set of 500 AI researchers that probably contains the most talented ~150 AI researchers in the world. Then you can just track all 500 of them.

This is similar to back when physicists were starting to realize that a nuclear fission bomb might be feasible. Suddenly a few of the most talented researchers stopped presenting their work at the usual conferences, and the other nuclear physicists pretty quickly deduced: "Oh, shit, they're probably working on a secret government fission bomb." If Geoff Hinton or even the much younger Ilya Sutskever suddenly went underground tomorrow, a lot of AI people would notice.

Of course, such a tracking effort might not be so feasible 30-60 years from now, when serious AGI projects will be more numerous and greater proportions of world GDP and human cognitive talent will be devoted to AI efforts.

On the contrary, what [AI developers are] probably doing is programming their early systems with a very simple, reductionist goal like writing a simple note with a pen on paper to just get the AI to work. Down the road, once they've figured out how to build a strong level of intelligence in a computer, they figure they can always go back and revise the goal with safety in mind. Right?

Again, I note that most AI systems today are not goal-directed.

I also note that, sadly, it probably wouldn't just be a matter of going back to revise the goal with safety in mind after a certain level of AI capability is reached. Most proto-AGI designs probably aren't even the kind of systems you can make robustly safe, no matter what goals you program into them.

To illustrate what I mean, imagine a hypothetical computer security expert named Bruce. You tell Bruce that he and his team have just 3 years to modify the latest version of Microsoft Windows so that it can't be hacked in any way, even by the smartest hackers on Earth. If he fails, Earth will be destroyed because reasons.

Bruce just stares at you and says, "Well, that's impossible, so I guess we're all fucked."

The problem, Bruce explains, is that Microsoft Windows was never designed to be anything remotely like unhackable. It was designed to be easily usable, and compatible with lots of software, and flexible, and affordable, and just barely secure enough to be marketable, and you can't just slap on a special Unhackability Module at the last minute.

To get a system that even has a chance at being robustly unhackable, Bruce explains, you've got to design an entirely different hardware + software system that was designed from the ground up to be unhackable. And that system must be designed in an entirely different way than Microsoft Windows is, and no team in the world could do everything that is required for that in a mere 3 years. So, we're fucked.

But! By a stroke of luck, Bruce learns that some teams outside Microsoft have been working on a theoretically unhackable hardware + software system for the past several decades (high reliability is hard): people like Greg Morrisett (SAFE) and Gerwin Klein (seL4). Bruce says he might be able to take their work and add the features you need, while preserving the strong security guarantees of the original highly secure system. Bruce sets Microsoft Windows aside and gets to work on trying to make this other system satisfy the mysterious reasons while remaining unhackable. He and his team succeed just in time to save the day.

This is an oversimplified and comically romantic way to illustrate what MIRI is trying to do in the area of long-term AI safety. We're trying to think through what properties an AGI would need to have if it was going to very reliably act in accordance with humane values even as it rewrote its own code a hundred times on its way to machine superintelligence. We're asking: What would it look like if somebody tried to design an AGI that was designed from the ground up not for affordability, or for speed of development, or for economic benefit at every increment of progress, but for reliably beneficial behavior even under conditions of radical self-improvement? What does the computationally unbounded solution to that problem look like, so we can gain conceptual insights useful for later efforts to build a computationally tractable self-improving system reliably aligned with humane interests?

So if you're reading this, and you happen to be a highly gifted mathematician or computer scientist, and you want a full-time job working on the most important challenge of the 21st century, well, we're hiring. (I will also try to appeal to your vanity: Please note that because so little work has been done in this area, you've still got a decent chance to contribute to what will eventually be recognized as the early, foundational results of the most important field of research in human history.)

My thanks to Tim Urban for his very nice posts on machine superintelligence. Be sure to read his ongoing series about Elon Musk.

More:

A reply to Wait But Why on machine superintelligence

The AI Revolution: The Road to Superintelligence (PDF)


China willing to cooperate in peaceful space exploration: Xi – Space Daily

Chinese President Xi Jinping has sent a letter of congratulations to the Global Space Exploration Conference, which opened Tuesday in Beijing.

In his letter, Xi said China is willing to enhance cooperation with the international community in peaceful space exploration and development.

Hailing the achievements made in space exploration in the 20th century, Xi said progress in space science and technology will benefit people around the world in the future.

China has attached great importance to space exploration as well as innovation in space science and technology, the president said, noting that the country wants to use these achievements to create a better future for mankind.

He also expressed hope that the ongoing conference will promote space science development and international exchanges and cooperation.

Source: Xinhua News Agency


See original here:

China willing to cooperate in peaceful space exploration: Xi - Space Daily

Space Matter: The Trouble with Spacesuits :: Science :: Features … – Paste Magazine

Every aspect of space travel is difficult, but perhaps the hardest is the act of walking in space. When astronauts exit the International Space Station, they're exposed to the vacuum of space. The only thing that's protecting them is a pressurized suit, known as an EMU (Extravehicular Mobility Unit). And now, it appears as though we're running out of them.

We've been using spacesuits since Mercury (the first American spacewalk occurred on Gemini 4), but the current spacesuit was designed and built for the Space Shuttle program. Of course they've been upgraded, modified, and refurbished since then, but the fact remains: These suits were originally designed to last fifteen years. Almost forty years later, they're wearing out.

This is a huge problem, given the timeline for the International Space Station. Right now, the ISS is scheduled to be operated through the year 2024. It's likely that will be extended through the year 2028. And according to NASA's own investigations and a report from the NASA Office of the Inspector General, the current plan to maintain and support the station with the spacesuits we currently have will be a real challenge.

Astronaut David A. Wolf participates in a 2002 space walk (Image credit: NASA)

NASA's current crop of spacesuits, or EMUs, has two different components: the Pressure Garment System, or PGS, and the Primary Life Support System, or PLSS. The PGS is responsible for maintaining pressure around the astronauts (as we need a minimum of 3 pounds per square inch of oxygen for our bodies to function), while the PLSS is basically a life-support backpack. It provides temperature control and oxygen, and scrubs carbon dioxide. The problem is, there are only 11 functional PLSSs left, out of an original 18.

When the Space Shuttle program was still running, issues with existing spacesuits were less of a problem. The EMUs could be regularly returned to Earth for inspection and maintenance. But now, SpaceX's Dragon is the only vehicle that can both carry supplies to the ISS and return items to Earth. (The Russian Soyuz can as well, but the weight/cargo space on those is usually reserved for astronauts, because it's currently the only vehicle capable of ferrying humans to and from the ISS. And spacesuits are big.)

As a result, NASA has been pushing the limits on how long EMUs can go without maintenance. You'd think that the older the suits got, the more refurbishment they'd need to make sure they're performing up to spec. The suits were originally authorized for a single Shuttle mission before maintenance. In 2000, that interval was extended to 1 year. That continued to increase until 2008, when "the ground maintenance interval [was] extended to 6 years, with in-flight maintenance and additional ground processing," per the NASA OIG Analysis of EVA Office Information.

Astronaut Soichi Noguchi trains for a space walk in the Neutral Buoyancy Lab (Image credit: NASA)

Now, that's not to say we're intentionally and knowingly endangering astronauts' lives. In August 2016, an independent review team agreed with the six-year maintenance cycle, but there were still issues: for example, due to launch failures and slips, the suits weren't even being maintained on a six-year cycle. One had gone a full nine years with no ground maintenance. (Just another example of why we need more than one vehicle capable of bringing cargo like this back from the ISS.)

Of course, we're currently developing new spacesuits; in fact, NASA is working on three different programs, none of which have actually produced a spacesuit that's ready to fly. The combined cost of these programs? Around $200 million.

A prototype of the Exploration Development Suit (Image credit: NASA)

The problem (okay, there are many problems, but the one I'm going to focus on) is related to larger issues at NASA. The organization is unsure of what it's doing, what its goals are, and where it's going. Now, as I mentioned in my column about going to Mars, that's not all the organization's fault. Program authorizations, followed by budget cuts, mean that NASA is constantly in limbo in regard to what is actually going to happen. The organization might start developing a new spacesuit tied to a program that's been authorized, knowing that it might never make it to fruition. That's not a great way to commit to developing new technology, and it's part of the reason there are three different programs to develop new suits, rather than one dedicated program.

It's not really clear what NASA is going to do about this spacesuit issue, but I will say that the organization is incredibly good at making hardware last far beyond its original use date. The problem is that these spacesuits are already so old; while many people might think spacesuits are custom made for the user, they're not. These are old, cobbled-together EMUs. Astronauts swap out arms and legs to make them fit, but we've got to figure out a better solution, and fast, in order to ensure we can continue to maintain the ISS over the course of its life.

Top photo by NASA

Swapna Krishna is a freelance writer, editor and giant space/sci-fi geek.

See original here:

Space Matter: The Trouble with Spacesuits :: Science :: Features ... - Paste Magazine

Launch of India’s biggest rocket is a defining moment in space exploration – DailyO

Indian Space Research Organisation (ISRO) successfully launched its 90th spacecraft mission on June 5, 2017, called GSLV MkIII-D1/GSAT 19. This is one of the most important missions ISRO has ever launched, because it successfully lifted a payload mass of 3,136kg, the largest weight ever put by ISRO in outer space.

For the last few years, the Indian space programme has been recognised as one of the most successful space programmes globally. However, Indian space capabilities have long suffered from the lack of a heavy satellite launch vehicle.

Now, with the success of GSLV Mark III, in the coming few years ISRO should be able to fully operationalise this new launch vehicle for heavy satellites.

Normally, communication and meteorological satellites belong to the category of heavy satellites. Such satellites are 4 to 6 tonnes in weight and operate from geostationary orbit (36,000km above the earth's surface).
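As a quick aside (my own back-of-the-envelope check, not from the article), the 36,000km figure follows from Kepler's third law: a satellite whose orbital period matches one sidereal day stays fixed over a spot on the equator.

```python
# Why geostationary orbit sits near 36,000 km above the surface (illustrative).
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass of Earth, kg
T = 86164.1          # one sidereal day, s
R_earth = 6.371e6    # mean Earth radius, m

# Kepler's third law solved for orbital radius: r = (G*M*T^2 / (4*pi^2))^(1/3)
r = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)
print(f"GEO altitude ≈ {(r - R_earth) / 1000:,.0f} km")  # about 35,800 km
```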

Since 1983, India has been launching communications satellites, mainly under the programme famously known as the Indian National Satellite (INSAT) system. Some of these satellites were multipurpose satellites too (they had meteorological payloads).

Today, India has nine operational communication satellites. Together, these satellites have more than 200 transponders in the C, Extended C and Ku-bands. These transponders are primarily used for television broadcasting and for providing various telecommunications services.

GSAT 19 is also a communications satellite, weighing 3,136kg, and is configured around ISRO's standard I-3K bus. This satellite carries Ka/Ku-band high throughput communication transponders. In addition, it carries a geostationary radiation spectrometer (GRASP) payload for monitoring and studying the nature of charged particles and the influence of space radiation on satellites and their electronic components.

The success of the GSLV III mission is significant for ISRO on various counts. First, it reduces or removes ISRO's dependence on outside agencies like the French company Ariane Space for launching heavy satellites (the four to six-tonne category) on a commercial basis.

This would allow significant monetary savings, and ISRO could use the same money for its various other programmes. Second, India took the help of Ariane Space during September 2013 for the launch of its first strategic satellite, called GSAT-7 (being used by the Indian Navy), a multi-band military communications satellite, because GSLV Mark III was not ready by that time. Hence, ISRO was forced to look towards a foreign agency for launching a strategic payload.

India undertook missions to the moon and Mars more as technology demonstrator missions.

Now, in the near future, ISRO would be able to launch the proposed satellites for the Indian Army and Air Force by using an indigenously developed launch vehicle. In short, the presence of a heavy satellite launch vehicle would also boost India's strategic space programme.

Third, India undertook missions to the moon and Mars more as technology demonstrator missions. These missions had limited scientific aims owing to ISRO's limited capacity to carry more weight, and the missions were undertaken by the PSLV (Polar Satellite Launch Vehicle).

Naturally, owing to the capability of this rocket, only a limited number of payloads was carried onboard the moon and Mars missions to study these planets. But now, with a stronger rocket (GSLV), ISRO can develop major scientific goals for future missions to these planets.

Fourth, ISRO has already established itself as a reliable and cost-effective agency capable of launching satellites weighing less than two tonnes into low earth orbit.

Now, in the coming years, with the maturing of the GSLV system, ISRO should be able to make inroads into the global commercial heavy satellite launch market.

Today, a good number of countries in the world can develop satellites and sensors. Many such efforts are collaborative, involving two or more countries. However, mastering the art of rocket science remains a difficult proposition even today.

Barely 11 countries in the world have developed such capabilities and are able to launch satellites using their indigenously built rocket systems. Among these countries, only Russia, the US (along with the private agency SpaceX), China, Japan and the European Union can launch heavy satellites into geostationary orbit.

Now, with the successful launch of GSAT 19 using GSLV Mark III-D1, India has joined this club. In comparison with the earlier rockets developed by India (SLV, ASLV and PSLV), the GSLV is bigger in size and purpose (for launching heavy satellites), and hence fondly gets referred to as "Fat Boy".

However, knowing the importance of GSLV for the future of Indias space programme and the type of role it is expected to play in the near future, this Fat Boy needs to be rechristened as a Suitable Boy!

Also read: ISRO launching its biggest rocket ever, GSLV-Mk III, is a bold move by India

View post:

Launch of India's biggest rocket is a defining moment in space exploration - DailyO

Americans Like Spending Money on Space Exploration, Survey Finds – Inverse

Venturing off into space costs a pretty penny. That's why NASA, in 1972, quit sending astronauts on joyrides to the moon. But a nation-wide survey over 40 years in the making reveals that Americans don't mind ponying up the cash for extraterrestrial exploration, whether it's scouring the red Martian desert for hints of life, or peering down into Jupiter's roiling clouds.

Beginning in 1972, when the last astronauts returned home from the moon, the independent research organization NORC at the University of Chicago began surveying Americans in every state, asking them, "Do you think the nation is spending enough on space exploration?" This was one of hundreds of questions asked as part of the General Social Survey, an ambitious endeavor to track the nation's attitudes and beliefs which continues today. The survey's data for this specific question can be seen in the graph below, which was created by Overflow Data, a site that turns data into clear visualizations.

Herein lie some important trends. When the survey began in the early seventies, six out of 10 Americans thought we were spending too much on space exploration. Today, a little over two out of 10 Americans believe this. And in the last decade, the percentage of Americans that think we're spending too little on space exploration has nearly doubled, to 21 percent.

Tom W. Smith, the director of the General Social Survey, suggests that the media coverage around commercial space programs, like SpaceX, might be a significant factor for why so many Americans want the government to spend more on space.

"A lot of space news in recent years has been about the private sector. I wonder if the public thinks we're spending less," Smith told Inverse. "If we're spending less, then they're not going to say that we're spending too much."

There are few space events that stir more excitement and media attention than Elon Musk's reusable rockets, which are now landing back on Earth after blasting into space. "The more coverage something is getting, that would be a major factor in shaping what people think about it," says Smith. Private spaceflight, notes Smith, might imply that space exploration money is being spent more efficiently than before, when the notoriously bureaucratic federal government held nearly exclusive reign over space rocketry.

When the Soviets invaded Afghanistan in late 1979, Smith observed a similar shift in the nation's attitudes. "When the Soviets were on the march, support for defense spending doubled," says Smith.

Media attention is undoubtedly influential, but Americans' acceptance of space spending could also be motivated by a growing cosmic intrigue. The deeper humans plunge into the void, the more curiosities we find. Saturn's moon Enceladus spews geysers of water vapor and ice (and whatever else) into space, while Jupiter's moon Europa tempts scientists with what might lie under its cracked icy crust: perhaps a salty sea?

Whatever the reasons, if Americans believe that the government is spending less on space exploration, they're absolutely right.

During the rousing 1960s space race to the moon, NASA was swimming in money. In the mid-to-late 1960s, NASA was spending well over four percent of taxpayer dollars. But by 1980, this dipped to one percent of the budget, and today it's a measly half of one percent. This, of course, isn't too measly: it's over $19 billion. Nearly a quarter of this (around $4.5 billion) was tagged for space exploration in 2016, which includes the development of NASA's giant new rocket, the Space Launch System, which will launch Mars-bound astronauts into space.

This expensive venture wont take place until the 2030s, but if recent trends continue, Americans may be willing to shell out more money to give astronauts the chance to romp around the Martian desert.

Here is the original post:

Americans Like Spending Money on Space Exploration, Survey Finds - Inverse

Nanotech Entertainment Inc. (NTEK) True Range Review – Nelson Research

The moving average of trading ranges over a specified time period is known as the Average True Range. Market bottoms following a panic sell-off are often where high values occur, and low values are often found during extended sideways periods, such as those found after consolidation periods at tops. Nanotech Entertainment Inc. (NTEK)'s 9-Day Average True Range is 0.006 and the 14-Day Average True Range is 0.0063. Digging deeper, the 20-Day Average True Range is 0.0064, the 50-Day Average True Range is 0.0062 and, lastly, the 100-Day Average True Range is 0.0079.
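For readers who want the mechanics, here is a minimal sketch of the standard (Wilder-style) true range and a simple moving average of it; the price bars below are made-up illustrative numbers, not NTEK data.

```python
# Average True Range from (high, low, close) bars (illustrative sketch).
def true_range(high, low, prev_close):
    """Largest of the day's range and the gaps up/down from the prior close."""
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def average_true_range(bars, period=14):
    """Simple moving average of true ranges; each bar is (high, low, close)."""
    trs = [true_range(h, l, bars[i - 1][2])
           for i, (h, l, c) in enumerate(bars) if i > 0]
    return sum(trs[-period:]) / min(period, len(trs))

bars = [(0.031, 0.028, 0.030), (0.033, 0.029, 0.032), (0.034, 0.030, 0.031)]
print(round(average_true_range(bars, period=2), 4))  # 0.004
```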

Volume is the number of shares traded over a specific period of time. Every buyer has a seller, and each transaction adds to the total count of the volume. When a buyer and a seller agree on a transaction at a certain price, it is considered to be one transaction. For example, if only twenty transactions occur in a trading day, the volume for the day is twenty. Volume is used to measure the relative worth of a market move. When the markets make a strong price movement, the strength of that movement depends on the volume over that period. The higher the volume, the more significant the move. Volume levels give clues about where to find the best entry and exit points. Nanotech Entertainment Inc. (NTEK) experienced a volume of 266388. Volume is an important measure of strength for traders and technical analysts because volume is the number of contracts traded. The market needs to produce a buyer and a seller for any trade to occur. The market price is set where buyers and sellers meet. When buyers and sellers become very active at a certain price, this means that there is high volume. Bar charts are used to quickly determine the level of volume and identify trends in volume.

A 52-week high/low is the highest and lowest share price that a stock has traded at during the previous year. Investors and traders consider the 52-week high or low as a crucial factor in determining a given stock's current value while also predicting future price movements. When a commodity trades within its 52-week price range (the range that exists between the 52-week low and the 52-week high), investors usually show more interest as the price nears either the high or the low. One of the more popular strategies used by traders is to buy when the price eclipses its 52-week high or to sell when the price drops below its 52-week low. The rationale involved with this strategy says that if the price breaks out either above or below the 52-week range, there is enough momentum to continue the price movement in the same direction. Nanotech Entertainment Inc. (NTEK)'s high over the last year was $0.1275 while its low was $0.0155.
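The breakout rule just described reduces to a simple comparison. The sketch below uses the article's own 52-week figures, with the test prices chosen purely for illustration (and none of this is trading advice).

```python
# The 52-week breakout rule as a simple comparison (illustrative only).
def breakout_signal(price, high_52wk, low_52wk):
    """'buy' above the 52-week high, 'sell' below the 52-week low, else 'hold'."""
    if price > high_52wk:
        return "buy"
    if price < low_52wk:
        return "sell"
    return "hold"

print(breakout_signal(0.14, high_52wk=0.1275, low_52wk=0.0155))  # buy
print(breakout_signal(0.05, high_52wk=0.1275, low_52wk=0.0155))  # hold
```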

A pivot point is a technical analysis indicator used to glean the overall trend of the market over differing time periods. The pivot point itself is simply the average of the high, low and closing prices from the previous day's trading. On the following day, any trading above the pivot point indicates an ongoing bullish trend, while trading below the pivot point indicates a bearish trend. Pivot point analysis is used alongside calculating support and resistance levels, much like trend line analysis. In pivot point analysis, the first support and resistance levels are found by utilizing the width of the trading range between the pivot point and either the high or low prices of the previous trading day. Secondary support and resistance levels are found using the full width between the high and low prices of the previous trading day.

Pivot points are oft-used indicators for trading futures, commodities, and stocks. They are static, remaining at the same price level throughout the day. Five pivot point levels are generated by using data from the previous day's trading range. These are composed of a pivot point, two higher pivot point resistances called R1 and R2, and two lower pivot point supports called S1 and S2. Nanotech Entertainment Inc. (NTEK)'s Pivot Point is 0.0358. Barchart Opinions show investors what a variety of popular trading systems are suggesting. These Opinions take up to 2 years' worth of historical data and run the prices through thirteen technical indicators. After each calculation, a buy, sell or hold value for each study is assigned, depending on where the price is in reference to the common interpretation of the study.
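The five levels follow the standard floor-trader formulas implied by the description above. Here is a minimal sketch; the sample high/low/close values are hypothetical, chosen only so that the resulting pivot matches the 0.0358 figure quoted for NTEK.

```python
# Standard floor-trader pivot point levels (illustrative sketch).
def pivot_levels(high, low, close):
    """Pivot point plus two resistance and two support levels."""
    p = (high + low + close) / 3           # pivot: average of H, L, C
    r1, s1 = 2 * p - low, 2 * p - high     # first resistance / support
    r2, s2 = p + (high - low), p - (high - low)
    return {"P": round(p, 4), "R1": round(r1, 4), "R2": round(r2, 4),
            "S1": round(s1, 4), "S2": round(s2, 4)}

print(pivot_levels(high=0.039, low=0.034, close=0.0344))  # P = 0.0358
```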

Today's Opinion, the overall signal based on where the price lies in reference to the common interpretation of all 13 studies, for Nanotech Entertainment Inc. (NTEK) is 40% Sell. Nanotech Entertainment Inc. (NTEK)'s Previous Opinion, the overall signal from yesterday based on where the price lay in reference to the common interpretation of all 13 studies, was 16% Sell. Nanotech Entertainment Inc. (NTEK)'s Opinion Strength, a long-term measurement of signal strength vs. the historical strength, is Minimum.

Opinion Strength ranges over Maximum, Strong, Average, Weak, and Minimum. A stronger strength is less volatile, and a hold signal does not have any strength. The Opinion Direction is a three-day measurement of the movement of the signal, an indication of whether the most recent price movement is going along with the signal: Strongest, Strengthening, Average, Weakening, or Weakest. A buy or sell signal with a Strongest direction means the signal is becoming stronger. A hold signal's direction indicates where the signal is heading (towards a buy or sell): Bullish, Rising, Steady, Falling, or Bearish. Nanotech Entertainment Inc. (NTEK)'s direction is Strengthening.

Disclaimer: Nothing contained in this publication is intended to constitute legal, tax, securities, or investment advice, nor an opinion regarding the appropriateness of any investment, nor a solicitation of any type. The general information contained in this publication should not be acted upon without obtaining specific legal, tax, and investment advice from a licensed professional.

Read the rest here:

Nanotech Entertainment Inc. (NTEK) True Range Review - Nelson Research

North Korea WW3: Japan prepares for EVACUATION after missile tests – Daily Star

NORTH KOREA'S latest missile tests have forced Japan to undergo evacuation drills over fears of an impending attack.

Kim Jong-un's tyrannical state fired an unidentified warhead last week which fell close to the neighbouring country.

Days later, on Sunday (June 4), hundreds of residents of a small Japanese coastal town took part in a drill simulating a ballistic missile strike.

The town of Abu, which faces the Sea of Japan, lies in Yamaguchi prefecture, which hosts a major US Marine Corps air station.

"I was able to stay calm and evacuated in a few minutes," 67-year-old Yuriko Suewaka, who was among the 280 people involved in the exercise, told Jiji Press.

As WW3 tensions escalate further, there are plans to conduct more drills later this month in Yamagata and Niigata, which both also face the Sea of Japan.

"The government has requested local communities to prepare for the holding of an evacuation drill," an official told AFP on condition of anonymity.

North Korea's long-running tension with the West hit new levels on Friday, after a US official claimed the US does not have the capability to stop Kim's deadly missiles.

John F. Tierney, executive director of the Centre for Arms Control and Non-Proliferation, said the system aimed at intercepting nuclear missiles has failed to destroy its target in three of the last five attempts.

Kim also poses a new threat, with claims he has an army of 6,000 hackers ready to gain access to Western computer systems.

Read the original here:

North Korea WW3: Japan prepares for EVACUATION after missile tests - Daily Star

North Korea facing new alliance to stop WW3 as Japan pledges to team up with China – Express.co.uk

Japanese Prime Minister Shinzo Abe told China's top diplomat he would like to work with China to try to rein in North Korea's nuclear and missile programmes.

Prime Minister Abe said: "To resolve this problem peacefully, we would like to work with China, which has strong influence (over North Korea)."

In return, Mr Yang said he hoped all parties play a constructive role in resolving the issue.

This partnership takes on an especially symbolic importance given the dark history of Sino-Japanese relations, highlighted by the Rape of Nanking in 1937, an atrocity committed by Japan which left hundreds of thousands of Chinese civilians dead.

And in 2014 a poll revealed 90 per cent of Chinese people polled expressed a negative view of Japan, while 85 per cent of Japanese people feared disputes between the two nations could lead to another war.

READ MORE: Will the USA attack North Korea?

Taking pictures in the DMZ is easy, but if you come too close to the soldiers, they stop you (Image credit: Eric Lafforgue/Exclusivepix Media)

But North Korea's increasingly erratic actions could force the two rivals to come together in order to bring stability to the region.

Kim Jong-un regularly threatens Japan, seen as one of the country's biggest enemies, promising several times this year to reduce Tokyo to ashes.

But more worrying in recent weeks has been the emergence of North Korean threats against China, traditionally seen as one of Kim's biggest allies.

Last month North Korean state media accused China of a "betrayal" following its suspension of coal imports from the country.

It said: "China should no longer recklessly try to test the limitations of our patience. We have so devotedly helped the Chinese revolution and suffered enormous damage."

READ MORE: Who is on North Korea's side? Who are Kim Jong-un's allies?

North Korea should be very careful

Odd Arne Westad

And one North Korea expert warned Beijing was quickly losing patience with North Korea.

Odd Arne Westad of Harvard University said: "North Korea should be very careful. It's clear Beijing now sees further, deeper urgency in getting talks started, and these talks have to centre on North Korean nuclear and missile programmes."

"The end point will have to be at least a freeze on North Korean nuclear and missile programs with an aim of its abolition, not necessarily to insist on its immediate disarmament."

Prof Westad said relations had never been as bad as they are today, and they seem to be getting worse very quickly.

View post:

North Korea facing new alliance to stop WW3 as Japan pledges to team up with China - Express.co.uk

North Korea WW3: US official claims defences ‘can’t stop Kim Jong un’s nukes’ – Daily Star

THE US will not be able to stop a North Korean nuke, an official has claimed.

John F. Tierney, executive director of the Centre for Arms Control and Non-Proliferation, believes the US may not currently have the capability to stop Kim Jong-un's deadly missiles.

Donald Trump successfully tested a system aimed at intercepting nukes from North Korea in the Californian desert this week.

But Tierney warned that, despite the success, the US is not bulletproof.

Since 2008, photographer Eric Lafforgue has ventured to North Korea six times; thanks to digital memory cards, he was able to save photos that were forbidden to take inside the segregated state (Image credit: Getty/Eric Lafforgue)

Six tests have failed to destroy the target, including three of the last five tries

"Of the 10 tests of the system since 2004, when the Bush administration prematurely declared it operational, six have failed to destroy the target, including three of the last five tries," he wrote in the New York Times.

"The timing and other details are provided in advance, information that no real enemy would provide. The weather and time of day are just right for an intercept.

"An adversary would use complex countermeasures, such as decoys, alongside the real missile to try to fool the defence system, but only simplistic versions of this trick have been included.

The ground-based Midcourse Defense (GMD) element of the US ballistic missile defense system launches during a flight test from Vandenberg Air Force Base, California (Image credit: Getty)

"Under realistic testing conditions, the programs success rate would almost certainly be lower."

Kim has promised "weekly" tests in the face of US aggression and recently carried out North Korea's largest nuke test ever.

In response, Trump has despatched his "armada" led by the USS Carl Vinson to the peninsula.

Trump has also vowed to keep developing his nuclear missile defence programme.

But Tierney warned that a peaceful solution is the only way to end the North Korean crisis.

The Lockheed Martin HULC is an exoskeleton that allows soldiers to carry loads of up to 200lbs for long distances (Image credit: Getty)

"If lawmakers are serious about defending the homeland from rogue states like North Korea, they should prioritise diplomatic action," he added.

"Expanding the programme before it is proved to be effective under realistic conditions would mark the height of irresponsibility.

"The Missile Defence Agency might be celebrating now, but the latest test is at best a small step forward on a very long journey."

See the rest here:

North Korea WW3: US official claims defences 'can't stop Kim Jong un's nukes' - Daily Star

America’s Trippiest Chemist: Making Psychedelics ‘Was Fun’ – Motherboard

This story is part of When the Drugs Hit, a Motherboard journey into the science, politics, and culture of today's psychedelic renaissance. Follow along here.

After a flurry of scientific research in the 1950s and 60s, all human psychedelic drug trials in the US were effectively banned with the passage of the Comprehensive Drug Abuse and Control Act in 1970. This moratorium on psychedelic research lasted until psychiatrist Rick Strassman's DMT trials at the University of New Mexico opened the door for new psychedelic research in 1990.

Since then, a number of studies have looked at the potential therapeutic effects of psychedelic substances. And in almost all of these trials, including Strassman's landmark study, the drugs have been supplied by a single individual: Dave Nichols.

As a chemist at Purdue University, Nichols was in the business of supplying America's psychedelic research compounds from 1969 until he retired in 2012. He is perhaps best known for synthesizing the MDMA, DMT, LSD and psilocybin (the psychedelic compound in "magic mushrooms") that have been used in the new wave of psychedelic research. But most of Nichols' career was spent researching psychedelic analogs, molecules which have similar structures or effects to their well-known controlled relatives.

Despite his academic pedigree, Nichols has come under fire for fueling a "designer drug" boom that resulted in obscure research chemicals making their way from the lab to the street, and resulted in a few deaths along the way. But this is merely an unfortunate side effect of his research: anyone with access to chemistry journals and a decent lab setup would be able to reproduce his work. Nichols has also strongly condemned a number of designer drugs, particularly synthetic cannabinoids like spice, as dangerous for consumption.

In many ways, Nichols is like a contemporary Sasha Shulgin, the infamous chemist who co-authored TIHKAL and PIHKAL, which are essentially psychedelic cookbooks interwoven with a love story (indeed, Nichols was actually the first to synthesize a handful of chemicals described in PIHKAL). But whereas Shulgin was getting high on his own supply in his backyard lab, Nichols isn't trying to spark a psychedelic awakening and was certainly not tripping in his Purdue laboratory.

"There was nobody else really doing what I was doing so it was fun and I didn't have to worry about getting scooped."

Still, life ain't easy for a psychedelic scientist, so I caught up with Nichols to learn about how he became one of the largest producers of psychedelic research chemicals in the US, and what that experience was like at a time when there was zero tolerance for psychedelic research on humans.

Motherboard: How did you get started making psychedelic drugs? David Nichols: I started in this field in 1969 as a graduate student, doing my PhD on mescaline analogs, basically. Then I got an academic position at Purdue and that allowed me to pursue whatever I wanted to follow, so I just kept working on psychotomimetics. I was lucky to get a grant from the National Institute of Drug Abuse and that grant continued for about 29 years. There was nobody else really doing what I was doing so it was fun and I didn't have to worry about getting scooped by someone else working in the same area because we were doing a pretty novel thing.

Was it hard getting approval to make Schedule I psychedelics? Not really. When I made the MDMA it was before it was illegal so there wasn't a problem with that. And with Rick Strassman's DMT, I had a schedule I license for DMT already. Same thing was true with the psilocybin I made for Johns Hopkins. You're allowed to make a Schedule I substance if it's in collaboration with someone else who has a Schedule I license so that wasn't that difficult. I had to get the Schedule I license in the beginning, which required getting everything certified, but it wasn't as difficult as people might imagine.

The DEA never pushed back against your research? I had a license for 15 different Schedule I substances. At one point they started getting really pushy. Do you have an active protocol? Do you need all these? We'd like to get some of them out of your lab if you don't really need these 15. How often do you use them?

So I had to write a letter saying we don't do the kind of research you're used to. We don't fit in a boilerplate. We do studies where we modify a receptor and mutate different amino acids in the receptor. We have a whole library of compounds we want to put in there and see what did that mutation do to the activity of those structures. We never know which compound we might need on any given day.

The DEA aren't scientists, they're basically policemen. At a certain point they started sending protocols over to the FDA, which is totally inappropriate. But anyway they'd ask if this [is] research worth doing. That's an inappropriate question. If you're qualified, you have an academic appointment, you're respected, and you've got a CV, the DEA shouldn't be asking whether this research should be done. Anyway, if they got feedback saying the project was worth doing, then they'd come and investigate your facilities. They wanted to see what kind of safe you were going to keep substances in, where the safe's located. They're basically concerned with diversion of controlled substances. They want to make sure if you get it, no one else can get it.

What was your security like for these substances you were making? My office had a two-inch solid oak door that had a key lock that would've been difficult to pick. Inside my office I had a big, heavy steel fireproof file cabinet. In addition to the regular pushlocks, it had a hasp welded on top with another place welded on the bottom so that a one-inch steel bar could fit in front of the drawers with a padlock on the top. To get in that you'd have to be able to break through a solid oak door, then pick or cut off a substantial padlock, and then use a crowbar or something to break open the file cabinet.

Read more: A Beginner's Guide to Tripping on Acid

How long did it take you to get your license? That can take six months to two years depending on how you appear to the DEA. It's supposed to be a non-political process, but I've known people who've taken two years because the DEA said they lost their inspection forms after doing an inspection. You don't know if these things are willful or not. I've always had the impression that the DEA doesn't appreciate having to do Schedule I. They think these things are so dangerous, why should we be working with them. But that's just my own impression.

This was back in the 70s. Do you think studying psychedelic drugs has gotten easier? I think the DEA has gotten more stringent. When I first started the process seemed more transparent and I didn't have any trouble getting a license. I've talked to many people over the years who've really had a difficult time. Also the DEA didn't used to send the protocols to the FDA to get a ruling as to whether they were worth doing or not. It's like the DEA saying, "we don't trust you, we need to get an outside check to make sure you're a legitimate guy." They're real suspicious.

Outside of your lab, who else was synthesizing psychedelics? I don't know that there were that many people just making things. We generally made them because we had a certain hypothesis. So when we were doing the MDMA work, we were trying to develop a molecule that had MDMA-like activity but which was completely new so we wouldn't have to worry about the stigma of it being a research chemical or drug of abuse.

That was what drove a lot of the MDMA research. We basically were trying to understand the features of the receptor that were necessary for activity. It was a pretty complex program designed to understand how the molecules were interacting with biological targets. We didn't just make compounds like Sasha Shulgin, just to see what we could make and take. He was an alchemist, not a scientist. We had specific hypotheses, we made the molecules with specific ideas of what we wanted to figure out. I don't know anyone else who was really doing that with respect to psychedelics.

"I think it's because of the internet that these things have just proliferated."

A lot of your work was focused on psychedelic analogs. In recent years, the DEA has really been cracking down on analogs. Why do you think this is? Well, there are a lot more analogs out there than people are aware of now. In fact, a lot of the things I've made are now called designer drugs and are out on the street. But they don't have to cause any damage, they don't even have to be problematic. All the DEA has to do is suspect that they have abuse potential, then all of a sudden say we better control this. They don't wait for something to show up as a problem, they try to think of everything possible.

I think they can probably wait and see until these things are showing up on the street in a significant amount. They can quickly schedule with emergency scheduling. But they claim it makes their life easier if they have these things scheduled already, but it does shut off legitimate research. If these things have medical potential, nobody is going to look at them. So then that means they'll only be examined in the context of being drugs of abuse. So any potential benefit will never be studied.

A lot of the chemicals made in your lab have shown up on the street. How did it feel to know people were recreationally taking these obscure compounds you created? I was surprised. Some of the things we made were not that simple of a synthesis to carry out. So when these things showed up, I thought, "wow, someone's gone to a lot of trouble to make these." We published a lot of these things years ago, but they just started showing up in the last few years.

I think it's because of the internet that these things have just proliferated. Now people can go on sites like Erowid and read about these different substances and say, "oh, that sounds interesting," and there's people that sell the stuff so they can go buy a sample. That didn't used to happen. They used to have to go to a library and do some actual research.

Were pharmaceutical companies showing any interest in psychedelic analogs when you were working in the lab? Drug companies have stayed away from this field entirely. It was the case in their research that if they found a molecule that activated the serotonin 2A receptor, which is the target for psychedelics, it was a kiss of death for that molecule right off the bat. I think the drug industry has been very circumspect about things hitting targets that could be drugs of abuse. I don't really think that the drug industry as a whole sees these things as any sort of profit source for them now or in the future.

In terms of the paradigm, using psilocybin once or twice a year maybe, isn't the model that the pharmaceutical industry follows. Most of the research chemicals are like that. They're analogs of psychedelics. But pharmaceutical companies are looking for a pill you take everyday for the rest of your life. That's how they make their money.

This interview has been lightly edited for length and clarity.

See the original post:

America's Trippiest Chemist: Making Psychedelics 'Was Fun' - Motherboard