Astrophysics – Wikipedia

This article is about the use of physics and chemistry to determine the nature of astronomical objects. For the use of physics to determine their positions and motions, see Celestial mechanics. For the physical study of the largest-scale structures of the universe, see Physical cosmology. For the journal, see Astrophysics (journal).

Astrophysics is the branch of astronomy that employs the principles of physics and chemistry “to ascertain the nature of the astronomical objects, rather than their positions or motions in space”.[1][2] Among the objects studied are the Sun, other stars, galaxies, extrasolar planets, the interstellar medium and the cosmic microwave background.[3][4] Their emissions are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists typically apply many disciplines of physics, including mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.

In practice, modern astronomical research often involves a substantial amount of work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include their attempts to determine the properties of dark matter, dark energy, and black holes; whether or not time travel is possible, wormholes can form, or the multiverse exists; and the origin and ultimate fate of the universe.[3] Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics.

Although astronomy is as ancient as recorded history itself, it was long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthly world was the realm which underwent growth and decay and in which natural motion was in a straight line and ended when the moving object reached its goal. Consequently, it was held that the celestial region was made of a fundamentally different kind of matter from that found in the terrestrial sphere; either Fire as maintained by Plato, or Aether as maintained by Aristotle.[5][6] During the 17th century, natural philosophers such as Galileo,[7] Descartes,[8] and Newton[9] began to maintain that the celestial and terrestrial regions were made of similar kinds of material and were subject to the same natural laws.[10] Their challenge was that the tools had not yet been invented with which to prove these assertions.[11]

For much of the nineteenth century, astronomical research was focused on the routine work of measuring the positions and computing the motions of astronomical objects.[12][13] A new astronomy, soon to be called astrophysics, began to emerge when William Hyde Wollaston and Joseph von Fraunhofer independently discovered that, when decomposing the light from the Sun, a multitude of dark lines (regions where there was less or no light) were observed in the spectrum.[14] By 1860 the physicist, Gustav Kirchhoff, and the chemist, Robert Bunsen, had demonstrated that the dark lines in the solar spectrum corresponded to bright lines in the spectra of known gases, specific lines corresponding to unique chemical elements.[15] Kirchhoff deduced that the dark lines in the solar spectrum are caused by absorption by chemical elements in the Solar atmosphere.[16] In this way it was proved that the chemical elements found in the Sun and stars were also found on Earth.

Among those who extended the study of solar and stellar spectra was Norman Lockyer, who in 1868 detected bright, as well as dark, lines in solar spectra. Working with the chemist, Edward Frankland, to investigate the spectra of elements at various temperatures and pressures, he could not associate a yellow line in the solar spectrum with any known elements. He thus claimed the line represented a new element, which was called helium, after the Greek Helios, the Sun personified.[17][18]

In 1885, Edward C. Pickering undertook an ambitious program of stellar spectral classification at Harvard College Observatory, in which a team of women computers, notably Williamina Fleming, Antonia Maury, and Annie Jump Cannon, classified the spectra recorded on photographic plates. By 1890, a catalog of over 10,000 stars had been prepared that grouped them into thirteen spectral types. Following Pickering's vision, by 1924 Cannon expanded the catalog to nine volumes and over a quarter of a million stars, developing the Harvard Classification Scheme which was accepted for worldwide use in 1922.[19]

In 1895, George Ellery Hale and James E. Keeler, along with a group of ten associate editors from Europe and the United States,[20] established The Astrophysical Journal: An International Review of Spectroscopy and Astronomical Physics.[21] It was intended that the journal would fill the gap between journals in astronomy and physics, providing a venue for publication of articles on astronomical applications of the spectroscope; on laboratory research closely allied to astronomical physics, including wavelength determinations of metallic and gaseous spectra and experiments on radiation and absorption; on theories of the Sun, Moon, planets, comets, meteors, and nebulae; and on instrumentation for telescopes and laboratories.[20]

Around 1920, following the discovery of the Hertzsprung–Russell diagram, still used as the basis for classifying stars and their evolution, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars in his paper The Internal Constitution of the Stars.[22][23] At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc². This was a particularly remarkable development since at that time fusion and thermonuclear energy, and even the fact that stars are largely composed of hydrogen (see metallicity), had not yet been discovered.
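Eddington's speculation can be checked with back-of-the-envelope arithmetic: fusing four hydrogen nuclei into one helium-4 nucleus converts roughly 0.7% of the rest mass into energy via E = mc². A minimal sketch (the mass values are standard particle data, not taken from this article):

```python
# Back-of-the-envelope check of Eddington's hypothesis: energy released
# when four hydrogen nuclei fuse into one helium-4 nucleus, via E = m*c^2.

C = 2.998e8          # speed of light, m/s
M_H = 1.6726e-27     # proton (hydrogen nucleus) mass, kg
M_HE = 6.6447e-27    # helium-4 nucleus mass, kg

mass_defect = 4 * M_H - M_HE          # kg of rest mass lost per fusion event
energy_joules = mass_defect * C**2    # liberated energy, J
fraction = mass_defect / (4 * M_H)    # fraction of rest mass converted

print(f"mass converted per fusion: {fraction:.3%}")   # ~0.7%
print(f"energy per fusion: {energy_joules:.2e} J")    # ~4e-12 J (about 26 MeV)
```

Multiplied over the Sun's hydrogen supply, this tiny per-event yield is what resolved the mystery of how stars shine for billions of years.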

In 1925 Cecilia Helena Payne (later Cecilia Payne-Gaposchkin) wrote an influential doctoral dissertation at Radcliffe College, in which she applied ionization theory to stellar atmospheres to relate the spectral classes to the temperature of stars.[24] Most significantly, she discovered that hydrogen and helium were the principal components of stars. Despite Eddington’s suggestion, this discovery was so unexpected that her dissertation readers convinced her to modify the conclusion before publication. However, later research confirmed her discovery.[25]

By the end of the 20th century, studies of astronomical spectra had expanded to cover wavelengths extending from radio waves through optical, X-ray, and gamma-ray wavelengths.[26] In the 21st century it further expanded to include observations based on gravitational waves.

Observational astronomy is a division of astronomical science concerned with recording data, in contrast with theoretical astrophysics, which is mainly concerned with finding out the measurable implications of physical models. It is the practice of observing celestial objects by using telescopes and other astronomical apparatus.

The majority of astrophysical observations are made using the electromagnetic spectrum.

Other than electromagnetic radiation, few things may be observed from the Earth that originate from great distances. A few gravitational wave observatories have been constructed, but gravitational waves are extremely difficult to detect. Neutrino observatories have also been built, primarily to study our Sun. Cosmic rays consisting of very high energy particles can be observed hitting the Earth’s atmosphere.

Observations can also vary in their time scale. Most optical observations take minutes to hours, so phenomena that change faster than this cannot readily be observed. However, historical data on some objects is available, spanning centuries or millennia. On the other hand, radio observations may look at events on a millisecond timescale (millisecond pulsars) or combine years of data (pulsar deceleration studies). The information obtained from these different timescales is very different.

The study of our very own Sun has a special place in observational astrophysics. Due to the tremendous distance of all other stars, the Sun can be observed in a kind of detail unparalleled by any other star. Our understanding of our own Sun serves as a guide to our understanding of other stars.

The topic of how stars change, or stellar evolution, is often modeled by placing the varieties of star types in their respective positions on the Hertzsprung–Russell diagram, which can be viewed as representing the state of a stellar object, from birth to destruction.

Theoretical astrophysicists use a wide variety of tools which include analytical models (for example, polytropes to approximate the behaviors of a star) and computational numerical simulations. Each has some advantages. Analytical models of a process are generally better for giving insight into the heart of what is going on. Numerical models can reveal the existence of phenomena and effects that would otherwise not be seen.[27][28]
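As a concrete example of this toolkit, the polytrope model mentioned above reduces a star's structure to the Lane–Emden equation, which for most polytropic indices has no closed-form solution and must be integrated numerically. A minimal sketch (the choice n = 1.5, corresponding to a fully convective star, is an illustrative assumption):

```python
# Numerically integrate the Lane-Emden equation, the classic polytrope
# model of stellar structure:
#     theta'' + (2/xi) * theta' + theta**n = 0,  theta(0) = 1, theta'(0) = 0.
# The first zero of theta marks the dimensionless stellar surface.

def lane_emden_first_zero(n, h=1e-4):
    # start just off the singular origin to avoid division by zero at xi = 0
    xi, theta, dtheta = h, 1.0, 0.0
    while theta > 0:
        # simple Euler step of the equivalent coupled first-order system
        d2theta = -theta**n - (2.0 / xi) * dtheta
        theta += h * dtheta
        dtheta += h * d2theta
        xi += h
    return xi  # dimensionless radius where density first reaches zero

xi1 = lane_emden_first_zero(1.5)
print(f"first zero for n = 1.5: {xi1:.3f}")  # ~3.65
```

The analytical form gives the physical insight (a single index n captures the equation of state), while the numerical integration supplies the quantities, illustrating how the two approaches complement each other.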

Theorists in astrophysics endeavor to create theoretical models and figure out the observational consequences of those models. This helps allow observers to look for data that can refute a model or help in choosing between several alternate or conflicting models.

Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model.

Topics studied by theoretical astrophysicists include: stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics. Astrophysical relativity serves as a tool to gauge the properties of large scale structures for which gravitation plays a significant role in physical phenomena investigated and as the basis for black hole (astro)physics and the study of gravitational waves.

Some widely accepted and studied theories and models in astrophysics, now included in the Lambda-CDM model, are the Big Bang, cosmic inflation, dark matter, dark energy and fundamental theories of physics. Wormholes are examples of hypotheses which are yet to be proven (or disproven).

The roots of astrophysics can be found in the seventeenth century emergence of a unified physics, in which the same laws applied to the celestial and terrestrial realms.[10] There were scientists who were qualified in both physics and astronomy who laid the firm foundation for the current science of astrophysics. In modern times, students continue to be drawn to astrophysics due to its popularization by the Royal Astronomical Society and notable educators such as prominent professors Lawrence Krauss, Subrahmanyan Chandrasekhar, Stephen Hawking, Hubert Reeves, Carl Sagan and Neil deGrasse Tyson. The efforts of scientists past and present continue to attract young people to study the history and science of astrophysics.[29][30][31]


UC San Diego NanoEngineering Department

The NanoEngineering program has received accreditation by the Accreditation Commission of ABET, the global accreditor of college and university programs in applied and natural science, computing, engineering and engineering technology. UC San Diego’s NanoEngineering program is the first of its kind in the nation to receive this accreditation. Our NanoEngineering students can feel confident that their education meets global standards and that they will be prepared to enter the workforce worldwide.

ABET accreditation assures that programs meet standards to produce graduates ready to enter critical technical fields that are leading the way in innovation and emerging technologies, and anticipating the welfare and safety needs of the public. Please visit the ABET website for more information on why accreditation matters.

Congratulations to the NanoEngineering department and students!


Nanoengineering – Wikipedia

Nanoengineering is the practice of engineering on the nanoscale. It derives its name from the nanometre, a unit of measurement equalling one billionth of a meter.

Nanoengineering is largely a synonym for nanotechnology, but emphasizes the engineering rather than the pure science aspects of the field.

The first nanoengineering program was started at the University of Toronto within the Engineering Science program as one of the options of study in the final years. In 2003, the Lund Institute of Technology started a program in Nanoengineering. In 2004, the College of Nanoscale Science and Engineering at SUNY Polytechnic Institute was established on the campus of the University at Albany. In 2005, the University of Waterloo established a unique program which offers a full degree in Nanotechnology Engineering.[1] Louisiana Tech University started the first program in the U.S. in 2005. In 2006, the University of Duisburg-Essen started a Bachelor and a Master program in NanoEngineering.[2] The first Nanoengineering Department in the world, offering both undergraduate and graduate degrees, was established by the University of California, San Diego in 2007. In 2009, the University of Toronto began offering all Options of study in Engineering Science as degrees, bringing the second nanoengineering degree to Canada. Rice University established a Department of Materials Science and NanoEngineering (MSNE) in 2016. DTU Nanotech, the Department of Micro- and Nanotechnology, is a department at the Technical University of Denmark established in 1990.

In 2013, Wayne State University began offering a Nanoengineering Undergraduate Certificate Program, which is funded by a Nanoengineering Undergraduate Education (NUE) grant from the National Science Foundation. The primary goal is to offer specialized undergraduate training in nanotechnology. Other goals are: 1) to teach emerging technologies at the undergraduate level, 2) to train a new adaptive workforce, and 3) to retrain working engineers and professionals.[3]


The NANO-ENGINEERING FLAGSHIP initiative

Nano-Engineering introduces a novel key-enabling, non-invasive broadband technology, the Nano-engineered Interface (NaI), realising omni-connectivity and putting humans and their interactions at the center of the future digital society. Omni-connectivity encompasses real-time communication, sensing, monitoring, and data processing among humans, objects, and their environment; its vision places people in a new sphere of extremely simplified, intuitive and natural communication. The Nano-engineered Interface (NaI), a non-invasive, wireless, ultraflat functional system, will make this possible. NaI will be applicable to any surface on any physical item and thereby exponentially diversify and increase connections among humans, wearables, vehicles, and everyday objects. NaI will communicate with other NaI networks, from local up to satellites, by using the whole frequency spectrum from microwave frequencies to optics.


NETS – What are Nanoengineering and Nanotechnology?

A nanometer is one billionth of a meter, or three to five atoms in width. It would take approximately 40,000 nanometers lined up in a row to equal the width of a human hair. NanoEngineering concerns itself with manipulating processes that occur on the scale of 1-100 nanometers.
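The scale comparison above is simple unit arithmetic; a minimal check using the figures quoted in the text:

```python
# Unit-conversion check of the nanoscale figures quoted above.

NM_PER_METER = 1e9                 # a nanometer is one billionth of a meter
hair_width_nm = 40_000             # human-hair width quoted in the text, in nm

hair_width_m = hair_width_nm / NM_PER_METER
print(hair_width_m)                # 4e-05 m, i.e. 40 micrometers

# Nanoengineering's working range, 1-100 nm, expressed in meters:
low_m, high_m = 1 / NM_PER_METER, 100 / NM_PER_METER
print(low_m, high_m)               # 1e-09 m to 1e-07 m
```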

The general term, nanotechnology, is sometimes used to refer to common products that have improved properties due to being fortified with nanoscale materials. One example is nano-improved tooth-colored enamel, as used by dentists for fillings. The general use of the term nanotechnology then differs from the more specific sciences that fall under its heading.

NanoEngineering is an interdisciplinary science that builds biochemical structures smaller than a bacterium, which function like microscopic factories. This is possible by utilizing basic biochemical processes at the atomic or molecular level. In simple terms, molecules interact through natural processes, and NanoEngineering takes advantage of those processes by direct manipulation.

SOURCE: http://www.wisegeek.com/what-is-nanoengineering.htm


Undergraduate Degree Programs | NanoEngineering

The Department of NanoEngineering offers undergraduate programs leading to the B.S. degrees in Nanoengineering and Chemical Engineering. The Chemical Engineering and NanoEngineering undergraduate programs are accredited by the Engineering Accreditation Commission of ABET. The undergraduate degree programs focus on integrating the various sciences and engineering disciplines necessary for successful careers in the evolving nanotechnology industry. These two degree programs have very different requirements and are described in separate sections.

B.S. NanoEngineering

The NanoEngineering Undergraduate Program became effective Fall 2010. This major focuses on nanoscale science, engineering, and technology that have the potential to make valuable advances in different areas that include, to name a few, new materials, biology and medicine, energy conversion, sensors, and environmental remediation. The program includes affiliated faculty from the Department of NanoEngineering, Department of Mechanical and Aerospace Engineering, Department of Chemistry and Biochemistry, and the Department of Bioengineering. The NanoEngineering undergraduate program is tailored to provide breadth and flexibility by taking advantage of the strength of basic sciences and other engineering disciplines at UC San Diego. The intention is to graduate nanoengineers who are multidisciplinary and can work in a broad spectrum of industries.

B.S. Chemical Engineering

The Chemical Engineering undergraduate program is housed within the NanoEngineering Department. The program is made up of faculty from the Department of Mechanical and Aerospace Engineering, Department of Chemistry and Biochemistry, the Department of Bioengineering and the Department of NanoEngineering. The curricula at both the undergraduate and graduate levels are designed to support and foster chemical engineering as a profession that interfaces engineering and all aspects of basic sciences (physics, chemistry, and biology). As of Fall 2008, the Department of NanoEngineering has taken over the administration of the B.S. degree in Chemical Engineering.

Academic Advising

Upon admission to the major, students should consult the catalog or NanoEngineering website for their program of study, and their undergraduate/graduate advisor if they have questions. Because some course and/or curricular changes may be made every year, it is imperative that students consult with the department's student affairs advisors on an annual basis.

Students can meet with the academic advisors during walk-in hours, schedule an appointment, or send messages through the Virtual Advising Center (VAC).

Program Alterations/Exceptions to Requirements

Variations from or exceptions to any program or course requirements are possible only if the Undergraduate Affairs Committee approves a petition before the courses in question are taken.

Independent Study

Students may take NANO 199 or CENG 199, Independent Study for Undergraduates, under the guidance of a NANO or CENG faculty member. This course is taken as an elective on a P/NP basis. Under very restrictive conditions, however, it may be used to satisfy upper-division Technical Elective or Nanoengineering Elective course requirements for the major. Students interested in this alternative must have completed at least 90 units and earned a UCSD cumulative GPA of 3.0 or better. Eligible students must identify a faculty member with whom they wish to work and propose a two-quarter research or study topic. Please visit the Student Affairs office for more information.


About the NANO-ENGINEERING FLAGSHIP

Turning the NaI concept into reality necessitates an extraordinary and long-term effort. This requires the integration of nanoelectronics, nanophotonics, nanophononics, nanospintronics, topological effects, as well as the physics and chemistry of materials. It also requires operations across an extremely broad range of science and technology, including microwaves, millimeter waves, terahertz, infrared and optics, and will exploit various excitations, such as surface waves, spin waves, phonons, electrons, photons, plasmons, and their hybrids, for sensing, information processing and storage.

This high level of integration, which goes beyond individual functionalities, components and devices and requires cooperation across a range of disciplines, makes the Nano-Engineering Flagship unique in its approach. It will be crucial in tackling the six strategic challenges identified by the initiative.


We Aren’t Growing Enough Healthy Foods to Feed Everyone on Earth

Check Yourself

The agriculture industry needs to get its priorities straight.

According to a newly published study, the world food system is producing too many unhealthy foods and not enough healthy ones.

“We simply can’t all adopt a healthy diet under the current global agriculture system,” said study co-author Evan Fraser in a press release. “Results show that the global system currently overproduces grains, fats, and sugars, while production of fruits and vegetables and, to a smaller degree, protein is not sufficient to meet the nutritional needs of the current population.”

Serving Downsized

For their study, published Tuesday in the journal PLOS ONE, researchers from the University of Guelph compared global agricultural production with consumption recommendations from Harvard University’s Healthy Eating Plate guide. Their findings were stark: The agriculture industry’s overall output of healthy foods does not match humanity’s needs.

Instead of the recommended eight servings of grains per person, it produces 12. And while nutritionists recommend we each consume 15 servings of fruits and vegetables daily, the industry produces just five. The mismatch continues for oils and fats (three servings instead of one), protein (three servings instead of five), and sugar (four servings when we don’t need any).
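The serving figures reported above can be tabulated directly; a minimal sketch using only the numbers quoted in the article:

```python
# Produced vs. recommended daily servings per person, as reported in the
# study summarized above. Each entry is (servings produced, servings recommended).
servings = {
    "grains":            (12, 8),
    "fruits/vegetables": (5, 15),
    "oils/fats":         (3, 1),
    "protein":           (3, 5),
    "sugar":             (4, 0),
}

for food, (produced, recommended) in servings.items():
    gap = produced - recommended
    label = "surplus" if gap > 0 else "shortfall"
    print(f"{food:18s} {label} of {abs(gap)} serving(s)")
```

Laid out this way, the study's central claim is stark: the surpluses all sit in the less healthy categories, while the shortfalls fall on fruits, vegetables, and protein.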

Overly Full Plate

The researchers don’t just point out the problem, though — they also calculated what it would take to address the lack of healthy foods while also helping the environment.

“For a growing population, our calculations suggest that the only way to eat a nutritionally balanced diet, save land, and reduce greenhouse gas emission is to consume and produce more fruits and vegetables as well as transition to diets higher in plant-based protein,” said Fraser.

A number of companies dedicated to making plant-based proteins mainstream are already gaining traction. But unfortunately, it’s unlikely that the agriculture industry will decide to prioritize growing fruits and veggies over less healthy options as long as people prefer having the latter on their plates.

READ MORE: Not Enough Fruits, Vegetables Grown to Feed the Planet, U of G Study Reveals [University of Guelph]

More on food scarcity: To Feed a Hungry Planet, We’re All Going to Need to Eat Less Meat


Report Identifies China as the Source of Ozone-Destroying Emissions

Emissions Enigma

For years, a mystery puzzled environmental scientists. The world had banned the use of many ozone-depleting compounds in 2010. So why were global emission levels still so high?

The picture started to clear up in June. That’s when The New York Times published an investigation into the issue. China, the paper claimed, was to blame for these mystery emissions. Now it turns out the paper was probably right to point a finger.

Accident or Incident

In a paper published recently in the journal Geophysical Research Letters, an international team of researchers confirms that eastern China is the source of at least half of the 40,000 tonnes of carbon tetrachloride emissions currently entering the atmosphere each year.

They figured this out using a combination of ground-based and airborne atmospheric concentration data from near the Korean peninsula. They also relied on two models that simulated how the gases would move through the atmosphere.

Though they were able to narrow down the source to China, the researchers weren’t able to say exactly who’s breaking the ban and whether they even know about the damage they’re doing.

Pinpoint

“Our work shows the location of carbon tetrachloride emissions,” said co-author Matt Rigby in a press release. “However, we don’t yet know the processes or industries that are responsible. This is important because we don’t know if it is being produced intentionally or inadvertently.”

If we can pinpoint the source of these emissions, we can start working on stopping them and healing our ozone. And given that we’ve gone nearly a decade with minimal progress on that front, there’s really no time to waste.

READ MORE: Location of Large ‘Mystery’ Source of Banned Ozone Depleting Substance Uncovered [University of Bristol]

More on carbon emissions: China Has (Probably) Been Pumping a Banned Gas Into the Atmosphere


An AI Conference Refusing a Name Change Highlights a Tech Industry Problem

Name Game

There’s a prominent artificial intelligence conference that goes by the suggestive acronym NIPS, which stands for “Neural Information Processing Systems.”

After receiving complaints that the acronym was alienating to women, the conference’s leadership collected suggestions for a new name via an online poll, according to WIRED. But the conference announced Monday that it would be sticking with NIPS all the same.

Knock It Off

It’s convenient to imagine that this acronym just sort of emerged by coincidence, but let’s not indulge in that particular fantasy.

It’s more likely that tech geeks cackled maniacally when they came up with the acronym, and the refusal to do better even when people looking up the conference in good faith are bombarded with porn is a particularly telling failure of the AI research community.

Small Things Matter

This problem goes far beyond a silly name — women are severely underrepresented in technology research and even more so when it comes to artificial intelligence. And if human decency — comforting those who are regularly alienated by the powers that be — isn’t enough of a reason to challenge the sexist culture embedded in tech research, just think about what we miss out on.

True progress in artificial intelligence cannot happen without a broad range of diverse voices — voices that are silenced by “locker room talk” among an old boy’s club. Otherwise, our technological development will become just as stuck in place as our cultural development often seems to be.

READ MORE: AI RESEARCHERS FIGHT OVER FOUR LETTERS: NIPS [WIRED]

More on Silicon Valley sexism: The Tech Industry’s Gender Problem Isn’t Just Hurting Women


Scientists Are Hopeful AI Could Help Predict Earthquakes

Quake Rate

Earlier this year, I interviewed U.S. Geological Survey geologist Annemarie Baltay for a story about why it’s incredibly difficult to predict earthquakes.

“We don’t use that ‘p word’ — ‘predict’ — at all,” she told me. “Earthquakes are chaotic. We don’t know when or where they’ll occur.”

Neural Earthwork

That could finally be starting to change, according to a fascinating feature in The New York Times.

By feeding seismic data into a neural network — a type of artificial intelligence that learns to recognize patterns by scrutinizing examples — researchers say they can now predict moments after a quake strikes how far its aftershocks will travel.

And eventually, some believe, they’ll be able to listen to signals from fault lines and predict when an earthquake will strike in the first place.
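The pattern-recognition idea can be illustrated with a toy model. The sketch below trains a tiny one-hidden-layer neural network (numpy only) on synthetic two-feature data standing in for seismic measurements; the data, features, and architecture are all invented for illustration and are not the researchers' actual model:

```python
# Toy illustration of learning patterns from examples: a minimal neural
# network (one hidden layer) classifying synthetic "seismic feature" points.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: two clusters of 2-D features, label 1 = "aftershock"
X = np.vstack([rng.normal(1.0, 0.5, (200, 2)), rng.normal(-1.0, 0.5, (200, 2))])
y = np.concatenate([np.ones(200), np.zeros(200)])

# One hidden layer of 8 tanh units, sigmoid output, plain gradient descent
W1, b1 = rng.normal(0, 0.5, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, 8), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(500):
    h = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(h @ W2 + b2)            # predicted probability of label 1
    grad_out = (p - y) / len(y)         # cross-entropy gradient at the output
    grad_h = np.outer(grad_out, W2) * (1 - h**2)   # backpropagate through tanh
    W2 -= 0.5 * h.T @ grad_out; b2 -= 0.5 * grad_out.sum()
    W1 -= 0.5 * X.T @ grad_h;   b1 -= 0.5 * grad_h.sum(axis=0)

accuracy = ((p > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")  # well above chance on this easy data
```

Real aftershock models are trained on far richer inputs, such as grids of stress changes around a mainshock, but the principle is the same: adjust the network's weights until its outputs match the labeled examples.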

Future Vision

But like Baltay, some researchers aren't convinced we'll ever be able to predict earthquakes. University of Tokyo seismologist Robert Geller told the Times that until an algorithm actually predicts an upcoming quake, he'll remain skeptical.

“There are no shortcuts,” he said. “If you cannot predict the future, then your hypothesis is wrong.”

READ MORE: A.I. Is Helping Scientists Predict When and Where the Next Big Earthquake Will Be [The New York Times]

More on earthquake AI: A New AI Detected 17 Times More Earthquakes Than Traditional Methods


A Stem Cell Transplant Let a Wheelchair-Bound Man Dance Again

Stand Up Guy

For 10 years, Roy Palmer had no feeling in his lower extremities. Two days after receiving a stem cell transplant, he cried tears of joy because he could feel a cramp in his leg.

The technical term for the procedure the British man underwent is hematopoietic stem cell transplantation (HSCT). And while risky, it’s offering new hope to people like Palmer, who found himself wheelchair-bound after multiple sclerosis (MS) caused his immune system to attack his nerves’ protective coverings.

Biological Reboot

Ever hear the IT troubleshooting go-to of turning a system off and on again to fix it? The HSCT process is similar, but instead of a computer, doctors attempt to reboot a patient’s immune system.

To do this, they first remove stem cells from the patient’s body. Then the patient undergoes chemotherapy, which kills the rest of their immune system. After that, the doctors use the extracted stem cells to reboot the patient’s immune system.

It took just two days for the treatment to restore some of the feeling in Palmer’s legs. Eventually, he was able to walk on his own and even dance. He told the BBC in a recent interview that he now feels like he has a second chance at life.

“We went on holiday, not so long ago, to Turkey. I walked on the beach,” said Palmer. “Little things like that, people do not realize what it means to me.”

Risk / Reward

Still, HSCT isn’t some miracle cure for MS. Though it worked for Palmer, that’s not always the case, and HSCT can also cause infections and infertility. The National MS Society still considers HSCT to be an experimental treatment, and the Food and Drug Administration has yet to approve the therapy in the U.S.

However, MS affects more than 2.3 million people, and if a stem cell transplant can help even some of those folks the way it helped Palmer, it’s a therapy worth exploring.

READ MORE: Walking Again After Ten Years With MS [BBC]

More on HSCT: New Breakthrough Treatment Could “Reverse Disability” for MS Patients


AI Dreamed Up These Nightmare Fuel Halloween Masks

Nightmare Fuel

Someone programmed an AI to dream up Halloween masks, and the results are absolute nightmare fuel. Seriously, just look at some of these things.

“What’s so scary or unsettling about it is that it’s not so detailed that it shows you everything,” said Matt Reed, the creator of the masks, in an interview with New Scientist. “It leaves just enough open for your imagination to connect the dots.”

A selection of masks featured on Reed’s Twitter. Credit: Matt Reed/Twitter

Creative Horror

To create the masks, Reed — whose day job is as a technologist at a creative agency called redpepper — fed an open source AI tool 5,000 pictures of Halloween masks he sourced from Google Images. He then instructed the tool to generate its own masks.

The fun and spooky project is yet another sign that AI is coming into its own as a creative tool. Just yesterday, a portrait generated by a similar system fetched more than $400,000 at a prominent British auction house.

And Reed’s masks are evocative. Here at The Byte, if we looked through the peephole and saw one of these on a trick-or-treater, we might not open our door.

READ MORE: AI Designed These Halloween Masks and They Are Absolutely Terrifying [New Scientist]

More on AI-generated art: Generated Art Will Go on Sale Alongside Human-Made Works This Fall


Robot Security Guards Will Constantly Nag Spectators at the Tokyo Olympics

Over and Over

“The security robot is patrolling. Ding-ding. Ding-ding. The security robot is patrolling. Ding-ding. Ding-ding.”

That’s what Olympic attendees will hear ad nauseam when they step onto the platforms of Tokyo’s train stations in 2020. The source: Perseusbot, a robot security guard Japanese developers unveiled to the press on Thursday.

Observe and Report

According to reporting by Kyodo News, the purpose of the AI-powered Perseusbot is to lower the burden on the stations’ staff when visitors flood Tokyo during the 2020 Olympics.

The robot is roughly 5.5 feet tall and equipped with security cameras that allow it to note suspicious behaviors, such as signs of violence breaking out or unattended packages, as it autonomously patrols the area. It can then alert security staff to the issues by sending notifications directly to their smartphones.

Prior Preparation

Just like the athletes who will head to Tokyo in 2020, Perseusbot already has a training program in the works — it’ll patrol Tokyo’s Seibu Shinjuku Station from November 26 to 30. This dry run should give the bot’s developers a chance to work out any kinks before 2020.

If all goes as hoped, the bot will be ready to annoy attendees with its incessant chant before the Olympic torch is lit. And, you know, keep everyone safe, too.

READ MORE: Robot Station Security Guard Unveiled Ahead of 2020 Tokyo Olympics [Kyodo News]

More robot security guards: Robot Security Guards Are Just the Beginning


People Would Rather a Self-Driving Car Kill a Criminal Than a Dog

Snap Decisions

At first glance, a site that collects people’s opinions about whose life an autonomous car should favor doesn’t tell us anything we didn’t already know. But look closer, and you’ll catch a glimpse of humanity’s dark side.

The Moral Machine is an online survey designed by MIT researchers to gauge how the public would want an autonomous car to behave in a scenario in which someone has to die. It asks questions like: “If an autonomous car has to choose between killing a man or a woman, who should it kill? What if the woman is elderly but the man is young?”

Essentially, it’s a 21st century update on the Trolley Problem, an ethical thought experiment no doubt permanently etched into the mind of anyone who’s seen the second season of “The Good Place.”

Ethical Dilemma

The MIT team launched the Moral Machine in 2016, and more than two million people from 233 countries participated in the survey — quite a significant sample size.

On Wednesday, the researchers published the results of the experiment in the journal Nature, and they really aren’t all that surprising: Respondents value the life of a baby over all others, with a female child, male child, and pregnant woman following closely behind. Yawn.

It’s when you look at the other end of the spectrum — the characters survey respondents were least likely to “save” — that you’ll see something startling: Survey respondents would rather the autonomous car kill a human criminal than a dog.

Image Credit: MIT

Ugly Reflection

While the team designed the survey to help shape the future of autonomous vehicles, it’s hard not to focus on this troubling valuing of a dog’s life over that of any human, criminal or not. Does this tell us something important about how society views the criminal class? Reveal that we’re all monsters when hidden behind the internet’s cloak of anonymity? Confirm that we really like dogs?

The MIT team doesn’t address any of these questions in their paper, and really, we wouldn’t expect them to — it’s their job to report the survey results, not extrapolate some deeper meaning from them. But whether the Moral Machine informs the future of autonomous vehicles or not, it’s certainly held up a mirror to humanity’s values, and we do not like the reflection we see.

READ MORE: Driverless Cars Should Spare Young People Over Old in Unavoidable Accidents, Massive Survey Finds [Motherboard]

More on the Moral Machine: MIT’s “Moral Machine” Lets You Decide Who Lives & Dies in Self-Driving Car Crashes


Scientists Say New Material Could Hold up an Actual Space Elevator

Space Elevator

It takes a lot of energy to put stuff in space. That’s why one longtime futurist dream is a “space elevator” — a long cable strung between a geostationary satellite and the Earth that astronauts could use like a dumbwaiter to haul stuff up into orbit.

The problem is that such a system would require an extraordinarily light, strong cable. Now, researchers from Beijing’s Tsinghua University say they’ve developed a carbon nanotube fiber so sturdy and lightweight that it could be used to build an actual space elevator.

Going Up

The researchers published their paper in May, but it’s now garnering the attention of their peers. Some believe the Tsinghua team’s material really could lead to the creation of an elevator that would make it cheaper to move astronauts and materials into space.

“This is a breakthrough,” Wang Changqing, who studies space elevators at Northwestern Polytechnical University, told the South China Morning Post.

Huge If True

There are still countless daunting technical problems that need to be overcome before a space elevator would start to look plausible. Wang pointed out that it’d require tens of thousands of kilometers of the new material, for instance, as well as a shield to protect it from space debris.

But the research brings us one step closer to what could be a true game changer: a vastly less expensive way to move people and spacecraft out of Earth’s gravity.

READ MORE: China Has Strongest Fibre That Can Haul 160 Elephants – and a Space Elevator? [South China Morning Post]

More on space elevators: Why Space Elevators Could Be the Future of Space Travel


Zero Gravity Causes Worrisome Changes In Astronauts’ Brains

Danger, Will Robinson

As famous Canadian astronaut Chris Hadfield demonstrated with his extraterrestrial sob session, fluids behave strangely in space.

And while microgravity makes for a great viral video, it also has terrifying medical implications that we absolutely need to sort out before we send people into space for the months or years necessary for deep space exploration.

Specifically, research published Thursday in the New England Journal of Medicine demonstrated that our brains undergo lasting changes after we spend enough time in space. According to the study, cerebrospinal fluid — which normally cushions our brain and spinal cord — behaves differently in zero gravity, causing it to pool around and squish our brains.

Mysterious Symptoms

The brains of the Russian cosmonauts who were studied in the experiment mostly bounced back upon returning to Earth.

But even seven months later, some abnormalities remained. According to National Geographic, the researchers suspect that high pressure inside the cosmonauts’ skulls may have squeezed extra water into brain cells, which later drained out en masse.

Now What?

So far, scientists don’t know whether or not this brain shrinkage is related to any sort of cognitive or other neurological symptoms — it might just be a weird quirk of microgravity.

But along with other space hazards like deadly radiation and squished eyeballs, it’s clear that we have a plethora of medical questions to answer before we set out to explore the stars.

READ MORE: Cosmonaut brains show space travel causes lasting changes [National Geographic]

More on space medicine: Traveling to Mars Will Blast Astronauts With Deadly Cosmic Radiation, new Data Shows


Scientists May Have Put Microbes in a State of Quantum Entanglement

Hall of Mirrors

A few years ago, the journal Small published a study showing how photosynthetic bacteria could absorb and release photons as the light bounced across a minuscule gap between two mirrors.

Now, a retroactive look at the study’s data published in The Journal of Physics Communications suggests something more may have been going on. The bacteria may have been the first living organisms to operate in the realm of quantum physics, becoming entangled with the bouncing light at the quantum scale.

Cat’s Cradle

The experiment in question, as described by Scientific American, involved individual photons — the smallest quantifiable unit of light that can behave like a tiny particle but also a wave of energy within quantum physics — bouncing between two mirrors separated by a microscopic distance.

But a look at the energy levels in the experimental setup suggests that the bacteria may have become entangled, as some individual photons seem to have simultaneously interacted with and missed the bacterium.

Super Position

There’s reason to be skeptical of these results until someone actually recreates the experiment while looking for signs of quantum interactions. As with any look back at an existing study, scientists are restricted to the amount and quality of data that was already published. And, as Scientific American noted, the energy levels of the bacteria and the mirror setup should have been recorded individually — which they were not — in order to verify quantum entanglement.

But if this research holds up, it would be the first time a life form operated in the realm of quantum physics, something usually limited to subatomic particles. And even though the microbes are small, that’s a big deal.

READ MORE: “Schrödinger’s Bacterium” Could Be a Quantum Biology Milestone [Scientific American]

More on quantum physics: The World’s First Practical Quantum Computer May Be Just Five Years Away


There’s No Way China’s Artificial Moon Will Work, Says Expert

Good Luck

On October 10, a Chinese organization called the Tian Fu New Area Science Society revealed plans to replace the streetlights in the city of Chengdu with a satellite designed to reflect sunlight toward the Earth’s surface at night.

But in a new interview with Astronomy, an associate professor of aerospace engineering at the University of Texas at Austin named Ryan Russell argued that, based on what he’s read, the artificial moon plan would be impossible to implement.

Promised the Moon

Wu Chunfeng, the head of the Tian Fu New Area Science Society, told China Daily the artificial moon would orbit about 310 miles above Earth, delivering an expected brightness humans would perceive to be about one-fifth that of a typical streetlight.

The plan is to launch one artificial moon in 2020 and then three more in 2022 if the first works as hoped. Together, these satellites could illuminate an area of up to 4,000 square miles, Wu claims.

But Russell is far from convinced.

“Their claim for 1 [low-earth orbit satellite] at [300 miles] must be a typo or misinformed spokesperson,” he told Astronomy. “The article I read implied you could hover a satellite over a particular city, which of course is not possible.”

Overkill Overhead

To keep the satellite in place over Chengdu, it would need to be about 22,000 miles above the Earth’s surface, said Russell, and its reflective surface would need to be massive to reflect sunlight from that distance. At an altitude of just 300 miles, the satellite would quickly zip around the Earth, constantly illuminating new locations.
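Russell’s numbers hold up against basic orbital mechanics. Here’s a back-of-the-envelope sketch (using standard physical constants, not figures from the article) of both the altitude needed to “hover” over one city and how fast the proposed 310-mile orbit would actually circle the Earth:

```python
import math

GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # mean Earth radius, m
MILE = 1609.344       # meters per mile

# Kepler's third law: T^2 = 4*pi^2 * r^3 / GM.
# To hover over one spot, the orbit must match one sidereal day:
T_geo = 86164.1  # sidereal day, s
r_geo = (GM * T_geo**2 / (4 * math.pi**2)) ** (1 / 3)
alt_geo_miles = (r_geo - R_EARTH) / MILE
print(f"Geostationary altitude: {alt_geo_miles:,.0f} miles")  # ~22,000 miles

# Period of a ~310-mile orbit, which sweeps over new territory constantly:
r_leo = R_EARTH + 310 * MILE
T_leo_min = 2 * math.pi * math.sqrt(r_leo**3 / GM) / 60
print(f"Period at 310 miles: {T_leo_min:.0f} minutes")
```

In other words, a satellite at the altitude Wu described would lap the planet in about an hour and a half, while parking over Chengdu demands the far higher geostationary orbit Russell cites.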

Even if the city could put the artificial moon plan into action, though, Russell isn’t convinced it should.

“It’s a very complicated solution that affects everyone to a simple problem that affects a few,” he told Astronomy. “It’s light pollution on steroids.”

Maybe Chengdu shouldn’t give up on its streetlights just yet.

READ MORE: Why China’s Artificial Moon Probably Won’t Work [Astronomy]

More on the artificial moon: A Chinese City Plans to Replace Its Streetlights With an Artificial Moon


Clean Coal Startup Turns Human Waste Into Earth-Friendly Fuel

Gold Nuggets

A company called Ingelia says it’s figured out a way to turn human waste — the solid kind — into a combustible material it’s calling biochar. And if Ingelia’s claims are accurate, biochar can be burned for fuel just like coal, except with near-zero greenhouse gas emissions, according to Business Insider.

That’s because almost all of the pollutants and more harmful chemicals that would normally be given off while burning solid fuels are siphoned away into treatable liquid waste, leaving a dry, combustible rod of poop fuel.

“Clean Coal”

Ingelia, which is currently working to strike a deal with Spanish waste management facilities, hopes to make enough biochar to replace 220,000 tons of coal per year, corresponding to 500,000 tons of carbon dioxide emissions.
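Those two figures are roughly consistent with combustion chemistry: burning carbon turns 12 g of C into 44 g of CO2. A quick sanity check (the ~62% carbon content of coal is our assumption here, not a figure from the article):

```python
# Does 220,000 t of displaced coal correspond to ~500,000 t of CO2?
M_C, M_CO2 = 12.011, 44.009   # molar masses, g/mol
coal_t = 220_000              # tons of coal replaced per year
carbon_fraction = 0.62        # assumed carbon content of coal by mass

# Each ton of carbon burned yields (44.009 / 12.011) ~ 3.66 tons of CO2:
co2_t = coal_t * carbon_fraction * (M_CO2 / M_C)
print(f"{co2_t:,.0f} t CO2")  # ~500,000 t
```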

But that’s by 2022, at which point we’ll have even less time to reach the urgent clean energy goals of that doomsday United Nations report. In an ideal world, we would have moved away from coal years ago. At least this gives us a viable alternative as we transition to other, renewable forms of electricity.

So while we can, in part, poop our way to a better world, biochar — and other new sewage-based energy sources — will only be one of many new world-saving sources of clean energy.

READ MORE: This Spanish company found a way to produce a fuel that emits no CO2 — and it’s made of sewage [Business Insider]

More on poop: Edible Tech is Finally Useful, is Here to Help you Poop


