Cosmonaut: Hole Was Drilled From Inside Space Station

A Russian cosmonaut who helped investigate damage to the ISS says the hole was apparently drilled from the inside. Here's what it means.

Hole Story

Back in August, the crew on the International Space Station (ISS) repaired a small leak that was allowing air to escape the orbiting outpost — damage that Russia speculated could have been the result of intentional sabotage.

Now, the Associated Press reports that a Russian cosmonaut who helped investigate the damage during a spacewalk earlier this month has said that whoever drilled the hole appeared to do so from the interior of the station — though he’s far from convinced the drilling was an act of sabotage.

The controversy surrounding this hole shows that international tensions on Earth can reach outer space, which has long been a haven for international scientific cooperation. But the cosmonaut’s comments are a heartening example of how public figures in the research community can push back against the use of space as a proxy battleground for terrestrial conflicts.

Space Talk

On Monday, the AP reported, cosmonaut Sergei Prokopyev — who took part in the spacewalk to investigate the damage — said at a press conference after returning to Earth that the hole appeared to originate from inside the capsule.

But Prokopyev also “scoffed” at the idea that an ISS crew member drilled the hole, according to the AP.

“You shouldn’t think so badly of our crew,” he said. He also cautioned that “it’s up to the investigative organs to judge when that hole was made” — an apparent suggestion that the hole could have originated in a manufacturing facility on Earth.

Fresh Air

When the leak was first discovered, Russia’s space agency said the damage was probably caused by a micrometeorite. But in a startling about-face, the director of the agency suggested days later that the leak was the result of intentional sabotage — even specifically mentioning evidence of “several attempts at drilling” with a “wavering hand.”

Prokopyev, though, is behaving like a scientist: reporting the facts, but withholding judgment until after the collection of all data — a comforting approach during an era characterized by knee-jerk reactions.

READ MORE: Russia: Hole Drilled From Inside Int’l Space Station Capsule [The Associated Press]

More on the ISS hole: Someone Drilled a Hole in the ISS. Was It a Mistake or Sabotage?

The post Cosmonaut: Hole Was Drilled From Inside Space Station appeared first on Futurism.

Watch a Tiny Gecko-Like Robot Climb Upside Down

A team of Harvard researchers has created a tiny robot for Rolls-Royce that can climb in every direction, including upside down.

Roving Robot

Rolls-Royce really wants to get robots into its jet engines.

In July, the U.K. engineering firm unveiled the tiny, cockroach-like robot it designed to crawl into engines’ hard-to-reach areas. The goal: give mechanics a way to investigate issues without going through the complicated process of taking an engine apart.

Now, a team from Harvard has produced another microbot for Rolls-Royce — one more like a lizard than an insect — and it’s another fascinating demonstration of how bots can navigate hard-to-access areas on behalf of human engineers.

Sky’s the Limit

The Harvard team calls its robot HAMR-E, which stands for Harvard Ambulatory Micro-Robot with Electroadhesion. The little machine is able to climb on surfaces in any direction, even upside down — just like a gecko.

According to a paper published Thursday in the journal Science Robotics, the bot’s four flexible foot pads include copper electrodes that generate an electrostatic force as it walks across a conductive surface. Turn a pad’s electric field on, and it sticks to the surface. Turn it off, and the pad releases.

In order to stick to a surface upside down, the robot needs to have three pads turned on, so the researchers programmed it to use a special walking pattern in which no more than one pad is lifted at any point in time. Specially designed joints modeled after origami allow the bot’s ankles to rotate freely, too, which enables it to maintain its orientation while moving in any direction.
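The walking pattern described above amounts to a simple state machine: cycle through the pads, lifting one at a time while the other three stay energized. Here is a toy Python sketch of that logic; the pad names and step order are invented for illustration, and this is not the actual HAMR-E controller.

```python
# Toy sketch of an electroadhesive gait: with four foot pads, lift at
# most one pad per step so three stay energized (stuck) at all times.
PADS = ["front_left", "front_right", "rear_left", "rear_right"]

def gait_cycle():
    """Yield (lifted_pad, energized_pads) for one full walking cycle."""
    for pad in PADS:
        # Every pad except the swinging one keeps its electric field on.
        energized = [p for p in PADS if p != pad]
        yield pad, energized

for lifted, energized in gait_cycle():
    assert len(energized) == 3  # never fewer than three pads attached
```

Each iteration models one step: the swinging pad's field switches off so it releases, it moves forward, and it re-adheres before the next pad lifts.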

It’s also vaguely adorable.

The Upside

In testing, HAMR-E was able to take more than one hundred consecutive steps and even managed to navigate a curved section of a jet engine while upside down. Next, the team plans to explore giving the robot the ability to carry electronics or sensors it could use to inspect its environment. They also want to research ways to allow the microbot to walk on non-conductive surfaces.

“Now that these robots can explore in three dimensions instead of just moving back and forth on a flat surface, there’s a whole new world that they can move around in and engage with,” researcher Sébastien de Rivaz said in a news release. “They could one day enable non-invasive inspection of hard-to-reach areas of large machines, saving companies time and money and making those machines safer.”

READ MORE: Robots With Sticky Feet Can Climb up, Down, and All Around [Wyss Institute]

More on tiny robots: Rolls-Royce Is Building Cockroach-Like Robots to Fix Plane Engines

Watch “Acoustic Tweezers” Use Sound to Levitate Multiple Objects

Researchers have created the first acoustic tweezers capable of levitating and moving multiple tiny objects simultaneously.

Wingardium Leviosa

A pair of researchers didn’t need a Harry Potter spell to levitate multiple objects at once. They just needed the power of sound waves.

In a paper published in the Proceedings of the National Academy of Sciences on Monday, researchers Asier Marzo Pérez of the Public University of Navarra and Bruce Drinkwater from the University of Bristol describe how they were able to use sound to levitate multiple objects independently of one another — a never-before-accomplished feat the researchers say could lead to next-generation medical procedures.

Acoustic Tweezers

The duo started by building two arrays, each featuring 256 tiny speakers just one centimeter in diameter. Next, they placed those arrays on opposite walls with the speakers facing inward. Finally, they placed a sound-reflective surface on the ground between the arrays.

Using a computer, the researchers could cause the speakers to emit sound waves in the 40-kilohertz range. By manipulating the sound waves using a specially developed algorithm, the researchers found that they could exert a force on tiny objects placed on the reflected surface, causing them to levitate.
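The core focusing idea is straightforward to sketch: delay each speaker's phase so that all the waves arrive in step at the trap point. The snippet below shows that single-focus calculation in Python; the published algorithm goes much further, solving for many independently movable traps at once, and the four-speaker geometry here is a made-up miniature example.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in room-temperature air
FREQ = 40_000.0         # 40 kHz, as in the experiment
K = 2 * math.pi * FREQ / SPEED_OF_SOUND  # wavenumber, rad/m

def focus_phases(speakers, target):
    """Per-speaker phase delays so every wave arrives in phase at target.

    This is only the single-focus building block; the researchers'
    algorithm computes holographic fields for multiple traps at once.
    """
    return [(-K * math.dist(s, target)) % (2 * math.pi) for s in speakers]

# Hypothetical 4-speaker line array, focusing 5 cm above its midpoint:
speakers = [(i * 0.01, 0.0, 0.0) for i in range(4)]
target = (0.015, 0.0, 0.05)
phases = focus_phases(speakers, target)
```

With those delays applied, each wave's phase at the target is the same, so the pressure field peaks there and a small object can be held against gravity.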

In their experiments, they managed to manipulate Styrofoam balls with diameters up to three millimeters, causing the balls to appear to dance in midair.

Inner Workings

According to the researchers, we could one day use a version of their acoustic tweezers to conduct surgery-free medical procedures within the human body.

“The flexibility of ultrasonic waves allows us to operate at micrometric scales to move cells inside 3D printed structures or living tissue,” Marzo said in a news release.

To that end, the team plans to begin working on finding a way to get their technique to work in water. They’re hoping that will take less than a year, after which they’ll begin focusing on adapting the technology to work in biological tissue.

READ MORE: Holographic Acoustic Tweezers Able to Manipulate Multiple Objects in 3-D Space [Phys.org]

More on sound levitation: Sonic Tractor Beams: Scientists Made a Hologram Device That Costs Less Than $10

Nine Months After Killing a Pedestrian, Uber’s Self-Driving Cars Are Back on the Road

Uber will begin testing its autonomous vehicles in Pittsburgh on Thursday, marking the company's first tests since a fatal incident in March.

They’re Back

Uber is set to resume open-road tests for its self-driving vehicles in Pittsburgh on Thursday, according to The Associated Press.

The tests mark the first time that Uber will test one of its autonomous vehicles since one struck and killed a pedestrian in Arizona in March 2018 — and the latest sign that the self-driving car industry could be outgrowing its safety-be-damned early days as its leading developers try to mature and reach the mainstream.

Take Two

After the deadly collision in March, it turned out that the software in Uber’s Volvo XC90 had decided to continue on its deadly path even after detecting the woman ahead. It also emerged that the car’s assigned human operator had been watching TV on their phone. More recently, a whistleblower leaked emails showing that Uber’s executives were aware of and ignored safety concerns prior to the crash.

In spite of all that, Pennsylvania’s Transportation Department authorized the company to conduct more tests this week — though with significant restrictions. Uber also plans on resuming tests in Toronto and San Francisco.

Triple Check

In this new round of road tests, Uber’s cars will also be limited to roadways with a speed limit of 25 MPH, and tests will only run on weekdays with clear weather. When the Uber car killed the pedestrian in March, it was traveling 38 MPH in a 35 zone during a nighttime drive.

The vehicles will also activate Volvo’s emergency braking system and each will carry a second human operator, according to the AP.

“Before any vehicles are on public roads, they must pass a series of more than 70 scenarios without safety related failures on our test track,” Uber Advanced Technologies Group head Eric Meyhofer told the Washington Post. “We are confident we’ve met that bar.”

On one hand, these extra precautions are valuable reassurances that Uber wants to avoid another horrific crash. But the fact that these safety measures and restrictions are necessary to test autonomous vehicles is also a clear indicator that self-driving technology has a long way to go before it becomes truly viable.

READ MORE: Uber resumes autonomous vehicle tests in Pittsburgh [AP]

More on self-driving cars: A Waymo Rider Talked Publicly About the Service — Even Though He Wasn’t Supposed To

See Stunning New Images of a Mars Crater Full of Mile-Thick Ice

The ESA has released several photos of a massive ice-filled crater on Mars that are stunning enough to adorn a Christmas card.

Martian Wonderland

Christmas is always white at Mars’s Korolev crater.

In fact, every day is a snowy expanse of white — the 50-mile-wide impact crater is filled with ice year-round due to the cold air trapped within it.

See for yourself. On Thursday, the European Space Agency (ESA) released several icy, snowy images of the crater, and they’re stunning enough to adorn the cover of a Martian Christmas card.

Express Delivery

The ESA generated the images of Mars’s ice-filled crater using data from its Mars Express probe, which entered Mars’s orbit — appropriately enough — on Christmas Day in 2003.

To produce the images, the ESA had to combine data from five Mars Express orbits, with the most recent taking place on April 4, 2018. The High Resolution Stereo Camera (HRSC), an instrument capable of capturing images in three dimensions and in color, recorded the data.

Image Credit: ESA/DLR/FU Berlin

Water World

The ice at the center of the Martian crater is more than a mile thick. In total, Korolev contains 530 cubic miles of water ice. That’s more than four times the volume of Lake Erie.
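That comparison holds up to a quick back-of-the-envelope check, taking Lake Erie's volume as roughly 116 cubic miles:

```python
korolev_ice_mi3 = 530  # cubic miles of water ice in Korolev (ESA figure)
lake_erie_mi3 = 116    # approximate volume of Lake Erie, cubic miles
ratio = korolev_ice_mi3 / lake_erie_mi3
print(f"{ratio:.1f}x Lake Erie")  # prints "4.6x Lake Erie"
```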

The fact that we even know such a massive ice-filled crater exists on Mars is remarkable. After all, less than two decades ago, we weren’t even sure the Red Planet had any form of water at all. Now we have the technology in place to see a massive crater full of it — in stunning detail.

Image Credit: ESA/DLR/FU Berlin

READ MORE: Mars Express Beams Back Images of Ice-Filled Korolev Crater [The Guardian]

More on Mars water: Mars Used To Be Dotted With Life-Friendly Lakes

Astronomers Found Ancient Remains of a Big Bang “Fossil Cloud”

Astronomers at Swinburne University of Technology discovered a cloud of gas that was created around the time of the Big Bang.

Fossil Clouds

Astronomers at the W. M. Keck Observatory on Maunakea, Hawaii, made an incredible discovery earlier this month: a rare relic of the ancient universe, a “fossil cloud” of gas created around the time of the Big Bang.

The extraordinary discovery could provide new information about the origins of the universe and its early composition — including why certain stars and galaxies formed from gases and others didn’t.

Quasar Light

The group was led by Fred Robert and Michael Murphy, both at Swinburne University of Technology in Melbourne. Their results are about to be published in an astronomical journal called Monthly Notices of the Royal Astronomical Society, but they’ve also made them available on the preprint server arXiv.

“Everywhere we look, the gas in the universe is polluted by waste heavy elements from exploding stars,” said Robert in a statement on the Keck Observatory’s website. “But this particular cloud seems pristine, unpolluted by stars even 1.5 billion years after the Big Bang.”

The team used two W. M. Keck Observatory spectrometers — think of them as extremely sensitive and complex cameras — that have previously been used to discover exoplanets.

The astronomers spotted the fossil cloud because of an extremely bright quasar — a celestial object that emits large amounts of energy — behind it. The spectrometer readings found the cloud to be extremely low in density, leading them to believe it was a “true relic of the Big Bang,” according to Robert, since it remained almost entirely unaffected by the exploding-star debris that pollutes gas elsewhere in the universe.

Tip of the Iceberg

Many more such fossil clouds could be discovered thanks to this work. The first two fossil clouds were found by accident in 2011; this time, Robert and his team knew what to look for.

“It’s now possible to survey for these fossil relics of the Big Bang,” said Murphy. “That will tell us exactly how rare they are and help us understand how some gas formed stars and galaxies in the early universe, and why some didn’t.”

READ MORE: Astronomers find a ‘fossil cloud’ uncontaminated since the Big Bang [Astronomy]

More on the Big Bang: An Ancient Star Reveals Our Galaxy Is Older Than We Thought

AIs Whip Christmas Leftovers Into Loathsome New Recipes

An AI created by Stanford transformed typical Christmas leftovers into new recipes, which the BBC then cooked up and tasted so you don't have to.

SwedAIsh Chef

Name a problem and someone’s probably trying to use artificial intelligence to solve it.

AI may be a relatively new technology, but researchers across the globe are already leveraging it to address major problems, from climate change to cancer to poverty. So finding a use for your Christmas leftovers should be a piece of (fruit)cake for AI. Right?

Maybe not.

On Wednesday, the BBC published the results of an experiment in which it asked two teams of researchers to task their AIs with creating new recipes out of typical holiday fare. The brave souls at the BBC then cooked up and tasted some of the AIs’ creations — and it’s safe to say the chefs of the world needn’t worry about losing their jobs to automation quite yet.

Eat and Run (to the Bathroom)

Computer scientists at Stanford University created the first AI, an algorithm called Forage. Using a dataset of 60,000 recipes, they trained Forage to craft new recipes using whatever you tell it is in your fridge.
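The article doesn't detail Forage's internals, but the fridge-to-recipe idea can be illustrated with a toy matcher that ranks known recipes by how many of their ingredients you have on hand. The recipes, ingredients, and scoring below are all invented for illustration; this is a stand-in, not Stanford's algorithm.

```python
# Toy fridge-to-recipe matcher: score each known recipe by the fraction
# of its ingredients already in the fridge, then rank best-first.
RECIPES = {
    "turkey croquettes": {"turkey", "potato", "egg", "breadcrumbs"},
    "potato casserole": {"potato", "cheese", "cream"},
    "seafood casserole": {"shrimp", "cream", "rice"},
}

def rank_recipes(fridge):
    scored = sorted(
        ((len(needs & fridge) / len(needs), name)
         for name, needs in RECIPES.items()),
        reverse=True,  # highest ingredient coverage first
    )
    return [name for _, name in scored]

leftovers = {"turkey", "potato", "cream", "cheese"}
print(rank_recipes(leftovers)[0])  # prints "potato casserole"
```

A real recipe generator composes novel ingredient combinations from its training corpus rather than just retrieving matches, which is exactly where dubious creations like spicy seafood casserole come from.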

When given a list of typical holiday leftovers to work with, Forage came up with recipes for a Spanish potato casserole, turkey croquettes, and a spicy seafood casserole. The BBC reports that they were, respectively, “well-received,” “not too bad to eat,” and “about as close to a bout of food poisoning as we were prepared to get.”

After reading the recipe for that last one, in fact, they couldn’t even bring themselves to cook it.

Culinary Culture Shock

Researchers at the University of Illinois built the second AI, which they trained using nearly 40,000 recipes from 20 different countries.

That AI puts an international twist on existing recipes, so the BBC team asked it to create Indian versions of traditional holiday dishes. Michelin star-holding chef Keisuke Matsushima then cooked up and tasted a few of those recipes, which also received mixed reviews.

Matsushima thought a recipe for Indian roast turkey was “very easy to make and worked well,” but overall, he’d pass on the AI-crafted menu. “It is good for new experiences and experiments but I wouldn’t choose to eat these,” the chef told the BBC.

Feeling Lucky?

Taste is subjective, of course, so if you don’t want to spend New Year’s Day throwing away all the Christmas leftovers still cluttering up your fridge, maybe give one of these AI-produced recipes a shot.

Who knows? You might find you have a taste for Indian Christmas pudding or the combination of toffee and spaghetti — both recipes the AIs actually dreamt up.

READ MORE: A Christmas Menu Dreamed up by a Robot [BBC]

More on AI chefs: Your Next Burger Could Be Grilled by This Robot Chef

Stars Torn From Galaxies Let Astronomers “See” Dark Matter

Astronomers recently discovered a new technique that makes it much easier to detect and study how dark matter is distributed throughout the universe.

Star Light

For decades, astronomers have been trying to spot dark matter, an invisible substance that makes up about 85 percent of the matter in the universe and lends much of the gravitational force holding galaxies together. Figuring out how to find and study dark matter would give scientists a better understanding of how the universe formed and how it works today.

But in spite of its prevalence, much of what we know about dark matter comes from indirect measurements, collected by expensive equipment, that require serious mathematical gymnastics to interpret. Scientists use gravitational lensing, for example, to track dark matter by looking at the gravitational pull it exerts on nearby stars or planets.

Now, new research suggests there’s a simpler and cheaper way to measure dark matter. The new astronomical trick tracks the light given off by stars that have been ripped from their home galaxies — and NASA scientists say it gives them more accurate readings on the distribution of dark matter than ever before.

Star Bright

It turns out that scientists can get precise measurements of dark matter distribution by recording something called intracluster light, according to NASA research, published to the preprint server arXiv in October, that examined intracluster light in images taken by the Hubble Space Telescope.

Intracluster light is the light given off by stars that are flung away when galaxies merge or pass near one another. Rather than orbiting a galaxy’s core, these stars careen through the universe alone. But it turns out that the stars take some dark matter with them.

Take it Easy

According to the research, measurements of intracluster light correlate directly with the amount of dark matter present in the area. Studying these stars — which drift in isolation alongside their corresponding pockets of dark matter — is much simpler and more straightforward than analyzing all the dark matter clumped together in a galaxy’s core.

“We have found a way to ‘see’ dark matter,” University of New South Wales astronomer Mireia Montes, the lead author of the study, said in a press release. “We have found that very faint light in galaxy clusters, the intracluster light, maps how dark matter is distributed.”

READ MORE: Faint starlight in Hubble images reveals distribution of dark matter [SpaceTelescope.org]

More on dark matter: This Ancient Galaxy Was Loaded With Dark Matter

Neural Stem Cells Grown From Blood Could Revolutionize Medicine

New medical research shows how to reprogram a mature blood cell into a new kind of stem cell that can grow into neurons and multiply indefinitely.

Stem Fatale

Doctors could soon be able to grow new brain cells, which would help treat people with strokes or other neurological conditions, using just a small blood sample.

Scientists from Heidelberg University Hospital in Germany and the University of Innsbruck in Austria figured out how to reprogram mature human blood cells into neural stem cells. Scientists have reprogrammed stem cells before, but these new cells are the first that can continue to multiply and propagate in the lab, thanks to specific genetic tweaks, according to research published Thursday in the journal Cell Stem Cell.

Brain Trust

The scientists hope that their findings will improve regenerative medicine, a kind of treatment in which new cells help rejuvenate the body. Specifically, the neural stem cells could grow into either the neurons found in the central nervous system or the glial cells that support and protect those neurons. Both could help treat people who have survived strokes or been diagnosed with neurological diseases.

Previous research that reprogrammed cells into stem cells was never as useful in a medical context because those cells couldn’t continue to multiply, and sometimes grew tumors called teratomas, according to the research.

Cell Culture

While this new type of stem cell shows promise for future regenerative medicine, there’s still a ways to go.

Additionally, it’s important to note that the team’s findings show only a new method of programming stem cells, rather than any actual medical applications. But given recent years’ progress in stem cell research and utility, it’s safe to assume doctors will want to find a way to use these new cells soon.

READ MORE: An important step for regenerative medicine: Human blood cells can be directly reprogrammed into neural stem cells [German Cancer Research Center]

More on stem cells: Robots Can Grow Humanoid Mini-Organs From Stem Cells Faster And Better Than People

FCC Fines Startup for Launching Satellites Without Permission

After investigating Swarm Technologies for months, the FCC finally settled with the space startup for launching unauthorized satellites in January.

Consequences

It was only a matter of time until spacetech startup Swarm Technologies had to face the music.

In December 2017, the company asked the U.S. Federal Communications Commission (FCC) for permission to launch its four small “SpaceBEE” satellites into space. It never got that authorization — but launched them anyway on January 12, 2018 aboard the Indian rocket PSLV-C40.

And now we know what happens when a space startup goes rogue. Yesterday, Swarm was slapped with a substantial $900,000 fine as part of a settlement with the FCC for an “unauthorized satellite launch,” according to official filings.

The FCC was already hot on the company’s heels. It had reportedly been investigating Swarm since at least March.

The satellites were launched to serve as a global communications gateway for “internet of things” applications. “By providing connectivity to people and devices in remote regions, Swarm makes data accessible to everyone, everywhere on earth,” Swarm’s official website reads.

Going Rogue

Back in March, Quartz suggested that Swarm’s launch was “likely the first time a private organization has launched spacecraft without the explicit approval of any government.”

FCC commissioner Michael O’Rielly said the fine might not be enough to “deter future behavior,” but expressed hope that the “negative press coverage” would prevent others from doing the same, according to an official statement.

Regret

The unauthorized launch was a controversial move that even Swarm’s CEO came to regret. In a September interview with The Atlantic, CEO Sara Spangelo said that launching the satellites without approval was “a mistake.”

“I feel terrible for the confusion and the additional regulation that we may see come,” she said. “It’s a very difficult situation, and we’ve done everything we can to resolve the issues to move forward positively.”

Regardless of the investigation and subsequent settlement, the four SpaceBEE satellites are now in orbit, oblivious to the drama they’ve caused back on Earth.

READ MORE: A space startup was just hit with a $900,000 fine for illegally launching four tiny satellites [MIT Technology Review]

More on Swarm: The FCC May Penalize Companies That Launch Tiny Rogue Satellites, After All

IBM’s Fingernail Sensor Could Evaluate New Parkinson’s Medications

IBM has created a fingernail sensor that detects tiny nail deformations and then uses AI to analyze them for signs of health issues.

Grip Strength

Scientists have found that the strength of a person’s grip can reveal important details about their health. Research shows that links exist between grip strength and medication effectiveness, cognitive function, cardiovascular health, and even mortality.

Unfortunately, finding an effective way to monitor grip strength has thus far proven difficult — but that might change thanks to a super-sensitive wireless sensor you wear on your fingernail.

Nailed It

This fingernail sensor is the work of a team from IBM Research, which published a paper on the sensor’s creation in the journal Scientific Reports on Friday.

According to an IBM blog post, skin-worn sensors can capture data on grip strength, but because older people — the population most likely to benefit from this grip monitoring — often have fragile, brittle skin, those sensors can sometimes do more harm than good, causing health issues such as infection.

Fingernails, however, are far tougher, so the IBM team set out to see if they could generate useful data from a sensor worn on a person’s nail.

Bend but Don’t Break

IBM learned that fingernails bend and move in predictable ways depending on hand motion — the deformation caused by gripping, for example, is different from that caused by flexing the fingers — but the deformation is infinitesimal. In fact, the bend is about as pronounced as the diameter of a single red blood cell.

However, the IBM team could detect these tiny deformations in the nail using a type of sensor called a strain gauge, which they used to create a system that communicates readings from a fingernail to a smart watch. The watch, in turn, used machine learning to analyze the deformations.
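That gauge-to-watch pipeline can be caricatured in a few lines: window the raw strain signal, reduce it to a feature, and map the feature to a motion label. The 0.6 cutoff and the sample values below are invented placeholders; the real system feeds the deformation data to trained machine-learning models rather than a fixed threshold.

```python
# Caricature of the nail-sensor pipeline: a window of normalized
# strain-gauge samples is reduced to one feature and labeled.
# Threshold and signal values are illustrative assumptions only.
def classify_window(strain_samples, grip_threshold=0.6):
    """Label a window of strain readings as 'grip' or 'flex'."""
    mean_strain = sum(strain_samples) / len(strain_samples)
    return "grip" if mean_strain > grip_threshold else "flex"

print(classify_window([0.82, 0.79, 0.85]))  # prints "grip"
print(classify_window([0.12, 0.08, 0.15]))  # prints "flex"
```

In the actual study, the interesting signals are subtler than grip versus flex, such as the frequency and regularity of tremor-driven deformations, which is why learned models are needed instead of cutoffs.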

For their study, the researchers used the system to effectively rate symptoms of Parkinson’s disease, including tremors and slowed movement. Such information could be valuable to a doctor monitoring the progress of a patient testing a new treatment or medication, meaning this first-of-its-kind fingernail sensor could have an immediate real-world application.

READ MORE: Fingernail Sensors and AI Can Help Clinicians to Monitor Health and Disease Progression [IBM]

More on fingernail tech: A New App Can Detect a Blood Disorder From a Photo of Your Nails

Scientists Thought Mars Was Covered in Methane. Now It’s Missing

For the last 15 years, NASA and ESA scientists have found signs of methane in Mars' atmosphere. But a recent survey by an ESA orbiter didn't find any.

Whodunnit?

There are reports of a major cosmic heist.

New research shows that Mars, once thought to host a methane-rich atmosphere, now seems to be devoid of the organic molecule, Universe Today reports.

If you follow space news, you may recall the excitement in 2003 and 2004 when scientists at NASA and the European Space Agency (ESA) each announced they’d found methane on the Red Planet — and pointed out that it could be a sign of former or even current extraterrestrial life. The finding was again corroborated in 2014 by NASA’s Curiosity rover.

But now recent atmospheric surveys call into question the presence of the gas, according to research presented last week by American and European astronomers at the 2018 Fall meeting of the American Geophysical Union.

Just the Facts

Here’s what we know: The ESA’s ExoMars Trace Gas Orbiter (TGO) reached its orbit around Mars in 2016, when it began its mission to monitor and analyze the Red Planet’s atmosphere. Though scientists expected the TGO’s extremely sensitive sensors to pick up on atmospheric methane, it has yet to detect any at all.

The orbiter is capable of detecting trace amounts of methane — concentrations as low as 50 parts per trillion — at any elevation in Mars’ atmosphere. And yet it came back with bupkis.
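To put that sensitivity in perspective, earlier reported detections were on the parts-per-billion scale, hundreds of times above the TGO's floor. A quick comparison, where the 7 ppb figure is an illustrative stand-in for a spike-level reading rather than a number from this article:

```python
TGO_LIMIT_PPT = 50                  # detection floor, parts per trillion
earlier_reading_ppb = 7             # illustrative ppb-scale reading
earlier_reading_ppt = earlier_reading_ppb * 1000  # 1 ppb = 1000 ppt
print(earlier_reading_ppt / TGO_LIMIT_PPT)  # prints 140.0
```

In other words, methane at previously reported levels should have registered loudly on the orbiter's instruments, which is what makes the null result so puzzling.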

However, the scientists working on the project concede that these are preliminary results. There’s still plenty of noise to clean out of the data and more analysis to be done.

Developing Case

These preliminary findings are a classic example of how absence of evidence is not the same as evidence of absence. It is far too soon to throw out all of the evidence for Martian methane. The scientists behind the TGO research are confident that the orbiter’s sensors are working correctly and that NASA’s and the ESA’s previous methane findings couldn’t all be incorrect, according to Universe Today.

It’s possible, according to the scientists, that methane seeps out of Martian soil rather than originating in the atmosphere, which would make it easier to detect with a rover than an orbiter like the TGO.

Needless to say, this is far from an open-and-shut case, and we’re likely to learn more about Mars’ atmosphere as the TGO team finalizes its results.

READ MORE: Remember the Discovery of Methane in the Martian Atmosphere? Now Scientists Can’t Find any Evidence of it, at all [Universe Today]

More on the search for methane: Scientists Need to Solve These Two Mysteries to Find Life on Mars

Lung-Like Device Transforms Water Into a Clean Source of Fuel

A lung-like design created by researchers at Stanford University has the potential to increase the efficiency of hydrogen fuel cells.

Time and Place

Get water in your lungs, and you’re in for a very bad time.

But when water enters a new type of “lung” created by researchers at Stanford University, the result is hydrogen fuel — a clean source of energy that could one day power everything from our cars to our smartphones.

Though this isn’t the first device to produce hydrogen fuel, its unique design could be a first step toward an efficient method of generating it.

Looking to Nature

The Stanford team describes its device in a paper published on Thursday in the journal Joule.

When air enters a human lung, it passes through a thin membrane. This membrane extracts the oxygen from the air and sends it into the bloodstream. The unique structure of the organ makes this gas exchange highly efficient.

Combine hydrogen with oxygen in a fuel cell, and you get electricity — and unlike the burning of fossil fuels, the only byproduct is water. For that reason, researchers have been looking into hydrogen fuel for decades, but they simply haven’t found a way to produce it that is efficient enough to be worthwhile.

This is mainly because hydrogen doesn’t often exist on its own in nature — we need to isolate it, often by separating water into hydrogen and oxygen.
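That isolation step has a hard thermodynamic cost, which is one reason efficiency gains matter so much. As a back-of-the-envelope sketch using standard textbook constants (not figures from the Stanford paper), the minimum voltage needed to split water follows from the Gibbs free energy of the reaction:

```python
# Minimum (thermodynamic) voltage to electrolyze water, derived from the
# standard Gibbs free energy of formation of liquid water.
# These are textbook constants, not numbers from the Stanford study.

delta_g = 237_100   # J/mol, ΔG° for H2O(l) -> H2 + 1/2 O2 at 25 °C
n = 2               # electrons transferred per H2 molecule produced
faraday = 96_485    # C/mol of electrons

min_voltage = delta_g / (n * faraday)
print(f"{min_voltage:.2f} V")  # ~1.23 V; real cells need more due to losses
```

Any practical electrolyzer operates above this floor, so reducing losses (like the efficiency-sapping bubbles discussed below) is where the engineering gains live.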

Take a Breath

The Stanford researchers’ lung is essentially a pouch created out of a thick plastic film. Tiny water-repelling pores cover the exterior of the pouch, while gold and platinum nanoparticles line its interior.

By placing the pouch in water and applying voltage, the researchers were able to compel the device to create energy at an efficiency 32 percent higher than if they laid the film flat. They claim this is because the lung-like shape did a better job than other fuel cell designs of minimizing the bubbles that can form — and hurt efficiency — during the energy-generation process. “The geometry is important,” Stanford researcher Yi Cui told New Scientist.

The team will now focus on scaling up its design and finding a way to get it to tolerate higher temperatures — right now, it doesn’t work above 100 degrees Celsius (212 degrees Fahrenheit), which could be a problem for commercial applications.

READ MORE: Device That Works Like a Lung Makes Clean Fuel From Water [New Scientist]

More on hydrogen fuel: Cheap Hydrogen Fuel Was a Failed Promise. But Its Time May Have Arrived.

The post Lung-Like Device Transforms Water Into a Clean Source of Fuel appeared first on Futurism.

Scientists Used DNA to Play the Tiniest Game of Tic-Tac-Toe Ever

Researchers from Caltech demonstrate a new DNA origami technique by playing the world's smallest game of tic-tac-toe on a DNA board.

X’s and O’s

The world’s smallest game of tic-tac-toe could have a big impact on the future of nanotechnology.

Researchers from the California Institute of Technology (Caltech) have developed a new technique for shaping structures out of strands of DNA, a process known as DNA origami. Unlike previous techniques, which effectively locked a structure in place once created, the researchers could reshape an already-constructed DNA structure using this new technique.

To demonstrate the powerful new technique, they used it to play a game of tic-tac-toe using a DNA board. Because of course they did.

A’s, T’s, C’s, and G’s

DNA origami takes advantage of the natural tendency of DNA’s building blocks, known as bases, to pair up with one another. Each base — A, T, C, and G — pairs with a specific partner: A and T are a team, as are C and G.

Strands of DNA pair up based on these matches — a strand with a molecule sequence ATTGCGA, for example, pairs perfectly with a TAACGCT strand — and researchers can create shapes out of DNA simply by manipulating the sequences of the letters.

But DNA strands can pair up partially, too. The Caltech team provides a somewhat depressing analogy of this based on dating people most similar to you, but the basic gist of it is that a strand will “dump” a strand that’s a partial match (for example, one where five out of eight bases are pairs) for one that’s a better match (one where six out of eight bases are pairs). This replacement of one so-so match for a better match is called “strand displacement.”
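As a toy illustration of that displacement logic — emphatically not the Caltech team’s actual implementation — the “dump the so-so partner for a better one” rule can be sketched using the example strands from above:

```python
# Toy sketch of base pairing and strand displacement: a strand sticks
# with whichever partner pairs with it at more positions.

PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def complement(strand):
    """Return the base-by-base pairing partner of a strand."""
    return "".join(PAIR[b] for b in strand)

def match_score(strand, partner):
    """Count positions where the two strands form a valid base pair."""
    return sum(PAIR[a] == b for a, b in zip(strand, partner))

anchor = "ATTGCGA"
print(complement(anchor))  # TAACGCT, the perfect match from the text

soso   = "TAACGGG"  # partial match: 5 of 7 positions pair
better = "TAACGCT"  # perfect match: 7 of 7 positions pair

# Displacement: the better-matching strand takes over the anchor.
incumbent = soso
if match_score(anchor, better) > match_score(anchor, incumbent):
    incumbent = better
print(incumbent)  # TAACGCT — the so-so strand has been "dumped"
```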

Perfect Match

In a paper published on Tuesday in the journal Nature Communications, the Caltech researchers describe how they combined strand displacement with a DNA origami technology called self-assembling tiles. That involves the creation of square-shaped tiles of DNA designed to fit together like the pieces of a puzzle.

To demonstrate their new DNA origami technique, the researchers put nine blank DNA tiles designed to form a three-by-three grid into a test tube. Once the tiles assembled, the researchers took turns adding X or O tiles to the test tube.

The researchers designed those tiles to replace specific blank tiles in the grid using strand displacement — the new tile was simply a better match in the chosen spot than the blank tile.

Tiniest Winner

The game took six days, with player X emerging as the winner. But the research was about far more than just a game of tic-tac-toe.

The ability to reshape DNA structures could prove immensely useful in the future, as scientists are already exploring ways to use DNA origami to deliver drugs and sort molecular cargo.

“When you get a flat tire, you will likely just replace it instead of buying a new car. Such a manual repair is not possible for nanoscale machines,” researcher Grigory Tikhomirov said in a news release. “But with this tile displacement process we discovered, it becomes possible to replace and upgrade multiple parts of engineered nanoscale machines to make them more efficient and sophisticated.”

Good game, Caltech.

READ MORE: Researchers Make World’s Smallest Tic-Tac-Toe Game Board With DNA [Caltech]

More on DNA origami: Nanobots Made of DNA Can Now Carry and Sort Molecular Cargo

The post Scientists Used DNA to Play the Tiniest Game of Tic-Tac-Toe Ever appeared first on Futurism.

We Have No Idea How to Deal With Unexpected Drones at Airports

Local law enforcement and transport agencies will have to come up with a solution to ward off unwanted drones at airfields.

Won’t Be Home for Christmas

A number of mysterious drone sightings caused officials to completely shut down all of the runways at London’s Gatwick Airport this week — and the episode highlights the uncomfortable fact that we have no idea what to do when drones show up unannounced at airports.

The chaos started on Wednesday night, the BBC reports, when a commercial-size drone was spotted near Gatwick’s airfield; there were roughly 50 confirmed sightings in all. During a two-day game of cat and mouse — the drone kept reappearing as soon as Gatwick decided to reopen its runways — more than 120,000 passengers and some 760 flights were affected by the disruption, according to the BBC.

Clueless

Local law enforcement believes it was a “deliberate act.” At the time of reporting, no arrests had been made. The possible jail time for the perpetrator: five years, according to Transport Secretary Chris Grayling.

“This kind of incident is unprecedented anywhere in the world, the disruption of an airport in this way,” Grayling said, as quoted by Agence France-Presse. “We’re going to have to learn very quickly from what’s happened.”

It’s not even the first time drones have disrupted the operations of an airport, as the AFP points out. Drones caused chaos three times at Dubai International Airport in 2016, and an airline pilot narrowly avoided a collision with a drone in Paris in February 2016.

Eagles and Lasers

So what can we do about drones at airports?

When it comes to commercial drones near runways and other no-fly zones, security companies have invented “jamming” guns that cause drones to stop dead in their tracks, and even fall from the sky. For instance, Virginia-based security company DroneShield’s DroneGun can force a drone to land, without destroying it, from a mile away.

Other methods include high-energy lasers that can obliterate small drones from several miles away. Police in the Netherlands have even tried using trained eagles to take down drones.

The chaos at Gatwick makes one thing clear: transport agencies and local law enforcement will have to come up with some kind of permanent solution.

Because traveling by plane this close to Christmas is terrible enough as it is.

READ MORE: Gatwick chaos: Police ‘could shoot down drone’ [BBC]

More on anti-drone technology: These Are the Most Advanced Anti-Drone Technologies

The post We Have No Idea How to Deal With Unexpected Drones at Airports appeared first on Futurism.

Researchers Taught an AI About Ownership Rules and Social Norms

A group of researchers at Yale University successfully taught a robotic system about ownership relations and the social norms that surround those relations.

“No Baxter, That’s Mine!”

What is yours is not mine, and what is mine is not yours — unless we agree to share it.

This simple example of how we agree on ownership of the objects around us may come naturally to us, but it’s a notion that robotic systems have to be taught — in much the same way a child is taught what belongs to them and what doesn’t.

A group of researchers at Yale University set out to do just that. They successfully taught a robotic system about ownership relations and the social norms that determine those relations — a much-overlooked but critical aspect of human-machine interaction.

And the results are promising: in a series of simulations, the robot could infer which objects belonged to it and which didn’t — even with a “limited amount of training data,” according to a preprint of the paper.

Peaceful Coexistence

“With the growing prevalence of AI and robotics in our social lives, social competence is becoming a crucial component for intelligent systems that interact with humans,” reads the paper. “Yet, explicating and implementing these norms in a robot is a deceptively challenging problem.”

The robotic system the researchers devised used a machine learning algorithm to learn from a series of rule violations — “No Baxter, that’s mine!” reads a speech bubble under a picture presented in the paper — and could even infer ownership purely based on the object’s qualities.

“One of the challenges in this work is that some of the ways that we learn about ownership are through being told explicit rules (e.g., ‘don’t take my tools’) and others are learned through experience,” Brian Scassellati, one of the scientists working on the project, told TechXplore. “Combining these two types of learning may be easy for people, but is much more challenging for robots.”
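As a loose sketch of those two learning modes — explicit rules plus inference from experience — and not the Yale team’s actual system, ownership inference might be combined like this (all object names and features below are hypothetical):

```python
# Toy sketch: combine an explicitly stated ownership rule with ownership
# inferred from past corrections like "No Baxter, that's mine!".

# Explicit rule, told to the robot up front: all tools belong to the human.
explicit_rules = {"tool": "human"}

# Experience: objects the robot was corrected about, with their features.
corrections = [
    ({"kind": "mug", "color": "red"}, "human"),
    ({"kind": "mug", "color": "blue"}, "robot"),
    ({"kind": "block", "color": "red"}, "human"),
]

def infer_owner(obj):
    """Explicit rules win; otherwise vote by feature overlap with
    previously corrected objects."""
    if obj["kind"] in explicit_rules:
        return explicit_rules[obj["kind"]]
    def overlap(seen):
        return sum(seen.get(k) == v for k, v in obj.items())
    best_features, best_owner = max(corrections, key=lambda c: overlap(c[0]))
    return best_owner

print(infer_owner({"kind": "tool", "color": "green"}))  # human (explicit rule)
print(infer_owner({"kind": "mug", "color": "blue"}))    # robot (experience)
```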

“I’m Afraid I Can’t Do That”

And it will become increasingly important for us to find ways to make robots understand these rules going forward. “Understanding about object ownerships, permissions, and customs is one of these topics that hasn’t really received much attention but will be critical to the way that machines operate in our homes, schools, and offices,” said Scassellati.

READ MORE: A new robot capable of learning ownership relations and norms [TechXplore]

More on robots and social norms: Culturally Sensitive Robots Are Here to Care for the Elderly

The post Researchers Taught an AI About Ownership Rules and Social Norms appeared first on Futurism.

The U.S. Challenges Iran’s Attempt to Develop a Cryptocurrency

Congress just introduced a bill calling for sanctions against Iran's efforts to develop its own digital currency.

More Sanctions

U.S. regulators are taking a stronger stance against Iran’s plans to develop its own sovereign cryptocurrency.

Congress introduced a bill on December 17 called the “Blocking Iran Illicit Finance Act” that threatens Iran with more sanctions in response to activity that could see Iran develop its own national cryptocurrency. The U.S. is worried that the development of such a cryptocurrency could allow Iran to launder money, thereby dodging U.S. sanctions.

A corresponding bill sponsored by Ted Cruz (R-TX) calls for additional sanctions against anybody who could be aiding Iran to develop such a digital currency, including sanctions against any foreign person that might facilitate transactions for the “sale, supply, or transfer” of such a currency.

Existing Tension

And tensions between the U.S. and Iran are already high after the Trump administration decided to withdraw from the Joint Comprehensive Plan of Action (JCPOA) in May 2018. Under the accord, Iran agreed to limit its nuclear activity and allow international actors to inspect nuclear plants in the country. In return, economic sanctions would be lifted.

And Iran has been busy laying the groundwork for an official national digital currency. The country has been in the news recently, promising that blockchain technology could give Iran a much-needed economic boost, Coindesk reports.

Officials announced in July that Iran was planning to issue its own national cryptocurrency. Months later, the government officially recognized crypto mining as an industry.

The Petro

There’s a precedent for a country developing its own sovereign cryptocurrency — without the best of intentions. Venezuela’s highly controversial “petro” — an oil- and state-backed cryptocurrency pushed by president Nicolas Maduro — is meant to give the South American nation a way out of crippling hyperinflation. So far, the people of Venezuela have yet to see any tangible benefits from the petro.

In fact, in March the Trump administration expanded sanctions against Venezuela to include any trade in the petro.

Will Iran comply with Congress’ demands — if the bill passes — and cease any activity related to the issuing of a digital currency? Unlikely. But it could have a negative effect on already poor relations between the two countries.

READ MORE: US Lawmakers Seek Sanctions Against Iran’s Cryptocurrency Efforts [Coindesk]

More on Iran’s cryptocurrency: Iran May Move to Create Its Own National Cryptocurrency

The post The U.S. Challenges Iran’s Attempt to Develop a Cryptocurrency appeared first on Futurism.

New Horizons Will Fly Past the Most Distant Object We’ve Ever Visited

After whizzing past Pluto in July 2015, NASA's New Horizons spacecraft is fast approaching Ultima Thule — an odd space rock 6.6 billion kilometers away.

Old Space Rock

After whizzing past Pluto in July 2015, NASA’s New Horizons spacecraft is fast approaching its next destination: Ultima Thule — or 2014 MU69 — a space rock only 23 miles (37 kilometers) across. Expected date of arrival: New Year’s Day.

Ultima Thule is some 4.1 billion miles (6.6 billion kilometers) from Earth in the so-called Kuiper Belt — the region of the Solar System that lies beyond the eight known planets. It was discovered in 2014, when NASA scientists were looking for space rocks for New Horizons to visit.

It’s been there for more than 4.5 billion years, so having a closer look could reveal key insights into the earliest days of our solar system.

Tricky Fly-By

We don’t know much about the space rock yet, but we do know that getting a closer look will prove pretty difficult. “It’s a lot harder than Pluto,” said mission leader Alan Stern, as quoted by New Scientist. “Instead of being the size of the continental US, it’s the size of Boston. Being 100 times smaller means it’s 10,000 times fainter.”
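Stern’s numbers follow from simple geometry: the sunlight an object reflects scales with its cross-sectional area, so an object 100 times smaller in diameter is 100 squared times fainter. A quick sanity check:

```python
# Reflected brightness scales with cross-sectional area, which goes as
# the diameter squared — so shrinking the target 100x dims it 10,000x.

times_smaller = 100                 # Ultima Thule's size vs. Pluto's, roughly
times_fainter = times_smaller ** 2  # area ~ diameter^2

print(times_fainter)  # 10000 — "10,000 times fainter"
```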

And then there’s the fact that New Horizons’ batteries are slowly degrading — that means less power for keeping the lights on. And suffice it to say, things get pretty dark billions of kilometers away from the Sun.

Early Mysteries

Earlier this week, scientists were puzzled by Ultima Thule’s odd shape. Team members at NASA thought the space rock was elongated rather than spherical. But observations showed that its “light curve” (the periodic variation in the light it reflects) was constant, suggesting it was spherical after all.

“I call this Ultima’s first puzzle — why does it have such a tiny light curve that we can’t even detect it? I expect the detailed flyby images coming soon to give us many more mysteries, but I did not expect this, and so soon,” said New Horizons project lead Alan Stern in a recent statement.

The odd light signature could be caused by a “cloud of dust” or even “many tiny tumbling moons” surrounding it, the New Horizons team suggests in the statement.

It’s impossible to tell as of right now. But thanks to NASA’s New Horizons probe, we might soon get an answer.

READ MORE: NASA probe will hurtle past the most distant object we’ve ever visited [New Scientist]

More on New Horizons: New Horizons Space Probe Captures the Farthest Photo Ever Taken From Earth

The post New Horizons Will Fly Past the Most Distant Object We’ve Ever Visited appeared first on Futurism.

NASA Wants to Send Earthquake-Detecting Balloons to Venus

Scientists set off a massive explosion in the Nevada desert to test special earthquake-detecting balloons that could one day be deployed over Venus.

Big Boom

On December 19, researchers, with the help of the U.S. Department of Energy, set off a 50-ton chemical explosion hundreds of meters below the surface of the Mojave Desert, Science Magazine reports.

The resulting artificial earthquake allowed NASA scientists to test out special earthquake sensors hanging below helium-filled balloons that were floating hundreds of meters above the desert.

NASA is hoping that a similar system could one day take detailed seismic measurements above the surface of Venus. The goal is to eventually discover clues about the planet’s internal structure.

The idea of using balloons to collect data about our “evil twin” planet has been around since at least 2014. A futuristic — but no longer active — concept called the High Altitude Venus Operational Concept (HAVOC) suggested we could one day go on 30-day missions into Venus’s atmosphere on board special lighter-than-air vehicles.

Balloons Over Venus

So why balloons? Landing a rover on Venus to collect seismic data is practically impossible. Venus’s dense, mostly carbon dioxide atmosphere is so hot near the surface that it could melt lead — average surface temperatures are estimated at 864 degrees Fahrenheit (462 degrees Celsius). The upper atmosphere is a lot cooler and would be a far more suitable place from which to conduct scientific experiments.

But how could a balloon floating up to 31 miles (50 kilometers) above the surface of Venus detect seismic activity? Venus’s incredibly dense atmosphere results in surface pressure equivalent to being 3,000 feet below the ocean back on Earth. In such a dense environment, seismic waves can travel far more easily from the surface to the balloon.
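Those comparisons check out against standard reference values (the figures below are textbook constants, not numbers from the NASA test):

```python
# Sanity-checking the Venus comparisons with standard constants.

# Venus's surface pressure is roughly 92 Earth atmospheres.
venus_pressure = 92 * 101_325   # Pa
rho_seawater = 1025             # kg/m^3
g = 9.81                        # m/s^2

# Ocean depth where surface + hydrostatic pressure matches Venus:
depth_m = (venus_pressure - 101_325) / (rho_seawater * g)
print(round(depth_m))  # 917 meters, i.e. roughly 3,000 feet down

# And the surface temperature conversion quoted above:
print(round(462 * 9 / 5 + 32, 1))  # 863.6 °F, quoted as 864 °F
```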

Mysterious Sister Planet

But there’s a lot about Venus we don’t yet know — including whether there’s any seismic activity to begin with. The results from the explosion also might not translate to the harsh environment of Venus’s atmosphere.

But it’s a clever solution that could one day allow us to get to know Venus a whole lot better.

READ MORE: A desert explosion helps scientists plan earthquake-detecting balloons on Venus [Science]

More on Venus: Enough About Mars. Here’s How We Could Terraform Venus

The post NASA Wants to Send Earthquake-Detecting Balloons to Venus appeared first on Futurism.

This Algorithm Can Create 3D Animations From A Single Still Image

A researcher’s astonishing new software, called “Photo Wake-Up,” can create 3D animations from a single still image.

“Photo Wake-Up”

Chung-Yi Weng, a PhD student at the University of Washington, and his collaborators have created something truly astonishing.

Their software called “Photo Wake-Up” allows character animations to simply “walk out” of a static image frame — without leaving a hole in the picture behind them. The results were published in a recently submitted paper.

Weng’s method takes a single photo as input, identifies a 2D subject in it, and creates a 3D animated version of that subject. The animation can then “walk out, run, sit, or jump in 3D.”

And it could redefine the way we interact with photos. “We believe the method not only enables new ways for people to enjoy and interact with photos, but also suggests a pathway to reconstructing a virtual avatar from a single image,” Weng and his collaborators explain in the paper.

Like Magic

The effect, as seen in the video below, is amazing, albeit jarring. Basketball legend Stephen Curry can be seen jumping into action, jogging straight out of his photo frame. One of Picasso’s surrealist creations cuts itself out of its frame perfectly, leaving the painting behind it intact.

For the effect to work properly, all Weng’s software needs is a still frame showing a silhouette. It cuts out the 2D shape and warps it onto a matching 3D skeleton.

A Big Improvement

Researchers have tried to create a similar effect in the past, but the results have been a lot less impressive. Weng’s new approach adds an important new ability: it can identify different body parts like arms and legs, and warp each one individually in a way that matches the 2D cutout exactly.

The software even works in augmented reality, and could redefine the way we interact with two-dimensional pieces of art the next time we visit an art gallery.

READ MORE: Machine vision can create Harry Potter–style photos for muggles [MIT Technology Review]

More on 3D animation: Witness the Remarkable Evolution of Animation in Video Games

The post This Algorithm Can Create 3D Animations From A Single Still Image appeared first on Futurism.
