Sugar Pills Can Help Tiny Robots Navigate Your Body

A ROBOT ARMY. If you believe futurist Ray Kurzweil, by 2030 we’ll all have armies of microscopic robots flowing through our bodies, diagnosing diseases and delivering drug treatments where needed.

One big problem? We don’t have a good way to get drug-delivering bots where they need to go. However, researchers from the University of California San Diego (UCSD) think they’ve found a solution: sugar pills.

They published their study in the journal ACS Nano on July 30.

SUGAR, SUGAR. In a previous study, the UCSD team used micromotors — tiny self-propelling robots — coated with antibiotics to treat ulcers in lab mice.

The bots did what they were supposed to, but the mice’s gastric acid and intestinal fluids caused some of the tiny bots to release their drugs before reaching the ulcers. Additionally, some of the micromotors swallowed with a liquid got stuck in the mice’s throats.

To get around these issues, the UCSD researchers placed tens of thousands of micromotors into pills made of lactose and maltose, two sugars chosen because they’re nontoxic, easy to mold into tablets, and able to disintegrate when needed.

ON-DEMAND DRUG DELIVERY. When they tested the pills in lab mice, the researchers found that the micromotors encapsulated in sugar did a better job of delivering their drugs than ones delivered via liquid solutions or tablets created out of silica.

Now that we have a more effective way to deliver micromotors exactly where we want them, we can move one step closer to Kurzweil’s vision of robot armies patrolling our insides to keep us healthy.

READ MORE: A Pill for Delivering Biomedical Micromotors [EurekAlert]

More on micromotors: Kurzweil: By 2030, Nanobots Will Flow Throughout Our Bodies

The post Sugar Pills Can Help Tiny Robots Navigate Your Body appeared first on Futurism.


The US Has Tons of Spaceports But Not Enough Launches

TOO MANY PORTS. In all of 2017, 29 spacecraft launched from three launchpads in the U.S.

So why do we now have 10 active spaceports — sites for launching or receiving spacecraft — when seven of them are just taking up space?

That’s the question explored in a report published by WIRED on Wednesday.

The problem seems to be that the reality of space travel today simply doesn’t match past expectations.

“For many of us who’ve been following the commercial space industry for the past 10 to 15 years, I think it’s going slower than everybody had hoped,” aerospace and spaceport expert Brian Gulliver told WIRED. “We expected more launch vehicles to be operating now.”

AN OPTIMISTIC OUTLOOK. Of course, just because those expected launches aren’t happening yet doesn’t mean they won’t in the near future.

Several companies are working tirelessly to usher in the era of space tourism. When (or if) they’re successful, people across the nation could eventually flock to these spaceports to hitch a ride off-world.

Satellites are also getting smaller, cheaper, and more powerful, which means we could see an increase in launches in the near future — another potential use for the U.S.’s many spaceports.

IF YOU BUILD IT… The organizations (some public, some private) in the cities hosting currently dormant spaceports hope that having the facilities in place could draw aerospace companies to set up shop nearby. Then, when those companies are ready for liftoff, the spaceports will be able to fulfill their intended purpose.

And it costs some serious cash to be “hopeful” — one spaceport in New Mexico, for example, cost almost $200 million to build.

For each spaceport to host even one launch per month, we’d need to more than quadruple 2017’s number of launches. That’s not likely to happen overnight. So for now, the people invested in those spaceports will just have to hope theirs is one of the lucky ones to host the handful of launches currently available.
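
Checking that math with the article’s own numbers (a quick sketch; all figures come from the paragraphs above):

```python
launches_2017 = 29               # U.S. launches in 2017, per the article
spaceports = 10                  # active U.S. spaceports
monthly_goal = spaceports * 12   # one launch per spaceport per month

print(monthly_goal)                  # 120 launches per year
print(monthly_goal / launches_2017)  # ~4.1x -- "more than quadruple"
```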

READ MORE: America’s Spaceport Boom Is Outpacing the Need to Go to Space [WIRED]

More on space tourism: The Digest: Blue Origin’s Spaceflight Tickets Will Go on Sale in 2019


It’s Now Way Easier to Manipulate Individual Atoms in 3D

QUANTUM CONTROL. If we want quantum computers to become more sophisticated — which, yeah, we definitely do — we’ve got to be able to manipulate the quantum world. Programming a quantum computer means we need to be able to precisely arrange individual atoms, which isn’t all that easy. But now, researchers from the University of Paris-Saclay have figured out a better way to do just that.

They published a paper on the subject on Wednesday in the journal Nature.

THE NEXT DIMENSION. Right now, we can arrange atoms into various one- and two-dimensional arrays using optical tweezers, instruments that trap atoms in place using highly focused laser beams.

The French researchers figured out that they could reflect a laser off a spatial light modulator (a device that shapes a light beam by varying its phase or intensity from point to point) and then refocus it to produce three-dimensional arrays that could trap atoms.

Once they had their atomic arrays in place to act as a scaffolding, the researchers could put atoms in to populate them. “We initially randomly load and half-fill these traps with cold rubidium atoms,” the study’s lead author Daniel Barredo told Physics World.

The team then used optical tweezers to rearrange the atoms within the trap however they wanted, moving them from one spot in the array to another. Ultimately, the researchers were able to produce atomic arrays containing 72 atoms.

EXCITABLE ATOMS. Not only could the researchers arrange the atoms however they saw fit (how they saw fit was in the shape of the Eiffel Tower, because of course), they could even “entangle” pairs of atoms within the array. They just needed to zap each atom with a laser to excite one of its electrons (producing what’s called Rydberg excitation), and the atoms could then exchange spins.

“Arrays of neutral atoms excited to Rydberg states have recently emerged as a very promising platform for quantum simulations of large physical systems,” said Barredo. “Until now, however, the largest quantum simulations that could be performed using these systems involved around 50 qubits in 1D and 2D geometries.”

“Accessing the third dimension, as we have achieved in this work, not only allows these qubits to be scaled up (to 72 atoms in our case), it also opens the way to simulating more complex, real-world physical phenomena and materials.”

READ MORE: Atomic Eiffel Tower Looms Over Quantum Computing Landscape [Physics World]

More on quantum computing: Quantum Computing Is Going to Change the World. Here’s What This Means for You.


Social Media Giants Need Regulation From a Government That’s Unsure How To Help

Today, Facebook COO Sheryl Sandberg and Twitter CEO Jack Dorsey stood up in front of the Senate’s Intelligence Committee. They were prepared for a flogging — it was what they came for. Sandberg and Dorsey dutifully answered softball questions from senators, fielding them wearing their best poker faces.

It’s the fourth time social media executives have been grilled in front of Congress. The topic of conversation hasn’t really changed much since the first time, back in October of last year: Russia is interfering with democratic elections in the U.S. with the help of troll farms and billions of fake accounts on platforms including Facebook and Twitter.

But the stakes do seem higher now. The cards are on the table — everybody knows Russia is interfering, and that platforms like Facebook and WhatsApp (owned by Facebook) have fueled hate speech and drug wars, inciting mass violence and possibly genocide. We all know that fake news and conspiracy theorists like Alex Jones are flourishing on social media.

No one can deny that these companies are manipulating core functions of our society in ways that we still don’t fully understand.

Social media giants are starting to acknowledge that maybe they can’t handle all that on their own. “[…] we also recognize that we’re not going to catch everything alone,” a somber-faced Dorsey responded to a question from Sen. Susan Collins (R-ME), suggesting that Twitter needed help from U.S. intelligence partners.

That may mean that government regulation of these social media companies is a clearer possibility. “The era of the Wild West in social media is coming to an end,” said Sen. Mark Warner (D-VA) in his opening statement.

In their defense, Facebook and Twitter haven’t sat idly by, waiting for the government to impose rules on them. Facebook has started constructing a literal war room at its campus in Menlo Park, California, to be “laser focused” — as Samidh Chakrabarti, product manager of Civic Engagement at Facebook, put it in an NBC interview — on protecting its users from foreign interference. And in today’s Congressional hearing, Sandberg said fake news sources and Facebook’s human and algorithmic moderators were locked in an “arms race.”

It’s not a stretch to call what’s going on a “war.” In fact, Facebook is barely stemming the tide: the company, which has over two billion active users, has deleted billions of fake accounts since the beginning of this year. Twitter removed tens of millions of suspicious accounts in July in its attempt to fight back.

But hate speech is still rampant on Burmese Facebook accounts, and we still don’t know the full extent of Russian influence on the platform. “There’s a lot we don’t know yet,” CEO Mark Zuckerberg admitted on a conference call with reporters, as quoted by National Post.

Despite their efforts, Russian interference in the upcoming midterm elections is more than likely. U.S. officials, including U.S. intelligence and Homeland Security, have repeatedly warned of Russian meddling in the elections, but a meaningful long-term solution to thwart Russian efforts is still not on the table.

Realistically, though, regulation is still a ways off. Facebook and Twitter want help identifying suspicious accounts, but lawmakers have offered only a few tepid ideas so far. Warner, in fact, penned a white paper earlier this year in which he suggests the government force social platforms to identify fake accounts as “bots” to their users.

But that would require Facebook and Twitter to be able to identify these bots in the first place. That’s becoming increasingly difficult, thanks to sophisticated tech behind troll farms, and a never-ending stream of new fake accounts.

No one, not regulators or the companies themselves, has found the right solution yet. But increasingly, the government seems willing to hold them accountable. Because the prize for getting this right? Control of information — and, more importantly, disinformation. There’s time to get it right before the midterm elections. We’ve just gotta go with whatever solutions might work, and implement them as quickly as possible.


Genetically Engineered Mosquitoes Are About to Fly Free in Africa

An African nation has agreed to let researchers release genetically engineered mosquitoes into the wild as part of a plan to eradicate malaria.

AN AFRICAN FIRST. A tiny nation in Africa is ready to take a big biotech gamble.

On Wednesday, researchers announced that they’d gotten the go-ahead from the government of Burkina Faso to release genetically engineered mosquitoes into the wild.

The move is part of a long-term plan to eradicate a malaria-transmitting species. This will be the first release of any genetically modified animals into the African wild (genetically modified mosquitoes have been released in the wild elsewhere).

MALEVOLENT MALARIA. Malaria spreads when parasites infect mosquitoes, and those insects then transmit it to humans. According to the Centers for Disease Control and Prevention (CDC), 445,000 people died from malaria in 2016, and most of those were children in Africa. If we can get rid of that particular type of mosquito — or at least decrease its numbers — we should be able to reduce the number of malaria cases and deaths.

The mosquitoes the researchers plan to release in the Burkinabé village of Bana this month will be males genetically modified to be sterile. These mosquitoes aren’t intended to eradicate malaria — they’ll be there to help locals trust the scientific community.

THE ULTIMATE GOAL. If they’re able to meet that goal, researchers in Burkina Faso, plus the African nations Mali and Uganda, hope to later release “gene drive” mosquitoes into the wild. Unlike the mosquitoes approved by the Burkinabé government, those mosquitoes will carry population-thinning mutations that they’ll pass down to all of their offspring.

No one has ever released a gene drive-modified animal into the wild, and for good reason: it’s very risky. If the release of these genetically modified sterile mosquitoes produces some unforeseen consequences, we can simply wait for all the insects to die. Once we release gene drive mosquitoes into the wild, though, we don’t really have an “undo” button. But given the number of malaria-caused deaths, the risk just might be worth the potential reward.

READ MORE: For the First Time, Researchers Will Release Genetically Engineered Mosquitoes in Africa [STAT]

More on mosquitos: Scientists Could Eradicate Malaria by Altering Mosquitoes DNA, but Should They?


Japan Will Soon Conduct The First Test of Elevator Movement in Space

GOING (WAY) UP. For more than a century, the scientific community has kicked around the idea of space elevators. Specifics vary, but the basic design involves a vehicle of some sort that travels along a cable that spans from the Earth all the way up into space.

So far, space elevators have been little more than a sci-fi pipe dream. But now researchers from Japan’s Shizuoka University appear determined to bring the concept to reality — they’re ready to conduct the first test of elevator movement in space.

WHAT IS THIS? A SPACE ELEVATOR FOR ANTS? The Japanese team plans to test its space elevator design using a scaled-down version of the system — way scaled-down. The “elevator” in the test is a box only six centimeters (2.4 inches) long and three centimeters (1.2 inches) high and wide (at full scale, the box will be large enough to transport actual supplies to space).

On September 11, Japan’s space agency will launch an H-2B rocket carrying two mini satellites, one of which will contain the elevator stand-in. Once in space, motors will drive the box, like a celestial tightrope walker, along a cable strung between the two mini satellites, positioned 10 meters (10.9 yards) apart. Cameras on the satellites will monitor the box’s motorized motion.

CHEAPER (MAYBE). This movement is definitely easier to achieve in space than it will be between Earth and space (we don’t yet have a material strong enough to handle the forces involved).

If Japan (or anyone) can successfully create a space elevator, we could have a low-cost way to deliver supplies and people to space — some experts predict the devices could cut the cost of transporting goods from $22,000 per kilogram ($10,000 per pound) to just $220 per kilogram ($100 per pound).
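
Those per-pound and per-kilogram figures are consistent, as a quick conversion shows (a sketch using only the estimate quoted above):

```python
KG_PER_POUND = 0.45359237

rocket_per_lb = 10_000    # dollars per pound today, per the estimate
elevator_per_lb = 100     # projected dollars per pound via space elevator

rocket_per_kg = rocket_per_lb / KG_PER_POUND   # dollars per kilogram
print(round(rocket_per_kg))                    # ~22,046, i.e. the article's $22,000
print(rocket_per_lb / elevator_per_lb)         # 100.0 -- a hundredfold cut
```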

Of course, reusable rockets are already making space transportation much cheaper, so by the time space elevators are finally ready for action, we might not even need them. But the fact that Japan is at least taking the tech for a test drive is still pretty cool.

READ MORE: Japan Preps First Test for Its Awesome ‘Space Elevator’ [Digital Trends]

More on space elevators: Space Elevators: From Science Fiction to Reality


A Venus Flytrap-Like System Could Help Military Drones Avoid Detection

The U.S. Army is eyeing a system that traps military drones while they're on the move, which could help the devices avoid detection by enemies.

ON THE MOVE. When military drones stand still, they become a much easier target for enemies to detect and attack. So keeping the devices on the move is a top priority for the U.S. Army, which is now eyeing a mobile retrieval system that could ensure its drones are never in one place for long.

The system is Talon, and the Army got a look at a prototype of it at the Maneuver Center of Excellence (MCoE) at Fort Benning in Georgia on Thursday.

NATURE-INSPIRED. Talon is essentially a box with two open ends that sits on the back of a truck. A grid of rods on the floor and ceiling of the box moves up and down to trap a drone when it flies in through an open end, similar to the way a Venus flytrap traps its prey. Because the truck is moving during the process, the drone is never actually stationary. Soldiers could launch drones from Talon as well.

“The ability to launch and recover aircraft from a moving platform really helps our ground formations on a battlefield, where we know they have to move quickly,” Don Sando, deputy to the commanding general of the Army MCoE, said on Thursday, as reported by Military.com. “Anytime you stop, you become a target.”

TESTING TALON. For now, Talon is still a prototype, but its creators, military tech company Target Arm, could submit a proposal to the Army asking for funding to test the system with soldiers, according to Thomas Nelson, director of Robotics Requirements at Fort Benning.

If those tests go well, the system could see action on the battlefield where it could help drones go undetected. That would keep soldiers out of harm’s way by allowing them to retrieve and launch the devices without leaving the safety of their vehicles.

READ MORE: The Army’s Looking at Robots, and This Venus Flytrap-Like System Has Caught Its Eye [Army Times]

More on military drones: Shoot This Military Drone With a Laser, and It’ll Stay in the Air Indefinitely


We Can Now Easily 3D Print With Metal

Researchers from Yale University have found a way to make 3D printing metal far easier, potentially opening up the material to consumer use.

THE NEXT LEVEL. We can now 3D print pretty much anything from seemingly any material. Want to print a house out of concrete, a human cornea out of a bio-ink, or a pizza out of dough, sauce, and cheese? Piece of (3D-printed) cake.

The one material that’s proven a bit trickier, though, is metal. Industrial printers are up to the task, but we’ve yet to see a commercial 3D printer that can create objects out of metal with the same ease others print with plastic.

However, researchers from Yale University think they’ve found a way to make 3D printing metal objects easier than ever before. They published their study in the journal Materials Today on Tuesday.

HOW IT WORKS. The problem with printing objects with metals? Metals aren’t typically found in an easily “printable” state — it’s just not easy to get them soft enough to form into different shapes, something that’s pretty simple to do with plastic. To get around this issue, the researchers turned to bulk metallic glasses (BMGs).

A BMG is a type of metallic material that doesn’t exhibit the rigid atomic structure of most metal alloys. This means BMGs soften more easily than most other metals, but they remain strong, with high elastic limits, fracture toughness, and corrosion resistance — qualities typically associated with metals.

For their 3D printing research, the Yale team focused on a readily available BMG containing zirconium, titanium, copper, nickel, and beryllium. Under the same conditions used to 3D print with plastics, the researchers forced rods of their BMG through a feeding system heated to 460°C to soften the BMG.

Using this process, they found they could print a number of different shapes out of the high-strength metal material.

A NEW ERA. The team claims it has already successfully tested its 3D printing system with various other BMGs. The next step is “making the process more practical- and commercially-usable,” researcher Jan Schroers said in a press release.

The applications for a device that makes 3D printing metal simple are practically limitless. A mechanic could simply fabricate the part needed to fix your car while you wait at the shop; makers could print metallic parts for engineering projects from their garages.

Ultimately, this research could mark commercial 3D printing’s transition from the plastic era to the metal age.

READ MORE: At Last, a Simple 3D Printer for Metal [Elsevier]

More on 3D printing: Blueprints for 3D Printed Guns Will Stay (Sort of) Offline, Judge Rules


Artificial Intelligence Could Be The Key To Longevity [Affiliate]

What if we could generate novel molecules to target any disease, overnight, ready for clinical trials? Imagine leveraging machine learning to accomplish with 50 people what the pharmaceutical industry can barely do with an army of 5,000. It’s a multibillion-dollar opportunity that can help billions.

The worldwide pharmaceutical market, one of the slowest monolithic industries to adapt, surpassed $1.1 trillion in 2016. In 2018, the top 10 pharmaceutical companies alone are projected to generate over $355 billion in revenue. At the same time, it currently costs more than $2.5 billion (sometimes up to $12 billion) and takes over 10 years to bring a new drug to market. Nine out of 10 drugs entering Phase I clinical trials will never reach patients.

As the population ages, we don’t have time to rely on this slow, costly production rate. Some 12 percent of the world population will be 65 or older by 2030, and “diseases of aging” like Alzheimer’s will pose increasingly greater challenges to society.

But a world of pharmaceutical abundance is already emerging. As artificial intelligence converges with massive datasets in everything from gene expression to blood tests, novel drug discovery is about to get more than 100 times cheaper, faster, and more intelligently targeted.

One of the hottest startups I know in this area is Insilico Medicine. Leveraging AI in its end-to-end drug pipeline, Insilico Medicine is extending healthy longevity through drug discovery and aging research. Their comprehensive drug discovery engine uses millions of samples and multiple data types to discover signatures of disease and identify the most promising targets for billions of molecules. These molecules either already exist or can be generated de novo with the desired set of parameters.

Insilico’s CEO Dr. Alex Zhavoronkov recently joined me on an Abundance Digital webinar to discuss the future of longevity research. Insilico announced the completion of a strategic round of funding led by WuXi AppTec’s Corporate Venture Fund, with participation from Pavilion Capital, Juvenescence, and my venture fund BOLD Capital Partners. What they’re doing is extraordinary — and it’s an excellent lens through which to view converging exponential technologies.

Case Study: Leveraging AI for Drug Discovery

You’ve likely heard of deep neural nets — multilayered networks of artificial neurons, able to ‘learn’ from massive amounts of data and essentially program themselves. Build upon deep neural nets, and you get generative adversarial networks (GANs), the revolutionary technology that underpins Insilico’s drug discovery pipeline.

What are GANs? By pitting two deep neural nets against each other (“adversarial”), GANs enable the imagination and creation of entirely new things (“generative”). Developed in 2014 by Ian Goodfellow and his colleagues, GANs have been used to output almost photographically accurate pictures from textual descriptions (as seen below).

Source: Reed et al., 2016

Insilico and its researchers are the first in the world to use GANs to generate molecules. “The GAN technique is essentially an adversarial game between two deep neural networks,” as Alex explains. While one generates meaningful noise in response to input, the other evaluates the generator’s output. Both networks thereby learn to generate increasingly perfect output. In Insilico’s case, that output consists of perfected molecules. Generating novel molecular structures for diseases both with and without known targets, Insilico is pursuing drug discovery in aging, cancer, fibrosis, Parkinson’s Disease, Alzheimer’s Disease, ALS, diabetes, and many others.
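
To make that “adversarial game” concrete, here is a minimal sketch in plain NumPy — not Insilico’s system (their generator emits molecular structures, not numbers), and every name and hyperparameter here is illustrative. A one-parameter-pair generator tries to imitate samples from a Gaussian, while a tiny logistic discriminator tries to tell real from fake:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-np.clip(s, -60, 60)))

# "Real" data: samples from a 1-D Gaussian the generator must learn to imitate.
def real_batch(n):
    return rng.normal(3.0, 0.5, n)

# Generator g(z) = a*z + b; discriminator d(x) = sigmoid(w*x + c).
# Each "network" is just a pair of scalars -- the smallest possible adversaries.
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr, n = 0.05, 64

for _ in range(2000):
    z = rng.normal(size=n)
    x_real, x_fake = real_batch(n), a * z + b

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    g_sr = -(1 - sigmoid(w * x_real + c))   # dL_D/ds on real samples
    g_sf = sigmoid(w * x_fake + c)          # dL_D/ds on fake samples
    w -= lr * (np.mean(g_sr * x_real) + np.mean(g_sf * x_fake))
    c -= lr * (np.mean(g_sr) + np.mean(g_sf))

    # Generator step (non-saturating loss): push d(fake) toward 1.
    g_s = -(1 - sigmoid(w * (a * z + b) + c))
    a -= lr * np.mean(g_s * w * z)
    b -= lr * np.mean(g_s * w)

fake = a * rng.normal(size=1000) + b
print(np.mean(fake))  # the generated mean, which training pulls toward the real mean
```

Both players improve only because the other does — the same two-player gradient dance, scaled up to deep networks over molecular representations, is what drives Insilico’s pipeline.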

Once such a system rolls out, the implications would be profound. Alex’s ultimate goal is to develop a fully automated Health as a Service (HaaS) / Longevity as a Service (LaaS) engine. Once plugged into the services of companies from Alibaba to Alphabet, such an engine would enable personalized solutions for online users, helping them prevent diseases and maintain optimal health. But what does this tangibly look like?

Insilico’s End-to-End Pipeline

First, Insilico leverages AI — in the form of GANs — to identify targets (as seen in the first stage of their pipeline below). To do this, Insilico uses gene expression data from both healthy tissue samples and those affected by disease. (Targets are the cellular or molecular structures involved in a given pathology that drugs are intended to act on.)

Source: Insilico Medicine via Medium

Within this first pipeline stage, Insilico can identify targets, reconstruct entire disease pathways and understand the regulatory mechanisms that result in aging-related diseases. This alone enables breakthroughs for healthcare and medical research. But it doesn’t stop there. After understanding the underlying mechanisms and causality involved in aging, Insilico uses GANs to ‘imagine’ novel molecular structures. With reinforcement learning, Insilico’s system lets you generate a molecule with any of up to 20 different properties to hit a specified target. This means that we can now identify targets like never before, and then generate custom molecules de novo such that they hit those specific targets. At scale, this would also involve designing drugs with minimized side effects, a pursuit already being worked on by Insilico scientist Polina Mamoshina in collaboration with Oxford University’s Computational Cardiovascular Team.

One of Insilico’s newest initiatives — to complete the trifecta, if you will — involves predicting the outcomes of clinical trials. While still in the early stages of development, accurate clinical trial predictors would enable researchers to identify ideal preclinical candidates. That’s a 10X improvement from today’s state of affairs.

Currently, over 90 percent of molecules discovered through traditional techniques and tested in mice end up failing in human clinical trials. Accurate clinical trial predictors would result in an unprecedented cutting of cost, time, and overhead in drug testing.

The 6 D’s of Drug Discovery

The digitization and dematerialization of drug discovery have already happened. Thanks to converging breakthroughs in machine learning, drug discovery, and molecular biology, companies like Insilico can now do with 50 people what the pharmaceutical industry can barely do with an army of 5,000. As computing power improves, we’ll be able to bring novel therapies to market at lightning speed, at much lower cost, and with no requirement for massive infrastructure and investments. These therapies will demonetize and democratize as a result. Add to this anticipated breakthroughs in quantum computing, and we’ll soon witness an explosion in the number of molecules that can be tested (with much higher accuracy).

Finally, AI enables us to produce sophisticated, multitarget drugs. “Currently, the pharma model, in general, is very simplistic. You have one target and one disease — but usually, a disease is not one target, it is many targets,” Alex has explained.

Final Thoughts

Inefficient, slow-to-innovate, and risk-averse industries will all be disrupted in the years ahead. Big Pharma is an area worth watching right now, no matter your industry. Converging technologies will soon enable extraordinary strides in longevity and disease prevention, with companies like Insilico leading the charge. Fueled by massive datasets, skyrocketing computational power, quantum computing, blockchain-enabled patient access, cognitive surplus capabilities and remarkable innovations in AI, the future of human health and longevity is truly worth getting excited about.

Rejuvenational biotechnology will be commercially available sooner than you think. When I asked Alex for his own projection, he set the timeline at “maybe 20 years — that’s a reasonable horizon for tangible rejuvenational biotechnology.” Alex’s prediction may even be conservative. My friend Ray Kurzweil often discusses the concept of “longevity escape velocity” — the point at which, for every year that you’re alive, science is able to extend your life for more than a year.

Ray, who claims a record-breaking prediction accuracy of 86 percent, predicts “it’s likely just another 10 to 12 years before the general public will hit longevity escape velocity.” How might you use an extra 20 or more healthy years in your life? What impact would you be able to make?

Interested in exploring topics like this further? Join Peter Diamandis and Ray Kurzweil as they discuss Transforming Humanity in the 21st Century and Beyond, live-streamed from Ray’s Google office on September 14th. RSVP here.


Disclaimer: This guest post is in partnership with Abundance 360, and Futurism may get a small percentage of sales.


Today in Dystopia: This Roomba “Remembers” a Map of Your House

If users aren't comfortable with the fact that iRobot's Roomba i7+ stores a map of their floor plan in the cloud, they can opt for the e5.

THE ROBOT REMEMBERS. Don’t like vacuuming, but also don’t like the idea of criminals knowing where you keep your valuables? iRobot’s got you covered.

On Thursday, the robotics company unveiled the newest version of Roomba, the autonomous robotic vacuum cleaner that’s been helping keep homes dust-free since 2002.

Unlike previous versions, Roomba i7+ is able to empty itself. Pretty cool! Potentially less cool, depending on how much you value your privacy: the bot also stores images of your home in its internal memory and a map of your home’s layout in the cloud.

TO THE CLOUD. This isn’t the first Roomba capable of mapping the floor plan of its master’s home — iRobot gave the bots that ability back in 2015 to help them navigate homes and cover every foot of floorspace. But those versions deleted the information right away. This one stores it, though the company says protecting user data is its top priority.

Clearly, though, iRobot understands that consumers might still be wary. It’s offering another, lower-tech version of the device, which doesn’t include the room-mapping functionality.

THE LUDDITE VERSION. This lower-tech version is called the Roomba e5. Unfortunately, the trade-off for not having a map of your home floating around the cloud appears to be having to empty your own vacuum — the e5 doesn’t include the auto-empty feature of the Roomba i7+.

The fact that iRobot even bothered to create the e5 could be a sign that device manufacturers are aware of consumers’ concerns over privacy. We could be heading toward a future in which it’s common practice to release two versions of the same device — one has all the fancy bells and whistles, while the other puts user privacy at a premium.

READ MORE: iRobot’s Latest Roomba Remembers Your Home’s Layout and Empties Itself [The Verge]

More on Roomba: The Makers of Roomba Want to Share Maps of Users’ Homes With Smart Tech Companies

The post Today in Dystopia: This Roomba “Remembers” a Map of Your House appeared first on Futurism.


Google’s New Search Engine Could Solve a Major Problem in the Scientific Community

LET ME GOOGLE THAT. Any random question you have — why platypuses produce milk if they lay eggs, Winona Ryder’s birthday, the name of that actor in that movie you saw that one time in high school — you can (probably) find the answer on Google.

Scientists looking for datasets to use in their research haven’t had that luxury. The internet hosts millions of datasets, and the information they contain fuels the latest scientific research. But because these datasets exist across thousands of repositories, it’s not always easy for scientists to find the ones they need.

Now researchers have the same option as your wandering mind: they can just check Google.

On Wednesday, Google launched Dataset Search, a new search engine specifically geared toward collections of data. The company hopes the platform will help scientists to locate datasets quickly and painlessly.

READABLE DATA. According to a blog post, Google started the project by creating guidelines for dataset providers to ensure the search engine could understand the content of a dataset. For example, they suggested that providers should include particular information in the dataset’s metadata, such as how the provider collected the data and who can use it.

Data that follows these guidelines is easier for Google to index, which means the relevant datasets are more likely to show up in search queries.
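Google’s guidelines build on the schema.org Dataset vocabulary, which providers typically embed in a web page as JSON-LD. A sketch of what such metadata might look like — every name, value, and URL below is hypothetical, not taken from a real dataset:

```python
import json

# Hypothetical schema.org/Dataset metadata of the kind Google's guidelines
# ask providers to publish. All field values here are illustrative.
metadata = {
    "@context": "https://schema.org/",
    "@type": "Dataset",
    "name": "City Air Quality Readings 2017",
    "description": "Hourly PM2.5 readings from municipal sensors.",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "creator": {"@type": "Organization", "name": "Example City Open Data"},
    # The two details called out in the article: how the data was
    # collected, and who is allowed to use it (via the license above).
    "measurementTechnique": "Laser particle counter",
}

# Providers would embed this JSON in a <script type="application/ld+json"> tag.
print(json.dumps(metadata, indent=2))
```

The richer and more consistent this metadata is across providers, the better the search engine can match a query like “air quality 2017” to the right collection.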

JUST THE BEGINNING. This first version of Dataset Search includes datasets focused on the environmental and social sciences, as well as datasets from government websites and various news organizations focused on other topics.

According to Google, the number and type of datasets included in the search engine will continue to grow as more dataset providers adopt the company’s metadata guidelines. Eventually, accessing the millions of available datasets might be as easy as typing a few words into a search box.

READ MORE: Google Launches New Search Engine to Help Scientists Find the Datasets They Need [The Verge]

More on data: It’s Official. Fossil Fuels Are No Longer the World’s Most Valuable Resource.

The post Google’s New Search Engine Could Solve a Major Problem in the Scientific Community appeared first on Futurism.


The Most Far-Out Statements From Elon Musk’s Wide-Ranging Interview With Joe Rogan

“You probably can’t [smoke] because of stockholders, right?” Rogan asks.

“I mean it’s legal, right?” Musk replies.

Elon Musk spent yesterday evening in a rambling conversation with comedian and podcast host Joe Rogan, all while puff-puff-passing a blunt (Musk didn’t inhale) and sipping on glasses of whiskey (with far, far too much ice).

The two-and-a-half-hour exchange ranged pretty widely. They got into the vastness of the universe, Musk’s vintage car collection, electric planes, and whether we are really living in a simulation.

Oddly absent from Rogan’s interview with Elon Musk? The complete chaos his electric car company is in right now. Mere hours after Musk’s interview with Rogan went live, the LA Times reported that two of Tesla’s senior executives announced plans to quit the company.

But that didn’t come up. In fact, it only took 13 minutes for Rogan to steer the conversation from digging tunnels in LA to the dangers of artificial intelligence. Musk took the bait:

The thing that is going to be tricky here is that it’s going to be very tempting to use AI as a weapon. In fact, it will be used as a weapon. The danger is going to be more humans using it against each other most likely, I think.

So, how far away are humans from creating something truly sentient? Rogan asked. Musk had some insightful words for him — that is, if you can actually figure out what the heck he’s talking about.

You could argue that any group of people — like a company — is essentially a cybernetic collective of human people and machines. That’s what a company is. And then there are different levels of complexity in the way these companies are formed and then there is a collective AI in Google search, where we are also plugged in like nodes in a network, like leaves in a tree.

You hear that? Like a tree. Trees, man. 

We’re all feeding this network with our questions and answers. We’re all collectively programming the AI and Google. […] It feels like we are the biological boot-loader for AI effectively. We are building progressively greater intelligence. And the percentage that is not human is increasing, and eventually we will represent a very small percentage of intelligence.

But why would greater artificial intelligence matter if we’re all just living in one giant simulation?

The argument for the simulation I think is quite strong. Because if you assume any improvements at all over time — any improvement, one percent, .1 percent. Just extend the time frame, make it a thousand years, a million years — the universe is 13.8 billion years old. Civilization if you are very generous is maybe seven or eight thousand years old if you count it from the first writing. This is nothing. So if you assume any rate of improvement at all, then [virtual reality video] games will be indistinguishable from reality. Or civilization will end. Either one of those two things will occur. Or we are most likely living in a simulation.
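Strip away the metaphysics and Musk’s “any rate of improvement at all” step is ordinary compounding, which is easy to put numbers on. A quick sketch — the rate and horizon below are arbitrary stand-ins, not figures from the interview:

```python
# Compound a tiny annual improvement rate over a long horizon.
# Both numbers are arbitrary illustrations; the point is only that over
# cosmological timescales the exponent swamps the starting conditions.
rate = 0.001        # 0.1 percent improvement per year
years = 100_000     # a blink next to the universe's 13.8 billion years

growth = (1 + rate) ** years
print(f"After {years:,} years: roughly {growth:.2e}x better")
```

Even the stingiest steady improvement, given enough time, produces simulations (or anything else) unrecognizably better than today’s — which is the whole load-bearing beam of the argument.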

If Elon hasn’t lost you there yet, get ready for multiple simulations. But there’s a silver lining: outside of our simulations, it’s probably boring as hell.

There are many many simulations. These simulations, we might as well call them reality, or you can call them multiverse. They are running on a substrate. That substrate is probably boring.

When we create a simulation like a game or movie, it’s a distillation of what’s interesting about life. It takes a year to shoot an action movie. And that is distilled down into two to three hours. But the filming is boring. It’s pretty goofy, doesn’t look cool. But when you add the CGI and upgrade the editing, it’s amazing. So I think most likely, if we’re a simulation, it’s really boring outside the simulation. Because why would you make the simulation boring? [You’d] make the simulation way more interesting than base reality.

They did get into some topics that directly had to do with Musk’s various business ventures, including sustainable energy infrastructure, Tesla cars’ safety features, and Vertical Take Off and Landing (VTOL) vehicles.

That’s when Rogan retrieved the blunt from a tacky box he got on vacation in Mexico. Wide-eyed Elon took a single timid drag. For the first time in a long while, he looked at peace.

The post The Most Far-Out Statements From Elon Musk’s Wide-Ranging Interview With Joe Rogan appeared first on Futurism.


Two Companies Are Going to Manufacture Optical Fibers in Space

Two companies are ready to try manufacturing optical fibers aboard the International Space Station to improve their quality.

MADE IN SPACE. Ah, the 1970s — disco was in vogue, Moon landings NBD. Also, companies began throwing around the idea of manufacturing products in space. The reasoning was that the unique setting could perhaps cut costs in some way. At the time, though, the math just didn’t add up.

Now, two companies think they’ve found a product that actually might be cheaper to create off-world, and they’re both ready to give the process a shot.

THE PRODUCT. The companies are Made in Space and FOMS (Fiber Optic Manufacturing in Space). If you couldn’t guess from the name, they want to manufacture optical fibers — super-thin strands of glass that we gather into cables to transmit telecommunications data — in space or, more specifically, on board the ISS.

When we manufacture optical fibers on Earth, they can contain tiny imperfections that affect their ability to transmit data. Creating the fibers out of a glass called ZBLAN solves the problem, but it’s super fragile, which makes producing long strands difficult. When it cools, it also produces tiny crystals that impact the fibers’ performance.

Once you remove the stress of gravity from the equation, though, you no longer get those crystals and it’s far easier to produce longer strands.

READY FOR TESTING. Both Made in Space and FOMS already have systems they say are capable of producing ZBLAN fibers on the ISS. In fact, Made in Space has had a prototype of its device onboard the station since July. FOMS plans to send its own device up later this year.

On Earth, optical fibers cost about $1 million per kilogram to produce. If we can produce better fibers at or below that cost in space, it might not be long before you’re sending data over fibers that were created beyond Earth’s atmosphere.

READ MORE: Optical Fibre Made in Orbit Should Be Better Than the Terrestrial Sort [The Economist]

More on optical fibers: To Boost Internet Speeds, We’re Making Optical Fibers…in Space

The post Two Companies Are Going to Manufacture Optical Fibers in Space appeared first on Futurism.


Left Unchecked, Artificial Intelligence Can Become Prejudiced All On Its Own

Artificial intelligence agents in a simulated environment became biased against dissimilar agents without any outside influence.

If artificial intelligence were to run the world, it wouldn’t be so bad — it could objectively make decisions on exactly the kinds of things humans tend to screw up. But if we’re gonna hand over the reins to AI, it’s gotta be fair. And so far, it’s not. AI trained on datasets that were annotated or curated by people tend to learn the same racist, sexist, or otherwise bigoted biases of those people.

Slowly, programmers seem to be correcting for these biases. But even if we succeed at keeping our own prejudice out of our code, it seems that AI is now capable of developing it all on its own.

New research published Wednesday in Scientific Reports shows how a network of AI agents autonomously developed not only an in-group preference for other agents that were similar to themselves but also an active bias against those that were different. In fact, the scientists concluded that it takes very little cognitive ability at all to develop these biases, which means it could pop up in code all over the place.

The AI agents were set up so that they could donate virtual money to other agents, with the goal of getting as much as possible in return. Essentially, they had to choose how much to share and which other agents to share with. The researchers behind the experiment saw two trends emerge: AI agents were more likely to donate to others that had been labeled with similar traits (denoted by an arbitrary numeric value), and they developed an active prejudice against those that were different. The AI agents seemed to have learned that a donation to the in-group would result in more reciprocation — and that donating to others would actively lead to a loss for them.
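The setup described above is a tag-based donation game. The following toy version is a loose sketch of that idea, not the paper’s actual model; the population size, payoffs, and imitation rule are all simplifying assumptions:

```python
import random

# Loose sketch of a tag-based donation game: each agent has a "tag"
# (the arbitrary trait label) and a "tolerance" -- how different a
# partner's tag may be and still receive a donation. Low tolerance is
# the in-group bias the study observed emerging.
N, ROUNDS = 50, 200
BENEFIT, COST = 1.0, 0.3  # recipient's gain vs. donor's expense

random.seed(42)
agents = [{"tag": random.random(), "tol": random.random(), "payoff": 0.0}
          for _ in range(N)]

for _ in range(ROUNDS):
    # Donation phase: give only to partners whose tag is close enough.
    for a in agents:
        b = random.choice(agents)
        if b is not a and abs(a["tag"] - b["tag"]) <= a["tol"]:
            a["payoff"] -= COST
            b["payoff"] += BENEFIT
    # Imitation phase: copy the strategy of a better-off peer, which is
    # how successful (possibly prejudiced) strategies spread.
    for a in agents:
        b = random.choice(agents)
        if b["payoff"] > a["payoff"]:
            a["tag"], a["tol"] = b["tag"], b["tol"]

mean_tol = sum(a["tol"] for a in agents) / N
print(f"mean tolerance after {ROUNDS} rounds: {mean_tol:.2f}")
```

Watching how the mean tolerance drifts across runs gives a feel for the dynamic the researchers describe: strategies, not just data, can carry bias.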

This is a more abstract form of prejudice than what we see in the real world, where algorithms are used to specifically target and oppress black people. The AI agents didn’t develop a specific disdain for a specific minority group, as some people have. Instead, it’s a prejudice against a vague “other,” against anything different from themselves. And, yes, it’s a form of prejudice that’s limited to this particular simulation.

But the research does have big implications for real-world applications. If left unchecked, algorithms like this could lead to greater institutionalized racism and sexism — maybe, in some far-out scenario, even anti-human bias altogether — in spite of our best efforts to prevent it.

There are ways to repair this, according to the paper. For instance, the researchers found that they could reduce the levels of prejudice by forcing AI agents to engage in what they called global learning, or interacting with the different AI agents outside of their own bubble. And when populations of the AI agents had more traits in general, prejudice levels also dropped, simply because there was more built-in diversity. The researchers drew a parallel to exposing prejudiced people to other perspectives in real life.

From the paper:

These factors abstractly reflect the pluralism of a society, being influenced by issues beyond the individual, such as social policy, government, historical conflict, culture, religion and the media.

More broadly, this means that we absolutely cannot leave AI to its own devices and assume everything will work out. In her upcoming book, “Hello World: Being Human in the Age of Algorithms,” an excerpt of which was published in The Wall Street Journal, mathematician Hannah Fry urges us to be more critical, more skeptical of the algorithms that shape our lives. And as this study shows, it’s not bad advice.

No matter how much we would like to believe that AI systems are objective, impartial machines, we need to accept that there may always be glitches in the system. For now, that means we need to keep an eye on these algorithms and watch out for ourselves.

More on algorithmic bias: Microsoft Announces Tool To Catch Biased AI Because We Keep Making Biased AI

The post Left Unchecked, Artificial Intelligence Can Become Prejudiced All On Its Own appeared first on Futurism.


MIT’s New Robot Can Visually Understand Objects It’s Never Seen Before

A FIRST LOOK. Computer vision — a machine’s ability to “see” objects or images and understand something about them — can already help you unlock your fancy new iPhone just by looking at it. The thing is, it has to have seen that information before to know what it’s looking at.

Now, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a computer vision system that can identify objects it has never seen before. It’s also an important step toward getting robots to move and think the way humans do.

They published their research on Monday and plan to present it at the Conference on Robot Learning in Zürich, Switzerland, in October.

A UNIQUE PERSPECTIVE. The MIT team calls its system “Dense Object Nets” (DON). DON can “see” because it looks at objects as a collection of points that the robot processes to form a three-dimensional “visual roadmap.” That means scientists don’t have to sit there and tediously label the massive datasets that most computer vision systems require.

FAMILIAR BUT NOT IDENTICAL. When DON sees an object that looks like one it’s already familiar with, it can identify the various parts of that new object. For example, after researchers showed DON a shoe and taught it to pick the shoe up by its tongue, the system could then pick other shoes up by their tongues, even if it hadn’t seen the shoes previously or they were in different positions than the original.

“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” researcher Lucas Manuelli said in a press release. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”

TOMORROW’S ROBOTS. This advanced form of computer vision could do a lot that robots can’t do now. We could someday get a robot equipped with the system to sort items in a recycling center as they move down a conveyor belt without needing to train it on huge datasets. Or maybe we could even show one of these “self-supervised” robots an image of a tidy desk and have it organize our own.

Ultimately, this is another step forward on the path to machines that are as capable as humans. After that, we’ll just have to see how much more capable than us they can get.

READ MORE: Robot Can Pick up Any Object After Inspecting It [EurekAlert]

More on smarter bots: DARPA Is Funding Research Into AI That Can Explain What It’s “Thinking”

The post MIT’s New Robot Can Visually Understand Objects It’s Never Seen Before appeared first on Futurism.


Drone Racing: The Next Battleground For AI to Try to Dominate Humans

THE NEXT BATTLEFIELD. Artificial intelligence (AI) has been gradually kicking human butt in competitions. Now, it will have a new venue in which to (try to) assert its dominance over humanity: drone racing.

On Wednesday, VentureBeat reported that the Drone Racing League (DRL) plans to launch the Artificial Intelligence Robotic Racing (AIRR) Circuit, a series of competitions between autonomous drones and their human-piloted counterparts.

AI VS. AI. In 2019, autonomous drones — ones capable of navigating complex courses without any supervision from humans or preprogramming — will compete against one another in four races, DRL CEO Nicholas Horbaczewski told VentureBeat via email. The team responsible for the winning drone will receive a $1 million prize provided by Lockheed Martin.

HUMAN VS. MACHINE. At the end of the AIRR season, the winning autonomous drone will race against the 2019 DRL champion. If it wins, the team receives another $250,000, but Horbaczewski doesn’t think that’s likely. “In 2019, we’re fairly certain the human pilot will win. By 2020, it’s anyone’s race,” he told VentureBeat.

If a human pilot does come out on top, that $250,000 prize rolls over into the following season. With that much money on the line, it seems like only a matter of time before we add “drone racing” to the list of competitions in which we just can’t keep up with AI.

READ MORE: Drone Racing League Launches $2 Million Autonomous Drone Competition [VentureBeat]

More on AI vs. humans: AI Couldn’t Beat a Team of Professional Gamers at DOTA 2, but It Held Its Own

The post Drone Racing: The Next Battleground For AI to Try to Dominate Humans appeared first on Futurism.


We’re Almost Able to Cool Antimatter. Here’s Why That’s a Big Deal.

Researchers from CERN have just moved one step closer to cooling antimatter, which should make it easier for us to study the mysterious substance.

A COOL NEW LEAD. We’re still figuring out what the heck antimatter even is, but scientists are already getting ready to fiddle with it. Physicists at the European Organization for Nuclear Research (CERN) are one step closer to cooling antimatter using lasers, a milestone that could help us crack its many mysteries.

They published their research on Wednesday in the journal Nature.

BIG BANG, BIGGER MYSTERY. Antimatter is essentially the opposite of “normal” matter. While protons have a positive charge, their antimatter equivalents, antiprotons, have the same mass but a negative charge. Electrons and their corresponding antiparticles, positrons, likewise have the same mass — the only difference is their charge (negative for electrons, positive for positrons).

When a particle meets its antimatter equivalent, the two annihilate one another. In theory, the Big Bang should have produced an equal amount of matter and antimatter, in which case the two would have simply wiped each other out.

But that’s not what happened — the universe seems to have way more matter than antimatter. Researchers have no idea why that is, and because antimatter is very difficult to study, they haven’t had much recourse for figuring it out. And that’s why CERN researchers are trying to cool antimatter off, so they can get a better look.

MAGNETS AND LASERS. Using a tool called the Antihydrogen Laser Physics Apparatus (ALPHA), the researchers combined antiprotons with positrons to form antihydrogen atoms. Then, they magnetically trapped hundreds of these atoms in a vacuum and zapped them with laser pulses. This caused the antihydrogen atoms to undergo something called the Lyman-alpha transition.

“The Lyman-alpha transition is the most basic, important transition in regular hydrogen atoms, and to capture the same phenomenon in antihydrogen opens up a new era in antimatter science,” one of the researchers, Takamasa Momose, said in a university press release.

According to Momose, triggering this transition is a critical first step toward cooling antihydrogen. Researchers have long used lasers to cool other atoms to make them easier to study. If we can do the same for antimatter atoms, we’ll be better able to study them. Scientists could take more accurate measurements, and they might even be able to solve another long-unsettled mystery: figuring out how antimatter interacts with gravity.
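The quantity involved is textbook physics: for hydrogen — and, as far as measurements show, identically for antihydrogen — the Lyman-alpha wavelength follows from the Rydberg formula, 1/λ = R(1/1² − 1/2²). A quick estimate:

```python
# Lyman-alpha (n = 2 -> n = 1) wavelength and photon energy for hydrogen,
# via the simple Rydberg-formula estimate (ignoring reduced-mass and QED
# corrections, which shift the result only slightly).
RYDBERG = 1.0973731568e7   # Rydberg constant, m^-1
H_PLANCK = 6.62607015e-34  # Planck constant, J*s
C_LIGHT = 2.99792458e8     # speed of light, m/s
EV = 1.602176634e-19       # joules per electron-volt

inv_lambda = RYDBERG * (1 / 1**2 - 1 / 2**2)        # 1/wavelength, m^-1
wavelength_nm = 1e9 / inv_lambda                    # ~121.5 nm (far UV)
energy_ev = H_PLANCK * C_LIGHT * inv_lambda / EV    # ~10.2 eV per photon

print(f"lambda ~ {wavelength_nm:.1f} nm, E ~ {energy_ev:.2f} eV")
```

That deep-ultraviolet wavelength is part of why driving this transition with lasers — in hydrogen or antihydrogen — is hard, and why capturing it in antihydrogen is a milestone.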

For now, the team plans to continue working toward that goal of cooling antimatter. If they’re successful, they might be able to help unravel mysteries with answers critical to our understanding of the universe.

READ MORE: Canadian Laser Breakthrough Has Physicists Close to Cooling Down Antimatter [The University of British Columbia]

More on antihydrogen: Physicists Have Captured the First Spectral Fingerprints of Antimatter

The post We’re Almost Able to Cool Antimatter. Here’s Why That’s a Big Deal. appeared first on Futurism.
