Gelatin is one versatile protein. It puts the “gel” in Jell-O and it’s an essential ingredient in many other foods, cosmetics, and medications. The downside: We can’t produce gelatin without harming animals and the environment, since it historically comes from animal skin, bones, and tissue.
But a new startup could change that. It’s found a way to create the protein collagen — the building block of gelatin — in the lab without the use of any animal products.
Twinsies
In 2012, the founders of California-based startup Geltor decided they wanted to figure out a way to produce collagen without relying on the animal products traditionally used in the manufacturing process.
They landed on fermentation, the process of using microorganisms to break down a substance. Using fermentation, they were able to convert the common elements carbon, nitrogen, and oxygen into collagen. Not something like collagen, but collagen itself, which anyone can then use to produce gelatin.
“It’s nature-identical,” Geltor co-founder Alex Lorestani told CNBC. “It’s just a pure protein, identical to the one you would find wherever. We just make it a different way.”
Progressive Protein
According to Rosie Wardle, the program director of a foundation that advises Geltor investor Jeremy Coller, lab-grown collagen is a major improvement over the traditional kind.
“The current system of protein production is a broken system,” she told CNBC. “It’s resource-intensive and having a massive detrimental impact on the environment. Business as usual isn’t really an option for the protein-production sector going forward.”
Geltor already provides its lab-grown collagen to cosmetics and skincare companies, but if the product passes Food and Drug Administration vetting, you could find yourself snacking on vegan, eco-friendly Jell-O, jams, or juices in the not-so-distant future.
Nerve damage is no joke. Even something as simple as typing too much can lead to bolts of pain shooting up your arm. The damage from an accident or injury can be even more traumatic.
Doctors have known for a while that electrical stimulation can help nerves heal after injuries. Now that observation has led to a futuristic new treatment for nerve damage: implantable gadgets that wrap around damaged nerves, delivering pulses of healing electricity for a set period of time before harmlessly degrading into the body.
Material Existence
According to a study published Monday in the journal Nature Medicine, the dime-sized devices comprise biodegradable polymers and magnesium, and they’re controlled and powered by an external transmitter, much like an inductive charging mat for a cell phone.
The researchers, who hail from Northwestern University and Washington University School of Medicine, say they can adjust the devices’ lifespans from less than a minute to several months by changing the composition of the biodegradable materials.
“We engineer the devices to disappear,” said researcher John Rogers, who helped create the device, in a press release. “This notion of transient electronic devices has been a topic of deep interest in my group for nearly 10 years — a grand quest in materials science, in a sense.”
Rats, Foiled Again
To test whether the gadgets help nerves heal, the research team implanted them around the injured sciatic nerves of 25 adult rats. The more electrical stimulation they gave each rat during the following 10 weeks, the faster the animal recovered its nerve function. It’s not clear from this initial research whether there’s an ideal amount of electrical stimulation for nerve therapy, however.
The researchers didn’t share a timeline for human trials, but they said they’re hopeful that their technique could one day “complement or replace” existing treatments for nerve injuries in people, which frequently rely on prescription painkillers.
Until now, using AR to guide manufacturing has been mostly theoretical. But a wild new MIT Tech Review story describes how that might finally be changing: Microsoft’s HoloLens is speeding up construction of NASA’s next space capsule by providing Lockheed Martin engineers with a virtual overlay of the project and the next steps in its assembly.
Layer Cake
For each new project, Lockheed typically assembles a list of building instructions, which can sometimes run thousands of pages long. But to build the Orion capsule, which will carry a crew on the NASA Space Launch System rocket, the aerospace contractor decided to shake things up.
Using HoloLens headsets, which overlay information onto the physical world, the company’s technicians can now see a virtual model of the spacecraft over the actual work-in-progress. This overlay includes everything from the model numbers of parts to information about how to drill holes and secure fasteners.
“At the start of the day, I put on the device to get accustomed to what we will be doing in the morning,” technician Decker Jory, who is helping build the Orion capsule, told MIT Technology Review.
Time Capsule
One reason AR hasn’t lived up to its manufacturing potential is that the headsets are uncomfortable to wear for extended periods of time. At Lockheed, Jory and his team use the HoloLens system for 15-minute stretches, to familiarize themselves with the next steps, and then take them off.
But the Orion project provides a compelling vision of the future of manufacturing, and not just here on Earth, either — as Lockheed technologist Shelley Peterson teased in the MIT Tech article, AR could potentially help future astronauts perform maintenance in space as well.
In a blog post published last week, Tesla claimed that its Model 3 was the safest car ever tested by the National Highway Traffic Safety Administration (NHTSA).
Seemingly in response, the NHTSA just released a very, oh, let’s call it a measured statement. The statement, which doesn’t mention Tesla by name, clarifies that there are no bonus points above a five-star rating. If you reach the top bracket, you’re good.
The statement reads:
Results from these three crash tests and the rollover resistance assessments are weighted and combined into an overall safety rating. A 5-star rating is the highest safety rating a vehicle can achieve. NHTSA does not distinguish safety performance beyond that rating, thus there is no ‘safest’ vehicle among those vehicles achieving 5-star ratings.
Bragging Rights
That seems to contradict Tesla’s post, which used NHTSA testing data to claim that the Model 3 showed a lower probability of accident-related injuries than any other car model out there. The government apparently felt the need to step in and shoot down the hype.
Look, y’all already got five stars, okay? Let’s cool it.
READ MORE: National Highway Traffic Safety Administration issues statement about New Car Assessment Program’s highest rating [NHTSA.gov]
It’s the stuff of nightmares: a stuffed animal with a beating heart.
After spotting viral posts on social media about users strapping fitness trackers to rolls of toilet paper, Chinese website Abacus tried it with a Xiaomi wristband. The surprise result: The fitness tracker detected a heartbeat. The toilet paper’s “heart rate” ranged from 59 to 88 beats per minute, according to Abacus. A banana and a coffee mug showed similar heartbeats.
So what the hell is going on? These wonky results are a quirk of how the devices determine your heart rate in the first place. The fitness tracker shines a green light at your wrist; because blood is red, it absorbs that green light. With each beat, more blood passes under the sensor and absorbs more light, so the tracker counts those pulses to estimate your heart rate.
But that tech — also called photoplethysmography — is easily confused by things that reflect light. That’s not a big concern when the device sits flush against your skin, but it leads to weird results from inanimate objects.
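For the curious, the estimation step is simple enough to sketch in a few lines of Python. This is a toy illustration of the general idea, not any manufacturer’s actual firmware; the sampling rate and peak threshold are made-up values.

```python
# Toy photoplethysmography (PPG) pulse counter. A sketch of the general idea,
# not Xiaomi's (or anyone else's) actual algorithm.
import numpy as np

def estimate_bpm(samples, sample_rate_hz=25):
    """Estimate heart rate from a stream of reflected-light readings."""
    signal = np.asarray(samples, dtype=float)
    signal = signal - signal.mean()              # strip the constant (DC) component
    threshold = 0.5 * signal.std()               # crude peak threshold
    # Count upward crossings of the threshold as "beats."
    beats = np.sum((signal[:-1] < threshold) & (signal[1:] >= threshold))
    minutes = len(signal) / sample_rate_hz / 60.0
    return beats / minutes

# Any roughly periodic reflection, whether from a wrist or a roll of toilet paper,
# yields a plausible-looking number, which is exactly the quirk described above.
```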
Heartthrob
That doesn’t spell the end of the wrist-based fitness tracker — just because the trick works on a banana doesn’t mean it’s wrong about your pulse.
The industry might also move past photoplethysmography. The Apple Watch Series 4 managed to cram in an EKG scanner, which measures electrical signals instead of light — a godsend for people with chronic heart conditions.
Besides, the less sophisticated fitness trackers out there are mostly meant to do one thing: get you off the damn couch. Do you really want your toilet paper to burn more calories than you do?
We need the internet more than ever these days. Who among us doesn’t get a shiver up the spine at the thought of losing access to their navigation app – just as they’re speeding down the highway in an unfamiliar city? Or missing a crucial deadline because the WiFi that was promised on the train turns out to be nothing more than a phantom selection in a dropdown menu?
While the world grows more reliant on having on-demand internet at every possible moment, our existing internet infrastructure hasn’t kept pace with that dependency.
Bottlenecks, lags, and dead spots have become one of the most common and annoying nuisances in our everyday lives. Shouting “The internet’s down!” in a busy workplace is the modern-day equivalent of yelling fire in a crowded movie theater. And it’s happening more than ever. If the era of dial-up internet was Web1.0, the last decade-and-change of broadband and 4G was Web2.0—its awkward, yet promising, adolescence.
In the world of Web3.0, the time has come for how we access the internet to finally grow up. So far, though, no one has taken up the challenge. The good news: Magic is a new distributed project attempting to change that.
You Don’t Know It Yet, But Web3.0 Has Arrived
Practically everywhere you go, there’s data in the air—whether it’s coming from cellular networks, private WiFi, or other sources like 5G and LPWAN. The problem is that it’s hard to tap into all that information when you need it. That’s because, up until now, the internet has existed as a vast system of walled gardens in the form of thousands of private networks that are all kept under lock and key, usually by limiting password access to trusted users.
It’s a model that works…sort of. But there are a host of problems with this system: it’s inconvenient, it leads to tons of wasted bandwidth, and it’s not nearly as secure—or as private—as we’d like to think it is. You’re not using the WiFi at the local coffee shop to do anything even remotely private, right?
How many times have you found yourself on your phone, in a cellular dead-zone, unable to see if that important e-mail just arrived because you don’t know the password for the nearest WiFi network? It’s a problem so common that we take it for granted.
Now imagine a different system—one in which all that internet was made available in a cooperative way, allowing you to tap into it—or even to share it—seamlessly and without hassles, no matter where you are. Imagine being able to jump painlessly from one connection source to a better one as it becomes available, without having to waste your time scrolling through access points trying to find the network that will actually let you in.
Magic is here to take care of that. “Right now, using the internet is like traveling internationally in the early 1900s,” says Benjamin Forgan, founder and CEO of Hologram, Magic’s parent company. “Your data departs via horse, or car, or train, or boat and stops in numerous ports of call. You show your ID at each stop, and sometimes you have to travel through some pretty dangerous spots to get to your destination. Magic is like a teleport for data, by contrast. Your data travels safely from point to point in the fastest, most economical way possible.”
Choose Magic
Magic is a service that creates a network of networks, allowing anyone—from big service providers to everyday people with WiFi routers—to grant connectivity to anyone else who needs it. Access is negotiated automatically based on demand and quality of service, with everything facilitated through a simple client app. In the Magic ecosystem, the line between a provider and a user will be erased: you can give out internet when you have it, and use it when you need it.
If all that sounds scary, it’s not. Privacy and encryption are baked into every Magic connection, ensuring that devices stay in their own lane on your network when you’re a provider, and that the networks you connect to as a user can’t go snooping around in your stuff.
And people won’t just be giving out their internet out of the kindness of their own hearts. Instead, they’ll get compensated for access based on the service and speed they provide, while consumers pay seamlessly for only the service they use and need. It’s a system completely different from the pay-for-play services out there today—rather than having to get out your credit card every time you want to join a network, everything’s negotiated behind the scenes.
Gone will be the days of frantically scrolling through networks trying to find one that’s available, or sweating bullets as you try to connect to an access point that’s being finicky—Magic will take care of all that for you automatically. And all this increased connectivity won’t just facilitate seamless internet connectivity when you’re on the move: it also means that all that wasted bandwidth that’s floating around in the ether suddenly becomes available, making the internet faster and more reliable for everyone.
“When Magic becomes the predominant means of connecting to the internet, the most noteworthy thing will be how not noteworthy it will seem,” says Forgan. “Everyone and everything will simply have connectivity by default. You won’t even have to think about it. You’ll never waste a day of your life waiting for someone to install a cable connection in your new apartment. You’ll just be connected; the internet will just work.”
In other words: the internet is all grown up and ready for the big time. To learn more, head to magic.co.
The preceding communication has been paid for by Magic. This communication is for informational purposes only and does not constitute an offer or solicitation to sell shares or securities in Magic or any related or associated company. None of the information presented herein is intended to form the basis for any investment decision, and no specific recommendations are intended. This communication does not constitute investment advice or solicitation for investment. Futurism expressly disclaims any and all responsibility for any direct or consequential loss or damage of any kind whatsoever arising directly or indirectly from: (i) reliance on any information contained herein, (ii) any error, omission or inaccuracy in any such information or (iii) any action resulting from such information. This post does not reflect the views or the endorsement of the Futurism.com editorial staff.
If the digital age had a slogan, it would be: do it your way.
Computers have always been customizable. In the early era of personal computers, DIYers were ordering motherboards, CPUs, and other parts online, putting together their own personal computers. Gamers, engineers, and computer enthusiasts often built DIY computers because they demanded more performance from a specific component than the average user or wanted to upgrade certain components to fit their needs. The practice continues today — even the versatile Raspberry Pi computer has inspired an ever-growing community of enthusiasts to build their own projects, from remote-controlled cars and simple robots to online games.
The future advantages of AI and robotics can only be ours if we act as creators, not just consumers, of this new technology, no matter whether we have an engineering degree or not. The vehicles and robots that move us through space shouldn’t be an exception.
I believe that there is a true “human” cost to the way we travel today. According to The Boston Consulting Group (BCG), Americans spend a total of 30 billion hours per year driving, sitting in traffic, or looking for a parking space. Clearly, there must be more valuable ways to spend our time. But more importantly, we feel that cost most in terms of safety. The National Safety Council estimates that more than 40,000 people died in motor vehicle crashes in the US in 2017.
It’s not a secret that autonomous vehicles could do better. Human drivers crash at a rate of 4.2 accidents per million miles (PMM), while the current autonomous vehicle crash rate is 3.2 PMM. If the safety of autonomous vehicles continues to improve and the rate of autonomous vehicle crashes can drop to negligible levels, a huge number of those 40,000 lives could be saved every year — and that’s in the U.S. alone.
Realistically, we are about five to 10 years away from a safe, convenient form of autonomous travel. The technology simply isn’t mature enough yet. “Today’s systems aren’t robust substitutes for human drivers,” the nonprofit Insurance Institute for Highway Safety (IIHS) determined in its assessment of five autonomous vehicles.
But there’s a bigger picture here. Thinking of autonomous vehicles only in terms of cars on the road points to a failure of imagination — citizens from all walks of life can have a role in building machines that truly serve their needs. A mobile grocery store that drives itself could deliver meals to underserved communities that might not have had access previously. IKEA’s R&D division has even revealed several concepts that reimagine workspaces, cafes, and healthcare.
Tech leaders have commented that building autonomous vehicles is extremely difficult. Tim Cook described the challenge of building autonomous vehicles as “the mother of all” artificial intelligence projects; Elon Musk recently tweeted that it’s “extremely difficult to achieve a general solution for self-driving that works well everywhere.”
But I disagree.
I think we should apply a Lego-like approach to building autonomous vehicles and robots.
That way, developers can use the basic modules as building blocks and create the autonomous vehicles and robots that perfectly fit their own needs — whether that’s a vehicle that’s cheaper, more efficient, safer, or faster, one that serves a specific purpose, or one that simply fulfills a creative vision. This plug-and-play concept would encourage people to have fun building their own autonomous machines, but it can be a lot more. If the process of building an AV were simplified into modularized key components, even individuals with only a basic understanding of engineering could easily integrate those five or six elements to build their own autonomous vehicle.
A car or robot, like a computer, is the sum of its parts. You do not need to physically build a computer chip to design your own PC. The same could be true for autonomous vehicles. The sensor components, a customizable chassis based on your specific needs for the vehicle, and labor should be all that is required. I propose that modular autonomous vehicles can be broken down into six key components:
A highly accurate GPS system
A computer vision module for localization
The same computer vision module for active perception, or detecting pedestrians and cars in the environment
Radar and sonar systems for passive perception, or triggering the brake when objects are close to the vehicle
A planning and control module for real-time planning
A chassis module to execute the motion plans
A modular approach to building one’s own autonomous vehicles or robots makes the process much simpler — developers would only need to understand and follow the specifications. Early adopters with a basic background in engineering could create an autonomous vehicle that is personalized and solves a specific need. An autonomous vending machine for college campuses, a farm-to-table delivery vehicle for produce, or a library on wheels — all are possibilities.
And this versatility can provide practical value beyond the highway-ready AV. A farmer, for instance, can use the resources and knowledge already available in textbooks and manuals to build their own autonomous machine to spray or irrigate the land.
But for this modular approach to be truly widespread, we first need to design a unified interface for different components so that we can plug in the different sensor components without redeveloping the whole system. For instance, to track the robot’s location in real time, developers can use an accurate GPS system when the vehicle is outdoors, and easily swap to a computer vision system if the vehicle is indoors. As long as these two modules provide the exact same interface, the developers do not have to rewrite their code to make the change.
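Here is a minimal sketch of what such a unified interface could look like in Python. The names and fields are hypothetical, meant only to illustrate the point: a planner written against the interface never needs to know which localization module is plugged in.

```python
# Hypothetical sketch of a unified localization interface, not an existing standard.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Pose:
    x: float           # meters, in the vehicle's map frame
    y: float
    heading: float     # radians
    confidence: float  # 0.0 to 1.0

class LocalizationModule(ABC):
    @abstractmethod
    def get_pose(self) -> Pose:
        """Return the vehicle's current estimated pose."""

class GPSLocalization(LocalizationModule):
    def get_pose(self) -> Pose:
        # Placeholder: read from a GPS receiver when the vehicle is outdoors.
        return Pose(x=12.3, y=45.6, heading=0.1, confidence=0.9)

class VisionLocalization(LocalizationModule):
    def get_pose(self) -> Pose:
        # Placeholder: estimate pose from camera features when the vehicle is indoors.
        return Pose(x=12.4, y=45.5, heading=0.1, confidence=0.8)

def plan_next_move(localizer: LocalizationModule) -> Pose:
    # The planner only talks to the interface; swapping GPS for vision needs no changes here.
    return localizer.get_pose()
```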
A Lego-like, or modular, approach is not just about making it simpler for engineers to design and build AVs. It’s about building a community of like-minded individuals with a passion for technology and a DIY ethic who can find new applications for autonomous machines. Drone hobbyists, Linux users, and Maker Faire participants have all worked within similar enthusiastic communities and, often unintentionally, set the stage for the development of emerging new technologies. While we may not be on the highway going to work in our autonomous vehicles tomorrow, there are many more applications of autonomous machines that DIYers can start developing as a community today.
I believe we are at a unique juncture of our history. We, as individual citizens, can re-imagine how robotics can be used to address new facets of how we live, work, travel and play. We simply can’t afford to let others shape our lives with uses for robots, AI, and autonomous machines. We must learn the skills to create, the way students today are encouraged to learn to code. It’s not about becoming an engineer — instead, it’s more about becoming someone who creates instead of consumes. It’s also about understanding how technology works as it becomes increasingly pervasive in our lives. That’s the approach we need to ensure that we as humans — with all of our creative energy, brilliance and capacity to care for others — are in the driver’s seat of the coming Age of Robotization.
Shaoshan Liu is the founder of Perceptin, a company that creates parts of autonomous vehicles.
For all its benefits, the web has a dark side — cyberbullying, internet gambling, and social media addiction are just a few of its many pitfalls.
Now, a team of European researchers plans to figure out just how much psychological harm the net can cause — and how we might be able to help the people it hurts.
Problem Users
On Monday, the scientists announced a new group called the European Problematic Use of the Internet (EU-PUI) Research Network. That’s a mouthful, but the idea is to create a hub to better understand psychological problems linked to internet usage.
“Problematic Use of the Internet is a serious issue,” said the network’s chair, Naomi Fineberg, in a press release. “Just about everyone uses the Internet, but much information on problem use is still lacking.”
Existing research is very fragmented, according to Fineberg. It focuses only on specific behaviors, geographical regions, or segments of society. This international collaboration, she hopes, will help researchers identify “big picture” takeaways about the internet and mental health.
Manifesto
The group outlined its goals in a manifesto published in the journal European Neuropsychopharmacology.
With the document in place, researchers can begin the task of using approximately $600,000 in funding from the European Union to tackle its objectives. Those include everything from figuring out the role genetics might play in problematic internet usage to how website design might affect it.
Now that the EU-PUI Research Network is in place, researchers can use it in a number of ways. They can access resources that could help with their research, or share what they’ve learned about problem behaviors, such as gaming addiction and compulsions related to shopping and social network use.
After that, the next step will be figuring out the best ways to prevent and treat these issues, which could ensure the internet is a positive force on the mental health of all — not just some — of us.
In 2014, Amazon built an AI to evaluate job applicants’ résumés. By 2015, it realized the system had a major flaw: It didn’t like women.
According to five sources who spoke to Reuters, Amazon spent years developing an algorithm that used machine learning to sift through job applicants in order to identify the best candidates.
But, in a decision that reads like a metaphor for the diversity-challenged tech sector, the company abandoned the effort in 2017 after it realized it couldn’t guarantee the AI wouldn’t discriminate against female applicants.
An Amazon spokesperson provided this statement to Futurism: “This was never used by Amazon recruiters to evaluate candidates.”
Bad Data
The problem, according to the unnamed Amazon sources, was that the company’s developers trained the AI on résumés submitted to the company over a 10-year period. Though the Reuters report didn’t spell it out, it sounds like researchers were probably trying to train it to identify new résumés that were similar to those of applicants who the company had hired in the past.
But because most Amazon employees are male — as of late last year, men filled 17 out of 18 of its top executive positions — the AI seemingly decided that men were preferable.
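To make the failure mode concrete, here is a deliberately simplified sketch using synthetic data, not Amazon’s system or data: a model trained to imitate historically biased hiring decisions learns a negative weight on a feature that merely correlates with gender.

```python
# Illustrative sketch only: synthetic data, not Amazon's model. A classifier trained
# on biased historical hiring labels learns to penalize a proxy feature for gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
years_experience = rng.normal(5, 2, n)
womens_club = rng.integers(0, 2, n)  # hypothetical feature correlated with gender

# Biased historical labels: equally qualified candidates with the proxy feature
# were hired less often.
hired = (years_experience + rng.normal(0, 1, n) - 1.5 * womens_club) > 5

X = np.column_stack([years_experience, womens_club])
model = LogisticRegression().fit(X, hired)
print(model.coef_)  # the learned weight on the proxy feature comes out negative
```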
Biased World
Training AIs with biased data — and thereby producing biased AIs — is a major problem in machine learning.
A ProPublica investigation found that an algorithm that predicts the likelihood that criminals will offend again discriminated against black people. And that’s to say nothing of the Microsoft-created Tay, an artificially intelligent Twitter chatbot that quickly learned from online pranksters to spew racist vitriol.
The tech industry now faces a huge challenge: It needs to figure out a way to create unbiased AIs when all the available training data comes from a biased world.
This story was updated with a statement from an Amazon spokesperson.
Wondering if you should invest in bitcoin or ether? Or worried that under-regulated cryptocurrencies could destroy the world economy?
This week, two separate groups released new cryptocurrency reports. Depending on how you feel about blockchain tech, they’ll either put your mind at ease — or leave you regretting that decision to convert your entire savings account to Dogecoin.
Good News / Bad News
On Wednesday, the Financial Stability Board, an international body dedicated to analyzing global financial systems, published a 17-page report about the world cryptocurrency market.
The tl;dr: Crypto isn’t going to throw the world economy into chaos, mainly because the market simply isn’t large enough yet. But insufficient regulations, a lack of liquidity, and fragmented markets mean that investing is a seriously risky move on a personal level.
The same day, U.S.-based cyber security firm CipherTrace drove the latter conclusion home with a second report, which found that crypto investors have already lost nearly $1 billion to theft in 2018. That’s a 250 percent increase over 2017’s figure, and the year isn’t even over.
HODL
Taken together, these reports seem to confirm what all but the most ardent crypto supporters probably already knew: Yes, cryptocurrencies boast a number of benefits over traditional assets, but until the space is properly regulated on a global level, experts suggest you invest in crypto at your own risk.
Disclosure: Several members of the Futurism team, including the editors of this piece, are personal investors in a number of cryptocurrency markets. Their personal investment perspectives have no impact on editorial content.
In 2015, The New York Times debuted a short documentary called “A Robotic Dog’s Mortality.” The film profiled a woman named Michiko Sukurai. She owned an Aibo, a robotic dog sold by Sony between 1999 and 2006. Recently, though, Sukurai and many like her suffered a loss: in 2014, Sony announced it would no longer repair the robo-pets.
After that, owners were on their own. Sukurai, in particular, struggled to come to terms with the impermanence of a companion she’d grown to love — one that was never supposed to die.
As it turned out, that wasn’t the end of Aibo’s story. At a press conference held in November 2017, Sony made a surprise announcement. It would begin manufacturing the iconic dogs once again, this time with all of the bells and whistles of modern robotics: OLED eyes, built-in LTE, facial recognition. Aibo owners, it seemed, were finally getting the ‘forever friend’ they were promised.
This same connection is seen in Sparky, the fifth episode of Glimpse, a new original sci-fi series from Futurism Studios (a division of Futurism LLC) and DUST. Watch the episode below.
Michiko Sukurai’s story isn’t unique in the world of robotic pets. Scientists have long understood the psychological benefits of computerized companions. Studies have shown they can help combat loneliness among the elderly, motivate students in isolated communities, and even improve symptoms in dementia patients.
Still, despite all of this research, one big question remains: are robotic pets as good as the real thing?
The robots of yesteryear clearly were not. In 2009, researchers conducted a series of studies that looked at the emotion with which children responded to Aibo robotic dogs and live dogs. They discovered that while the children did ascribe thoughts, feelings, and social value to Aibo, they also showed markedly less attention and affection toward it than they did the real dogs.
That isn’t surprising, since early Aibo units were so simplistic. Even the most advanced models were limited to 128 MB of memory, barely enough to hold a copy of The Beatles’ “White Album.” Robotics has come a long way since 2006, though, and the field continues to evolve even more rapidly thanks to advances in artificial intelligence, machine learning, and microprocessors.
In the future, though, it’s not going to be so obvious whether or not robotic pets are as good as the real thing. To figure it out, we need to determine what makes dogs so lovable in the first place. According to Ronald Arkin, director of the Mobile Robot Laboratory at Georgia Institute of Technology, it all comes down to basic biology.
“People enjoy biological pets for their behavior,” Arkin told Futurism recently. “As a roboticist, I study human psychology and try to engineer systems that will provide them with the same types of satisfaction and interaction that the best pets can offer.”
Arkin calls this area of study “behavioral simulation ethological modeling,” and he’s been doing it for a long time. He believes all aspects of animal behavior — “movement, emotion, even morality” — can be authentically simulated in robotic companions. He says he holds patents on robot “emotions” and is currently working on simulating feelings like guilt, shame, embarrassment, and empathy in robots to prove out his theories. For Arkin, though, it isn’t enough to build a robotic dog that’s as good as the real thing. He believes he can build one that’s better.
“It pays to understand how humans relate to animals, and find out what provides them with satisfaction in their interactions,” Arkin said. “Not everything does: chewing the furniture, humping a leg, and biological elimination in general are things that pet owners might like to do without.”
While Arkin’s “perfect” robotic dog may not be here for a few years, he’s pretty enthusiastic about where the field seems to be headed. He believes the next generation aibo (it wouldn’t be a reboot without a letter case facelift) is miles ahead of the competition.
We ain’t nothing but mammals. Reproduction usually requires a male and a female. Right?
A little over a decade ago, researchers figured out a way to produce mice from two females and no males. It was a huge scientific breakthrough, but the study had its issues — the resulting pups were abnormal, with some “defective features,” according to stem-cell researcher Qi Zhou.
Now, Zhou and a team of Chinese researchers have figured out another way to not only produce offspring from two mice of the same sex, but to produce normal offspring. And there’s a chance this method could one day allow two humans of the same sex to do the same thing.
Mouse Moms
In a study published Thursday in the journal Cell Stem Cell, Zhou and his colleagues at the Chinese Academy of Sciences explain how they created their mouse pups using haploid embryonic stem cells (ESCs). These stem cells are unlike most others because they contain DNA from only one parent, and have only half the normal number of chromosomes.
The researchers used CRISPR to hack an ESC so that it could be injected into the egg of another female mouse to produce an embryo. Of the 210 embryos created this way, 29 developed into healthy mice, which lived to adulthood and could even give birth to their own offspring.
The team also produced pups from two male mice through a similar process, but those pups had issues suckling and breathing, and only lived for 48 hours after a surrogate gave birth to them.
Mice and Men
The researchers plan to expand their research to include other animals — and ultimately the work could be an important early step along the path to a day when any two people of either sex can make a baby together.
Have you ever dreamed of leaving it all behind, setting out on that cross-country road trip, and coming back with the manuscript you always swore you’d write? Too late. Now robots are even coming after your great American novel.
A bunch of interconnected neural networks wrote a novel called “1 the Road” during a road trip from Brooklyn to New Orleans. Programmer Ross Goodwin built the system to churn out bursts of prose based on inputs including GPS coordinates, Foursquare data, and a camera with image recognition software.
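As a loose illustration of the general approach (not Goodwin’s actual system, which used custom neural networks trained on literature), you can picture each sensor reading being folded into a prompt for an off-the-shelf text generator:

```python
# Loose illustration of the idea, not Goodwin's setup: fold sensor readings into a
# prompt and let a pretrained language model riff on it.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def prose_from_sensors(latitude, longitude, clock_time, camera_caption):
    prompt = f"It was {clock_time}. Near {latitude:.4f}, {longitude:.4f}, {camera_caption}."
    return generator(prompt, max_new_tokens=60)[0]["generated_text"]

print(prose_from_sensors(40.6782, -73.9442, "nine seventeen in the morning",
                         "a truck idled outside a diner"))
```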
There’s not much in the way of a story in the novel — though to be fair, that’s seldom stopped mediocre human novelists. And what it lacks in a compelling narrative it makes up for by demonstrating the experimental, artsy potential of AI-generated text.
AI AUTHOR
While The Atlantic compared the AI to Jack Kerouac and guessed at hidden meanings in the text, it’s important to remember that we’re talking about nonsense passages like:
The table is black to be seen, the bus crossed in a corner. A military apple breaks in. Part of a white line of stairs and a street light was standing in the street, and it was a deep parking lot.
If you’re having a hard time gleaning the deep, hidden meaning from that excerpt, it’s likely because there isn’t one.
ON THE NODE
Goodwin trained his neural nets with databases of poetry and literature. So while “1 the Road” might seem artsy compared to Siri or Alexa, that’s only because the AI that produced it had been forced to read more lyrical, provocative language than a typical chatbot.
And to say that “artificial intelligence wrote a novel” is a bit of a stretch, given that doing so usually involves crafting coherent sentences and stringing thoughts together, all things that artificial intelligence isn’t yet creative enough to do.
Years ago, the idea of genetic splicing was a fictional pipe dream that invoked images of bat-winged lions or bears with laser eyes. That’s not on the horizon quite yet, but scientists recently accomplished a feat of genetic manipulation that’s nearly as exciting.
They altered the genes of a fruit, the strawberry groundcherry (Physalis pruinosa), so that it can be readily cultivated and enjoyed outside of its native region — Mexico as well as Central and South America — for the first time. They published their results Monday in the journal Nature Plants.
Groundcherries aren’t unheard of to American consumers, but they’re pretty hard to get. See, the plant is notoriously difficult to keep alive in farms or gardens long enough to get the darn things to blossom in the first place. Like the cherry tomatoes to which they are closely related, groundcherries are particularly vulnerable to destruction by pests and cool temperatures.
The researchers used CRISPR-Cas9 gene editing tools to improve the size and rate of flower production for a particular kind of groundcherry called the strawberry groundcherry, known for its tropical vanilla flavor. They’ll use those techniques to help make the plant hardier so it can grow outside its native range.
“I firmly believe that with the right approach, the groundcherry could become a major berry crop,” said study author Zachary Lippman in a press release.
That’s exciting news, especially compared to what we normally hear from the world of CRISPR research: resilient corn and water-efficient tobacco, for example. These are important but overwhelmingly unsexy developments.
But this groundcherry research has a much greater chance of affecting everyday people. Sure, it’s just a berry that grows inside of a weird sack-like husk, but this research shows that we can domesticate a new crop in the lab over just a few years instead of a millennium on farms. If a more resilient, accessible groundcherry suddenly becomes available around the world, people can add a new type of healthy fruit to their diet.
This is all possible because scientists had already studied the tomato genome and experimented on it with CRISPR and other techniques. The groundcherry’s genes are similar enough that much of the work came down to fine-tuning existing tricks to a slightly-different genetic code. As such, we may soon see CRISPR-altered versions of other hyper-local plants that have historically been hard to tame, alongside our new supply of groundcherries.
The advent of FaceID, a feature that lets iPhone owners log into their devices using facial recognition, means it’s never been easier to unlock a smartphone — all you’ve gotta do is look at the camera. But the technology also makes it easier for police to access data stored on suspects’ phones, raising thorny new legal questions.
Take the case of Ohio resident Grant Michalski. Forbes reports that the FBI raided Michalski’s home in August, on suspicion that he’d sent and received child pornography. Then investigators forced Michalski to unlock his personal iPhone X using FaceID, allowing them to access his chats and photos. It’s the first documented case of its kind.
BIOMETRIC INSECURITY
The battle over whether law enforcement should be able to access suspects’ phones is hotly contested.
In 2016, the FBI tried to get into the iPhone of one of the shooters who perpetrated the 2015 terrorist attack in San Bernardino, California. The attackers were killed in a shootout with police, but officials wanted information off one suspect’s phone to find out if the couple had accomplices or had been planning further attacks. The phone was protected by a pass code that only allowed a limited number of login attempts. The FBI asked Apple for help, but the tech giant didn’t comply with requests.
Also in 2016, the FBI tried to access the iPhone of an alleged gang member in California. This time, prosecutors wanted the suspect to provide a fingerprint to unlock the iPhone using TouchID, which a Los Angeles judge granted. But the next year, a federal judge in Chicago rejected a request to force suspects to provide their fingerprints to unlock a personal device.
FACING JUSTICE
We use phones to store some of the most intimate details of our lives. It might seem like a good, legal plan to allow the FBI to force a child porn suspect like Michalski to give up his data. But what if that decision weakens rights for the rest of us?
One thing is for sure: this case is bound to rekindle that discussion.
If the traffic is really bad, maybe you’ll need the spirit of action hero Mad Max to be your copilot.
In June, Tesla CEO Elon Musk tweeted that the Autopilot system on the company’s Semi truck would feature three settings for lane changes: “Standard,” “Aggressive,” and “Mad Max.”
Now it turns out the Semi isn’t the only Tesla vehicle getting a Mad Max option — the company has included it in the Autopilot Version 9 update it’s currently rolling out to all Tesla vehicles. One driver has posted two videos online of Mad Max mode in action on the freeway.
IN VALHALLA
In two videos posted to YouTube, a Tesla driver going by the name Jasper Nuyens shared about 10 minutes of footage taken from behind the wheel of a Tesla with Mad Max mode enabled.
In the clips, you can see the Tesla autonomously navigate the mostly-deserted freeway and overtake a truck. Aside from that, though, the clips mostly just feature the effusive narration of Nuyens and video of the Tesla plowing straight ahead.
It’s not exactly the adrenaline-pumping action you’d expect from a car bearing the Mad Max moniker, but Nuyens seems impressed in the clips, thanking Elon Musk for adding the feature.
ETERNAL, SHINY AND CHROME
Nuyens does note that Mad Max mode isn’t yet perfect. “I noticed that one time it wanted to change lanes to the non-existing lane — the security lane, basically — and to a closed off lane,” he said in the video.
Yeah, that doesn’t sound like a good thing.
Like any true Teslaphile, though, Nuyens doesn’t see this “viewing non-lanes as lanes” situation as a major problem, noting that drivers will just need to pay extra attention. So, while “Mad Max” mode won’t channel Charlize Theron as Furiosa quite yet, it just might keep you alert while you’re behind the wheel.
Ecovative, the startup that makes biodegradable packaging for furniture seller IKEA, says that the same mushroom roots it uses to pack up tables and chairs could be used to create the next generation of delicious lab-grown meats.
“This is the next natural step in this evolution to use natural products to make things,” said co-founder Eben Bayer, in an interview with Business Insider.
MARBLED MEATS
The problem Ecovative wants to solve, Bayer told Business Insider, is that while many lab-grown meat startups have succeeded in growing individual cells from livestock into sausages and burgers, they’ve struggled to recreate the complex anatomy of a chicken breast or a fatty steak.
That’s where Bayer thinks his company’s mycelium, or mushroom roots, could help. Using a formula similar to the mixture of mycelium and discarded farm materials it’s turned into green packaging for IKEA and Dell, he said that Ecovative has created a “scaffold” that lets meat cells grow into ropes of muscle and layers of succulent fat.
FUTURE FARM
The carbon emissions of farm-grown meat are colossal. A tasty lab-grown meat with a low carbon footprint could be a game changer — not just for your dinner plate, but for the future of the planet.
Bayer didn’t say whether his company has locked down any industry partners, like Memphis Meats or New Age Meats. Unless Ecovative plans to market its own fake meat, that’ll be a key step to getting their food into hungry mouths.
Getting humans to Mars could cost upwards of $1 trillion. But for the astronauts making the journey, the cost could be even higher — they might pay with their lives.
According to a new NASA-funded study conducted by researchers at Georgetown University Medical Center, exposure to galactic cosmic radiation during a trip to Mars could leave astronauts with permanent and potentially deadly damage to their gastrointestinal tissue.
RUINOUS RADIATION
On Earth and in its orbit, people and animals are protected from certain types of cosmic radiation by the planet’s magnetic field. For their study, published Monday in the Proceedings of the National Academy of Sciences, the researchers attempted to simulate the conditions found in deep space by blasting 10 male mice with doses of heavy ion radiation — the equivalent, they said, of what a human astronaut would be exposed to on a deep space journey that lasted several months.
The researchers then euthanized the mice and studied samples of their intestinal tissue. They found that the guts of irradiated mice hadn’t been absorbing nutrients properly, and had formed cancerous polyps. Even worse, the damage appears to be permanent — mice killed and dissected after a year still hadn’t recovered.
A GIANT LEAP
The findings are alarming because we don’t yet have a way to protect astronauts from cosmic radiation.
“With the current shielding technology, it is difficult to protect astronauts from the adverse effects of heavy ion radiation,” said researcher Kamal Datta in a press release. “Although there may be a way to use medicines to counter these effects, no such agent has been developed yet.”
Of course, mice aren’t the same as humans, so the actual effect of radiation on astronauts is still largely unknown. However, if we ever hope to send humans to Mars and beyond, nailing down the effects of deep space on astronaut health will need to be a top priority.
Artificial intelligence and human intelligence aren’t the same, to be sure. But it seems we have one big thing in common: we spend our time fantasizing about dinner.
In a research paper published Friday in the preprint server arXiv, a team of AI researchers from Google DeepMind teamed up with a scientist from Heriot-Watt University to develop what they’re calling the largest, most advanced Generative Adversarial Network (GAN) ever. And to prove that it works better than any other, they had it create photorealistic images of landscapes, (very) good dogs and other animals, and of course some hot, juicy burgers.
WHAT’S THE BEEF
Generative Adversarial Networks are one of the more sophisticated types of AI algorithms out there. In short, one network (the generator) creates something — in this case an image — as realistically as possible. Meanwhile, another network (the discriminator) checks its work against examples of the real deal. This back and forth makes both networks gradually improve, until the discriminator is very good at spotting AI-generated images yet the generator can still fool it.
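For readers who want to see that back-and-forth spelled out, here is a minimal GAN training loop in PyTorch. The tiny architectures and hyperparameters are placeholders for illustration; this is not DeepMind’s model.

```python
# Minimal GAN training step (PyTorch): a generator tries to fool a discriminator,
# and the discriminator tries to tell real images from generated ones.
import torch
import torch.nn as nn

latent_dim = 64
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, 784), nn.Tanh())
discriminator = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                              nn.Linear(256, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images):
    """real_images: (batch, 784) tensor of flattened training images."""
    batch = real_images.size(0)
    # 1) Train the discriminator to separate real images from generated ones.
    fakes = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1)) +
              loss_fn(discriminator(fakes), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) Train the generator to produce images the discriminator labels as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))),
                     torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```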
GANs are generally used to create media, whether it’s a new level of a video game or 3D models. And though their ability to fool us and themselves presents a bit of a double-edged sword, their ability to discriminate algorithmic from human output can be used to find and fight misleading deepfakes.
This process is what allowed DeepMind’s burger-cooking GAN to go from creating, as Quartz reported, a weird, beefy blob in 2016 to what actually looks like an appetizing (albeit overcooked) slab of burg today.
ALTERNATIVE PROTEIN
Some argue that we need to move away from red meat, but AI isn’t ready to join us yet. While the burger-generating algorithm is in great shape, others aren’t quite there. For instance, the twists and turns of brass tubing that make up a French horn baffled the network. The butterfly image is just a little off, and any attempt to render a photo of a human results in a horrifying blob monster.
It’s an ugly reality we see in every corner of the web: racism, bigotry, misogyny, political extremism. Hate speech seems to thrive on the internet like a cancer.
It persists and flourishes on social media platforms like Facebook, Twitter, and Reddit — they certainly don’t claim to welcome it, but they’re having a hell of a time keeping it in check. No AI is yet sophisticated enough to flag all hate speech perfectly, so human moderators have to join the robots in the trenches. It’s an imperfect, time-consuming process.
As social media sites come under increasing scrutiny to root out their hate speech problem, they also come up against limits for how much they can (or will) do. So whose responsibility is it, anyway, to mediate hate speech? Is it up to online platforms themselves, or should the government intervene?
The British government seems to think the answer is both. The Home Office and the Department for Digital, Culture, Media and Sport (DCMS) — the department responsible for regulating broadcasting and the internet — are drafting plans for regulation that would make platforms like Facebook and Twitter legally responsible for all the content they host, according to Buzzfeed News.
In a statement to Futurism, the DCMS says that it has “primarily encouraged internet companies to take action on a voluntary basis.” But progress has been too slow — and that’s why it now plans “statutory intervention.”
But is this kind of government intervention really the right way forward when it comes to hate speech online? Experts aren’t convinced it is. In fact, some think it may even do more harm than good.
Details about the DCMS’ plan are scant — it’s still early in development. What we do know so far is that the legislation, Buzzfeed reports, would have two parts. One: it would introduce “take down times” — timeframes within which online platforms have to take down hate speech, or face fines. Two: it would standardize age verification for Facebook, Twitter, and Instagram users. A white paper detailing these plans will allegedly be published later this year.
Why should the government intervene at all? Internet platforms are already trying to limit hate speech on their own. Facebook removed more than 2.5 million pieces of hate speech and “violent content” in the first quarter of 2018 alone, according to a Facebook blog post published back in May.
Indeed, these platforms have been dealing with hate speech for as long as they’ve existed. “There’s nothing new about hate speech on online platforms,” says Brett Frischmann, a professor in Law, Business and Economics at Villanova University. The British government may be rushing to put a law in place so quickly that it won’t come up with anything that works the way it’s supposed to.
Unfortunately, hate speech is a whack-a-mole that moves far faster than publishers seem to be able to. As a result, a lot of it goes unmediated. For instance, hate speech from far right extremist groups in the U.K. often still falls through the cracks, fueling xenophobic beliefs. In extreme cases, that kind of hate speech can lead to physical violence and the radicalization of impressionable minds on the internet.
Jim Killock, executive director for the Open Rights Group in the U.K. — a non-profit committed to preserving and promoting citizens’ rights on the internet — thinks the legislation, were it to pass tomorrow, wouldn’t be just ineffective. It might even prove to be counterproductive.
The rampant hate speech online, Killock believes, is symptomatic of a much larger problem. “In some ways, Facebook is a mirror of our society,” he says. “This tidal wave of unpleasantness, like racism and many other things, has come on the back of [feeling] disquiet about powerlessness in society, people wanting someone to blame.”
Unfortunately, that kind of disillusionment with society won’t change overnight. But when a policy only addresses the symptoms of systemic injustice instead of the actual issues, the government is making a mistake. By censoring those who already feel they are being silenced, the government is reinforcing their beliefs. And that’s not a good sign, especially when those who are being censored are actively spreading hate speech online themselves.
Plus, a law like the one DCMS has proposed would effectively make certain kinds of speech illegal, even if that’s not what the law says. Killock argues that while a lot of online material may be “unpleasant,” it often doesn’t violate any laws. And it shouldn’t be up to companies to decide where the line between the two lies, he adds. “If people are breaking the law, it frankly is the job of courts to set those boundaries.”
But there’s good reason to avoid redrawing the legal boundaries of which online behavior should be policed (even if it is technically not illegal): the government might have to adjust far more sweeping common law concerning the freedom of speech. That is probably not going to happen.
The UK government’s plans are still in the development stage, but there are already plenty of reasons to be skeptical that the law would do what the government intends. Muddying the boundaries between illegal and non-illegal behavior online sets a dangerous precedent, and that could have some undesirable consequences — like wrongfully flagging satirical content as hate speech for instance.
The DCMS is setting itself up for failure: censoring content online will only embolden its critics, while failing to address the root issues. It has to find a middle ground if it wants a real shot: too much censorship, and the mistrust of those who feel marginalized will keep building. Too little regulation, and internet platforms will continue to make many users feel unwelcome, or even enable violence.
The U.K. government has a few tactics it could try before it decides to regulate speech online. The government could incentivize companies to strengthen the appeal process for taking down harmful content. “If you make it really hard for people to use appeals, they may not use them at all,” Killock argues. For instance, the government could introduce legislation that would ensure each user has a standardized way of reporting problematic content online.
But it will take a much bigger shift before we are able to get rid of hate speech in a meaningful way. “Blaming Facebook or the horrendous people and opinions that exist in society is perhaps a little unfair,” Killock says. “If people really want to do and say these [hurtful] things, they will do it. And if you want them to stop, you have to persuade them that it’s a bad idea.”
What do those policies look like? Killock doesn’t have the answer yet. “The question we have really is, how do we make society feel better about itself?” says Killock. “And I’m not pretending that that’s a small thing at all.”