

Artificial Intelligence Is Automating Hollywood. Now, Art Can Thrive.

The next movie you watch could have been performed, edited, and recommended by artificial intelligence.

The next time you sit down to watch a movie, the algorithm behind your streaming service might recommend a blockbuster that was written by AI, performed by robots, and animated and rendered by a deep learning algorithm. An AI algorithm may have even read the script and suggested the studio buy the rights.

It’s easy to assume that algorithms and robots will send film industry jobs the way of the factory worker and the customer service rep, and to argue that artistic filmmaking is in its death throes. So far, that narrative doesn’t hold — artificial intelligence seems to have enhanced Hollywood’s creativity, not squelched it.

It’s true that some jobs and tasks are being rendered obsolete now that computers can do them better. The job requirements for a visual effects artist are no longer owning a beret and being good at painting backdrops; the industry now calls for engineers who can train deep learning algorithms to handle the mundane work, like smoothing out an effect or making a digital character look realistic. As a result, the creative artists who still work in the industry can spend less time hunched over a computer meticulously editing frame by frame and more time doing interesting things, explains Darren Hendler, who heads Digital Domain’s Digital Humans Group.

Just like computers made it so animators didn’t have to draw every frame by hand, advanced algorithms can automatically render advanced visual effects. In both cases, the animator didn’t lose their job.

“We find that a lot of the more manual, mundane jobs become easy targets [for AI automation] where we can have a system that can do that much, much quicker, freeing up those people to do more creative tasks,” Hendler tells Futurism.

“I think you see a lot of things where it becomes easier and easier for actors to play their alter egos, creatures, characters,” he adds. “You’re gonna see more where the actors are doing their performance in front of other actors, where they’re going to be mapped onto the characters later.”

Recently, Hendler and his team used artificial intelligence and other sophisticated software to turn Josh Brolin into Thanos for Avengers: Infinity War. More specifically, they used an AI algorithm trained on high-resolution scans of Brolin’s face to track his expressions down to individual wrinkles, then used another algorithm to automatically map the resulting face renders onto Thanos’ body before animators went in to make some finishing touches.

Such a process yields the best of both motion capture worlds — the high resolution usually obtained by unwieldy camera setups and the more nuanced performance made possible by letting an actor perform surrounded by their costars, instead of alone in front of a green screen. And even though face mapping and swapping technology would normally take weeks, Digital Domain’s machine learning algorithm could do it in nearly real-time, which let them set up a sort of digital mirror for Brolin.
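In code, the skeleton of such a pipeline is surprisingly small. The sketch below is only an illustration of the idea, not Digital Domain’s system: it assumes the problem reduces to learning a regression from tracked facial landmarks to a character rig’s blendshape weights, with scikit-learn’s Ridge regressor standing in for the studio’s deep learning models and random arrays standing in for the real scan data.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy dimensions: 68 two-dimensional facial landmarks in, 40 character
# blendshape weights out. All data here is random and purely illustrative.
N_FRAMES, N_LANDMARKS, N_BLENDSHAPES = 500, 68 * 2, 40
rng = np.random.default_rng(0)

# Stand-ins for real training data: landmark tracks from high-resolution face
# scans, paired with artist-authored rig weights for the same frames.
actor_landmarks = rng.normal(size=(N_FRAMES, N_LANDMARKS))
character_weights = rng.normal(size=(N_FRAMES, N_BLENDSHAPES))

# Training step: learn a mapping from actor expression to character rig.
retargeter = Ridge(alpha=1.0).fit(actor_landmarks, character_weights)

# On-set step: fresh landmark tracks come in, rig weights come out fast
# enough to drive a rough preview render (the "digital mirror" above).
new_frame = rng.normal(size=(1, N_LANDMARKS))
preview_weights = retargeter.predict(new_frame)
print(preview_weights.shape)  # (1, 40)
```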

“When Josh Brolin came for the first day to set, he could already see what his character, his performance would look like,” says Hendler.

Yes, these algorithms can quickly accomplish tasks that used to require dedicated teams of people. But when used effectively they can help bring out the best performances, the smoothest edits, and the most advanced visual effects possible today. And even though the most advanced (read: expensive) algorithms may be limited to major Disney blockbusters right now, Hendler suspects that they’ll someday become the norm.

“I think it’s going to be pretty widespread,” he says. “I think the tricky part is it’s such a different mindset and approach that people are just figuring out how to get it to work.” Still, as people become more familiar with training machines to do their animation for them, Hendler thinks it’s only logical that more and more applications for artificial intelligence in filmmaking will emerge.

“[Machine learning] hasn’t been widely adopted yet because filmmakers don’t fully understand it,” Hendler says. “But we’re starting to see elements of deep learning and machine learning incorporated into very specific areas. It’s something so new and so different from anything we’ve done in the past.”

But many of these applications have already emerged. Last January, Kristen Stewart (yes, that one) directed a short film and partnered with an Adobe engineer to develop a new kind of neural network that restyled the film’s footage to look like an impressionist painting Stewart had made.

Fine-tuning the value of u let the team change how much the film resembled an impressionist painting. Image Credit: 2017 Starlight Studios LLC & Kristen Stewart

Meanwhile, Disney built robot acrobats that can be flung high into the air so that they can later be edited (perhaps by AI) to look like the performers and their stunt doubles. Now those actors get to rest easy and focus on the less dangerous parts of their job.

Artificial intelligence may soon move beyond even the performance and editing sides of the filmmaking process — it may soon weigh in on whether or not a film is made in the first place. A Belgian AI company called Scriptbook developed an algorithm that the company claims can predict whether or not a film will be commercially successful just by analyzing the screenplay, according to Variety.

Normally, script coverage is handled by a production house or agency’s hierarchy of executive assistants and interns, the latter of whom don’t need to be paid under California law. To justify the $5,000 expenditure over sometimes-free human labor, Scriptbook claims that its algorithm is three times better at predicting box office success than human readers. The company also asserts that it would have recommended that Sony Pictures not make 22 of its biggest box office flops of the past three years, which would have saved the production company millions of dollars.

Scriptbook has yet to respond to Futurism’s multiple requests for information about its technology and the limitations of its algorithm (we will update this article if and when the company replies). But Variety mentioned that the AI system can predict an MPAA rating (R, PG-13, that sort of thing), detect who the characters are and what emotions they express, and also predict a screenplay’s target audience. It can also determine whether or not a film will pass the Bechdel Test, a bare minimum baseline for representation of women in media. The algorithm can also determine whether or not the film will include a diverse cast of characters, though it’s worth noting that many scripts don’t specify a character’s race and whitewashing can occur later on in the process.
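For a sense of how the simplest of those checks might be automated, here is a deliberately crude sketch of a Bechdel-style screen. It assumes the screenplay has already been parsed into scenes with named speakers and gender tags, which is itself the hard part; the scenes, names, and the whole heuristic are hypothetical and are not how Scriptbook works.

```python
# Toy Bechdel-style check over a pre-parsed screenplay. The parsing, the
# gender tags, and the scene structure are all assumed inputs.
scenes = [
    {"speakers": {"ALICE": "F", "BOB": "M"},
     "dialogue": "Alice asks Bob about the heist."},
    {"speakers": {"ALICE": "F", "DANA": "F"},
     "dialogue": "Alice and Dana argue about the getaway route."},
]

MALE_NAMES = {"bob", "carl"}  # hypothetical list of male character names

def passes_bechdel(scenes):
    for scene in scenes:
        women = [name for name, g in scene["speakers"].items() if g == "F"]
        mentions_a_man = any(name in scene["dialogue"].lower()
                             for name in MALE_NAMES)
        # Two or more named women talking about something other than a man.
        if len(women) >= 2 and not mentions_a_man:
            return True
    return False

print(passes_bechdel(scenes))  # True, thanks to the second scene
```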

Human script readers can do all this too, of course. And human-written coverage often includes a comprehensive summary and recommendations as to whether or not a particular production house should pursue a particular screenplay based on its branding and audience. Given how much artificial intelligence struggles with emotional communication, it’s unlikely that Scriptbook can provide the same level of comprehensive analysis that a person could. But, to be fair, Scriptbook suggested to Variety that their system could be used to aid human readers, not replace them.

Getting some help from an algorithm could help people ground their coverage in some cold, hard data before they recommend one script over another. While it may seem like this takes a lot of the creative decision-making out of human hands, tools like Scriptbook can help studio houses make better financial choices —  and it would be naïve to assume there was a time when they weren’t primarily motivated by the bottom line.

Hollywood’s automated future isn’t one that takes humans out of the frame (that is, unless powerful humans decide to do so). Rather, Hendler foresees a future where creative people get to keep on being creative as they work alongside the time-saving machines that will make their jobs simpler and less mundane.

“We’re still, even now, figuring out how we can apply [machine learning] to all the different problems,” Hendler says. “We’re gonna see some very, very drastic speed improvements in the next two, three years in a lot of the things that we do on the visual effects side, and hopefully quality improvements too.”

To read more about people using artificial intelligence to create special effects, click here: The Oscar for Best Visual Effects Goes To: AI


Blue Origin’s New Shepard Rocket Aces Important Test

Blue Origin launched another New Shepard rocket, successfully engaging its capsule's emergency motor after separation.

NAILED IT. Wednesday morning, Blue Origin launched a capsule-carrying New Shepard rocket from its facility in West Texas. The goal: to test the capsule’s escape motor, which would engage if astronauts needed to make a fast getaway from an unforeseen problem at any point during a flight. The motor’s effectiveness is essential to the safety of any people who might ride aboard the capsule in the future, and this was Blue Origin’s third test of the system.

And the company appeared to nail the test.

After launch, the capsule separated from the rocket just like it should, and then its emergency motor ignited — just as it’s supposed to during an emergency situation. This propelled the capsule to an altitude of roughly 119 kilometers (74 miles), a record height for the company. After that, both the capsule and the rocket safely returned to Earth.

TESTING. TESTING. Blue Origin wasn’t the only company testing the waters with this New Shepard rocket launch; the rocket’s capsule contained a variety of devices and experiments for scientists and educators conducting microgravity research. One company sent up a system designed to provide reliable WiFi connectivity in space, while another added a number of textiles to the capsule so they could test their viability for use in space suits.

SPACE TOURISM. However, the most interesting payload aboard the capsule (for the general public, anyways) was likely Mannequin Skywalker, the test dummy that first rode a New Shepard back in December. In addition to bearing a number of instruments that collect valuable data for Blue Origin, the dummy also serves as a reminder of Blue Origin’s ultimate goal — to send people into space — and with today’s successful test, the company moves one step closer to reaching that goal.

READ MORE: Blue Origin Will Push Its Rocket ‘to Its Limits’ With High-Altitude Emergency Abort Test Today [The Verge]

More on the New Shepard’s capsule: Blue Origin Just Released Images of Its Sleek Space Tourism Capsule

Editor’s note 7/20/18 at 11:15 AM: This piece originally mischaracterized the purpose of the test and the role of Mannequin Skywalker. It has been updated accordingly. We regret the error.


Facebook Needs Humans *And* Algorithms To Filter Hate Speech

“We really believed in social experiences. We really believed in protecting privacy. But we were way too idealistic. We did not think enough about the abuse cases,” Facebook COO Sheryl Sandberg admitted to NPR.

Yes, Facebook has a hate speech problem.

After all, how could it not? For many of Facebook’s 2 billion active users, the site is the center of the internet, the place to catch up on news and get updates from friends. That makes it a natural target for those looking to persuade, delude, or abuse others online. Users flood the platform with images and posts that exhibit racism, bigotry, and exploitation.

Facebook has not yet found a good strategy to deal with the deluge of hateful content. A recent investigation showed this to be true. A reporter from Channel 4 Dispatches in the United Kingdom went undercover with a Facebook contractor in Ireland and discovered a laundry list of failures, most notably that violent content stays on the site even after users flag it, and that “thousands” of posts were left unmoderated well after Facebook’s goal of a 24-hour turnaround. TechCrunch took a more charitable view of the investigation’s findings, but still found Facebook “hugely unprepared” for the messy business of moderation. In a letter to the investigation’s producer, Facebook vowed to quickly address the issues it highlighted.

Tech giants have primarily relied on human moderators to flag problematic posts, but they are increasingly turning to algorithms to do so. They’re convinced it’s the only way forward, even as they hire more humans to do the job AI isn’t ready to do itself.

The stakes are high; poorly moderating hate speech has tangible effects on the real world. UN investigators found that Facebook had failed to curb the outpouring of hate speech targeting Muslim minorities on its platform during a possible genocide in Myanmar. Meanwhile, countries in the European Union are closer to requiring Facebook to curb hate speech, especially hurtful posts targeting asylum seekers — Germany has proposed laws that would tighten regulations and could fine the social platform if it doesn’t follow them.

Facebook has been at the epicenter of the spread of hate speech online, but it is, of course, not the only digital giant to deal with this problem. Google has been working to keep videos promoting terrorism and hate speech off YouTube (but not fast enough, much to the chagrin of big-money advertisers whose ads showed up right before or during these videos). Back in December, Twitter started banning accounts associated with white nationalism as part of a wider crackdown on hate speech and abusive behavior on its platform. Google has also spent a ton of resources amassing an army of human moderators to clean up their platforms, while simultaneously working to train algorithms to help them out.

Keeping platforms free of hate speech is a truly gargantuan task. Every day, some 600,000 hours of new video are added to YouTube, and Facebook users upload some 350 million photos.

Most sites use algorithms in tandem with human moderators. These algorithms are trained by humans first to flag the content the company deems problematic. Human moderators then review what the algorithms flag — it’s a reactive approach, not a proactive one. “We’re developing AI tools that can identify certain classes of bad activity proactively and flag it for our team at Facebook,” CEO Mark Zuckerberg told Senator John Thune (R-SD) during his two-day grilling in front of Congress earlier this year, as Slate reported. Zuckerberg admitted that hate speech was too “linguistically nuanced” for AI at this point. He suspected it’ll get there in about five to ten years.
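That reactive loop, where a model flags and a person decides, is simple enough to sketch. The example below is an illustrative toy rather than Facebook’s pipeline: the classifier, the 0.8 threshold, and the sample posts are all invented for the demonstration.

```python
from collections import deque

REVIEW_THRESHOLD = 0.8  # hypothetical cutoff chosen by the platform
review_queue = deque()  # posts waiting for a human moderator

def classify(post):
    """Stand-in for a trained classifier returning P(policy violation)."""
    crude_lexicon = {"vermin", "kill"}  # placeholder for a learned model
    hits = sum(word in post.lower() for word in crude_lexicon)
    return min(1.0, 0.5 * hits)

def ingest(post):
    # Reactive moderation: the model only routes; a person makes the call.
    if classify(post) >= REVIEW_THRESHOLD:
        review_queue.append(post)

for post in ["Lovely sunset tonight", "They are vermin, kill them all"]:
    ingest(post)

print(list(review_queue))  # only the second post reaches a human reviewer
```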

But here’s the thing: There’s no one right way to eradicate hate speech and abusive behavior online. Tech companies, though, clearly want algorithms to do the job, with as little human input as possible. “The problem cannot be solved by humans and it shouldn’t be solved by humans,” Google’s chief business officer Philipp Schindler told Bloomberg News.

AI has become much better at picking out the hate speech and letting everything else go through, but it’s far from perfect. Earlier this month, Facebook’s hate speech filters decided that large sections of an image of the Declaration of Independence were hate speech, and redacted chunks of it from a Texas-based newspaper that posted it on July 4th. A Facebook moderator restored the full text a day later, with a hasty apology thrown in.

Part of the reason it’s so hard to get algorithms to talk like humans, or even debate human opponents effectively, is that algorithms still get caught up on context, nuance, and intent. Was that post sarcasm or straightforward commentary? The algorithm can’t really tell.

A lot of the tools used by Facebook’s moderation team shouldn’t even be referred to as “AI” in the first place, according to Daniel Faltesek, assistant professor of social media at Oregon State University. “Most systems we call AI are making a guess as to what users mean. A filter that blocks posts that use an offensive term is not particularly intelligent,” Faltesek tells Futurism.
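Faltesek’s point is easy to demonstrate with a few lines of deliberately dumb filtering. The blocklist and posts below are made up; the takeaway is that plain pattern matching flags a news report that merely quotes an offensive term and misses the same term with one character swapped.

```python
BLOCKLIST = {"slurword"}  # stand-in for a list of offensive terms

def naive_filter(post):
    """Blocks any post containing a listed term, with no sense of context."""
    text = post.lower()
    return any(term in text for term in BLOCKLIST)

posts = [
    "Protesters condemned a banner reading 'slurword' outside city hall",
    "you people are all slurw0rd",
]

for post in posts:
    print(naive_filter(post), "->", post)
# True  -> the news report gets blocked
# False -> the actual abuse sails through
```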

Effective AI would be able to highlight problematic content not just by scanning for a particular combination of letters, but by responding to users’ shifting sentiment and emotional cues. We don’t have that yet. So humans, it seems, will continue to be part of the solution — at least until AI can do it by itself. Google is planning on hiring more than 10,000 people this year alone, while Facebook wants to ramp up its human moderator army to 20,000 by the end of the year.

In a perfect world, every instance of hate speech would be thoroughly vetted. But Facebook’s user base is so enormous that even 20,000 human moderators wouldn’t be enough (with 2 billion people on the platform, each moderator would have to look after some 100,000 accounts, and moderation, as we’ve learned, is simply maddening work).

The thing that would work best according to Faltesek? Pairing up these algorithms with human moderators. It’s not all that different from what Facebook is working on right now, but it’s gotta keep humans involved. “There is an important role for human staff in reviewing the current function of systems, training new filters, and responding in high-context situations when automated systems fail,” says Faltesek. “The best world is one where people are empowered to do their best work with intelligent systems.”

There’s a trick to doing this well, a way for companies to maintain control of their platforms without scaring away users. “For many large organizations, false negatives are worse than false positives,” says Faltesek. “Once the platform becomes unpleasant it is hard to build up a pool of good will again.” After all, that’s what happened with MySpace, and it’s why you’re probably not on it anymore.

Hate speech on social media is a real problem with real consequences. Facebook now knows it can’t sit idly by and let it take over its platform. But whether human moderators paired with algorithms will be enough to quell the onslaught of hate on the internet is still very uncertain. Even an army of 20,000 human moderators may not be enough to turn the tide, but as of right now, pairing them with algorithms is the best shot the company has. And after what Zuckerberg called a “hard year” for the platform, with a lot of soul searching, now’s the best time to get it right.


Top AI Experts Vow They Won’t Help Create Lethal Autonomous Weapons

Hundreds of AI experts just signed a pledge vowing to never develop lethal autonomous weapons and calling for their global ban.

AI FOR GOOD. Artificial intelligence (AI) has the potential to save lives by predicting natural disasters, stopping human trafficking, and diagnosing deadly diseases. Unfortunately, it also has the potential to take lives. Efforts to design lethal autonomous weapons — weapons that use AI to decide on their own whether or not to attempt to kill a person — are already underway.

On Wednesday, the Future of Life Institute (FLI) — an organization focused on the use of tech for the betterment of humanity — released a pledge decrying the development of lethal autonomous weapons and calling on governments to prevent it.

“AI has huge potential to help the world — if we stigmatize and prevent its abuse,” said FLI President Max Tegmark in a press release. “AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way.”

JUST SIGN HERE. One hundred and seventy organizations and 2,464 individuals signed the pledge, committing to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.” Signatories of the pledge include OpenAI founder Elon Musk, Skype founder Jaan Tallinn, and leading AI researcher Stuart Russell.

The three co-founders of Google DeepMind (Demis Hassabis, Shane Legg, and Mustafa Suleyman) also signed the pledge. DeepMind is Google’s top AI research team, and the company recently saw itself in the crosshairs of the lethal autonomous weapons controversy for its work with the U.S. Department of Defense.

In June, Google vowed it would not renew that DoD contract, and later, it released new guidelines for its AI development, including a ban on building autonomous weapons. Signing the FLI pledge could further confirm the company’s revised public stance on lethal autonomous weapons.

ALL TALK? It’s not yet clear whether the pledge will actually lead to any definitive action. Twenty-six members of the United Nations have already endorsed a global ban on lethal autonomous weapons, but several major powers, including Russia, the United Kingdom, and the United States, have yet to get on board.

This also isn’t the first time AI experts have come together to sign a pledge against the development of autonomous weapons. However, this pledge does feature more signatories, and some of those new additions are pretty big names in the AI space (see: DeepMind).

Unfortunately, even if all the world’s nations agree to ban lethal autonomous weapons, that wouldn’t necessarily stop individuals or even governments from continuing to develop the weapons in secret. As we enter this new era in AI, it looks like we’ll have little choice but to hope the good players outnumber the bad.

READ MORE: Thousands of Top AI Experts Vow to Never Build Lethal Autonomous Weapons [Gizmodo]

More on autonomous weapons: Ungrateful Google Plebes Somehow Not Excited to Work on Military Industrial Complex Death Machines


Origami-Inspired Device Catches Fragile Sea Creatures Without Harming Them

Researchers have created a dodecahedron-shaped device for collecting marine life that may be too fragile for traditional methods.

OOPS! SORRY, SQUID. When researchers want to study fish and crustaceans, it’s pretty easy to collect a sample — tow a net behind a boat and you’re bound to capture a few. But collecting delicate deep-sea organisms such as squid and jellyfish isn’t so simple — the nets can literally shred the creatures’ bodies.

Now, Zhi Ern Teoh, a mechanical engineer from Harvard Microrobotics Laboratory, and his colleagues have developed a better way for scientists to collect these elusive organisms. They published their research in Science Robotics on Wednesday.

Image Credit: Wyss Institute at Harvard University

THE OLD WAY. Currently, researchers who want to collect delicate marine life have two ways to do it (that aren’t nets). The first is a detritus sampler, a tube-shaped device with round “doors” on either end. To capture a creature with this device, the operator must slide open the doors, manually position the tube over the creature, then quickly shut the doors before the creature escapes. According to the researchers’ paper, this positioning requires the operator to have a bit of skill. The second type of device is one that uses suction to pull a specimen through a tube into a storage bucket. This process can destroy delicate creatures.

THE BETTER WAY. To create a device that is both easy to use and unlikely to harm a specimen, Teoh looked to origami, the Japanese art of paper folding. He came up with a device with a body made of 3D-printed photopolymer and modeled after a dodecahedron, a shape with 12 identical flat faces. If you’ve ever played a board game with a 12-sided die, you’re already familiar with the shape of the device in the closed position; when open, it looks somewhat like a flat starfish.

Despite its large number of joints, Teoh’s device can open or close in a single motion using just a single rotational actuator, a motor that converts energy into a rotational force. Once the open device is in position right behind a specimen, the operator simply triggers the actuator, and the 12 sides of the dodecahedron fold around the creature and the water it’s in, though it doesn’t seal tightly enough to carry that water above the surface. The device was tested at depths of up to 700 meters, but it’s designed to withstand the pressure of “full ocean depth” — 11 kilometers, or 6.8 miles down, the researchers write.

Eventually, the researchers hope to update the device to include built-in cameras and touch sensors. They also think it has the potential to be useful for space missions, helping with off-world construction, but for now, they’re focused on collecting marine life without destroying it in the process.

READ MORE: Rotary-Actuated Folding Polyhedrons for Midwater Investigation of Delicate Marine Organisms [Science Robotics]

More on self-folding technology: DNA That Folds Like Origami Has Applications for Drug-Delivering Nanobots


EU Fines Google $5 Billion for Stifling Android’s Competition

The EU just fined Google $5 billion for violating antitrust laws designed to prevent market leaders from stifling competition.

THE FINE. On Wednesday, the European Commission (EC) — a group tasked with enforcing European Union (EU) laws — fined Google €4.34 billion ($5 billion) for breaking the EU’s antitrust laws in the mobile industry. This is the biggest antitrust fine the EU has ever levied at a company.

Regulators create antitrust laws to prevent companies from abusing their position of power to restrict competition. In a press release, the EU specifies three ways in which Google has done just that:

  • By only agreeing to license its app store (the Play Store) to phone manufacturers that agree to pre-install the Google Search and Chrome apps on their devices
  • By giving money to manufacturers and mobile network operators that agree to only pre-install Google’s search app on their devices
  • By preventing phone manufacturers that wanted to pre-install Google’s apps from selling any smart mobile devices running on an “Android fork” — a third-party-modified version of Android — that Google hadn’t approved

“In this way, Google has used Android as a vehicle to cement the dominance of its search engine,” Margrethe Vestager, the European Commissioner for Competition, said in the press release. “These practices have denied rivals the chance to innovate and compete on the merits. They have denied European consumers the benefits of effective competition in the important mobile sphere. This is illegal under EU antitrust rules.”

THE RESPONSE. After the EC announced it had fined Google, the company’s CEO, Sundar Pichai, penned a blog post in which he notes that Google plans to appeal the fine.

Pichai asserts that the EC failed to consider that Android does have at least one major competitor in the mobile market: Apple’s iOS. However, Apple doesn’t license iOS to other phone manufacturers — it only installs the system on its own devices — so the comparison doesn’t quite work.

Pichai also notes that users are free to uninstall Google’s apps at any time, which is entirely true — no one is forcing mobile owners to use the apps, though presumably at least some would use preinstalled apps simply for their convenience (for example, why bother to uninstall Chrome just to download and install Firefox?).

He also points out that manufacturers don’t have to agree to pre-install Google’s apps. Also true. What he doesn’t address is how not agreeing (and thereby missing out on the Google payout and losing the right to install the Play Store) might affect that manufacturer’s business.

THE BIG PICTURE. This isn’t the first fine the EU has lobbed at Google, but it is by far the biggest. Even for a company as massive as Google, $5 billion isn’t exactly pocket change; it represents about 40 percent of Google’s net profit in 2017, according to the Wall Street Journal.

If Google doesn’t pay the fine within 90 days, it’ll face an additional penalty: up to 5 percent of Alphabet’s average worldwide daily revenue for each day it’s late. Based on 2017’s totals, Alphabet brought in roughly $300 million per day, which would put the maximum late penalty at around $15 million a day. So if the appeal fails, Google will need to pay up or face the possibility of serious damage to its bottom line.
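A quick back-of-the-envelope check, using rounded, publicly reported 2017 figures for Alphabet as assumptions, shows how those numbers fit together:

```python
# Rough sanity check of the figures above. The revenue and profit numbers are
# rounded public figures for Alphabet in 2017, used here as assumptions.
fine = 5.0e9                   # the EC fine, in dollars
annual_revenue_2017 = 110.9e9  # Alphabet's 2017 revenue
net_profit_2017 = 12.7e9       # Alphabet's 2017 net income

daily_revenue = annual_revenue_2017 / 365
max_daily_penalty = 0.05 * daily_revenue

print(f"fine as a share of 2017 profit: {fine / net_profit_2017:.0%}")          # ~39%
print(f"average daily revenue: ${daily_revenue / 1e6:.0f} million")             # ~$304 million
print(f"maximum late penalty per day: ${max_daily_penalty / 1e6:.1f} million")  # ~$15.2 million
```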

READ MORE: Google Is Fined $5 Billion by EU in Android Antitrust Case [The Wall Street Journal]

More on antitrust laws: Experts Argue That We Need to Rethink How We Regulate Tech Companies


Here’s What It Takes to Get Kicked off Facebook (Hint: It’s A Lot)

Recently acquired Facebook documents reveal what it would take to get kicked off Facebook for hate speech or propaganda.

Alex Jones does it. So do Holocaust deniers. Yes, they all have committed the cardinal sin of what Mark Zuckerberg so generously calls “getting things wrong.” And, yet, they still haven’t gotten kicked off Facebook.

Recently, some people, including reporters invited to attend a Facebook event and Kara Swisher of Recode, have wondered why you’re allowed to keep your page even if you’re rampantly spewing misinformation (my words, not theirs).

Now we know why — internal Facebook guidelines recently acquired by Motherboard describe exactly what it would take to get kicked off Facebook.

The key takeaway: It takes a lot to get your account or page deleted by the admins.

Indeed, Zuckerberg seems committed to making Facebook a place to share your views, no matter how unsavory! In the Recode interview published on Wednesday, Zuck said that, instead of removing accounts or pages for the likes of InfoWars and Holocaust deniers, he would rather limit those pages’ reach and move them down on people’s news feeds.

So if you can get away with peddling dangerous and offensive misinformation on Facebook, what would it take to get your account shut down? To make it easy for you to go down in a blaze of glory, we’ve made you a checklist.

  • Post some real hateful stuff, and lots of it. According to those internal documents that Motherboard uncovered, Facebook has a strangely lenient “five strikes in rapid succession and you’re out, buster!” rule for pages.
  • Be an admin, or not. If you admin a page on Facebook, you won’t be deleted unless you get flagged for five separate instances of hate speech within 90 days. If you’re not an admin, that’s cool too: as long as at least 30 percent of the posts to a group or page within 90 days are found to violate Facebook’s user guidelines, it can be taken down just the same (the sketch after this list shows how those two thresholds might combine).
  • Spruce up your profile with photos and propaganda. If you’re trying to get Zuck’s attention, it might be time to change your profile pic to a photo of an identified leader of a hate group. Alternatively, you can also get your five strikes in by posting hateful misinformation or propaganda to your wall.
  • Not a hateful person? Try soliciting sex. What’s a non-bigot to do? Pages and groups that solicit sex are subject to the same five-strike policy for admin posts or 30 percent cap for user posts. Also, a page that provides two separate sources of information on meeting up for sex or submitting pornography (such as the page title and description, but not just one of the two) can be removed by Facebook.
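Taken together, the reported thresholds behave like a small rule set. The function below is a toy reconstruction built only from the numbers above, not Facebook’s actual enforcement code:

```python
# Toy reconstruction of the reported removal thresholds, for illustration only.
ADMIN_STRIKE_LIMIT = 5        # hate-speech strikes by admins within 90 days
VIOLATING_SHARE_LIMIT = 0.30  # share of member posts violating the guidelines

def page_gets_removed(admin_strikes_90d, violating_posts_90d, total_posts_90d):
    if admin_strikes_90d >= ADMIN_STRIKE_LIMIT:
        return True
    if total_posts_90d and violating_posts_90d / total_posts_90d >= VIOLATING_SHARE_LIMIT:
        return True
    return False

print(page_gets_removed(admin_strikes_90d=5, violating_posts_90d=0, total_posts_90d=200))   # True
print(page_gets_removed(admin_strikes_90d=2, violating_posts_90d=70, total_posts_90d=200))  # True (35%)
print(page_gets_removed(admin_strikes_90d=1, violating_posts_90d=10, total_posts_90d=200))  # False
```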

It goes without saying that we don’t recommend doing, you know, any of this. If you want to get off Facebook, we recommend simply logging off.

But these internal guidelines for page removal raise the question of what Facebook’s team considers hate speech or propaganda. If Facebook truly has teams dedicated to removing vile content from the web, how can your racist aunt possibly share so many thinly-veiled white supremacist memes every single day?

In a congressional hearing on Tuesday, Monika Bickert, Facebook’s head of global policy management, told the House Judiciary Committee that InfoWars’ page would be taken down “if they posted sufficient content that violated [Facebook’s] threshold,” according to Engadget. But she added that “sufficient” is defined pretty loosely, and that some of these violations are more or less severe than others.

What remains puzzling, then, is exactly how Facebook defines propaganda and hate speech, versus what it thinks is just opinionated people getting it wrong. And while any platform would be understandably reluctant to draw a line that limits people’s speech, no matter how loathsome, drawing a clear line in the sand right through all of these gray areas could keep a lot of people from getting hurt.

Update July 18, 2018 at 5:10 PM: In a follow up email to Swisher, Mark Zuckerberg clarified: “I personally find Holocaust denial deeply offensive, and I absolutely didn’t intend to defend the intent of people who deny that.” Read the rest of Zuckerberg’s statement here

More about Facebook’s attempts to fight misinformation: Facebook Wants To Make Its News More Credible With New Hires And Partnerships


Rolls-Royce Is Building Cockroach-Like Robots to Fix Plane Engines

Rolls Royce just unveiled the latest iteration of its cockroach-like robots designed to enter airplane engines and help repair them.

DEBUGGING. Typically, engineers want to get bugs out of their creations. Not so for the U.K. engineering firm (not the famed carmaker) Rolls-Royce — it’s looking for a way to get bugs into the aircraft engines it builds.

These “bugs” aren’t software glitches or even actual insects. They’re tiny robots modeled after the cockroach. On Tuesday, Rolls-Royce shared the latest developments in its research into cockroach-like robots at the Farnborough International Airshow.

ROBO-MECHANICS. Rolls-Royce believes these tiny insect-inspired robots will save engineers time by serving as their eyes and hands within the tight confines of an airplane’s engine. According to a report by The Next Web, the company plans to mount a camera on each bot to allow engineers to see what’s going on inside an engine without having to take it apart. Rolls-Royce thinks it could even train its cockroach-like robots to complete repairs.

“They could go off scuttling around reaching all different parts of the combustion chamber,” Rolls-Royce technology specialist James Cell said at the airshow, according to CNBC. “If we did it conventionally it would take us five hours; with these little robots, who knows, it might take five minutes.”

NOW MAKE IT SMALLER. Rolls-Royce has already created prototypes of the little bot with the help of robotics experts from Harvard University and the University of Nottingham. But they are still too large for the company’s intended use. The goal is to scale the roach-like robots down to stand about half an inch tall and weigh just a few ounces, which a Rolls-Royce representative told TNW should be possible within the next couple of years.

READ MORE: Rolls-Royce Is Developing Tiny ‘Cockroach’ Robots to Crawl in and Fix Airplane Engines [CNBC]

More on insect-inspired bots: Robo-Bees Can Infiltrate and Influence Insect Societies To Stop Them From Going Extinct


Alphabet Will Bring Its Balloon-Powered Internet to Kenya

Alphabet has inked a deal with a Kenyan telecom to bring its balloon-powered internet to rural and suburban parts of Kenya.

BADASS BALLOONS. In 2013, Google unveiled Project Loon, a plan to send a fleet of balloons into the stratosphere that could then beam internet service back down to people on Earth.

And it worked! Just last year, the project provided more than 250,000 Puerto Ricans with internet service in the wake of the devastation of Hurricane Maria. The company, now simply called Loon, was the work of X, an innovation lab originally nestled under Google but now a subsidiary of Google’s parent company, Alphabet. And it’s planning to bring its balloon-powered internet to Kenya.

EYES ON AFRICA. On Thursday, Loon announced a partnership with Telkom Kenya, Kenya’s third largest telecommunications provider. Starting next year, Loon balloons will soar high above the East African nation, sending 4G internet coverage down to its rural and suburban populations. This marks the first time Loon has inked a commercial deal with an African nation.

“Loon’s mission is to connect people everywhere by inventing and integrating audacious technologies,” Loon CEO Alastair Westgarth told Reuters. Telkom CEO Aldo Mareuse added, “We will work very hard with Loon, to deliver the first commercial mobile service, as quickly as possible, using Loon’s balloon-powered internet in Africa.”

INTERNET EVERYWHERE. The internet is such an important part of modern life that, back in 2016, the United Nations declared access to it a human right. And while you might have a hard time thinking about going even a day without internet access, more than half of the world’s population still can’t log on. In Kenya, about one-third of the population still lacks access.

Thankfully, Alphabet isn’t the only company working to get the world connected. SpaceX, Facebook, and SoftBank-backed startup Altaeros have their own plans involving satellites, drones, and blimps, respectively. Between those projects and Loon, the world wide web may finally be available to the entire world.

READ MORE: Alphabet to Deploy Balloon Internet in Kenya With Telkom in 2019 [Reuters]

More on Loon: Alphabet Has Officially Launched Balloons that Deliver Internet In Puerto Rico


Google and The UN Team Up To Study The Effects of Climate Change

Google agreed to work with UN Environment to create a platform that gives the world access to valuable environmental data.

WITH OUR POWERS COMBINED… The United Nations’ environmental agency has landed itself a powerful partner in the fight against climate change: Google. The tech company has agreed to partner with UN Environment to increase the world’s access to valuable environmental data. Specifically, the two plan to create a user-friendly platform that lets anyone, anywhere, access environmental data collected by Google’s vast network of satellites. The organizations announced their partnership at a UN forum focused on sustainable development on Monday.

FRESHWATER FIRST. The partnership will first focus on freshwater ecosystems, such as mountains, wetlands, and rivers. These ecosystems provide homes for an estimated 10 percent of our planet’s known species, and research has shown that climate change is causing a rapid loss in biodiversity. Google will use satellite imagery to produce maps and data on these ecosystems in real-time, making that information freely available to anyone via the in-development online platform. According to a UN Environment press release, this will allow nations and other organizations to track changes and take action to prevent or reverse ecosystem loss.

LOST FUNDING. Since President Trump took office, the United States has consistently decreased its contributions to global climate research funds. Collecting and analyzing satellite data is neither cheap nor easy, but Google is already doing it to power platforms such as Google Maps and Google Earth. Now, thanks to this partnership, people all over the world will have a way to access information to help combat the impacts of climate change. It seems the same data that lets you virtually visit the Eiffel Tower could help save our planet.

READ MORE: UN Environment and Google Announce Ground-Breaking Partnership to Protect Our Planet [UN Environment]

More on freshwater: Climate Change Is Acidifying Our Lakes and Rivers the Same Way It Does With Oceans


This Wearable Controller Lets You Pilot a Drone With Your Body

PUT DOWN THE JOYSTICK. If you’ve ever tried to pilot a drone, it’s probably taken a little while to do it well; each drone is a little different, and figuring out how to use its manual controller can take time. There seems to be no shortcut other than to suffer a crash landing or two.

Now, a team of researchers from the Swiss Federal Institute of Technology in Lausanne (EPFL) have created a wearable drone controller that makes the process of navigation so intuitive, it requires almost no thought at all. They published their research in the journal PNAS on Monday.

NOW, PRETEND YOU’RE A DRONE. To create their wearable drone controller, the researchers first needed to figure out how people wanted to move their bodies to control a drone. So they placed 19 motion-capture markers and various electrodes all across the upper bodies of 17 volunteers. Then, they asked each volunteer to watch simulated drone footage through virtual reality goggles. This let the volunteer feel like they were seeing through the eyes of a drone.

The researchers then asked the volunteers to move their bodies however they liked to mimic the drone as it completed five specific movements (for example, turning right or flying toward the ground). The markers and electrodes allowed the researchers to monitor those movements, and they found that most volunteers moved their torsos in a way simple enough to track using just four motion-capture markers.

With this information, the researchers created a wearable drone controller that could relay the user’s movements to an actual drone — essentially, they built a wearable joystick.
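The published system is far more sophisticated, but the core idea of a wearable joystick can be caricatured in a few lines. The linear mapping and the gains below are hypothetical, chosen purely for illustration rather than taken from the EPFL paper:

```python
import numpy as np

# Hypothetical mapping from torso pose to drone commands: lean forward to
# speed up, lean sideways to turn. The 30-degree range and gains are made up.
MAX_SPEED = 5.0     # meters per second at full forward lean
MAX_YAW_RATE = 1.0  # radians per second at full sideways lean

def torso_to_command(pitch_deg, roll_deg):
    """Map torso pitch/roll (estimated from four torso markers) to commands."""
    forward = float(np.clip(pitch_deg / 30.0, -1.0, 1.0)) * MAX_SPEED
    yaw = float(np.clip(roll_deg / 30.0, -1.0, 1.0)) * MAX_YAW_RATE
    return forward, yaw

# Leaning 15 degrees forward and 10 degrees to the right:
print(torso_to_command(15.0, 10.0))  # roughly (2.5, 0.33)
```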

PUTTING IT TO THE TEST. To test their wearable drone controller, the researchers asked 39 volunteers to complete a real (not virtual) drone course using either the wearable or a standard joystick. They found that volunteers wearing the suit outperformed those using the joystick in both learning time and steering abilities.

“Using your torso really gives you the feeling that you are actually flying,” lead author Jenifer Miehlbradt said in a press release. “Joysticks, on the other hand, are of simple design but mastering their use to precisely control distant objects can be challenging.”

IN THE FIELD. Miehlbradt envisions search and rescue crews using her team’s wearable drone controller. “These tasks require you to control the drone and analyze the environment simultaneously, so the cognitive load is much higher,” she told Inverse. “I think having control over the drone with your body will allow you to focus more on what’s around you.”

However, this greater sense of immersion in the drone’s environment might not be beneficial in all scenarios. Previous research has shown that piloting strike drones for the military can cause soldiers to experience significant levels of trauma, and a wearable like the EPFL team’s has the potential to exacerbate the problem.

While Miehlbradt told Futurism her team did not consider drone strikes while developing their drone suit, she speculates that such applications wouldn’t be a good fit.

“I think that, in this case, the ‘distance’ created between the operator and the drone by the use of a third-party control device is beneficial regarding posterior emotional trauma,” she said. “With great caution, I would speculate that our control approach — should it be used in such a case —  may therefore increase the risk of experiencing such symptoms.”

READ MORE: Drone Researchers Develop Genius Method for Piloting Using Body Movements [Inverse]

More on rescue drones: A Rescue Drone Saved Two Teen Swimmers on Its First Day of Deployment


This New Startup Is Making Chatbots Dumber So You Can Actually Talk to Them

A Spanish tech startup decided to ditch artificial intelligence to make its chatbot platform more approachable.

Tech giants have been trying to one-up each other to build the most intelligent chatbot out there. Chatbots can simply help you fill in forms, or take the form of fleshed-out digital personalities that hold meaningful conversations with you. Those with voice functions have come insanely close to mimicking human speech, right down to the inflections and the occasional “uhm” and “ah.”

And they’re much more common than you might think. In 2016, Facebook introduced Messenger Bots that businesses worldwide now use for simple tasks like ordering flowers, getting news updates in chat form, or getting information on flights from an airline. Millions of users are filling waiting lists to talk to an “emotional chatbot” on an app called Replika.

But there’s no getting around AI’s shortcomings. And for chatbots in particular, the frustration arises from a disconnect between the user’s intent or expectations, and the chatbot’s programmed abilities.

Take Facebook’s Project M. Sources believe Facebook’s (long forgotten) attempt at developing a truly intelligent chatbot never surpassed a 30 percent success rate, according to Wired — the remaining 70 percent of the time, human employees had to step in to solve tasks. Facebook billed the bot as all-knowing, but the reality was far less promising. It simply couldn’t handle pretty much any task it was asked to do by Facebook’s numerous users.

Admittedly, it takes a lot of resources to develop complex AI chatbots. Even Google Duplex, arguably the most advanced chatbot around today, is still limited to verifying business hours and making simple appointments. Still, users simply expect far more than what AI chatbots can actually do, and that gap tends to enrage them.

The tech industry isn’t giving up. Market researchers predict that chatbots will grow to become a $1 billion market by 2025.

But maybe they’re going about this all wrong. Maybe, instead of making more sophisticated chatbots, businesses should focus on what users really need in a chatbot, stripped down to its very essence.

Landbot, a one-year-old Spanish tech startup, is taking a different approach: it’s making a chatbot-builder for businesses that does the bare minimum, and nothing more. The small company landed $2.2 million in a single round of funding (it plans to use those funds primarily to expand its operations and cover the costs of relocating to tech innovation hub Barcelona).

“We started our chatbot journey using Artificial Intelligence technology but found out that there was a huge gap between user expectations and reality,” co-founder Jiaqi Pan tells TechCrunch. “No matter how well trained our chatbots were, users were constantly dropped off the desired flow, which ended up in 20 different ways of saying ‘TALK WITH A HUMAN’.”

Instead of creating advanced tech that could predict and analyze user prompts, Landbot decided to work on a simple user interface that allows businesses to create chat flows that link prompt and action, question and answer. It’s kind of like a chatbot flowchart builder. And the results are pretty positive: the company has seen healthy revenue growth, and the tool is used by hundreds of businesses in more than 50 countries, according to TechCrunch.
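In practice, a “chatbot flowchart” boils down to a data structure like the one below. This is a deliberately simplified sketch with made-up node names, not Landbot’s actual format:

```python
# A scripted chat flow as plain data: each node has a prompt and a fixed set
# of answers pointing to the next node. No AI, no intent guessing.
FLOW = {
    "start":   {"prompt": "Hi! What do you need?",
                "options": {"Order flowers": "flowers", "Talk to a human": "human"}},
    "flowers": {"prompt": "Roses or tulips?",
                "options": {"Roses": "done", "Tulips": "done"}},
    "human":   {"prompt": "Connecting you to a human...", "options": {}},
    "done":    {"prompt": "Great, your order is in!", "options": {}},
}

def run_flow(choices):
    node = "start"
    for choice in choices:
        print(FLOW[node]["prompt"])
        node = FLOW[node]["options"][choice]
    print(FLOW[node]["prompt"])

run_flow(["Order flowers", "Roses"])
# Hi! What do you need?
# Roses or tulips?
# Great, your order is in!
```

Because every path is spelled out in advance, the bot can never misread intent; the worst it can do is offer the “Talk to a human” branch.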

The world is obsessed with achieving perfect artificial intelligence, and the growing AI chatbot market is no different. So obsessed in fact, it’s driving users away — growing disillusionment, frustration, and rage are undermining tech companies’ efforts. And this obsession might be doing far more harm than good. It’s simple: people are happiest when they get the results they expect. Added complexity or lofty promises of “true AI” will end up pushing them away if it doesn’t actually end up helping them.

After all, sometimes less is more. Landbot and its customers are making it work with less.

Besides, listening to your customers can go a long way.

Now can you please connect me to a human?


WhatsApp Updates Controls in India in an Effort to Thwart Mob Violence

WhatsApp has announced plans to update how users forward content, presumably in an effort to address mob violence in India.

CHANGE IS COMING. Today, more than 1 billion people use the Facebook-owned messaging app WhatsApp to share messages, photos, and videos. With the tap of a button, they can forward a funny meme or send a party invite to groups of friends and family. They can also easily share “fake news,” rumors and propaganda disguised as legitimate information.

In India — the nation where people forward more WhatsApp content than anywhere else — WhatsApp-spread fake news is inciting mob violence and literally getting people killed. On Thursday, WhatsApp announced in a blog post that it plans to make several changes in an effort to prevent more violence.

Some of the changes will only apply to users in India. They will no longer see the “quick forward” button next to photos and videos that made that content particularly easy to send along quickly, without incorporating information about where it came from. They’ll also no longer be able to forward content to more than five chats at a time. In the rest of the world, the new limit for forwards will be 20 chats. The previous cap was 250.

THE ELEPHANT IN THE ROOM. Over the past two months, violent mobs have attacked two dozen people in India after WhatsApp users spread rumors that those people had abducted children. Some of those people even died from their injuries.

The Indian government has been pressuring WhatsApp to do something to address these recent bouts of violence; earlier on Thursday, India’s Ministry of Electronics and Information Technology threatened the company with legal action if it didn’t figure out some effective way to stop the mob violence.

The WhatsApp team, however, never mentions that violence is the reason for the changes in its blog post, simply asserting that the goal of the control changes is to maintain the app’s “feeling of intimacy” and “keep WhatsApp the way it was designed to be: a private messaging app.”

TRY, TRY AGAIN. This is WhatsApp’s third attempt in the last few weeks to address the spread of fake news in India. First, the company added a new label to the app to indicate that a message is a forward (and not original content from the sender). Then, it published full-page ads in Indian newspapers to educate the public on the best way to spot fake news.

Neither of those efforts has appeared to work, and it’s hard to believe the latest move will have the intended impact either. Each WhatsApp chat can include up to 256 people. That means a message forwarded to five chats (per the new limit) could still reach 1,280 people. And if those 1,280 people then forward the message to five chats, it’s not hard to see how fake news could still spread like wildfire across the nation.
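The arithmetic behind that worry takes only a few lines. This is a rough upper bound under the article’s assumptions (full 256-person chats, every recipient forwarding to all five allowed chats), not a model of real behavior:

```python
# Rough upper bound on how far one message can spread under the new limit,
# assuming full 256-person chats and that every recipient forwards onward.
CHAT_SIZE = 256
FORWARD_LIMIT = 5

reach = 1  # the original sender
for hop in range(1, 4):
    reach *= FORWARD_LIMIT * CHAT_SIZE
    print(f"after {hop} round(s) of forwarding: up to {reach:,} people")

# after 1 round(s) of forwarding: up to 1,280 people
# after 2 round(s) of forwarding: up to 1,638,400 people
# after 3 round(s) of forwarding: up to 2,097,152,000 people
```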

READ MORE: WhatsApp Launches New Controls After Widespread App-Fueled Mob Violence in India [The Washington Post]

More on fake news: Massive Study of Fake News May Reveal Why It Spreads so Easily


3 Reasons Why We Might Return to The Moon

We may soon see manned missions to the Moon again. Science, politics, and celestial cash grabs are at the forefront of why people want to go back.

Friday marks the 49th anniversary of the first time any human set foot on solid, extraterrestrial ground. The details are probably familiar: on July 20, 1969, Neil Armstrong and Buzz Aldrin became the first people to walk on the Moon. It’s a rare privilege, even now: only ten other people have landed on the Moon and gone out for a stroll.

Just over three years later, humans walked on the Moon for the last time. Changing political and economic priorities meant NASA would no longer focus on sending people to the Moon. After all, we had already planted a flag, confirmed that the Moon wasn’t made of cheese, and played some golf. What else is left?

Well, it just so turns out that we might be heading back out there — and soon. President Trump has insisted on resuming manned Moon missions, despite the fact that it doesn’t match the public or scientific community’s desires for a space program (no one is quite sure where his determination stems from, but it doesn’t seem to have much more substance than a whim).

But there are some other, real reasons that we might want to send someone to the Moon. There’s science to be done, and money to be made. Let’s dig a little deeper and see what might be bringing us back to our lunar neighbor.

1) Trump really wants it to happen.

Last December, President Trump signed a directive indicating that NASA would prioritize human exploration to the Moon and beyond. Just imagine: a human setting foot on the Moon! Accomplishing such an impossible feat would show the rest of the world that America is capable of great things, which would really assert our dominance on the international stage!

So, assuming that President Trump knows we won the space race 49 years ago (he knows, right? right?), there might be other reasons why Trump wants more people to go visit. Maybe it’s a display of national achievement, maybe it’s to develop economic or military advantages. Either way, the White House is pushing hard for that giant leap.

2) Cash money.

A rare isotope called helium-3 could help us produce clean and safe nuclear energy without giving off any hazardous or radioactive waste. And it just so happens that the Moon has loads of the stuff (so does Jupiter, but that’s a bit harder to reach).

While a helium nuclear fusion reactor does not yet exist, many expect that helium-3 could be the missing piece — and whoever secures the supply would unlock riches to rival Scrooge McDuck.

Two years ago, the federal government gave a private company its blessing to land on the Moon for the first time. Moon Express, which also plans to dump human ashes on the Moon (read: litter) for customers who want an unconventional cremation, has the ultimate goal of establishing a lunar mining colony. According to the company’s website, Expedition “Harvest Moon” plans to have a permanent research station up and running by 2021. At that point, it will begin extracting samples and raw materials to send back to Earth.

This could lead to more and (maybe) better research into the Moon’s history and makeup, especially since our supply of samples from the Apollo missions is so limited. But helium-3 is what Moon Express is really after. And they’re not the only ones: the Chinese government also has its eyes set on the Moon’s helium-3 supply.

In addition to opening space up to private mining operations, Trump has reached out to NASA in hopes that the agency’s technology could be used to launch mining rigs to the Moon and to asteroids.

But there’s a lot that needs to happen before the spacefaring equivalents of coal barons start selling space rocks. For instance, we need to figure out how to approach and land on an asteroid, and to set up at least semi-permanent bases and mining operations. But still, some companies are forging ahead.

3) Science! slash, practice for Mars.

The government, along with multiple space-interested billionaires, has some well-publicized plans to colonize Mars. Their reasons range from furthering scientific research, to exploring the cosmos for funsies, to saving humanity from, uh, something.

The Moon could play a vital role in those plans — as a practice off-world destination, and as a celestial truck stop along the way.

In February, Commerce Secretary Wilbur Ross said that setting up a colony on the Moon will be essential for future space exploration. In particular, he said, it could serve as a refueling station. His logic: the Moon’s gravity is far weaker than Earth’s, so a rocket that lands, refuels, and relaunches there spends much less energy climbing back out of a gravity well, leaving more propellant for the journey deeper into space.
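
To put rough numbers on that logic, here’s a back-of-the-envelope sketch. The masses and radii are standard textbook values; none of these figures come from Ross’s remarks or this article.

```python
import math

# Compare how hard it is to climb out of each body's gravity well.
# Standard textbook values; an illustrative aside, not from the article.
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

bodies = {
    "Earth": {"mass_kg": 5.972e24, "radius_m": 6.371e6},
    "Moon":  {"mass_kg": 7.342e22, "radius_m": 1.737e6},
}

for name, b in bodies.items():
    # Escape velocity: v = sqrt(2 * G * M / r)
    v_esc = math.sqrt(2 * G * b["mass_kg"] / b["radius_m"])
    print(f"{name}: escape velocity ~ {v_esc / 1000:.1f} km/s")

# Prints roughly 11.2 km/s for Earth and 2.4 km/s for the Moon. Since the
# kinetic energy required scales with velocity squared, leaving the Moon takes
# on the order of 20x less energy per kilogram, which is the appeal of
# refueling there.
```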

Some have also proposed using a Moon base as practice for a Martian settlement, since it would be much closer to Earth — Moon-dwellers would be only three days from Earth, while human Martians would be eight months from home.

NASA’s Gateway mission, as Time reported, could give rise to lunar settlements within the next ten years. Gateway would function as a space station in orbit around the Moon and serve as a jumping-off point for trips to and from the surface. The expected Gateway timeline is controversial even within NASA, however, as some feel that it’s far too optimistic about when we might actually see results.

There are still too many unknowns and hazards for people in space settlements for such a program to succeed today. Even trying to simulate a Mars colony on Earth led to several unforeseen mental strains and complications.

But either way, ongoing exploration and research missions continue to radically change our understanding of the Moon.

“Ten years ago we would have said that the Moon was completely dry,” Ryan Zeigler, NASA’s curator of lunar samples from the Apollo missions, told Futurism. “Over the past ten years, new instruments and new scientists have shown this to not be the case, and that has had profound effects on the models that predict how the Earth-Moon system has formed,” he added.

Of course, there are financial reasons at the forefront of the recent push for lunar exploration. But even if it’s just a pleasant side effect, we may get valuable new science out of these missions, too.

Read more about complications with NASA’s lunar plans: NASA Just Canceled Its Only Moon Rover Project. That’s Bad News for Trump’s Lunar Plans.


Malta Plans to Create the World’s First Decentralized Stock Exchange

Malta has announced plans to create the world's first decentralized stock exchange.

BLOCKCHAIN ISLAND. The tiny European nation of Malta is truly living up to its nickname of “Blockchain Island.” On Thursday, MSX (the innovation arm of the Malta Stock Exchange) announced a new partnership with Neufund, a blockchain-based equity fundraising platform, and Binance, one of the world’s biggest cryptocurrency exchanges. Their goal: create the first global stock exchange that’s both regulated and decentralized.

THE NEW SCHOOL. There are a lot of complex concepts at play here, so let’s break them down.

First, tokens. In the realm of cryptocurrency, a token is a digital asset on a blockchain, a ledger that records every time two parties trade an asset. A token can represent practically anything, from money to a vote in an election. Today, many blockchain startups raise funds by selling “equity tokens” through initial coin offerings (ICOs).

When a person buys one of these equity tokens, they are essentially buying a percentage ownership of the startup. They can later use an online platform known as a cryptocurrency exchange to sell the tokens or buy more from other investors at any time, quickly and fairly cheaply.
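
To make the ledger idea concrete, here’s a deliberately simplified sketch of an equity-token ledger: a fixed token supply, balances that change when two parties trade, and a history that records every transfer. It’s a toy illustration of the concepts only — there’s no cryptography, consensus, or decentralization here, and it is not how Neufund, Binance, or any real blockchain actually implements tokens.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Transfer:
    sender: str
    receiver: str
    amount: int  # number of equity tokens moved

@dataclass
class EquityTokenLedger:
    total_supply: int                          # tokens issued in the ICO
    balances: dict = field(default_factory=dict)
    history: List[Transfer] = field(default_factory=list)

    def transfer(self, sender: str, receiver: str, amount: int) -> None:
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient tokens")
        self.balances[sender] = self.balances.get(sender, 0) - amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        self.history.append(Transfer(sender, receiver, amount))  # the "ledger" part

    def ownership_share(self, holder: str) -> float:
        # An equity token is effectively a fractional ownership claim.
        return self.balances.get(holder, 0) / self.total_supply

# Hypothetical example: a startup issues 1,000,000 tokens; an investor buys 50,000.
ledger = EquityTokenLedger(total_supply=1_000_000, balances={"startup": 1_000_000})
ledger.transfer("startup", "investor_a", 50_000)
print(f"investor_a owns {ledger.ownership_share('investor_a'):.1%}")  # 5.0%
```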

Though various governments are starting to look into regulating tokens, the cryptocurrency realm is still largely unregulated, making it an enticing target for scammers.

THE OLD SCHOOL. Equity securities, also known as stocks, are similar to equity tokens. A person who buys stock in a company owns a percentage of that company. However, securities are not traded via 24-hour online exchanges — they’re bought and sold via stock exchanges, which are only open during certain hours. Navigating them often requires the help of a middleman, such as a broker or lawyer, which can be costly.

A government agency typically regulates a nation’s securities and stock exchanges — in the United States, that agency is the Securities and Exchange Commission (SEC). This regulation can protect investors from scams and ensure companies don’t try to swindle them.

TOKENIZED SECURITIES. Tokenized securities are a melding of these two worlds. They’re securities, and when they’re traded, a blockchain records the transaction. This combines the fast, cheap transactions associated with tokens with the protective oversight of securities.

Right now, there’s not a government-regulated, global platform hosting the trading of tokenized securities, and that’s the void the Malta team plans to fill with their decentralized stock exchange.

“We are thrilled to announce the partnerships with Malta Stock Exchange and Binance, that will ensure high liquidity to equity tokens issued on Neufund,” Zoe Adamovicz, CEO and Co-founder at Neufund, said in a press release. “It is the first time in history that security tokens can be offered and traded in a legally binding way.”

Experts estimate that the value of the world’s equity tokens could soar as high as $1 trillion by 2020. Malta’s project is still in the pilot stages, but if all the pieces for its decentralized stock exchange fall into place, the tiny European island could find itself at the center of that incredibly fruitful market.

READ MORE: Malta Paves the Way for a Decentralized Stock Exchange [TechCrunch]

More on tokens: Tokens Will Become the Foundation of a New Digital Economy


Most Of NASA’s Moon Rocks Remain Untouched By Scientists

We have only studied about 16 percent of the Moon rocks collected during the Apollo missions. NASA's Apollo curator keeps the rest in reserve for future generations.

Forty-nine years ago this Friday, Neil Armstrong and Buzz Aldrin became the first humans to set foot on the Moon. That day, they also became the first people to harvest samples from another celestial body and bring them back to Earth.

Over the course of the Apollo missions, astronauts collected about 2,200 individual samples weighing a total of 842 pounds (382 kg) for scientific study that continues today, NASA curator Ryan Zeigler told Futurism. Zeigler, who also conducts geochemical research, is responsible for overseeing NASA’s collection of space rocks from the Apollo missions, as well as those from Mars, asteroids, stars, and anywhere else other than Earth.

Scientists have only studied about 16 percent of all the Apollo samples by mass, Zeigler told Futurism. Within that 16 percent, just under one-third has been put on display; those samples, Zeigler noted, remain largely pristine. Another quarter was at least partially destroyed (on purpose) during NASA-approved research, and the rest has been analyzed in less destructive ways.

“Trying not to deplete the samples so that future scientists will still have the opportunity to work with them is definitely something we are considering,” says Zeigler. “Also, while I would consider the Apollo samples primarily a scientific resource (though as a scientist I am obviously biased), it is undeniable that these samples also have significant historic and cultural importance as well, and thus need to be preserved on those grounds, too.”

The cultural reasons to preserve moon rocks, Zeigler says, are harder to define. But it’s still important to make sure future scientists have enough space rocks left to work with, especially since we can’t fully predict the sorts of questions they’ll try to answer using the Apollo samples, or the technology that will be at their disposal.

“Every decade since the Apollo samples came back has seen significant advances in instrumentation that have allowed samples to be analyzed at higher levels of precision, or smaller spatial resolution,” Zeigler says. “Our understanding of the Moon, and really the whole solar system, has evolved considerably by continuing studies of the Apollo samples.”

“Our understanding of the Moon, and really the whole solar system, has evolved considerably by continuing studies of the Apollo samples.”

Over the last six years, Zeigler says, his curation team has received 351 requests for Apollo samples, which comes out to about 60 each year. Within those requests, scientists have asked for about 692 individual samples per year, most of which weigh one to two grams each. Even if the researchers don’t get everything they ask for, Zeigler says, most of the studies are at least partially approved, and he’s been loaning out about 525 samples every year. That comes out to just over 75 percent of what the scientists requested.
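
For anyone checking the arithmetic, the per-year and percentage figures follow directly from the numbers Zeigler quoted (a quick sketch using only the figures above):

```python
# Quick check of the figures above, using only the numbers Zeigler quoted.
requests_total, years = 351, 6
samples_requested_per_year = 692
samples_loaned_per_year = 525

print(requests_total / years)  # 58.5, which the article rounds to "about 60" requests per year
print(f"{samples_loaned_per_year / samples_requested_per_year:.1%}")  # 75.9%, i.e. "just over 75 percent"
```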

“So while it is true that significant scientific justification is required to get Apollo samples, and we (NASA, with the support of the planetary scientific community) are intentionally reserving a portion of the Apollo samples for future generations of scientists and scientific instruments to study, the samples are available to scientists around the world to study, and we are slowly lowering the percentage of material that is left,” Zeigler says.

Thankfully, about 84 percent of the Apollo samples are still untouched. That pretty much guarantees that the next generation of geologists and astronomers who try to decipher the Moon’s remaining secrets will have enough samples to fiddle with.

To read more on future lunar research, click here: Three Reasons Why We Might Return To The Moon


China Is Investing In Its Own Hyperloop To Clear Its Crowded Highways

Chinese state-backed companies just made huge investments in U.S.-based hyperloop startups. But will hyperloop solve China's stifling traffic problem?

GRIDLOCK. China’s largest cities are choking in traffic. Millions of cars on the road mean stifling levels of air pollution and astronomical commute times, especially during rush hour.

The latest move to address this urban traffic nightmare: Chinese state-backed companies are making heavy investments in U.S. hyperloop startups Arrivo and Hyperloop Transportation Technologies, lining up $1 billion and $300 million in credit respectively. It’s substantial financing that could put China ahead in the race to open the first full-scale hyperloop track.

MAG-LEV SLEDS. Both companies are planning something big, although their approaches differ in some key ways. Transport company Arrivo is focusing on relieving highway traffic by creating a separate track that allows cars to zip along at 200 miles per hour (320 km/h) on magnetically levitated sleds inside vacuum-sealed tubes (it’s not yet clear if this will be above ground or underground).

Arrivo’s exact plans to build a Chinese hyperloop system have not yet been announced, but co-founder Andrew Liu told Bloomberg that $1 billion in funding could be enough to build “as many as three legs of a commercial, citywide hyperloop system of 6 miles to 9 miles [9.5 to 14.4 km] per section.” The company hasn’t yet announced in which city it’ll be built.
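
Taking those figures at face value, a rough implied price tag per mile looks like this. It’s a back-of-the-envelope sketch only: Arrivo hasn’t published per-mile costs, and the $1 billion is a line of credit, not a confirmed construction budget.

```python
# Rough cost-per-mile implied by the article's figures. Purely illustrative;
# the $1 billion is credit, not a confirmed construction cost.
funding_usd = 1_000_000_000
legs = 3
miles_per_leg_low, miles_per_leg_high = 6, 9

total_miles_low = legs * miles_per_leg_low    # 18 miles
total_miles_high = legs * miles_per_leg_high  # 27 miles

print(f"${funding_usd / total_miles_high / 1e6:.0f}M-${funding_usd / total_miles_low / 1e6:.0f}M per mile")
# -> roughly $37M-$56M per mile
```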

Meanwhile, Hyperloop Transportation Technologies has already made up its mind about where it will plop down its first Chinese loop. It’s the familiar maglev-train-in-a-vacuum-tube design, but this time it’s passengers, not their cars, that will ride along at speeds of up to 750 mph (1,200 km/h). Most of the $300 million will go toward building a 6.2-mile (10 km) test track in Guizhou province. According to a press release, this marks the third commercial agreement for HyperloopTT, after Abu Dhabi and Ukraine earlier this year.

A PRICEY SOLUTION. Building a hyperloop is expensive, and this latest investment hints at just how expensive a single system could be in the end. But providing high-speed alternatives to car-based transport is only one of many ways to deal with the gridlock and traffic jams that plague urban centers. China, for instance, has attempted to tackle the problem by restricting driving times based on license plates, expanding bike-sharing networks, and even meshing ride-sharing data with smart traffic lights.

And according to a recent report by Chinese location-based services provider AutoNavi, those solutions seem to be working: a Quartz analysis of the data found that traffic declined by 12.5 and 9 percent in Hangzhou and Shenzhen respectively, even though their populations grew by 3 and 5 percent.

MO’ MONEY, MO’ PROBLEMS. There are more hurdles to overcome before hyperloop can make a real dent in China’s traffic. One is the cost of riding the system: if fares are priced too high (perhaps to cover astronomical infrastructure costs), adoption may remain too low to matter.

The capacity of a maglev train system would also have to accommodate China’s growing population centers. That’s not an easy feat: HyperloopTT’s capsules have to squeeze through a four-meter (13-foot) diameter tube and hold only 28 to 40 people at a time, and there are 3 million cars in Shenzhen alone.
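
Here’s what that capacity constraint looks like in rough numbers. The capsule size comes from the article; the headway (how often a capsule departs) is a pure assumption for illustration, not a HyperloopTT specification.

```python
# Back-of-the-envelope capacity estimate for the point above.
# Capsule capacity (28-40 people) comes from the article; the headway is an
# assumption for illustration only, not a HyperloopTT specification.
capsule_capacity = 40          # people per capsule (upper end of the article's range)
headway_seconds = 120          # assumed: one capsule every 2 minutes per direction

capsules_per_hour = 3600 / headway_seconds
passengers_per_hour = capsules_per_hour * capsule_capacity
print(f"{passengers_per_hour:.0f} passengers/hour per direction")  # ~1,200

# For comparison, the article notes ~3 million cars in Shenzhen alone, so a
# single line like this would absorb only a small slice of rush-hour demand.
```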

We don’t know yet whether China’s hyperloop investments will pay off and significantly reduce traffic in China’s urban centers. But bringing new innovations to transportation in massive and growing cities — especially when those new innovations are more environmentally friendly — is rarely a bad idea.


Federal Agencies Propose Major Changes to Endangered Species Act

A PROPOSAL. Species on the brink of extinction in the U.S. could soon have their government protections stripped from them.

On Thursday, the U.S. Fish and Wildlife Service (FWS) (the government agency that manages the U.S.’s fish, wildlife, and natural habitats) and the National Oceanic and Atmospheric Administration (NOAA) (a scientific government agency that studies the world’s oceans, major waterways, and atmosphere) proposed revisions to the Endangered Species Act, a law designed to empower the federal government to protect threatened or endangered species.

The agencies propose making changes to three sections of the ESA — Section 4, Section 4D, and Section 7 — and the full explanations of the proposed changes are available to the public via a trio of Federal Register notices. If you don’t have time to sift through all 118 pages of Register notices, though, here’s a breakdown of the changes that could have the biggest impact.

THERE’S ALWAYS MONEY IN THE PROTECTED LAND. One potentially major change centers on removing language designed to ensure regulators make decisions about species and habitats based solely on scientific factors, not economic ones.

The agencies propose removing “without reference to possible economic or other impacts of such determination” from the ESA because, they write, “there may be circumstances where referencing economic, or other impacts may be informative to the public.” As pointed out by The New York Times, this could make it easier for companies to obtain approval for potentially damaging construction projects, such as roads or oil pipelines.

Another major change centers on “threatened” species. These are currently defined as “any species which is likely to become endangered within the foreseeable future.”  But the proposal suggests giving the FWS the ability to define “foreseeable future” on a species-by-species basis. Today, threatened and endangered species receive more or less the same protections, but under the proposed changes, species newly classified as threatened wouldn’t automatically receive those protections.

PRAISE AND BACKLASH. The proposed changes quickly elicited an impassioned response from the public.

“For too long, the ESA has been used as a means of controlling lands in the West rather than actually focusing on species recovery,” Kathleen Sgamma, president of Western Energy Alliance, which lobbies on behalf of the oil and gas industry, told The New York Times. She added that she was hopeful the changes would “[help lift restrictions on] responsible economic activities on private and public lands.”

Environmental activists, however, see the changes as undercutting the purpose of the ESA: to protect endangered species.

“These proposals would slam a wrecking ball into the most crucial protections for our most endangered wildlife. If these regulations had been in place in the 1970s, the bald eagle and the gray whale would be extinct today,” Brett Hartl, government affairs director for the Center for Biological Diversity, a nonprofit focused on protecting endangered species, said in a statement.

“Allowing the federal government to turn a blind eye to climate change will be a death sentence for polar bears and hundreds of other animals and plants,” he added. “This proposal turns the extinction-prevention tool of the Endangered Species Act into a rubber stamp for powerful corporate interests.”

Members of the public have 60 days to share their thoughts on the proposed changes with the government, though it’s hard to say what impact that might have. Ultimately, if environmental advocates are right, the U.S. could soon see a dramatic increase in the number of animals that move from endangered to outright extinct.

READ MORE: Law That Saved the Bald Eagle Could Be Vastly Reworked [The New York Times]

More on the Endangered Species Act: The War for Endangered Species Has Begun

