Artificial Intelligence Is Automating Hollywood. Now, Art Can Thrive.

The next movie you watch could have been performed, edited, and recommended by artificial intelligence.

The next time you sit down to watch a movie, the algorithm behind your streaming service might recommend a blockbuster that was written by AI, performed by robots, and animated and rendered by a deep learning algorithm. An AI algorithm may have even read the script and suggested the studio buy the rights.

It’s easy to assume that technologies like algorithms and robots will make film industry jobs go the way of the factory worker and the customer service rep, and that artistic filmmaking is in its death throes. But that narrative doesn’t apply here — so far, artificial intelligence seems to have enhanced Hollywood’s creativity, not squelched it.

It’s true that some jobs and tasks are being rendered obsolete now that computers can do them better. The job requirements for a visual effects artist are no longer owning a beret and being good at painting backdrops; the industry now calls for engineers who are good at training deep learning algorithms to handle the mundane work, like manually smoothing out an effect or making a digital character look realistic. As a result, the creative artists who still work in the industry can spend less time hunched over a computer, meticulously editing frame by frame, and more time doing interesting things, explains Darren Hendler, who heads Digital Domain’s Digital Humans Group.

Just as computers freed animators from drawing every frame by hand, advanced algorithms can now render sophisticated visual effects automatically. In neither case did the animator lose their job.

“We find that a lot of the more manual, mundane jobs become easy targets [for AI automation] where we can have a system that can do that much, much quicker, freeing up those people to do more creative tasks,” Hendler tells Futurism.

“I think you see a lot of things where it becomes easier and easier for actors to play their alter egos, creatures, characters,” he adds. “You’re gonna see more where the actors are doing their performance in front of other actors, where they’re going to be mapped onto the characters later.”

Recently, Hendler and his team used artificial intelligence and other sophisticated software to turn Josh Brolin into Thanos for Avengers: Infinity War. More specifically, they used an AI algorithm trained on high-resolution scans of Brolin’s face to track his expressions down to individual wrinkles, then used another algorithm to automatically map the resulting face renders onto Thanos’ body before animators went in to add the finishing touches.

Such a process yields the best of both motion-capture worlds: the high resolution usually obtained by unwieldy camera setups, and the more nuanced performance an actor gives when surrounded by costars instead of alone in front of a green screen. And even though face mapping and swapping would normally take weeks, Digital Domain’s machine learning algorithm could do it in near real-time, which let the team set up a sort of digital mirror for Brolin.
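
Digital Domain hasn’t published the details of its pipeline, so the sketch below only illustrates the general pattern such systems tend to follow: learn a regression from tracked facial data to a character’s rig controls, then run new frames through it fast enough for a live preview. The shapes, layer sizes, and helper names here are illustrative assumptions, not the studio’s actual system.

```python
# Illustrative sketch only: learn to map tracked facial landmarks from an
# actor to a digital character's rig controls (e.g., blendshape weights).
# Shapes, sizes, and architecture are assumptions, not Digital Domain's.
import torch
import torch.nn as nn

NUM_LANDMARKS = 200    # tracked points on the actor's face (assumption)
NUM_RIG_CONTROLS = 80  # character blendshape/rig channels (assumption)

model = nn.Sequential(
    nn.Linear(NUM_LANDMARKS * 3, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, NUM_RIG_CONTROLS),
)

def train(landmark_frames, rig_frames, epochs=50):
    """landmark_frames: (N, NUM_LANDMARKS * 3) tensor of capture data.
    rig_frames: (N, NUM_RIG_CONTROLS) tensor of artist-authored poses."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(landmark_frames), rig_frames)
        loss.backward()
        opt.step()

def preview(landmarks):
    # Once trained, a single frame runs fast enough that an on-set
    # "digital mirror" of the kind described above becomes plausible.
    with torch.no_grad():
        return model(landmarks.view(1, -1))
```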

“When Josh Brolin came for the first day to set, he could already see what his character, his performance would look like,” says Hendler.

Yes, these algorithms can quickly accomplish tasks that used to require dedicated teams of people. But when used effectively, they can help bring out the best performances, the smoothest edits, and the most advanced visual effects possible today. And even though the most advanced (read: expensive) algorithms may be limited to major Disney blockbusters right now, Hendler suspects that they’ll someday become the norm.

“I think it’s going to be pretty widespread,” he says. “I think the tricky part is it’s such a different mindset and approach that people are just figuring out how to get it to work.” Still, as people become more familiar with training machines to do their animation for them, Hendler thinks it’s only logical that more and more applications for artificial intelligence in filmmaking will emerge.

“[Machine learning] hasn’t been widely adopted yet because filmmakers don’t fully understand it,” Hendler says. “But we’re starting to see elements of deep learning and machine learning incorporated into very specific areas. It’s something so new and so different from anything we’ve done in the past.”

But many such applications have already emerged. Last January, Kristen Stewart (yes, that one) directed a short film and partnered with an Adobe engineer to develop a new kind of neural network that re-rendered the film’s footage in the style of an impressionist painting Stewart had made.

Fine-tuning the value of u let the team change how much the film resembled an impressionist painting. Image Credit: 2017 Starlight Studios LLC & Kristen Stewart
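
Stewart and her collaborators described the technique in a short research paper; it’s a variant of neural style transfer. As a rough, generic illustration (not the Starlight/Adobe code), here’s a minimal Gatys-style sketch in PyTorch, where `style_weight` plays the role of the tunable strength parameter the caption mentions:

```python
# Generic neural style transfer sketch (after Gatys et al.), NOT the code
# used on the film. Raising style_weight makes each frame look more like
# the reference painting.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

def gram(features):
    # The Gram matrix of layer activations captures "style" as the
    # correlations between feature channels.
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def stylize(content, style, style_weight=1e5, steps=300):
    """content, style: normalized (1, 3, H, W) image tensors."""
    vgg = vgg19(pretrained=True).features.eval()
    for p in vgg.parameters():
        p.requires_grad_(False)
    taps = {1: "style", 6: "style", 11: "style", 20: "content"}

    def extract(x):
        feats = {"style": [], "content": []}
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in taps:
                feats[taps[i]].append(x)
        return feats

    c_feats, s_feats = extract(content), extract(style)
    out = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([out], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        feats = extract(out)
        c_loss = F.mse_loss(feats["content"][0], c_feats["content"][0])
        s_loss = sum(F.mse_loss(gram(a), gram(b))
                     for a, b in zip(feats["style"], s_feats["style"]))
        (c_loss + style_weight * s_loss).backward()
        opt.step()
    return out.detach()
```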

Meanwhile, Disney built robot acrobats that can be flung high into the air to perform stunts, with the footage later edited (perhaps by AI) so the robots look like the performers they’re standing in for. Now stunt performers get to rest easy and focus on the less dangerous parts of their job.

Artificial intelligence may soon move beyond even the performance and editing sides of the filmmaking process — it may soon weigh in on whether or not a film is made in the first place. A Belgian AI company called Scriptbook developed an algorithm that the company claims can predict whether or not a film will be commercially successful just by analyzing the screenplay, according to Variety.

Normally, script coverage is handled by a production house or agency’s hierarchy of executive assistants and interns, the latter of whom often don’t need to be paid under California law. To justify a $5,000 expenditure over sometimes-free human labor, Scriptbook claims that its algorithm is three times better at predicting box office success than human readers. The company also asserts that it would have advised Sony Pictures against making 22 of its biggest box office flops of the past three years, which would have saved the studio millions of dollars.

Scriptbook has yet to respond to Futurism’s multiple requests for information about its technology and the limitations of its algorithm (we will update this article if and when the company replies). But Variety reports that the AI system can predict an MPAA rating (R, PG-13, that sort of thing), detect who the characters are and what emotions they express, and predict a screenplay’s target audience. It can also determine whether a film will pass the Bechdel Test, a bare-minimum baseline for the representation of women in media, and whether it will feature a diverse cast of characters (though it’s worth noting that many scripts don’t specify a character’s race, and whitewashing can occur later in the process).
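
Scriptbook hasn’t disclosed how its model works, but a bare-bones version of the core idea (predict success from the screenplay text alone) can be sketched with off-the-shelf tools. The features, labels, and model below are placeholders for illustration, not the company’s method:

```python
# Bare-bones placeholder, not Scriptbook's model: score screenplays with
# TF-IDF bag-of-words features and logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def build_model(screenplays, hits):
    """screenplays: list of script texts.
    hits: 1 if the film was commercially successful, else 0 (this label
    definition is an assumption made for illustration)."""
    model = make_pipeline(
        TfidfVectorizer(max_features=50_000, ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(screenplays, hits)
    return model

# Usage: model = build_model(train_scripts, train_labels)
#        model.predict_proba([new_script])[0, 1]  # estimated success odds
```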

Human script readers can do all this too, of course. And human-written coverage often includes a comprehensive summary and recommendations as to whether or not a particular production house should pursue a particular screenplay based on its branding and audience. Given how much artificial intelligence struggles with emotional communication, it’s unlikely that Scriptbook can provide the same level of comprehensive analysis that a person could. But, to be fair, Scriptbook suggested to Variety that their system could be used to aid human readers, not replace them.

Getting some help from an algorithm could help people ground their coverage in some cold, hard data before they recommend one script over another. While it may seem like this takes a lot of the creative decision-making out of human hands, tools like Scriptbook can help studios make better financial choices — and it would be naïve to assume there was ever a time when they weren’t primarily motivated by the bottom line.

Hollywood’s automated future isn’t one that takes humans out of the frame (that is, unless powerful humans decide to do so). Rather, Hendler foresees a future where creative people get to keep on being creative as they work alongside the time-saving machines that will make their jobs simpler and less mundane.

“We’re still, even now, figuring out how we can apply [machine learning] to all the different problems,” Hendler says. “We’re gonna see some very, very drastic speed improvements in the next two, three years in a lot of the things that we do on the visual effects side, and hopefully quality improvements too.”

To read more about people using artificial intelligence to create special effects, see: The Oscar for Best Visual Effects Goes To: AI


EU Fines Google $5 Billion for Stifling Android’s Competition

The EU just fined Google $5 billion for violating antitrust laws designed to prevent market leaders from stifling competition.

THE FINE. On Wednesday, the European Commission (EC) — a group tasked with enforcing European Union (EU) laws — fined Google €4.34 billion ($5 billion) for breaking the EU’s antitrust laws in the mobile industry. This is the biggest antitrust fine the EU has ever levied against a company.

Regulators create antitrust laws to prevent companies from abusing their position of power to restrict competition. In a press release, the EU specifies three ways in which Google has done just that:

  • By only agreeing to license its app store (the Play Store) to phone manufacturers that agree to pre-install the Google Search and Chrome apps on their devices
  • By giving money to manufacturers and mobile network operators that agree to only pre-install Google’s search app on their devices
  • By preventing phone manufacturers that wanted to pre-install Google’s apps from selling any smart mobile devices running on an “Android fork” — a third-party-modified version of Android — that Google hadn’t approved

“In this way, Google has used Android as a vehicle to cement the dominance of its search engine,” Margrethe Vestager, the European Commissioner for Competition, said in the press release. “These practices have denied rivals the chance to innovate and compete on the merits. They have denied European consumers the benefits of effective competition in the important mobile sphere. This is illegal under EU antitrust rules.”

THE RESPONSE. After the EC announced it had fined Google, the company’s CEO, Sundar Pichai, penned a blog post in which he notes that Google plans to appeal the fine.

Pichai asserts that the EC failed to consider that Android does have at least one major competitor in the mobile market: Apple’s iOS. However, Apple doesn’t license iOS to other phone manufacturers — it only installs the system on its own devices — so the comparison doesn’t quite work.

Pichai also notes that users are free to uninstall Google’s apps at any time, which is entirely true — no one is forcing mobile owners to use the apps, though presumably at least some would use preinstalled apps simply for their convenience (for example, why bother to uninstall Chrome just to download and install Firefox?).

He also points out that manufacturers don’t have to agree to pre-install Google’s apps. Also true. What he doesn’t address is how not agreeing (and thereby missing out on the Google payout and losing the right to install the Play Store) might affect that manufacturer’s business.

THE BIG PICTURE. This isn’t the first fine the EU has lobbed at Google, but it is by far the biggest. Even for a company as massive as Google, $5 billion isn’t exactly pocket change; it represents about 40 percent of Google’s net profit in 2017, according to the Wall Street Journal.

If Google doesn’t pay the fine within 90 days, it’ll face an additional penalty: up to 5 percent of its worldwide average daily revenue for each day it’s late. Based on 2017’s totals, that daily revenue works out to roughly $300 million, putting the potential penalty around $15 million per day. If the appeal fails, Google will need to pay up or face the possibility of serious damage to its bottom line.
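
A quick back-of-the-envelope check of those numbers, assuming Alphabet’s reported 2017 revenue of roughly $110.9 billion:

```python
# Back-of-the-envelope math. ANNUAL_REVENUE is an assumption based on
# Alphabet's published 2017 figure of about $110.9 billion.
ANNUAL_REVENUE = 110.9e9  # USD

daily_revenue = ANNUAL_REVENUE / 365        # ~$304 million per day
max_daily_penalty = 0.05 * daily_revenue    # ~$15 million per day

print(f"daily revenue:     ${daily_revenue / 1e6:,.0f}M")
print(f"max daily penalty: ${max_daily_penalty / 1e6:,.0f}M")
```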

READ MORE: Google Is Fined $5 Billion by EU in Android Antitrust Case [The Wall Street Journal]

More on antitrust laws: Experts Argue That We Need to Rethink How We Regulate Tech Companies


Creating Genetically Modified Babies Is “Morally Permissible,” Says Ethics Committee

ETHICALLY ACCEPTABLE. We may have just moved one step closer to designer babies. On Tuesday, the Nuffield Council on Bioethics (NCB), an independent U.K.-based organization that analyzes and reports on ethical issues in biology and medicine, released a report focused on the social and ethical issues surrounding human genome editing and reproduction.

According to the report, editing human embryos, sperm, or eggs is “morally permissible” as long as the edit doesn’t jeopardize the welfare of the future person (the one born from the edited embryo) or “increase disadvantage, discrimination, or division in society.”

PROCEED WITH CAUTION. The NCB report doesn’t say we should only make edits to embryos for therapeutic reasons, meaning changes for cosmetic reasons are still on the table, ethically speaking. However, by no means does the report suggest we jump right into editing human embryos.

Before we get to that point, we must conduct further research to establish safety standards, according to the report. We’ll also need to publicly debate the use of the technology and consider its possible implications. Additionally, we’ll need to assess any potential risks to individuals, groups, or society in general, and figure out a system for monitoring and addressing any unforeseen adverse effects as they crop up.

After all that, gene-editing in humans will still need to be closely regulated by government agencies, and we’ll want to start by using it only in closely monitored clinical studies, says the NCB.

AN INFLUENTIAL VOICE. The NCB can’t actually write laws or establish any standards for the use of gene-editing in humans. However, the Council’s recommendations do carry some weight, with the BBC referring to the organization as “influential.”

So, while it may still be years before anyone gives birth to a “designer baby,” the mere fact that editing human embryos is getting the ethical green light from the NCB is a promising sign for anyone eager for the day gene-editing lets them create the offspring of their dreams.

READ MORE: UK Ethics Council Says It’s ‘Morally Permissible’ to Create Genetically Modified Babies [Gizmodo]

More on human embryo editing: Scientists Just Used Gene Editing to Remove a Fatal Blood Disorder From Human Embryos


Manufacturer Confirms Installing Remote-Access Software on U.S. Voting Machines

CHANGING ITS STORY. Election hacking is at the top of everyone’s mind right now, thanks to the controversy surrounding the 2016 Presidential election. But a new report by Motherboard suggests the issue is far from new.

On Tuesday, the outlet published an article in which it claims it obtained a letter sent by Election Systems and Software (ES&S) (the company responsible for manufacturing the majority of voting machines used in the U.S.) to Senator Ron Wyden (D-OR) in April. In the letter, ES&S reportedly admits that it sold election-management systems (EMSs) equipped with pcAnywhere, a software program typically used by system administrators as a way to access computers remotely, to “a small number of customers” between 2000 and 2006.

In a report published by the New York Times in February, the company denied any knowledge of selling machines containing this software.

DIRECT ACCESS. EMSs play an important role in the election process. Officials use the systems to tabulate the results of individual voting machines and, in some cases, to program them as well.

The systems ES&S equipped with pcAnywhere also contained modems to connect them to the internet (without such a connection, the remote-access software would be essentially useless). Motherboard discovered at least two instances of ES&S tech support staff accessing EMSs using the software — one in Pennsylvania in 2006 and another in Michigan in 2007.

BETTER OFFLINE. EMSs and voting machines are never supposed to connect to the internet (or to any machine that’s connected to the internet) for a pretty obvious reason: once online, the systems are susceptible to hacking. As Wyden told the Senate Rules Committee on Wednesday regarding the ES&S situation, “The only way to make this worse would be to leave unguarded ballot boxes in Moscow and Beijing.”

ES&S has remained pretty tight-lipped about the situation, declining to speak to Motherboard and refusing to answer many of Wyden’s questions.

Ultimately, these pcAnywhere-equipped systems are just one of many vulnerabilities in the U.S. voting system, and according to Wyden, there’s only one way to secure our elections from hacking: forget machines altogether and adopt a system in which every citizen votes via a paper ballot.

READ MORE: Top Voting Machine Vendor Admits It Installed Remote-Access Software on Systems Sold to States [Motherboard]

More on election fraud: A Bipartisan Group of U.S. Senators Has a Plan to Secure Future Elections


Origami-Inspired Device Catches Fragile Sea Creatures Without Harming Them

Researchers have created a dodecahedron-shaped device for collecting marine life that may be too fragile for traditional methods.

OOPS! SORRY, SQUID. When researchers want to study fish and crustaceans, it’s pretty easy to collect a sample — tow a net behind a boat and you’re bound to capture a few. But collecting delicate deep-sea organisms such as squid and jellyfish isn’t so simple — the nets can literally shred the creatures’ bodies.

Now, Zhi Ern Teoh, a mechanical engineer from Harvard Microrobotics Laboratory, and his colleagues have developed a better way for scientists to collect these elusive organisms. They published their research in Science Robotics on Wednesday.

Image Credit: Wyss Institute at Harvard University

THE OLD WAY. Currently, researchers who want to collect delicate marine life have two ways to do it (that aren’t nets). The first is a detritus sampler, a tube-shaped device with round “doors” on either end. To capture a creature with this device, the operator must slide open the doors, manually position the tube over the creature, then quickly shut the doors before the creature escapes. According to the researchers’ paper, this positioning requires the operator to have a bit of skill. The second type of device is one that uses suction to pull a specimen through a tube into a storage bucket. This process can destroy delicate creatures.

THE BETTER WAY. To create a device that is both easy to use and unlikely to harm a specimen, Teoh looked to origami, the Japanese art of paper folding. He came up with a device with a body made of 3D-printed photopolymer and modeled after a dodecahedron, a shape with 12 identical flat faces. If you’ve ever played a board game with a 12-sided die, you’re already familiar with the shape of the device in the closed position; when open, it looks somewhat like a flat starfish.

Despite its large number of joints, Teoh’s device can open or close in a single motion using just one rotational actuator, a motor that converts energy into a rotational force. Once the open device is positioned right behind a specimen, the operator simply triggers the actuator, and the 12 sides of the dodecahedron fold around the creature and the water it’s in, though the device doesn’t seal tightly enough to carry that water above the surface. It was tested at depths of up to 700 meters, but it’s designed to withstand the pressure of “full ocean depth,” the researchers write — 11 kilometers, or 6.8 miles down.
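
The paper’s single-actuator linkage isn’t reproduced here, but the target geometry follows from the shape itself: adjacent faces of a regular dodecahedron always meet at the same dihedral angle, which is part of what makes it possible for one motor to drive every hinge through an identical fold. A quick check of the numbers:

```python
# The dihedral angle of a regular dodecahedron is arccos(-1/sqrt(5)).
# Each hinge starts flat (180 degrees) and must close to that angle.
import math

dihedral = math.degrees(math.acos(-1 / math.sqrt(5)))  # ~116.57 degrees
fold_per_hinge = 180 - dihedral                        # ~63.43 degrees

print(f"dihedral angle: {dihedral:.2f} deg")
print(f"fold per hinge: {fold_per_hinge:.2f} deg")
```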

Eventually, the researchers hope to update the device to include built-in cameras and touch sensors. They also think it has the potential to be useful for space missions, helping with off-world construction, but for now, they’re focused on collecting marine life without destroying it in the process.

READ MORE: Rotary-Actuated Folding Polyhedrons for Midwater Investigation of Delicate Marine Organisms [Science Robotics]

More on self-folding technology: DNA That Folds Like Origami Has Applications for Drug-Delivering Nanobots


Blue Origin’s New Shepard Rocket Aces Important Test

Blue Origin launched another New Shepard rocket, successfully engaging its capsule's emergency motor after separation.

NAILED IT. Wednesday morning, Blue Origin launched a capsule-carrying New Shepard rocket from its facility in West Texas. The goal: to test the altitude escape motor, the motor that would engage if astronauts needed to make a fast getaway due to some unforeseen circumstances in space. Its effectiveness is essential to ensuring the safety of any people who might ride aboard the capsule in the future.

And the company appeared to nail the test.

After launch, the capsule separated from the rocket just like it should, and then its emergency motor ignited — just as it’s supposed to during an emergency situation. This propelled the capsule to an altitude of roughly 119 kilometers (74 miles), a record height for the company. After that, both the capsule and the rocket safely returned to Earth.

TESTING. TESTING. Blue Origin wasn’t the only company testing the waters with this New Shepard rocket launch; the rocket’s capsule contained a variety of devices and experiments for scientists and educators conducting microgravity research. One company sent up a system designed to provide reliable WiFi connectivity in space, while another added a number of textiles to the capsule so they could test their viability for use in space suits.

SPACE TOURISM. However, the most interesting payload aboard the capsule (for the general public, anyway) was likely Mannequin Skywalker, the test dummy that first rode a New Shepard back in December. The mannequin serves as a reminder of Blue Origin’s ultimate goal — to send people into space — and with today’s successful test, the company moves one step closer to reaching that goal.

READ MORE: Blue Origin Will Push Its Rocket ‘to Its Limits’ With High-Altitude Emergency Abort Test Today [The Verge]

More on the New Shepard’s capsule: Blue Origin Just Released Images of Its Sleek Space Tourism Capsule


Rolls-Royce Is Building Cockroach-Like Robots to Fix Plane Engines

Rolls-Royce just unveiled the latest iteration of its cockroach-like robots, designed to enter airplane engines and help repair them.

DEBUGGING. Typically, engineers want to get bugs out of their creations. Not so for the U.K. engineering firm (not the famed carmaker) Rolls-Royce — it’s looking for a way to get bugs into the aircraft engines it builds.

These “bugs” aren’t software glitches or even actual insects. They’re tiny robots modeled after the cockroach. On Tuesday, Rolls-Royce shared the latest developments in its research into cockroach-like robots at the Farnborough International Airshow.

ROBO-MECHANICS. Rolls-Royce believes these tiny insect-inspired robots will save engineers time by serving as their eyes and hands within the tight confines of an airplane’s engine. According to a report by The Next Web, the company plans to mount a camera on each bot to allow engineers to see what’s going on inside an engine without having to take it apart. Rolls-Royce thinks it could even train its cockroach-like robots to complete repairs.

“They could go off scuttling around reaching all different parts of the combustion chamber,” Rolls-Royce technology specialist James Cell said at the airshow, according to CNBC. “If we did it conventionally it would take us five hours; with these little robots, who knows, it might take five minutes.”

NOW MAKE IT SMALLER. Rolls-Royce has already created prototypes of the little bot with the help of robotics experts from Harvard University and the University of Nottingham. But they are still too large for the company’s intended use. The goal is to scale the roach-like robots down to stand about half an inch tall and weigh just a few ounces, which a Rolls-Royce representative told TNW should be possible within the next couple of years.

READ MORE: Rolls-Royce Is Developing Tiny ‘Cockroach’ Robots to Crawl in and Fix Airplane Engines [CNBC]

More on insect-inspired bots: Robo-Bees Can Infiltrate and Influence Insect Societies To Stop Them From Going Extinct


This Wearable Controller Lets You Pilot a Drone With Your Body

PUT DOWN THE JOYSTICK. If you’ve ever tried to pilot a drone, it’s probably taken a little while to do it well; each drone is a little different, and figuring out how to use its manual controller can take time. There seems to be no shortcut other than to suffer a crash landing or two.

Now, a team of researchers from the Swiss Federal Institute of Technology in Lausanne (EPFL) has created a wearable drone controller that makes the process of navigation so intuitive, it requires almost no thought at all. They published their research in the journal PNAS on Monday.

NOW, PRETEND YOU’RE A DRONE. To create their wearable drone controller, the researchers first needed to figure out how people wanted to move their bodies to control a drone. So they placed 19 motion-capture markers and various electrodes all across the upper bodies of 17 volunteers. Then, they asked each volunteer to watch simulated drone footage through virtual reality goggles. This let the volunteer feel like they were seeing through the eyes of a drone.

The researchers then asked the volunteers to move their bodies however they liked to mimic the drone as it completed five specific movements (for example, turning right or flying toward the ground). The markers and electrodes allowed the researchers to monitor those movements, and they found that most volunteers moved their torsos in a way simple enough to track using just four motion-capture markers.

With this information, the researchers created a wearable drone controller that could relay the user’s movements to an actual drone — essentially, they built a wearable joystick.
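
The EPFL team’s code isn’t reproduced here, but the core idea (read a handful of torso markers, reduce them to lean angles, map those angles to drone commands) fits in a few lines. The marker layout, axis conventions, and gain below are assumptions for illustration:

```python
# Illustrative only: marker layout, axes, and gain are assumptions,
# not the EPFL team's implementation.
import numpy as np

def torso_angles(shoulder_l, shoulder_r, chest, pelvis):
    """Each argument is a 3D marker position (x, y, z) in meters;
    z is up, y is forward."""
    # Sideways lean: vertical tilt of the shoulder line -> roll.
    s = shoulder_r - shoulder_l
    roll = np.arctan2(s[2], np.linalg.norm(s[:2]))
    # Forward lean: forward tilt of the chest-pelvis line -> pitch.
    spine = chest - pelvis
    pitch = np.arctan2(spine[1], spine[2])
    return pitch, roll

def drone_command(pitch, roll, gain=1.5):
    # Lean forward to speed up, lean sideways to bank.
    return {"pitch": gain * pitch, "roll": gain * roll}

markers = [np.array(m, dtype=float) for m in
           [(-0.20, 0.00, 1.50),   # left shoulder
            (0.20, 0.00, 1.45),    # right shoulder
            (0.00, 0.05, 1.30),    # chest
            (0.00, 0.00, 1.00)]]   # pelvis
print(drone_command(*torso_angles(*markers)))
```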

PUTTING IT TO THE TEST. To test their wearable drone controller, the researchers asked 39 volunteers to complete a real (not virtual) drone course using either the wearable or a standard joystick. They found that volunteers wearing the suit outperformed those using the joystick in both learning time and steering abilities.

“Using your torso really gives you the feeling that you are actually flying,” lead author Jenifer Miehlbradt said in a press release. “Joysticks, on the other hand, are of simple design but mastering their use to precisely control distant objects can be challenging.”

IN THE FIELD. Miehlbradt envisions search and rescue crews using her team’s wearable drone controller. “These tasks require you to control the drone and analyze the environment simultaneously, so the cognitive load is much higher,” she told Inverse. “I think having control over the drone with your body will allow you to focus more on what’s around you.”

However, this greater sense of immersion in the drone’s environment might not be beneficial in all scenarios. Previous research has shown that piloting strike drones for the military can cause soldiers to experience significant levels of trauma, and a wearable like the EPFL team’s has the potential to exacerbate the problem.

While Miehlbradt told Futurism her team did not consider drone strikes while developing their drone suit, she speculates that such applications wouldn’t be a good fit.

“I think that, in this case, the ‘distance’ created between the operator and the drone by the use of a third-party control device is beneficial regarding posterior emotional trauma,” she said. “With great caution, I would speculate that our control approach — should it be used in such a case —  may therefore increase the risk of experiencing such symptoms.”

READ MORE: Drone Researchers Develop Genius Method for Piloting Using Body Movements [Inverse]

More on rescue drones: A Rescue Drone Saved Two Teen Swimmers on Its First Day of Deployment


Google and The UN Team Up To Study The Effects of Climate Change

Google agreed to work with UN Environment to create a platform that gives the world access to valuable environmental data.

WITH OUR POWERS COMBINED… The United Nations’ environmental agency has landed itself a powerful partner in the fight against climate change: Google. The tech company has agreed to partner with UN Environment to increase the world’s access to valuable environmental data. Specifically, the two plan to create a user-friendly platform that lets anyone, anywhere, access environmental data collected by Google’s vast network of satellites. The organizations announced their partnership at a UN forum focused on sustainable development on Monday.

FRESHWATER FIRST. The partnership will first focus on freshwater ecosystems, such as mountains, wetlands, and rivers. These ecosystems provide homes for an estimated 10 percent of our planet’s known species, and research has shown that climate change is causing a rapid loss in biodiversity. Google will use satellite imagery to produce maps and data on these ecosystems in real-time, making that information freely available to anyone via the in-development online platform. According to a UN Environment press release, this will allow nations and other organizations to track changes and take action to prevent or reverse ecosystem loss.
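
The announcement doesn’t describe the platform’s internals, but Google already exposes satellite-derived data of this kind through its Earth Engine Python API. As a hedged sketch (the asset ID reflects version 1.0 of the JRC Global Surface Water dataset and newer releases may exist; the region is an arbitrary example), a freshwater query might look like this:

```python
# Sketch of querying satellite-derived surface water data via the
# Earth Engine Python API. Assumes you have authenticated with Earth
# Engine; the asset ID (v1.0 of JRC Global Surface Water) and region
# are assumptions for illustration.
import ee

ee.Initialize()

water = ee.Image("JRC/GSW1_0/GlobalSurfaceWater").select("occurrence")

# Mean surface-water occurrence (percent of observations with water)
# within 50 km of a point near Lake Turkana, Kenya.
region = ee.Geometry.Point(36.05, 3.5).buffer(50_000)
stats = water.reduceRegion(
    reducer=ee.Reducer.mean(), geometry=region, scale=300)
print(stats.getInfo())
```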

LOST FUNDING. Since President Trump took office, the United States has consistently decreased its contributions to global climate research funds. Collecting and analyzing satellite data is neither cheap nor easy, but Google is already doing it to power platforms such as Google Maps and Google Earth. Now, thanks to this partnership, people all over the world will have a way to access information to help combat the impacts of climate change. It seems the same data that lets you virtually visit the Eiffel Tower could help save our planet.

READ MORE: UN Environment and Google Announce Ground-Breaking Partnership to Protect Our Planet [UN Environment]

More on freshwater: Climate Change Is Acidifying Our Lakes and Rivers the Same Way It Does With Oceans


This New Startup Is Making Chatbots Dumber So You Can Actually Talk to Them

A Spanish tech startup decided to ditch artificial intelligence to make its chatbot platform more approachable.

Tech giants have been trying to one-up each other to make the most intelligent chatbot out there. Chatbots can help you fill in forms, or take the form of fleshed-out digital personalities that can hold meaningful conversations with you. Those with voice functions have come insanely close to mimicking human speech perfectly — the inflections, even the occasional “uhm” and “ah.”

And they’re much more common than you might think. In 2016, Facebook introduced Messenger Bots that businesses worldwide now use for simple tasks like ordering flowers, getting news updates in chat form, or getting information on flights from an airline. Millions of users are filling waiting lists to talk to an “emotional chatbot” on an app called Replika.

But there’s no getting around AI’s shortcomings. And for chatbots in particular, the frustration arises from a disconnect between the user’s intent or expectations, and the chatbot’s programmed abilities.

Take Facebook’s Project M. According to Wired, sources believe Facebook’s (long forgotten) attempt at developing a truly intelligent chatbot never surpassed a 30 percent success rate — the remaining 70 percent of the time, human employees had to step in to complete tasks. Facebook billed the bot as all-knowing, but the reality was far less promising: it simply couldn’t handle much of anything Facebook’s numerous users asked it to do.

Admittedly, it takes a lot of resources to develop complex AI chatbots. Even Google Duplex, arguably the most advanced chatbot around today, is still limited to verifying business hours and making simple appointments. Still, users expect far more than what AI chatbots can actually do, and that gap tends to leave them enraged.

The tech industry isn’t giving up. Market researchers predict that chatbots will grow to become a $1 billion market by 2025.

But maybe they’re going about this all wrong. Maybe, instead of making more sophisticated chatbots, businesses should focus on what users really need in a chatbot, stripped down to its very essence.

Landbot, a one-year-old Spanish tech startup, is taking a different approach: it’s making a chatbot-builder for businesses that does the bare minimum, and nothing more. The small company landed $2.2 million in a single round of funding (it plans to use those funds primarily to expand its operations and cover the costs of relocating to tech innovation hub Barcelona).

“We started our chatbot journey using Artificial Intelligence technology but found out that there was a huge gap between user expectations and reality,” co-founder Jiaqi Pan tells TechCrunch. “No matter how well trained our chatbots were, users were constantly dropped off the desired flow, which ended up in 20 different ways of saying ‘TALK WITH A HUMAN’.”

Instead of creating advanced tech that could predict and analyze user prompts, Landbot decided to work on a simple user interface that allows businesses to create chat flows that link prompt and action, question and answer. It’s kind of like a chatbot flowchart builder. And the results are pretty positive: the company has seen healthy revenue growth, and the tool is used by hundreds of businesses in more than 50 countries, according to TechCrunch.
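
A chat flow of the kind such a builder produces is, at bottom, just a state machine: each node says something and routes the user’s choice to the next node. Here’s a generic sketch of the pattern (not Landbot’s implementation):

```python
# Generic chat-flow state machine, not Landbot's implementation.
FLOW = {
    "start": {
        "say": "Hi! What can I help you with?",
        "options": {"Order flowers": "order", "Talk to a human": "human"},
    },
    "order": {
        "say": "Great! What's the delivery address?",
        "options": {},  # a free-text answer would end this demo branch
    },
    "human": {
        "say": "Connecting you to a person now.",
        "options": {},
    },
}

def run(flow, state="start"):
    while True:
        node = flow[state]
        print(node["say"])
        if not node["options"]:
            return
        for label in node["options"]:
            print(f"  [{label}]")
        choice = input("> ")
        # Unrecognized input re-prompts the same node instead of guessing.
        state = node["options"].get(choice, state)

# run(FLOW)  # uncomment to try it interactively
```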

The world is obsessed with achieving perfect artificial intelligence, and the growing AI chatbot market is no different. So obsessed, in fact, that it’s driving users away — growing disillusionment, frustration, and rage are undermining tech companies’ efforts. This obsession might be doing far more harm than good. It’s simple: people are happiest when they get the results they expect. Added complexity or lofty promises of “true AI” will end up pushing them away if it doesn’t actually end up helping them.

After all, sometimes less is more. Landbot and its customers are making it work with less.

Besides, listening to your customers can go a long way.

Now can you please connect me to a human?


Facebook Needs Humans *And* Algorithms To Filter Hate Speech

“We really believed in social experiences. We really believed in protecting privacy. But we were way too idealistic. We did not think enough about the abuse cases,” Facebook COO Sheryl Sandberg admitted to NPR.

Yes, Facebook has a hate speech problem.

After all, how could it not? For many of Facebook’s 2 billion active users, the site is the center of the internet, the place to catch up on news and get updates from friends. That makes it a natural target for those looking to persuade, delude, or abuse others online. Users flood the platform with images and posts that exhibit racism, bigotry, and exploitation.

Facebook has not yet found a good strategy to deal with the deluge of hateful content, as a recent investigation showed. A reporter from Channel 4 Dispatches in the United Kingdom went undercover with a Facebook contractor in Ireland and discovered a laundry list of failures, most notably that violent content stays on the site even after users flag it, and that “thousands” of posts were left unmoderated well past Facebook’s goal of a 24-hour turnaround. TechCrunch took a more charitable view of the investigation’s findings, but still found Facebook “hugely unprepared” for the messy business of moderation. In a letter to the investigation’s producer, Facebook vowed to quickly address the issues it highlighted.

Tech giants have primarily relied on human moderators to flag problematic posts, but they are increasingly turning to algorithms to do the job. They’re convinced it’s the only way forward, even as they hire more humans to do the work AI isn’t ready to handle itself.

The stakes are high; poorly moderating hate speech has tangible effects on the real world. UN investigators found that Facebook had failed to curb the outpouring of hate speech targeting Muslim minorities on its platform during a possible genocide in Myanmar. Meanwhile, countries in the European Union are closer to requiring Facebook to curb hate speech, especially hurtful posts targeting asylum seekers — Germany has proposed laws that would tighten regulations and could fine the social platform if it doesn’t follow them.

Facebook has been at the epicenter of the spread of hate speech online, but it is, of course, not the only digital giant to deal with this problem. Google has been working to keep videos promoting terrorism and hate speech off YouTube (but not fast enough, much to the chagrin of big-money advertisers whose ads showed up right before or during these videos). Back in December, Twitter started banning accounts associated with white nationalism as part of a wider crackdown on hate speech and abusive behavior on its platform. Google has also spent a ton of resources amassing an army of human moderators to clean up their platforms, while simultaneously working to train algorithms to help them out.

Keeping platforms free of hate speech is a truly gargantuan task. Every day, some 600,000 hours of new video are added to YouTube, and Facebook users upload some 350 million photos.

Most sites use algorithms in tandem with human moderators. These algorithms are trained by humans first to flag the content the company deems problematic. Human moderators then review what the algorithms flag — it’s a reactive approach, not a proactive one. “We’re developing AI tools that can identify certain classes of bad activity proactively and flag it for our team at Facebook,” CEO Mark Zuckerberg told Senator John Thune (R-SD) during his two-day grilling in front of Congress earlier this year, as Slate reported. Zuckerberg admitted that hate speech was too “linguistically nuanced” for AI at this point. He suspected it’ll get there in about five to ten years.

But here’s the thing: There’s no one right way to eradicate hate speech and abusive behavior online. Tech companies, though, clearly want algorithms to do the job, with as little human input as possible. “The problem cannot be solved by humans and it shouldn’t be solved by humans,” Google’s chief business officer Philipp Schindler told Bloomberg News.

AI has become much better at picking out the hate speech and letting everything else go through, but it’s far from perfect. Earlier this month, Facebook’s hate speech filters decided that large sections of an image of the Declaration of Independence were hate speech, and redacted chunks of it from a Texas-based newspaper that posted it on July 4th. A Facebook moderator restored the full text a day later, with a hasty apology thrown in.

Part of the reason it’s so hard to get algorithms to talk like humans, let alone debate human opponents effectively, is that they still get caught up on context, nuance, and recognition of intent. Was that a sarcastic comment or pure commentary? The algorithm can’t really tell.

A lot of the tools used by Facebook’s moderation team shouldn’t even be referred to as “AI” in the first place, according to Daniel Faltesek, assistant professor of social media at Oregon State University. “Most systems we call AI are making a guess as to what users mean. A filter that blocks posts that use an offensive term is not particularly intelligent,” Faltesek tells Futurism.

Effective AI would be able to highlight problematic content not just by scanning for a combination of letters, but instead respond to users’ shifting sentiment and emotional values. But we don’t have that yet. So humans, it seems, will continue to be a part of the solution — at least until AI can do it by itself. Google is planning on hiring more than 10,000 people this year alone, while Facebook wants to ramp up its human moderator army to 20,000 by the end of the year.

In a perfect world, every instance of hate speech would be thoroughly vetted. But Facebook’s user base is so enormous that even 20,000 human moderators wouldn’t be enough (with 2 billion people on the platform, each moderator would look after some 100,000 accounts — which, we’ve learned, is simply maddening work).

The thing that would work best, according to Faltesek? Pairing up these algorithms with human moderators. It’s not all that different from what Facebook is working on right now, but it’s gotta keep humans involved. “There is an important role for human staff in reviewing the current function of systems, training new filters, and responding in high-context situations when automated systems fail,” says Faltesek. “The best world is one where people are empowered to do their best work with intelligent systems.”
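
A minimal sketch of that pairing, with a placeholder for the scoring model and made-up thresholds, might look like this:

```python
# Placeholder thresholds and a stand-in scoring model; the real systems
# and numbers at Facebook are not public.
from collections import deque

human_review_queue = deque()

def triage(post, score, remove_at=0.95, review_at=0.60):
    """score: a model's estimated probability that `post` is hate speech."""
    if score >= remove_at:
        return "removed"                  # clear-cut: act automatically
    if score >= review_at:
        human_review_queue.append(post)   # ambiguous: escalate to a person
        return "queued_for_review"
    return "allowed"                      # low risk: let it through

print(triage("example post", 0.72))  # -> queued_for_review
```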

There’s a trick to doing this well, a way for companies to maintain control of their platforms without scaring away users. “For many large organizations, false negatives are worse than false positives,” says Faltesek. “Once the platform becomes unpleasant it is hard to build up a pool of good will again.” After all, that’s what happened with MySpace, and it’s why you’re probably not on it anymore.

Hate speech on social media is a real problem with real consequences. Facebook knows now that it can’t sit idly by and let it take over its platform. But whether human moderators paired with algorithms will be enough to quell the onslaught of hate on the internet is still very uncertain. Even an army of 20,000 human moderators can’t guarantee better odds, but as of right now, it’s the best shot the company has. And after what Zuckerberg called a “hard year” for the platform, with a lot of soul searching, now’s the best time to get it right.


Top AI Experts Vow They Won’t Help Create Lethal Autonomous Weapons

Hundreds of AI experts just signed a pledge vowing to never develop lethal autonomous weapons and calling for their global ban.

AI FOR GOOD. Artificial intelligence (AI) has the potential to save lives by predicting natural disasters, stopping human trafficking, and diagnosing deadly diseases. Unfortunately, it also has the potential to take lives. Efforts to design lethal autonomous weapons — weapons that use AI to decide on their own whether or not to attempt to kill a person — are already underway.

On Wednesday, the Future of Life Institute (FLI) — an organization focused on the use of tech for the betterment of humanity — released a pledge decrying the development of lethal autonomous weapons and calling on governments to prevent it.

“AI has huge potential to help the world — if we stigmatize and prevent its abuse,” said FLI President Max Tegmark in a press release. “AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way.”

JUST SIGN HERE. One hundred and seventy organizations and 2,464 individuals signed the pledge, committing to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.” Signatories of the pledge include OpenAI founder Elon Musk, Skype founder Jaan Tallinn, and leading AI researcher Stuart Russell.

The three co-founders of Google DeepMind (Demis Hassabis, Shane Legg, and Mustafa Suleyman) also signed the pledge. DeepMind is Google’s top AI research team, and the company recently saw itself in the crosshairs of the lethal autonomous weapons controversy for its work with the U.S. Department of Defense.

In June, Google vowed it would not renew that DoD contract, and later, it released new guidelines for its AI development, including a ban on building autonomous weapons. Signing the FLI pledge could further confirm the company’s revised public stance on lethal autonomous weapons.

ALL TALK? It’s not yet clear whether the pledge will actually lead to any definitive action. Twenty-six member states of the United Nations have already endorsed a global ban on lethal autonomous weapons, but several major powers, including Russia, the United Kingdom, and the United States, have yet to get on board.

This also isn’t the first time AI experts have come together to sign a pledge against the development of autonomous weapons. However, this pledge does feature more signatories, and some of those new additions are pretty big names in the AI space (see: DeepMind).

Unfortunately, even if all the world’s nations agree to ban lethal autonomous weapons, that wouldn’t necessarily stop individuals or even governments from continuing to develop the weapons in secret. As we enter this new era in AI, it looks like we’ll have little choice but to hope the good players outnumber the bad.

READ MORE: Thousands of Top AI Experts Vow to Never Build Lethal Autonomous Weapons [Gizmodo]

More on autonomous weapons: Ungrateful Google Plebes Somehow Not Excited to Work on Military Industrial Complex Death Machines


Alphabet Will Bring Its Balloon-Powered Internet to Kenya

Alphabet has inked a deal with a Kenyan telecom to bring its balloon-powered internet to rural and suburban parts of Kenya.

BADASS BALLOONS. In 2013, Google unveiled Project Loon, a plan to send a fleet of balloons into the stratosphere that could then beam internet service back down to people on Earth.

And it worked! Just last year, the project provided more than 250,000 Puerto Ricans with internet service in the wake of the devastation of Hurricane Maria. The company, now simply called Loon, was the work of X, an innovation lab originally nestled under Google but now a subsidiary of Google’s parent company, Alphabet. And it’s planning to bring its balloon-powered internet to Kenya.

EYES ON AFRICA. On Thursday, Loon announced a partnership with Telkom Kenya, Kenya’s third largest telecommunications provider. Starting next year, Loon balloons will soar high above the East African nation, sending 4G internet coverage down to its rural and suburban populations. This marks the first time Loon has inked a commercial deal with an African nation.

“Loon’s mission is to connect people everywhere by inventing and integrating audacious technologies,” Loon CEO Alastair Westgarth told Reuters. Telkom CEO Aldo Mareuse added, “We will work very hard with Loon, to deliver the first commercial mobile service, as quickly as possible, using Loon’s balloon-powered internet in Africa.”

INTERNET EVERYWHERE. The internet is such an important part of modern life that, back in 2016, the United Nations declared access to it a human right. And while you might have a hard time thinking about going even a day without internet access, more than half of the world’s population still can’t log on. In Kenya, about one-third of the population still lacks access.

Thankfully, Alphabet isn’t the only company working to get the world connected. SpaceX, Facebook, and SoftBank-backed startup Altaeros have their own plans involving satellites, drones, and blimps, respectively. Between those projects and Loon, the world wide web may finally be available to the entire world.

READ MORE: Alphabet to Deploy Balloon Internet in Kenya With Telkom in 2019 [Reuters]

More on Loon: Alphabet Has Officially Launched Balloons that Deliver Internet In Puerto Rico


