OOPS! SORRY, SQUID. When researchers want to study fish and crustaceans, it’s pretty easy to collect a sample — tow a net behind a boat and you’re bound to capture a few. But collecting delicate deep-sea organisms such as squid and jellyfish isn’t so simple — the nets can literally shred the creatures’ bodies.
Now, Zhi Ern Teoh, a mechanical engineer at the Harvard Microrobotics Laboratory, and his colleagues have developed a better way for scientists to collect these elusive organisms. They published their research in Science Robotics on Wednesday.
THE OLD WAY. Currently, researchers who want to collect delicate marine life have two ways to do it (that aren’t nets). The first is a detritus sampler, a tube-shaped device with round “doors” on either end. To capture a creature with this device, the operator must slide open the doors, manually position the tube over the creature, then quickly shut the doors before the creature escapes. According to the researchers’ paper, this positioning requires the operator to have a bit of skill. The second type of device is one that uses suction to pull a specimen through a tube into a storage bucket. This process can destroy delicate creatures.
THE BETTER WAY. To create a device that is both easy to use and unlikely to harm a specimen, Teoh looked to origami, the Japanese art of paper folding. He came up with a device with a body made of 3D-printed photopolymer, modeled after a dodecahedron, a shape with 12 identical flat faces. If you’ve ever played a board game with a 12-sided die, you’re already familiar with the shape of the device in the closed position; when open, it looks somewhat like a flat starfish.
The device was tested down to a depth of 700 meters, but it’s designed to withstand the pressure of “full ocean depth,” the researchers write (11 kilometers, or 6.8 miles deep). Despite its large number of joints, Teoh’s device can open or close in a single motion using just one rotational actuator, a motor that converts energy into a rotational force. Once the open device is positioned right behind a specimen, the operator simply triggers the actuator, and the 12 sides of the dodecahedron fold around the creature and the water it’s in, though it doesn’t seal tightly enough to carry that water above the surface.
Eventually, the researchers hope to update the device to include built-in cameras and touch sensors. They also think it has the potential to be useful for space missions, helping with off-world construction, but for now, they’re focused on collecting marine life without destroying it in the process.
The next time you sit down to watch a movie, the algorithm behind your streaming service might recommend a blockbuster that was written by AI, performed by robots, and animated and rendered by a deep learning algorithm. An AI algorithm may have even read the script and suggested the studio buy the rights.
It’s easy to assume that technology like algorithms and robots will do to the film industry what automation did to the factory worker and the customer service rep, and that artistic filmmaking is in its death throes. But that narrative doesn’t hold up: artificial intelligence seems to have enhanced Hollywood’s creativity, not squelched it.
It’s true that some jobs and tasks are being rendered obsolete now that computers can do them better. The job requirements for a visual effects artist are no longer owning a beret and being good at painting backdrops; the industry now calls for engineers who can train deep learning algorithms to do the mundane work, like manually smoothing out an effect or making a digital character look realistic. As a result, the creative artists who still work in the industry can spend less time hunched over a computer meticulously editing frame by frame and more time doing interesting work, explains Darren Hendler, who heads Digital Domain’s Digital Humans Group.
Just as computers freed animators from drawing every frame by hand, advanced algorithms can now automatically render complex visual effects. In neither case did the animator lose their job.
“We find that a lot of the more manual, mundane jobs become easy targets [for AI automation] where we can have a system that can do that much, much quicker, freeing up those people to do more creative tasks,” Hendler tells Futurism.
“I think you see a lot of things where it becomes easier and easier for actors to play their alter egos, creatures, characters,” he adds. “You’re gonna see more where the actors are doing their performance in front of other actors, where they’re going to be mapped onto the characters later.”
Recently, Hendler and his team used artificial intelligence and other sophisticated software to turn Josh Brolin into Thanos for Avengers: Infinity War. More specifically, they used an AI algorithm trained on high-resolution scans of Brolin’s face to track his expressions down to individual wrinkles, then used another algorithm to automatically map the resulting face renders onto Thanos’ body before animators went in to make some finishing touches.
Such a process yields the best of both motion capture worlds — the high resolution usually obtained by unwieldy camera setups and the more nuanced performance made possible by letting an actor perform surrounded by their costars, instead of alone in front of a green screen. And even though face mapping and swapping technology would normally take weeks, Digital Domain’s machine learning algorithm could do it in nearly real-time, which let them set up a sort of digital mirror for Brolin.
“When Josh Brolin came for the first day to set, he could already see what his character, his performance would look like,” says Hendler.
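Digital Domain hasn’t published the details of that pipeline, but the second stage it describes (driving a character’s face with an actor’s tracked expression weights) is commonly handled with blendshape retargeting. Below is a minimal, hypothetical sketch of that idea in Python; the 52-shape rig and array sizes are assumptions for illustration, not details from the Infinity War work.

```python
import numpy as np

def retarget_frame(actor_weights, character_blendshapes, character_neutral):
    """Re-pose the character's face using the actor's tracked expression weights.

    actor_weights         : (n_shapes,) activation of each tracked expression, 0..1
    character_blendshapes : (n_shapes, n_vertices, 3) per-shape vertex offsets
    character_neutral     : (n_vertices, 3) the character's neutral face mesh
    """
    # Weighted sum of the character's shape offsets, added to its neutral pose.
    offsets = np.tensordot(actor_weights, character_blendshapes, axes=1)
    return character_neutral + offsets

# Toy usage with random data standing in for real scans and tracked weights.
rng = np.random.default_rng(0)
neutral = rng.normal(size=(1000, 3))                  # character's neutral mesh
shapes = rng.normal(scale=0.01, size=(52, 1000, 3))   # hypothetical 52-shape rig
weights = rng.uniform(0.0, 1.0, size=52)              # one tracked frame from the actor
posed = retarget_frame(weights, shapes, neutral)
print(posed.shape)  # (1000, 3)
```

In production the weights would come from a model trained on high-resolution scans, and animators would still polish the result by hand, as the article notes.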
Yes, these algorithms can quickly accomplish tasks that used to require dedicated teams of people. But when used effectively they can help bring out the best performances, the smoothest edits, and the most advanced visual effects possible today. And even though the most advanced (read: expensive) algorithms may be limited to major Disney blockbusters right now, Hendler suspects that they’ll someday become the norm.
“I think it’s going to be pretty widespread,” he says. “I think the tricky part is it’s such a different mindset and approach that people are just figuring out how to get it to work.” Still, as people become more familiar with training machines to do their animation for them, Hendler thinks it’s only logical that more and more applications for artificial intelligence in filmmaking are about to emerge.
“[Machine learning] hasn’t been widely adopted yet because filmmakers don’t fully understand it,” Hendler says. “But we’re starting to see elements of deep learning and machine learning incorporated into very specific areas. It’s something so new and so different from anything we’ve done in the past.”
But many of these applications already have emerged. Last January, Kristen Stewart (yes, that one) directed a short film and partnered with an Adobe engineer to develop a new kind of neural network that edited their footage to look like an impressionist painting that Stewart made.
Fine-tuning the value of u let the team change how much the film resembled an impressionist painting. Image Credit: 2017 Starlight Studios LLC & Kristen Stewart
Meanwhile, Disney built robot acrobats that can be flung high into the air so that they can later be edited (perhaps by AI) to look like the performers and their stunt doubles. Now those actors get to rest easy and focus on the less dangerous parts of their job.
Artificial intelligence may soon move beyond even the performance and editing sides of the filmmaking process — it may soon weigh in on whether or not a film is made in the first place. A Belgian AI company called Scriptbook developed an algorithm that the company claims can predict whether or not a film will be commercially successful just by analyzing the screenplay, according to Variety.
Normally, script coverage is handled by a production house or agency’s hierarchy of executive assistants and interns, the latter of whom don’t need to be paid under California law. To justify the $5,000 expenditure over sometimes-free human labor, Scriptbook claims that its algorithm is three times better at predicting box office success than human readers. The company also asserts that it would have recommended that Sony Pictures not make 22 of its biggest box office flops over the past three years, which would have saved the production company millions of dollars.
Scriptbook has yet to respond to Futurism’s multiple requests for information about its technology and the limitations of its algorithm (we will update this article if and when the company replies). But Variety mentioned that the AI system can predict an MPAA rating (R, PG-13, that sort of thing), detect who the characters are and what emotions they express, and also predict a screenplay’s target audience. It can also determine whether or not a film will pass the Bechdel Test, a bare minimum baseline for representation of women in media. The algorithm can also determine whether or not the film will include a diverse cast of characters, though it’s worth noting that many scripts don’t specify a character’s race and whitewashing can occur later on in the process.
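Scriptbook hasn’t described how its system works, but one of the checks listed above, the Bechdel Test, is simple enough to sketch as a toy heuristic once a screenplay has been parsed into scenes and dialogue lines. The data format and word list below are invented for illustration and are not Scriptbook’s method.

```python
import re

# Crude stand-in for real coreference/topic analysis: a dialogue line "counts"
# only if it contains none of these male-referring words.
MALE_REFERENCE = re.compile(r"\b(he|him|his|boyfriend|husband|man|men)\b", re.IGNORECASE)

def passes_bechdel(scenes, female_characters):
    """True if two named women talk to each other about something other than a man."""
    for scene in scenes:
        for line in scene["lines"]:
            if (line["speaker"] in female_characters
                    and line["addressee"] in female_characters
                    and line["speaker"] != line["addressee"]
                    and not MALE_REFERENCE.search(line["text"])):
                return True
    return False

# Toy usage with an invented scene.
scenes = [{"lines": [
    {"speaker": "RIPLEY", "addressee": "LAMBERT",
     "text": "Check the oxygen scrubbers before we launch."},
]}]
print(passes_bechdel(scenes, {"RIPLEY", "LAMBERT"}))  # True
```

A real system would also have to work out who is speaking to whom and what the conversation is actually about, which is exactly the kind of nuance the next paragraph gets at.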
Human script readers can do all this too, of course. And human-written coverage often includes a comprehensive summary and recommendations as to whether or not a particular production house should pursue a particular screenplay based on its branding and audience. Given how much artificial intelligence struggles with emotional communication, it’s unlikely that Scriptbook can provide the same level of comprehensive analysis that a person could. But, to be fair, Scriptbook suggested to Variety that their system could be used to aid human readers, not replace them.
Getting some help from an algorithm could help people ground their coverage in some cold, hard data before they recommend one script over another. While it may seem like this takes a lot of the creative decision-making out of human hands, tools like Scriptbook can help studio houses make better financial choices — and it would be naïve to assume there was a time when they weren’t primarily motivated by the bottom line.
Hollywood’s automated future isn’t one that takes humans out of the frame (that is, unless powerful humans decide to do so). Rather, Hendler foresees a future where creative people get to keep on being creative as they work alongside the time-saving machines that will make their jobs simpler and less mundane.
“We’re still, even now, figuring out how we can apply [machine learning] to all the different problems,” Hendler says. “We’re gonna see some very, very drastic speed improvements in the next two, three years in a lot of the things that we do on the visual effects side, and hopefully quality improvements too.”
NAILED IT. Wednesday morning, Blue Origin launched a capsule-carrying New Shepard rocket from its facility in West Texas. The goal: to test the capsule’s high-altitude escape motor, the motor that would engage if astronauts needed to make a fast getaway due to some unforeseen circumstances in space. Its effectiveness is essential to ensuring the safety of any people who might ride aboard the capsule in the future.
And the company appeared to nail the test.
After launch, the capsule separated from the rocket just like it should, and then its emergency motor ignited — just as it’s supposed to during an emergency situation. This propelled the capsule to an altitude of roughly 119 kilometers (74 miles), a record height for the company. After that, both the capsule and the rocket safely returned to Earth.
TESTING. TESTING. Blue Origin wasn’t the only company testing the waters with this New Shepard rocket launch; the rocket’s capsule contained a variety of devices and experiments for scientists and educators conducting microgravity research. One company sent up a system designed to provide reliable WiFi connectivity in space, while another added a number of textiles to the capsule so they could test their viability for use in space suits.
SPACE TOURISM. However, the most interesting payload aboard the capsule (for the general public, anyways) was likely Mannequin Skywalker, the test dummy that first rode a New Shepard back in December. The mannequin serves as a reminder of Blue Origin’s ultimate goal — to send people into space — and with today’s successful test, the company moves one step closer to reaching that goal.
ETHICALLY ACCEPTABLE. We may have just moved one step closer to designer babies. On Tuesday, the Nuffield Council on Bioethics (NCB), an independent U.K.-based organization that analyzes and reports on ethical issues in biology and medicine, released a report focused on the social and ethical issues surrounding human genome editing and reproduction.
According to the report, editing human embryos, sperm, or eggs is “morally permissible” as long as the edit doesn’t jeopardize the welfare of the future person (the one born from the edited embryo) or “increase disadvantage, discrimination, or division in society.”
PROCEED WITH CAUTION. The NCB report doesn’t say we should only make edits to embryos for therapeutic reasons, meaning changes for cosmetic reasons are still on the table, ethically speaking. However, by no means does the report suggest we jump right into editing human embryos.
Before we get to that point, we must conduct further research to establish safety standards, according to the report. We’ll also need to publicly debate the use of the technology and consider its possible implications. Additionally, we’ll need to assess any potential risks to individuals, groups, or society in general, and figure out a system for monitoring and addressing any unforeseen adverse effects as they crop up.
After all that, gene-editing in humans will still need to be closely regulated by government agencies, and we’ll want to start by using it only in closely monitored clinical studies, says the NCB.
AN INFLUENTIAL VOICE. The NCB can’t actually write laws or establish any standards for the use of gene-editing in humans. However, the Council’s recommendations do carry some weight, with the BBC referring to the organization as “influential.”
So, while it may still be years before anyone gives birth to a “designer baby,” the mere fact that editing human embryos is getting the ethical green light from the NCB is a promising sign for anyone eager for the day gene-editing lets them create the offspring of their dreams.
Alex Jones does it. So do Holocaust deniers. Yes, they all have committed the cardinal sin of what Mark Zuckerberg so generously calls “getting things wrong.” And, yet, they still haven’t gotten kicked off Facebook.
Now we know why — internal Facebook guidelines recently acquired by Motherboard describe exactly what it would take to get kicked off Facebook.
The key takeaway: It takes a lot to get your account or page deleted by the admins.
Indeed, Zuckerberg seems committed to making Facebook a place to share your views, no matter how unsavory! In the Recode interview published on Wednesday, Zuck said that, instead of removing accounts or pages for the likes of InfoWars and Holocaust deniers, he would rather limit those pages’ reach and move them down on people’s news feeds.
So if you can get away with peddling dangerous and offensive misinformation on Facebook, what would it take to get your account shut down? To make it easy for you to go down in a blaze of glory, we’ve made you a checklist.
Post some real hateful stuff, and lots of it. According to those internal documents that Motherboard uncovered, Facebook has a strangely lenient “five strikes in rapid succession and you’re out, buster!” rule for pages.
Be an admin, or not. If you admin a page on Facebook, you won’t be deleted unless you get flagged for five separate instances of hate speech within 90 days. If you’re not, that’s cool too: as long as at least 30 percent of the posts to a group or page within 90 days are found to violate Facebook user guidelines, it can be taken down just the same.
Spruce up your profile with photos and propaganda. If you’re trying to get Zuck’s attention, it might be time to change your profile pic to a photo of an identified leader of a hate group. Alternatively, you can also get your five strikes in by posting hateful misinformation or propaganda to your wall.
Not a hateful person? Try soliciting sex. What’s a non-bigot to do? Pages and groups that solicit sex are subject to the same five-strike policy for admin posts or 30 percent cap for user posts. Also, a page that advertises hookups or pornography in two separate places (say, both the page title and the description, not just one of the two) can be removed by Facebook.
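For the curious, the two thresholds Motherboard describes, five admin strikes in 90 days or 30 percent of a page’s posts violating the guidelines in the same window, boil down to a few lines of logic. This is a rough sketch of the policy as reported, not Facebook’s actual code.

```python
from datetime import datetime, timedelta

def should_remove_page(admin_strike_times, post_log, now=None, window_days=90):
    """admin_strike_times: datetimes of admin posts flagged as hate speech.
    post_log: (datetime, violates_guidelines) pairs for every post on the page."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=window_days)

    # Rule 1: five admin strikes within the window.
    if sum(1 for t in admin_strike_times if t >= cutoff) >= 5:
        return True

    # Rule 2: at least 30 percent of recent posts violate the guidelines.
    recent = [violates for t, violates in post_log if t >= cutoff]
    return bool(recent) and sum(recent) / len(recent) >= 0.30

# Toy check: one violating post out of two recent posts trips the 30 percent rule.
print(should_remove_page([], [(datetime.utcnow(), True), (datetime.utcnow(), False)]))  # True
```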
It goes without saying that we don’t recommend doing, you know, any of this. If you want to get off Facebook, we recommend simply logging off.
But these internal guidelines for page removal raise the question of what Facebook’s team considers hate speech or propaganda. If Facebook truly has teams dedicated to removing vile content from the web, how can your racist aunt possibly share so many thinly-veiled white supremacist memes every single day?
In a congressional hearing on Tuesday, Monika Bickert, Facebook’s head of Global Policy Management, told the House Judiciary Committee that InfoWars’ page would be taken down “if they posted sufficient content that violated [Facebook’s] threshold,” according to Engadget. But she added that “sufficient” is defined pretty loosely, and that some of these violations are more or less severe than others.
What remains puzzling, then, is exactly how Facebook defines propaganda and hate speech, versus what it thinks is just opinionated people getting it wrong. And while any platform would be understandably reluctant to draw a line that limits people’s speech, no matter how loathsome, drawing a clear line in the sand right through all of these gray areas could keep a lot of people from getting hurt.
Update July 18, 2018 at 5:10 PM: In a follow-up email to Swisher, Mark Zuckerberg clarified: “I personally find Holocaust denial deeply offensive, and I absolutely didn’t intend to defend the intent of people who deny that.” Read the rest of Zuckerberg’s statement here.
AI FOR GOOD. Artificial intelligence (AI) has the potential to save lives by predicting natural disasters, stopping human trafficking, and diagnosing deadly diseases. Unfortunately, it also has the potential to take lives. Efforts to design lethal autonomous weapons — weapons that use AI to decide on their own whether or not to attempt to kill a person — are already underway.
On Wednesday, the Future of Life Institute (FLI) — an organization focused on the use of tech for the betterment of humanity — released a pledge decrying the development of lethal autonomous weapons and calling on governments to prevent it.
“AI has huge potential to help the world — if we stigmatize and prevent its abuse,” said FLI President Max Tegmark in a press release. “AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way.”
JUST SIGN HERE. One hundred and seventy organizations and 2,464 individuals signed the pledge, committing to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.” Signatories of the pledge include OpenAI co-founder Elon Musk, Skype co-founder Jaan Tallinn, and leading AI researcher Stuart Russell.
The three co-founders of Google DeepMind (Demis Hassabis, Shane Legg, and Mustafa Suleyman) also signed the pledge. DeepMind is Google’s top AI research team, and the company recently saw itself in the crosshairs of the lethal autonomous weapons controversy for its work with the U.S. Department of Defense.
In June, Google vowed it would not renew that DoD contract, and later, it released new guidelines for its AI development, including a ban on building autonomous weapons. Signing the FLI pledge could further confirm the company’s revised public stance on lethal autonomous weapons.
ALL TALK? It’s not yet clear whether the pledge will actually lead to any definitive action. Twenty-six members of the United Nations have already endorsed a global ban on lethal autonomous weapons, but several world powers, including Russia, the United Kingdom, and the United States, have yet to get on board.
This also isn’t the first time AI experts have come together to sign a pledge against the development of autonomous weapons. However, this pledge does feature more signatories, and some of those new additions are pretty big names in the AI space (see: DeepMind).
Unfortunately, even if all the world’s nations agree to ban lethal autonomous weapons, that wouldn’t necessarily stop individuals or even governments from continuing to develop the weapons in secret. As we enter this new era in AI, it looks like we’ll have little choice but to hope the good players outnumber the bad.
DEBUGGING. Typically, engineers want to get bugs out of their creations. Not so for the U.K. engineering firm (not the famed carmaker) Rolls-Royce — it’s looking for a way to get bugs into the aircraft engines it builds.
These “bugs” aren’t software glitches or even actual insects. They’re tiny robots modeled after the cockroach. On Tuesday, Rolls-Royce shared the latest developments in its research into cockroach-like robots at the Farnborough International Airshow.
ROBO-MECHANICS. Rolls-Royce believes these tiny insect-inspired robots will save engineers time by serving as their eyes and hands within the tight confines of an airplane’s engine. According to a report by The Next Web, the company plans to mount a camera on each bot to allow engineers to see what’s going on inside an engine without having to take it apart. Rolls-Royce thinks it could even train its cockroach-like robots to complete repairs.
“They could go off scuttling around reaching all different parts of the combustion chamber,” Rolls-Royce technology specialist James Cell said at the airshow, according to CNBC. “If we did it conventionally it would take us five hours; with these little robots, who knows, it might take five minutes.”
NOW MAKE IT SMALLER. Rolls-Royce has already created prototypes of the little bot with the help of robotics experts from Harvard University and the University of Nottingham. But they are still too large for the company’s intended use. The goal is to scale the roach-like robots down to stand about half an inch tall and weigh just a few ounces, which a Rolls-Royce representative told TNW should be possible within the next couple of years.
“We really believed in social experiences. We really believed in protecting privacy. But we were way too idealistic. We did not think enough about the abuse cases,” Facebook COO Sheryl Sandberg admitted to NPR.
Yes, Facebook has a hate speech problem.
After all, how could it not? For many of Facebook’s 2 billion active users, the site is the center of the internet, the place to catch up on news and get updates from friends. That makes it a natural target for those looking to persuade, delude, or abuse others online. Users flood the platform with images and posts that exhibit racism, bigotry, and exploitation.
Facebook has not yet found a good strategy to deal with the deluge of hateful content, as a recent investigation showed. A reporter from Channel 4 Dispatches in the United Kingdom went undercover with a Facebook contractor in Ireland and discovered a laundry list of failures, most notably that violent content stays on the site even after users flag it, and that “thousands” of posts were left unmoderated well past Facebook’s goal of a 24-hour turnaround. TechCrunch took a more charitable view of the investigation’s findings, but still found Facebook “hugely unprepared” for the messy business of moderation. In a letter to the investigation’s producer, Facebook vowed to quickly address the issues it highlighted.
Tech giants have primarily relied on human moderators to flag problematic posts, but they are increasingly turning to algorithms to do so. They’re convinced it’s the only way forward, even as they hire more humans to do the job AI isn’t ready to do itself.
The stakes are high; poorly moderating hate speech has tangible effects on the real world. UN investigators found that Facebook had failed to curb the outpouring of hate speech targeting Muslim minorities on its platform during a possible genocide in Myanmar. Meanwhile, countries in the European Union are closer to requiring Facebook to curb hate speech, especially hurtful posts targeting asylum seekers — Germany has proposed laws that would tighten regulations and could fine the social platform if it doesn’t follow them.
Facebook has been at the epicenter of the spread of hate speech online, but it is, of course, not the only digital giant dealing with this problem. Google has been working to keep videos promoting terrorism and hate speech off YouTube (but not fast enough, much to the chagrin of big-money advertisers whose ads showed up right before or during these videos). Back in December, Twitter started banning accounts associated with white nationalism as part of a wider crackdown on hate speech and abusive behavior on its platform. Google has also spent a ton of resources amassing an army of human moderators to clean up its platforms, while simultaneously working to train algorithms to help them out.
Keeping platforms free of hate speech is a truly gargantuan task: roughly 600,000 hours of new video are added to YouTube and some 350 million photos are uploaded to Facebook every day.
Most sites use algorithms in tandem with human moderators. These algorithms are trained by humans first to flag the content the company deems problematic. Human moderators then review what the algorithms flag — it’s a reactive approach, not a proactive one. “We’re developing AI tools that can identify certain classes of bad activity proactively and flag it for our team at Facebook,” CEO Mark Zuckerberg told Senator John Thune (R-SD) during his two-day grilling in front of Congress earlier this year, as Slate reported. Zuckerberg admitted that hate speech was too “linguistically nuanced” for AI at this point. He suspected it’ll get there in about five to ten years.
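That flag-then-review loop is easy to sketch. The keyword blocklist below is a crude stand-in for whatever trained classifier a platform actually uses; the point is the division of labor, with the model flagging and a human making the final call.

```python
from collections import deque

# Placeholder vocabulary; a real system would use a trained classifier instead.
BLOCKLIST = {"example_slur", "example_threat"}

def model_flags(post):
    """Crude stand-in for an ML model: flag posts containing blocklisted terms."""
    return any(term in post.lower() for term in BLOCKLIST)

review_queue = deque()

def ingest(post):
    # The model only flags; a human moderator decides whether to remove.
    if model_flags(post):
        review_queue.append(post)

def human_review(decide):
    removed = []
    while review_queue:
        post = review_queue.popleft()
        if decide(post):
            removed.append(post)
    return removed

ingest("photos from my vacation")
ingest("a post containing example_slur")
print(human_review(lambda post: True))  # ['a post containing example_slur']
```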
But here’s the thing: There’s no one right way to eradicate hate speech and abusive behavior online. Tech companies, though, clearly want algorithms to do the job, with as little human input as possible. “The problem cannot be solved by humans and it shouldn’t be solved by humans,” Google’s chief business officer Philipp Schindler told Bloomberg News.
AI has become much better at picking out hate speech and letting everything else through, but it’s far from perfect. Earlier this month, Facebook’s hate speech filters flagged portions of the Declaration of Independence as hate speech and removed them from the page of a Texas-based newspaper that had been posting the document around July 4th. A Facebook moderator restored the full text a day later, with a hasty apology thrown in.
Part of the reason it’s so hard to get algorithms to talk like humans, or even debate human opponents effectively, is that algorithms still get caught up on context, nuance, and intent. Was a comment sarcastic or purely commentary? The algorithm can’t really tell.
A lot of the tools used by Facebook’s moderation team shouldn’t even be referred to as “AI” in the first place, according to Daniel Faltesek, assistant professor of social media at Oregon State University. “Most systems we call AI are making a guess as to what users mean. A filter that blocks posts that use an offensive term is not particularly intelligent,” Faltesek tells Futurism.
Effective AI would be able to highlight problematic content not just by scanning for a combination of letters, but by responding to users’ shifting sentiment and emotional values. We don’t have that yet. So humans, it seems, will continue to be part of the solution — at least until AI can do it by itself. Google is planning on hiring more than 10,000 people this year alone, while Facebook wants to ramp up its human moderator army to 20,000 by the end of the year.
In a perfect world, every instance of hate speech would be thoroughly vetted. But Facebook’s user base is so enormous that even 20,000 human moderators wouldn’t be enough (since there are 2 billion people on the platform, each human moderator would have to look after some 100,000 accounts, which, we’ve learned, is simply maddening work).
The thing that would work best according to Faltesek? Pairing up these algorithms with human moderators. It’s not all that different from what Facebook is working on right now, but it’s gotta keep humans involved. “There is an important role for human staff in reviewing the current function of systems, training new filters, and responding in high-context situations when automated systems fail,” says Faltesek. “The best world is one where people are empowered to do their best work with intelligent systems.”
There’s a trick to doing this well, a way for companies to maintain control of their platforms without scaring away users. “For many large organizations, false negatives are worse than false positives,” says Faltesek. “Once the platform becomes unpleasant it is hard to build up a pool of good will again.” After all, that’s what happened with MySpace, and it’s why you’re probably not on it anymore.
Hate speech on social media is a real problem with real consequences. Facebook now knows it can’t sit idly by and let it take over its platform. But whether human moderators paired with algorithms will be enough to quell the onslaught of hate on the internet is still very uncertain. Even an army of 20,000 human moderators may not be enough to improve the odds, but as of right now, it’s the best shot the company has. And after what Zuckerberg called a “hard year” for the platform, with a lot of soul searching, now’s the best time to get it right.
PUT DOWN THE JOYSTICK. If you’ve ever tried to pilot a drone, it’s probably taken a little while to do it well; each drone is a little different, and figuring out how to use its manual controller can take time. There seems to be no shortcut other than to suffer a crash landing or two.
Now, a team of researchers from the Swiss Federal Institute of Technology in Lausanne (EPFL) have created a wearable drone controller that makes the process of navigation so intuitive, it requires almost no thought at all. They published their research in the journal PNAS on Monday.
NOW, PRETEND YOU’RE A DRONE. To create their wearable drone controller, the researchers first needed to figure out how people wanted to move their bodies to control a drone. So they placed 19 motion-capture markers and various electrodes all across the upper bodies of 17 volunteers. Then, they asked each volunteer to watch simulated drone footage through virtual reality goggles. This let the volunteer feel like they were seeing through the eyes of a drone.
The researchers then asked the volunteers to move their bodies however they liked to mimic the drone as it completed five specific movements (for example, turning right or flying toward the ground). The markers and electrodes allowed the researchers to monitor those movements, and they found that most volunteers moved their torsos in a way simple enough to track using just four motion-capture markers.
With this information, the researchers created a wearable drone controller that could relay the user’s movements to an actual drone — essentially, they built a wearable joystick.
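EPFL hasn’t released its control code, but the basic mapping (estimate the torso’s lean from a handful of markers, then turn that lean into a flight command) can be sketched in a few lines. The axis conventions, marker positions, and gains below are assumptions for illustration, not the published implementation.

```python
import numpy as np

# Assumed axes: x = right, y = forward, z = up.
def torso_attitude(l_shoulder, r_shoulder, l_hip, r_hip):
    """Estimate torso pitch (lean forward/back) and roll (lean left/right) in radians."""
    shoulders = (np.asarray(l_shoulder) + np.asarray(r_shoulder)) / 2.0
    hips = (np.asarray(l_hip) + np.asarray(r_hip)) / 2.0
    spine = shoulders - hips                # vector from hip midpoint up to shoulder midpoint
    pitch = np.arctan2(spine[1], spine[2])  # forward lean
    roll = np.arctan2(spine[0], spine[2])   # sideways lean
    return pitch, roll

def drone_command(pitch, roll, k_speed=2.0, k_turn=1.5):
    """Turn torso lean into a (forward speed in m/s, turn rate in rad/s) command."""
    return k_speed * pitch, k_turn * roll

# Toy example: an upright torso vs. a slight lean forward and to the right.
upright = torso_attitude([-0.2, 0, 1.5], [0.2, 0, 1.5], [-0.15, 0, 1.0], [0.15, 0, 1.0])
leaning = torso_attitude([-0.1, 0.1, 1.45], [0.3, 0.1, 1.45], [-0.15, 0, 1.0], [0.15, 0, 1.0])
print(drone_command(*upright))   # no lean: zero speed, zero turn
print(drone_command(*leaning))   # small positive forward speed and turn rate
```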
PUTTING IT TO THE TEST. To test their wearable drone controller, the researchers asked 39 volunteers to complete a real (not virtual) drone course using either the wearable or a standard joystick. They found that volunteers wearing the suit outperformed those using the joystick in both learning time and steering abilities.
“Using your torso really gives you the feeling that you are actually flying,” lead author Jenifer Miehlbradt said in a press release. “Joysticks, on the other hand, are of simple design but mastering their use to precisely control distant objects can be challenging.”
IN THE FIELD. Miehlbradt envisions search and rescue crews using her team’s wearable drone controller. “These tasks require you to control the drone and analyze the environment simultaneously, so the cognitive load is much higher,” she told Inverse. “I think having control over the drone with your body will allow you to focus more on what’s around you.”
However, this greater sense of immersion in the drone’s environment might not be beneficial in all scenarios. Previous research has shown that piloting strike drones for the military can cause soldiers to experience significant levels of trauma, and a wearable like the EPFL team’s has the potential to exacerbate the problem.
While Miehlbradt told Futurism her team did not consider drone strikes while developing their drone suit, she speculates that such applications wouldn’t be a good fit.
“I think that, in this case, the ‘distance’ created between the operator and the drone by the use of a third-party control device is beneficial regarding posterior emotional trauma,” she said. “With great caution, I would speculate that our control approach — should it be used in such a case — may therefore increase the risk of experiencing such symptoms.”
WITH OUR POWERS COMBINED… The United Nations’ environmental agency has landed itself a powerful partner in the fight against climate change: Google. The tech company has agreed to partner with UN Environment to increase the world’s access to valuable environmental data. Specifically, the two plan to create a user-friendly platform that lets anyone, anywhere, access environmental data collected by Google’s vast network of satellites. The organizations announced their partnership at a UN forum focused on sustainable development on Monday.
FRESHWATER FIRST. The partnership will first focus on freshwater ecosystems, such as mountains, wetlands, and rivers. These ecosystems provide homes for an estimated 10 percent of our planet’s known species, and research has shown that climate change is causing a rapid loss in biodiversity. Google will use satellite imagery to produce maps and data on these ecosystems in real-time, making that information freely available to anyone via the in-development online platform. According to a UN Environment press release, this will allow nations and other organizations to track changes and take action to prevent or reverse ecosystem loss.
LOST FUNDING. Since President Trump took office, the United States has consistently decreased its contributions to global climate research funds. Collecting and analyzing satellite data is neither cheap nor easy, but Google is already doing it to power platforms such as Google Maps and Google Earth. Now, thanks to this partnership, people all over the world will have a way to access information to help combat the impacts of climate change. It seems the same data that lets you virtually visit the Eiffel Tower could help save our planet.
Tech giants have been trying to one-up each other to make the most intelligent chatbot out there. Chatbots can help you simply fill in forms, or take the form of fleshed-out digital personalities that can have meaningful conversations with you. Those with voice functions have come insanely close to mimicking human speech perfectly, down to the inflections and even the occasional “um” and “ah.”
And they’re much more common than you might think. In 2016, Facebook introduced Messenger Bots that businesses worldwide now use for simple tasks like ordering flowers, getting news updates in chat form, or getting information on flights from an airline. Millions of users are filling waiting lists to talk to an “emotional chatbot” on an app called Replika.
But there’s no getting around AI’s shortcomings. And for chatbots in particular, the frustration arises from a disconnect between the user’s intent or expectations, and the chatbot’s programmed abilities.
Take Facebook’s Project M. Sources believe Facebook’s (long forgotten) attempt at developing a truly intelligent chatbot never surpassed a 30 percent success rate, according to Wired — the remaining 70 percent of the time, human employees had to step in to solve tasks. Facebook billed the bot as all-knowing, but the reality was far less promising. It simply couldn’t handle pretty much any task it was asked to do by Facebook’s numerous users.
Admittedly, it takes a lot of resources to develop complex AI chatbots. Even Google Duplex, arguably the most advanced chatbot around today, is still limited to verifying business hours and making simple appointments. Still, users expect far more than what AI chatbots can actually do, which tends to enrage them.
The tech industry isn’t giving up. Market researchers predict that chatbots will grow to become a $1 billion market by 2025.
But maybe they’re going about this all wrong. Maybe, instead of making more sophisticated chatbots, businesses should focus on what users really need in a chatbot, stripped down to its very essence.
Landbot, a one-year-old Spanish tech startup, is taking a different approach: it’s making a chatbot-builder for businesses that does the bare minimum, and nothing more. The small company landed $2.2 million in a single round of funding (it plans to use those funds primarily to expand its operations and cover the costs of relocating to tech innovation hub Barcelona).
“We started our chatbot journey using Artificial Intelligence technology but found out that there was a huge gap between user expectations and reality,” co-founder Jiaqi Pan tells TechCrunch. “No matter how well trained our chatbots were, users were constantly dropped off the desired flow, which ended up in 20 different ways of saying ‘TALK WITH A HUMAN’.”
Instead of creating advanced tech that could predict and analyze user prompts, Landbot decided to work on a simple user interface that allows businesses to create chat flows that link prompt and action, question and answer. It’s kind of like a chatbot flowchart builder. And the results are pretty positive: the company has seen healthy revenue growth, and the tool is used by hundreds of businesses in more than 50 countries, according to TechCrunch.
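TechCrunch doesn’t detail Landbot’s internals, but the “chatbot flowchart” idea itself is simple: every node is a prompt plus a fixed set of options, and each option points at the next node. Here’s a minimal sketch with invented node names and messages.

```python
# Each node has a prompt and a fixed menu of options; there is no model guessing at intent.
FLOW = {
    "start": {
        "prompt": "Hi! What can we help you with?",
        "options": {"1": ("Track my order", "track"), "2": ("Talk to a human", "human")},
    },
    "track": {"prompt": "Please type your order number and we'll email you an update.", "options": {}},
    "human": {"prompt": "Okay, connecting you with our support team.", "options": {}},
}

def run_flow(choices):
    """Walk the flow with a scripted list of user choices; returns the bot's messages."""
    node, transcript = "start", []
    for choice in choices:
        step = FLOW[node]
        transcript.append(step["prompt"])
        if choice in step["options"]:
            _, node = step["options"][choice]
        else:
            transcript.append("Sorry, please pick one of the listed options.")
    transcript.append(FLOW[node]["prompt"])
    return transcript

print(run_flow(["1"]))  # greeting, then the order-tracking prompt
```

Because there’s no model trying to infer intent, the bot can only ever offer the paths its builder drew, which is exactly the trade-off Landbot is betting on.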
The world is obsessed with achieving perfect artificial intelligence, and the growing AI chatbot market is no different. So obsessed, in fact, that it’s driving users away — growing disillusionment, frustration, and rage are undermining tech companies’ efforts, and the obsession might be doing far more harm than good. It’s simple: people are happiest when they get the results they expect. Added complexity or lofty promises of “true AI” will end up pushing them away if it doesn’t actually end up helping them.
After all, sometimes less is more. Landbot and its customers are making it work with less.
Besides, listening to your customers can go a long way.
BADASS BALLOONS. In 2013, Google unveiled Project Loon, a plan to send a fleet of balloons into the stratosphere that could then beam internet service back down to people on Earth.
And it worked! Just last year, the project provided more than 250,000 Puerto Ricans with internet service in the wake of the devastation of Hurricane Maria. The company, now simply called Loon, was the work of X, an innovation lab originally nestled under Google but now a subsidiary of Google’s parent company, Alphabet. And it’s planning to bring its balloon-powered internet to Kenya.
EYES ON AFRICA. On Thursday, Loon announced a partnership with Telkom Kenya, Kenya’s third largest telecommunications provider. Starting next year, Loon balloons will soar high above the East African nation, sending 4G internet coverage down to its rural and suburban populations. This marks the first time Loon has inked a commercial deal with an African nation.
“Loon’s mission is to connect people everywhere by inventing and integrating audacious technologies,” Loon CEO Alastair Westgarth told Reuters. Telkom CEO Aldo Mareuse added, “We will work very hard with Loon, to deliver the first commercial mobile service, as quickly as possible, using Loon’s balloon-powered internet in Africa.”
INTERNET EVERYWHERE. The internet is such an important part of modern life that, back in 2016, the United Nations declared access to it a human right. And while you might have a hard time thinking about going even a day without internet access, more than half of the world’s population still can’t log on. In Kenya, about one-third of the population still lacks access.
Thankfully, Alphabet isn’t the only company working to get the world connected. SpaceX, Facebook, and SoftBank-backed startup Altaeros have their own plans involving satellites, drones, and blimps, respectively. Between those projects and Loon, the world wide web may finally be available to the entire world.
Cubo-Futurism, Russian Budetlyanstvo, also called Russian Futurism, Russian avant-garde art movement in the 1910s that emerged as an offshoot of European Futurism and Cubism.
The term Cubo-Futurism was first used in 1913 by an art critic regarding the poetry of members of the Hylaea group (Russian Gileya), which included such writers as Velimir Khlebnikov, Aleksey Kruchenykh, David Burlyuk, and Vladimir Mayakovsky. However, the concept took on far more important meaning within the visual arts, displacing the influence of French Cubism and Italian Futurism, and led to a distinct Russian style that blended features of the two European movements: fragmented forms fused with the representation of movement. The Cubo-Futurist style was characterized by the breaking down of forms, the alteration of contours, the displacement or fusion of various viewpoints, the intersection of spatial planes, and the contrast of colour and texture. Also typical, and one of the prominent aspects of the concurrent Synthetic Cubism movement in Paris, was the pasting of foreign materials onto the canvas: strips of newspaper, wallpaper, and even small objects.
Cubo-Futurist artists stressed the formal elements of their artwork, showing interest in the correlation of colour, form, and line. Their focus sought to affirm the intrinsic value of painting as an art form, one not wholly dependent on a narrative. Among the more notable Cubo-Futurist artists were Lyubov Popova (Travelling Woman, 1915), Kazimir Malevich (Aviator and Composition with Mona Lisa, both 1914), Olga Rozanova (Playing Card series, 1912-15), Ivan Puni (Baths, 1915), and Ivan Klyun (Ozonator, 1914).
Painting and other arts, especially poetry, were closely intertwined in Cubo-Futurism, through friendships among poets and painters, in joint public performances (before a scandalized but curious public), and in collaborations for theatre and ballet. Notably, the books of the transrational poetry (zaum) of Khlebnikov and Kruchenykh were illustrated with lithography by Mikhail Larionov and Natalya Goncharova, Malevich and Vladimir Tatlin, and Rozanova and Pavel Filonov. Cubo-Futurism, though brief, proved a vital stage in Russian art in its quest for nonobjectivity and abstraction.
Airships have often served as the symbol of a brighter tomorrow.
Even before the first zeppelin was invented, airships featured prominently in utopian visions of the future. This 1898 poster advertised a musical comedy on the New York stage:
Musical theater poster. 1898.
And these German and French postcards predicted air travel in the year 2000:
German postcard, circa 1900
French postcard. 1910.
Futurists of the early 20th Century often combined lighter-than-air and heavier-than-air technology, as in this urban skyscraper airport and solar-powered aerial landing field:
Popular Science magazine. November, 1939
Modern Mechanix magazine. October, 1934.
This hybrid airship concept from 1943, designed to meet the needs of war, predicted the hybrid airships that would be built in the 21st century.
Popular Science magazine, February 1943
Sometimes futurist airship visions were promoted by companies which were actually involved in the lighter-than-air business.
For example, the Goodyear-Zeppelin company, which built the American airships Akron and Macon, and which had a financial interest in the promotion of the passenger dirigible, frequently offered alluring illustrations of future airship travel.
Goodyear president Paul Litchfield and publicist Hugh Allen included the following pictures in their 1945 book, WHY? Why has America no Rigid Airships?:
These drawings from Hugh Allen's The Story of the Airship (1931) imagined an Art Deco dining salon, promenade, and even a lounge with a fireplace.
Airships could even advance medical technology, such as this airship tuberculosis hospital.
Under the illusion that communism was the way of the future, Soviet propagandists loved images of modernity and enlisted the airship in their cause.
Soviet poster, 1931. (We Are Building a Fleet of Airships in the Name of Lenin. Azeri text)
Sometimes illustrators got so carried away depicting lavish interiors that they neglected to leave room for much lifting gas, as in this illustration from The American Magazine.
The article described future airships to be built by the Goodyear-Zeppelin Company, which would be fitted up as sumptuously as a Palm Beach winter hotel:
The American Magazine. May, 1930.
This illustration of an atomic dirigible from a Soviet magazine in the 1960s left no room for lifting gas at all:
I am utterly dissatisfied. The idea and the design are good, but the app itself is chaos. For one thing, I can't imagine how you can just shove a 100-minute-long YouTube video in there and sell it as a product. The description says it is designed for the top daily scientific breakthroughs/innovations, but it's impossible to watch videos of that detail and length daily - I would have liked a condensed version or at least some level of journalism. So in that sense it already fails for me as a news app. But even if I happen to have the time and inclination to watch videos of that length, I can't - the in-app video player is slow to respond and doesn't respond to the full-screen mode button. I also couldn't enlarge the video by rotating my phone. When I noticed the problem, I would have liked to file a bug report or something, but couldn't find anything like that in the app. Thus, I'm here. Also, the app often crashes when reopened.
Futurism was a modern art and social movement which originated in Italy in the early 20th century. It was largely an Italian phenomenon, though there were parallel movements in Russia, England and elsewhere. The Futurists practiced in every medium of art, including painting, sculpture, ceramics, graphic design, industrial design, interior design, theatre, movies, fashion, textiles, literature, music, architecture and even gastronomy.
The founder of Futurism and its most influential personality was the Italian writer Filippo Tommaso Marinetti. Marinetti launched the movement in his Futurist Manifesto, which he published for the first time on 5 February 1909 in La gazzetta dell'Emilia. This article was reprinted in the French daily newspaper Le Figaro on 20 February 1909. Marinetti was soon joined by the painters Umberto Boccioni, Carlo Carrà, Giacomo Balla, Gino Severini and the composer Luigi Russolo.
Marinetti expressed a passionate loathing of everything old, especially political and artistic tradition. "We want no part of it, the past", he wrote, "we the young and strong Futurists!" The Futurists admired speed, technology, youth and violence, the car, the airplane and the industrial city, all that represented the technological triumph of humanity over nature, and they were passionate nationalists. They repudiated the cult of the past and all imitation, praised originality, "however daring, however violent", bore proudly "the smear of madness", dismissed art critics as useless, rebelled against harmony and good taste, swept away all the themes and subjects of all previous art, and gloried in science.
Publishing manifestos was a feature of Futurism, and the Futurists (usually led or prompted by Marinetti) wrote them on many topics, including painting, architecture, religion, clothing and cooking.[3]
The founding manifesto did not contain a positive artistic programme. The Futurists attempted to create it in their subsequent Technical Manifesto of Futurist Painting. This committed them to a "universal dynamism", which was to be directly represented in painting.[4]
In practice, much of their work was influenced by Cubism, and indeed their images were more dynamic than those of Picasso and Braque. The phrase 'plastic dynamism' has been used to describe their early work.
Many Italian Futurists supported Fascism in the hope of modernizing the country. Italy was divided between the industrial north and the rural, archaic South. Like the Fascists, the Futurists were Italian nationalists, radicals, admirers of violence, and were opposed to parliamentary democracy. Marinetti was one of the first members of the National Fascist Party. He soon found the Fascists were not radical enough for him, but he supported Italian Fascism until his death in 1944.
The Futurists' association with Fascism after its triumph in 1922 brought them official acceptance in Italy and the ability to carry out important work, especially in architecture. After the Second World War, many Futurist artists had difficulty in their careers because of their association with a defeated and discredited regime.
The Futurists renewed themselves again and again until Marinetti's death.
Futurism influenced many other twentieth century art movements, including Art Deco, Vorticism, Constructivism, Surrealism and Dadaism. Futurism was, like science fiction, in part overtaken by 'the future'.
Nonetheless, the ideals of futurism remain as part of modern Western culture: the emphasis on youth, speed, power and technology is expressed in much of modern cinema and culture. Ridley Scott used design ideas of Sant'Elia in Blade Runner.
Echoes of Marinetti's thought, especially his "dreamt-of metallization of the human body", are still strongly prevalent in Japanese culture, and surface in manga/anime and the works of artists such as Shinya Tsukamoto, director of the "Tetsuo" (lit. "Ironman") films.
Futurism influenced the literary genre of cyberpunk. Artists who came to prominence in the first flush of the internet, such as Stelarc and Mariko Mori, produced work influenced by Futurist ideas. A revival of sorts of the Futurist movement began in 1988 with the creation of the Neo-Futurist style of theatre in Chicago, which uses Futurism's focus on speed and brevity to create a new form of immediate theatre. There are active Neo-Futurist troupes in Chicago, New York, and Montreal.
These are the two movements, with more or less abstract tendencies, that first influenced the majority of experimental artists in this country, beginning about 1913 when both movements were at their height.
Cubism and Futurism, both of which had a great influence in the United States, derive from the researches of Cézanne and Seurat. The beginnings of Cubism date back to about 1908 under the twin aegis of Picasso and Braque.
In the case of Cubism, the primitivist, instinctual content of Gauguin's and van Gogh's paintings and the later discovery of the barbaric, expressive power of Negro sculpture played an important part in such an early cubist picture of Picasso's as his Les Demoiselles d'Avignon. And however much Picasso and his cubist followers tended to limit their researches to the still life, they never divorced themselves completely from the sentimental, even romantic, implications of their chosen subject matters: the paraphernalia of the studio, musical instruments (the guitar, mandolin and violin) and the characters out of the old commedia dell'arte associated with such instruments, Harlequin, Columbine and Pierrot.
Despite such emotional or non-rational elements in cubist painting, however, its rational motivation must still be said to have remained uppermost. It consisted in a process of analytical abstraction of several planes of an object to present a synthetic, simultaneous view of it.
And by directing the formal planes of this synthetic view towards the observer rather than making them retreat by traditional perspective principles into an illusionistic space, the picture frame no longer acted as a window leading the eye into the distance but as a boundary enclosing a limited area of canvas or panel. In the so-called analytical phase of Cubism, painting tended also to be monochromatic, presumably to avoid as much as possible any sensuous or naturalistic reference to color.
The leading Cubists, Picasso and Braque, refused to take abstraction further than this point and actually in time climbed down from their pinnacle of analytical experiment to a more decorative, sensuous plateau. They left the final step of total geometrical abstraction to others.
Another proto-abstract movement, an anti-rational offshoot of Cubism, Futurism was launched by the Italian Futurists about 1910. Rebelling against the cubist analysis of static form, the Futurists were above all inspired by the dynamism of the machine, which they proceeded to glorify and to make a central tenet in their artistic credo. Man to the Futurist must accept the machine and emulate its ruthless power. By way of emulation they attempted to paint movement by indicating abstract lines of force and schematic stages in the progress of a moving image. And furthermore, in some instances they sought to involve the observer in their pictures by viewing movement from an interior position (the inside of a trolley car, for example), thus denying, as the Cubists did, formal laws of perspective.
Where the Cubists strove to eliminate three-dimensional space and thus bring the image in the picture closer to the observer, although still at a distance, the Futurists attempted to suck the observer into a pictorial vortex. The greatest difference between these two proto-abstract movements, however, is that the one, Cubism, is concerned with forms in static relationships while Futurism is concerned with them in a kinetic state.
Furthermore, the Cubists, with few exceptions, paid no attention to the machine, as such, while the Futurists, as we have said, glorified it.
The cubist movement, significantly, had no overt political implications and indulged in no manifestoes.
The Futurists, on the other hand, worshipped naked energy for its own sake and in their writings pointed forward to the power-drunk ideology of Fascism.
The Cubists, it may be said, immured themselves from any contact with the public by shutting themselves up in their studio laboratories.
The Futurists came out into the market place and demagogically attempted to appeal to the man in the trolley car. If their pictures today seem dry and doctrinaire to some of us, the ideological appeal of Futurism and its political partner, Fascism, was, we are all uncomfortably aware, quite the reverse.
Furthermore, the generally rational-minded Cubist contented himself as we have noted with the still-life materials of his studio for subject matter and abstract dissection, whereas the futurist picture falls mainly into the category of landscape and figure compositions, however urban and mechanical the emphasis.
Davis' Lucky Strike abstract art from 1921 is a good example of Cubism.
Futurism was an art movement of 20th-century Italy. Using various types of media, Futurist artists emphasized themes of the contemporary social issues of the time, connecting specifically with the future. These themes included ideas based in the increasing speed of technology, the automobiles and airplanes of the industrial revolution, as well as youth and violence. Futurism focuses on the movement of the object within the piece, manipulating and overlaying an image several times to capture the motion it creates. Colour, line and shape become very important in Futurist works, for the emphasis is on how the object moves throughout the canvas. Many Futurist works appear abstract. Artists: Giacomo Balla, Umberto Boccioni, Gino Severini.
Year 12 Observational Drawing: Transformation to Futurist Paintings. Using an observational drawing you've created, think about movement and your lines. If your objects were in motion, what would they look like? What colours, shapes and lines would they produce? What blocks of colour and abstracted shapes would be created? Use the above artists as an influence on your work and re-create your observational drawings as Futurist works of art.