The Evolutionary Perspective
Category Archives: Superintelligence
Posted: October 16, 2019 at 5:14 pm
"AI gives you a treasure map, and the scientist needs to find the treasure," says Miguel Bessa, one of the researchers behind a new AI-created super-compressible material, and the lead author of a paper on the topic.
Bessa, alongside a team of researchers from TU Delft, has developed a new super-compressible yet robust material without carrying out any experimental tests at all. All they used was artificial intelligence (AI).
As the researchers' paper says, "designing futureproof materials goes beyond a quest for the best."
"The next generation of materials needs to be adaptive, multipurpose, and tunable. This is not possible by following the traditional experimentally guided trialanderror process, as this limits the search for untapped regions of the solution space."
The solution to this? Artificial intelligence, the researchers say.
What the scientists did was use "a computational data-driven approach" to explore the feasibility of a new metamaterial concept.
By using AI, they could adapt the concept material to different target properties, choice of base materials, length scales, and manufacturing processes.
The work was inspired by Bessa's time at the California Institute of Technology. While there, he noticed a satellite structure at the Space Structures Lab that could unfold large, expansive solar sails from within a very small storage space. Bessa wondered whether a similarly compressible design could be taken further: a highly compressible yet strong material that could be squeezed into a small fraction of its volume. "If this was possible, everyday objects such as bicycles, dinner tables, and umbrellas could be folded into your pocket," he said in a press release.
However, "metamaterial design has relied on extensive experimentation and a trial-and-error approach," Bessa says. "We argue in favor of inverting the process by usingmachine learningfor exploring new design possibilities while reducing experimentation to an absolute minimum."
Using machine learning, Bessa and his team fabricated two designs of different sizes that transform brittle polymers into lightweight, recoverable materials that are super-compressible. One design was built for strength and the other for maximum compressibility.
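The data-driven loop Bessa describes can be sketched in a few lines. The sketch below is purely illustrative: the stand-in `simulate_stiffness` function and the two design parameters are invented for this example, and a simple nearest-neighbour average stands in for the machine-learning surrogate models the TU Delft team actually used. The idea it demonstrates is the one in the quote: search a cheap learned "treasure map" of the design space instead of running the expensive experiment at every candidate design.

```python
import numpy as np

# Hypothetical stand-in for an expensive physics simulation: response of a
# lattice as a function of two invented design parameters (t, a) in [0, 1].
def simulate_stiffness(t, a):
    return np.exp(-((t - 0.4) ** 2 + (a - 0.6) ** 2) * 8.0)

rng = np.random.default_rng(0)

# 1. Sample the design space and evaluate the (stand-in) simulator there.
designs = rng.uniform(0.0, 1.0, size=(200, 2))
responses = np.array([simulate_stiffness(t, a) for t, a in designs])

# 2. Fit a cheap surrogate: predict a new point from its k nearest
#    sampled neighbours (a stand-in for the real ML models).
def surrogate(point, k=5):
    dists = np.linalg.norm(designs - point, axis=1)
    return responses[np.argsort(dists)[:k]].mean()

# 3. Search the surrogate, not the simulator: the "treasure map".
grid = np.stack(np.meshgrid(np.linspace(0, 1, 50),
                            np.linspace(0, 1, 50)), axis=-1).reshape(-1, 2)
preds = np.array([surrogate(p) for p in grid])
best = grid[np.argmax(preds)]
print("surrogate's best design (t, a):", best)
```

The scientist then "finds the treasure" by fabricating and testing only the few designs the surrogate flags as promising.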
Yet Bessa argues that the real achievement in the team's work is the method of creation, not the material itself. As he puts it, "data-driven science will revolutionize the way we reach new discoveries, and I can't wait to see what the future will bring us."
Posted: October 4, 2019 at 7:52 pm
Susan Schneider is the NASA/Baruch Blumberg Chair at the Library of Congress and NASA, as well as the director of the AI, Mind and Society Group at the University of Connecticut. Her work has been featured by the New York Times, Scientific American, Smithsonian, Fox TV, History Channel, and more. Her two-year NASA project explored superintelligent AI. Previously, she was at the Institute for Advanced Study in Princeton devising tests for AI consciousness. Her books include The Language of Thought, The Blackwell Companion to Consciousness, and Science Fiction and Philosophy.
Chapter 1: The hard problem of consciousness
SUSAN SCHNEIDER: Consciousness is the felt quality of experience. So when you see the rich hues of a sunset, or you smell the aroma of your morning coffee, you're having conscious experience. Whenever you're awake and even when you're dreaming, you are conscious. So consciousness is the most immediate aspect of your mental life. It's what makes life wonderful at times, and it's also what makes life so difficult and painful at other times.
No one fully understands why we're conscious. In neuroscience, there's a lot of disagreement about the actual neural basis of consciousness in the brain. In philosophy, there is something called the hard problem of consciousness, which is due to the philosopher David Chalmers. The hard problem of consciousness asks, why must we be conscious? Given that the brain is an information processing engine, why does it need to feel like anything to be us from the inside?
Chapter 2: Are we ready for machines that feel?
SUSAN SCHNEIDER: The hard problem of consciousness is actually something that isn't quite directly the issue we want to get at when we're asking whether machines are conscious. The problem of AI consciousness simply asks, could the AIs that we humans develop one day, or even AIs that we can imagine in our mind's eye through thought experiments, could they be conscious beings? Could it feel like something to be them?
The problem of AI consciousness is different from the hard problem of consciousness. In the case of the hard problem, it's a given that we're conscious beings. We're assuming that we're conscious, and we're asking, why must it be the case? The problem of AI consciousness, in contrast, asks whether machines could be conscious at all.
So why should we care about whether artificial intelligence is conscious? Well, given the rapid-fire developments in artificial intelligence, it wouldn't be surprising if within the next 30 to 80 years, we start developing very sophisticated general intelligences. They may not be precisely like humans. They may not be as smart as us. But they may be sentient beings. If they're conscious beings, we need ways of determining whether that's the case. It would be awful if, for example, we sent them to fight our wars, forced them to clean our houses, made them essentially a slave class. We don't want to make that mistake. We want to be sensitive to those issues. So we have to develop ways to determine whether artificial intelligence is conscious or not.
It's also extremely important because as we try to develop general intelligences, we want to understand the overall impact that consciousness has on an intelligent system. Would the spark of consciousness, for instance, make a machine safer and more empathetic? Or would it be adding something like volatility? Would we be, in effect, creating emotional teenagers that can't handle the tasks that we give them? So in order for us to understand whether machines are conscious, we have to be ready to hit the ground running and actually devise tests for conscious machines.
Chapter 3: Playing God: Are all machines created equal?
SUSAN SCHNEIDER: In my book, I talk about the possibility of consciousness engineering. So suppose we figure out ways to devise consciousness in machines. It may be the case that we want to deliberately make sure that certain machines are not conscious. So for example, consider a machine that we would send to dismantle a nuclear reactor. So we'd essentially quite possibly be sending it to its death. Or a machine that we'd send to a war zone. Would we really want to send conscious machines in those circumstances? Would it be ethical?
You might say, well, maybe we can tweak their minds so they enjoy what they're doing or they don't mind sacrifice. But that gets into some really deep-seated engineering issues that are actually ethical in nature that go back to Brave New World, for example, situations where humans were genetically engineered and took a drug called soma, so that they would want to live the lives that they were given. So we have to really think about the right approach. So it may be the case that we deliberately devise machines for certain tasks that are not conscious.
On the other hand, should we actually be capable of making some machines conscious, it may be that humans want conscious AI companions. So, for example, suppose that humans want elder care androids, as is actually under development in Japan today. And as you're looking at the android shop, you're thinking of the kind of android you want to take care of your elderly grandmother, you decide you want a sentient being who would love your grandmother. You feel like that is what best does her justice. And in other cases, maybe humans actually want relationships with AIs. So there could be a demand for conscious AI companions.
Chapter 4: Superintelligence over sentience
SUSAN SCHNEIDER: In Artificial You, I actually offer a 'wait and see' approach to machine consciousness. I urge that we just don't know enough right now about the substrates that could be used to build microchips. We don't even know what the microchips would be that are utilized in 30 to 50 years, or even 10 years. So we don't know enough about the substrate. We don't know enough about the architecture of these artificial general intelligences that could be built. We have to investigate all these avenues before we conclude that consciousness is an inevitable byproduct of any sophisticated artificial intelligence that we design.
Further, one concern I have is that consciousness could be outmoded by a sophisticated AI. So consider a super intelligent AI, an AI which, by definition, could outthink humans in every respect: social intelligence, scientific reasoning, and more. A super intelligence would have vast resources at its disposal. It could be a computronium built up from the resources of an entire planet with a database that extends beyond even the reaches of the human World Wide Web. It could be more extensive than the web, even.
So what would be novel to a superintelligence that would require slow conscious processing? The thing about conscious processing in humans is that it's particularly useful when it comes to slow deliberative thinking. So consciousness in humans is associated with slow mental processing, associated with working memory and attention. So there are important limitations on the number of variables, which we can even hold in our minds at a given time. I mean, we're very bad at working memory. We could barely remember a phone number for five minutes before we write it down. That's how bad our working memory systems are.
So if we are using consciousness for these slow, deliberative elements of our mental processing, and a superintelligence, in contrast, is an expert system which has a vast intellectual domain that encompasses the entire World Wide Web and is lightning fast in its processing, why would it need slow, deliberative focus? In short, a superintelligent system might outmode consciousness because it's slow and inefficient. So the most intelligent systems may not be conscious.
Chapter 5: Enter: Post-biological existence
SUSAN SCHNEIDER: Given that a superintelligence may outmode consciousness, we have to think about the role that consciousness plays in the evolution of intelligent life. Right now, NASA and many astrobiologists project that there could be life throughout the universe, and they've identified exoplanets, planets that are hospitable, in principle, to intelligent life. That is extremely exciting. But the origin of life right now is a matter of intense debate in astrophysics. And it may be that all of these habitable planets that we've identified are actually uninhabited.
But on the assumption that there's lots of intelligent life out there, you have to consider that, should these life forms survive their technological maturity, they may actually be turning on their own artificial intelligence devices themselves. And they eventually may upgrade their own brains so that they are cyborgs. They are post-biological beings. Eventually, they may have even their own singularities.
If that's the case, intelligence may go from being biological to post-biological. And as I stress in my project with NASA, these highly sophisticated biological beings may themselves outmode consciousness. Consciousness may be a blip, a momentary flowering of experience in the universe at a point in the history of life where there is an early technological civilization. But then as the civilizations have their own singularity, sadly, consciousness may leave those biological systems.
Chapter 6: The challenge: Maximizing conscious experience
SUSAN SCHNEIDER: That may sound grim, but I bring it up really as a challenge for humans. I believe that understanding how consciousness and intelligence interrelate could lead us to make better decisions about how we enhance our own brains. So on my own view, we should enhance our brains in a way that maximizes sentience, that allows conscious experience to flourish. And we certainly don't want to become expert systems that have no felt quality to experience. So the challenge for a technological civilization is actually to think not just technologically but philosophically, to think about how these enhancements impact our conscious experience.
Posted: at 7:51 pm
On September 23, the United Nations opened its Climate Action Summit here in New York, three days after the Global Climate Strike, led by Greta Thunberg, swept through thousands of cities worldwide, with more than 4 million protesters around the world marching out of anger that so little has been done. To mark the occasion, Intelligencer is publishing State of the World, a series of in-depth interviews with climate leaders, from Bill Gates to Naomi Klein and Rhiana Gunn-Wright to William Nordhaus, interrogating just how they see the precarious climate future of the planet and just how hopeful they think we should all be about avoiding catastrophic warming. (Unfortunately, very few are hopeful.)
James Lovelock turned 100 this year and celebrated by publishing a new book on artificial intelligence. But he is much more of an old-fashioned scientist, and compares himself to Darwin and Faraday in that he also likes to work alone, outside of institutions. Nevertheless, though you may not know his name, he is among the most influential scientists of the 20th century, having developed and then, over the course of decades of writing, refined and refashioned what is called the Gaia theory, or the principle that Earth's ecosystem is a single, living, self-regulating entity. In early September, just a few months after his birthday, I met Lovelock one morning at his home on Chesil Beach in southern England, where we talked about nuclear power, his hope that AI might save the planet from catastrophic warming, and just how to integrate the disruptions and disturbances of climate change into a Gaia worldview.
At 100 years old, you've been alive for something like 90 percent or more of all the carbon emissions that have ever been produced from the burning of fossil fuels. Exactly. Well, I hope you don't blame me for that.
But the world really has changed an enormous amount in your lifetime. Yes. I grew up from 6 till about 14 years old in an area of London which was probably more polluted than anywhere in the world. Particularly vile air. It was so thick that not only could you not see a hand in front of your face, but people were dying on railroad platforms because they couldn't see where the platform ended. That's coal for you.
On climate, your views have changed over time, I know. You were for a period more alarmed, and then you grew a little bit less alarmed. How do you see the big picture at the moment? Where do you think we are, and where do you think we're heading? The big picture is that everything is continuing more or less as predicted by climate scientists. But the exact course, of course, depends on all sorts of things.
But taking seriously the main proposition of Gaia theory, if the whole Earth system is a kind of living, self-regulating entity of which human activity is also a natural part, and one we shouldn't be trying to exclude, what is concerning about climate change? Why shouldn't we just accept that as being part of the same system? Up to a point, we have to, and we do, wrongly. I mean, if you're an ordinary man with a family, you've got to have an income. You've got to work for somebody on something, and that determines what you do, rather than any environmental concern.
But thinking more globally, people like you and me, who think about these things in somewhat bigger terms, how concerned should we be? Well, at first you get into a panic. At least I did. And then eventually you realize that there's not a lot you can do about it. I mean, did you ever read that book by Martin Rees, Our Final Hour? Well, that was written quite a while back, and I think he's right.
The warm-up of the sun is quite remorseless, and it will continue. Unless we do something like [physicist Edward] Teller's idea of putting up sunshades in the heliocentric orbit, we've had it. That's it. There isn't any way you could survive if the sun continues to warm up.
But nobody can predict the climate in two or three years' time. It could be almost anything. For example, there was news of a very large volcano eruption emerging in the middle of the Pacific, from below. Well, of course, if that develops and magma starts coming up, that could change the whole picture. I'm hoping it won't happen, and probably it won't.
When you allow yourself to be optimistic, how do you see the next few decades unfolding? Well, I won't be here for one, so I won't see them. But I think we will have to curb our tendency to burn fossil fuels. And I think the big companies are beginning to realize themselves that you can't make money that way. What replaces it, I hope, is nuclear, but probably they'll mess about with renewables for a while until they find their way to nuclear.
Why do you think it has been so difficult to get nuclear power going again? Because there's propaganda. I think the coal and oil business fight like mad to tell bad stories about nuclear.
Why is that? Because historically they haven't seen renewables as the same scale of threat? Yeah. I mean, when you look at the death rates in the nuclear industry, it's almost ludicrously low. In this country, I think, it doesn't exist at all. Nobody's been hurt.
And even if you look at the worst disasters, they're nothing compared with the damage that's done by burning coal. That's right. It's a fake business. And it's amazing that people have been persuaded by it. I wish you journalists would write out what happened, because just after World War II, there was a lot of interest in using nuclear power, and the politicians were all for it. In fact, one of them said it'll be so cheap, it will be impossible to meter it. Which is, would that it were true! But the people with loads of money in the oil industry made sure that never happened. And of course the greens played along with it. There's bound to have been some corruption there; I'm sure that various green movements were paid some sums on the side to help with propaganda.
Just the word nuclear conjures such fears now. It's almost as though, if it had just been called a different thing, the public would have been much more receptive to it. And if we don't move into nuclear more aggressively, do you think there's any hope that we avoid, say, two degrees of warming? Or is that basically inevitable? I wish I knew. People have to ask the questions of the financial people; there's the real driver. The reason we're continuing to burn fossil fuels is that all the money's invested in it, right? I find it almost hilarious.
It seems to me that the public is slowly waking up to this story. Especially over the past couple of years, there has been a kind of a change. Well, I hope you're right. I look at those affairs like the Paris conference more as parties. So, great, get together, you'll have a great time. But the conferences are not serious.
And no country in the world is honoring the pledges it made during the Paris accords. But in your new book, you put a lot of faith in the possibility that superintelligence will arrive and, among other things, address this problem and maybe save us from ourselves. The reason I speculated along those lines was that Darwin was amazingly right during his lifetime. And it is a natural follow-on from Darwinism that we don't just stay still as humans. There's this extraordinary belief amongst most people that future humans are going to be just like us. We're beginning to see things like AI developments yielding the possibility of existing as independent life forms, in which case you've got a new kingdom of life. That's the way I see it.
I owe this to my colleague Lynn Margulis. She likes to divide life up into the kingdoms, vegetable, animal; it's almost childish, but I think it's absolutely solid. The AI stuff represents a new kingdom. They're about 10,000 times faster than we are, so they would look on us much in the way that we look on plants, which are 10,000 times slower than us. It's just another kingdom. But we're all needed; we're all part of the same system, or that's how I think. Which is how you get Gaia.
But if you think about our relationship to plant life as having not exhibited what you could call a perfectly responsible relationship to the natural world, why should we expect better from a superintelligence? Because they need us.
But we need plant life, right? We do need plant life. We can't go to war with it.
So why should we expect it to be more responsible toward humans and the natural world than humans were to the plant world? As you say, we're much more impressive cognitively than other animals, and certainly more than plant life, yet in many ways we've managed the planet much less well than those kingdoms did. It's a good point.
So why do you think AI would be a better steward of the planet than humans? I have a feeling that stewardship doesn't come into it. It's just what they will have to do to survive. It's nice to think that stewardship is important, but I rather suspect we talk about it but we don't practice it.
What's a better model for how we should relate to the natural world? Accept it.
And you think AI would take that view? The reason they would limit warming is quite simple: the properties of water. That's the deadly thing, which you've written about in your book. An awful lot of the ocean, at the moment, is approaching 15 degrees Celsius. Now, what could be harmful about that? Everything. If you go anywhere in the world where the water is 15 degrees and look down, it's beautifully clear and you can see down to 100 fathoms, because there's no life in that water. It's a complete desert. And the reason for that is that the nutrient-rich lower waters can't get to the top.
You've always written about the human role as being part of the greater Gaia ecosystem. But the theory of Gaia as I've seen it picked up by environmentalists often sets human activity against the rhythms of the natural world, as though we are outside the natural world, in fact, its enemies. That's absolutely right. They've gotten it dead wrong.
Why do you think that is? Why do you think they've been so blind to the sort of basic formulation you put forward? I think it's a series of reasons. I think it's high time that science was treated in much the same way as the church was treated in the Middle Ages. You need a dissolution of the universities, because it's quite ridiculous taking students and teaching them a single subject, with no idea what's going on in the rest of science. But that's what goes on. And you kind of cannot possibly understand a complex system like Gaia unless you're looking at not just one, but the great bulk of the sciences, together. And that may seem a dreadful task, but it isn't really, because you don't have to understand the whole of all of the sciences; there's a sort of crossover. You can in your mind cross over between the various parts and understand much more than you might think was possible.
But let me tell you just the story of how it all started. I was invited to go to the Jet Propulsion Laboratory, to NASA, only three years after NASA had formed. Soon after I was there, I was deposited for a bit in a meeting with a great group of biologists. Quite a few of them had Nobel Prizes in various things. They'd been picked up by NASA to design a life-detection experiment. I was asked what I thought of that, and it was appalling. Most of them went out into the Mojave Desert and said, "Mars is just like this, so if we can grow something here, we can do it there." Which was just crazy. What little we knew about Mars even then suggested it was totally different. It was a daft assumption. They got very cross with me because I kept on saying, "You know, you're wasting your time on that." And I got called to see one of the head, what you might call, rocket scientists. He said, "Why are you upsetting all these biologists? You go on like this and he'll be out of a job." But then he added, "Well, what would you do to detect life?" And I'd just read that little book by Schrödinger called What Is Life? I said, "If you read that, that offers a good standard." "Oh my God," they said, "give me a practical example that we can put on a rocket."
I said, "I'll have to think about it; you can't ask me a question like that across the table." They said, "Well, you've got till Friday." I was pretty worried! Thursday night I could see my job going down the tubes. But then suddenly it came to me. God, dead easy. All I have to do is measure the composition of Mars's atmosphere. If it's made of gases that react with each other chemically and produce heat or products or whatnot, then that fulfills the definition of life, according to Schrödinger: entropy. And you could do the same thing for the surface: if the surface reacts with the atmosphere or the ocean and you get heat produced, then the planet's alive, because that can never happen by chance. And he said, "Ah, now you're talking." And that became Viking.
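Lovelock's atmospheric-disequilibrium criterion can be put in miniature code form. This is only an illustrative sketch, not the actual Viking analysis: the abundance figures are rough, round mole fractions, and the threshold test in `looks_alive` is a drastic simplification of the thermodynamic argument that reactive gases such as methane and oxygen cannot persist together at equilibrium without a continuous (possibly biological) source replenishing them.

```python
# Rough, round-number atmospheric abundances (mole fraction), for
# illustration only.
atmospheres = {
    "Earth": {"O2": 0.21, "CH4": 1.8e-6},  # methane and oxygen coexist
    "Mars":  {"O2": 1.5e-3, "CH4": 0.0},   # essentially no methane
}

# A reactive pair: at chemical equilibrium, CH4 and O2 cannot both
# persist, so detecting both implies an active source of one of them.
def looks_alive(atm, pair=("O2", "CH4"), floor=1e-9):
    return all(atm.get(gas, 0.0) > floor for gas in pair)

for planet, atm in atmospheres.items():
    print(planet, "disequilibrium signature:", looks_alive(atm))
```

On this crude test, Earth's atmosphere flags a disequilibrium signature and Mars's does not, which is the shape of the conclusion Lovelock drew before Viking ever flew.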
And then you just applied the same perspective to Earth. It may be that I'm too worried about climate change, but I have a hard time adopting the same point of view. I think we can extend the lifespan of the current system using nuclear power. But we are near the edge, in terms of keeping the thing going. Any further interference is likely to be disastrous.
Posted: September 24, 2019 at 5:41 pm
Brain-computer interfaces will initially be used for assistive purposes, but it's likely that more general consumer versions will eventually become available, for better or for worse. (Photo by Rolf Vennenbernd/picture alliance via Getty Images)
Everything is heading towards the brain-computer interface. The cellphone, the internet, and social media are only three of the technologies that have colonized expanding segments of our lives, and while they have their own respective uses, each can be considered but a stepping stone on the path to plugging our brains directly into the web.
Yes, this sounds like the stuff of dystopian sci-fi, but for several years now a growing number of organizations have been working on the development of brain-computer interfaces (BCIs). While still at an early stage of development, these would enable us to operate connected devices simply by thought, while at the same time, some would allow for the causation to flow in the opposite direction, from the outside world into our brains.
Such a reality drew a little nearer this month, when researchers from the Georgia Institute of Technology, the University of Kent, and Wichita State University published a study detailing how they'd developed a wireless and portable brain-machine interface (BMI) which, thanks to flexible scalp electronics and a deep learning algorithm, was used to control a wheelchair, a small robotic vehicle, and a computer device.
"This work reports fundamental strategies to design an ergonomic, portable EEG [electroencephalography] system for a broad range of assistive devices, smart home systems and neuro-gaming interfaces," explained study co-author Woon-Hong Yeo, an assistant professor at Georgia Tech. "The primary innovation is in the development of a fully integrated package of high-resolution EEG monitoring systems and circuits within a miniaturized skin-conformal system."
The study represents a breakthrough in numerous areas. For one, many BCIs and BMIs developed up until now have relied on intrusive methods for connecting brain signals with computing devices, whereas the study's interface utilizes flexible, hair-mounted electrodes that make contact with the scalp through hair, as well as soft circuitry equipped with a Bluetooth telemetry unit that can transmit signals wirelessly to devices up to 15 metres away.
On top of this, the researchers also developed neural network algorithms in order to accurately interpret the wirelessly transmitted signals, which previously were difficult to process and identify.
"Deep learning methods, commonly used to classify pictures of everyday things such as cats and dogs, are used to analyze the EEG signals, said Chee Siang (Jim) Ang, a senior lecturer at the University of Kent. Like pictures of a dog which can have a lot of variations, EEG signals have the same challenge of high variability. Deep learning methods have proven to work well with pictures, and we show that they work very well with EEG signals as well.
Put simply, commercially available BCIs that can be used by patients, disabled individuals and perhaps even the general public to control smart devices are now one step closer to becoming a reality. The thing is, it's clear that research and development won't stop with the production of rehabilitative and assistive devices, but will pursue everyday BCIs that will ultimately serve to keep us constantly connected to the internet.
Sounds too far-fetched to believe? Perhaps, but there's no doubt this is the scenario some very high profile and powerful corporations are working towards. Back in 2017, Facebook infamously announced that it had begun work on developing "brain-typing" technology, so that people could post to the social network "directly" from their cerebra, even when they're doing something else, such as speaking face-to-face with an actual friend or driving their cars.
"This isnt about decoding random thoughts. This is about decoding the words youve already decided to share by sending them to the speech center of your brain," clarified Facebook's Regina Dugan, as if hooking our speech centers directly to Facebook weren't already scary enough.
Unsurprisingly, Facebook isn't the only big name working hard to produce reliable and usable brain-computer interfaces. Another key company is Neuralink, which was launched by Elon Musk in 2016 and plans its first human test of its current BCI technology in 2020. "A monkey has been able to control a computer with his brain," revealed Musk in July, when he was live-streaming a presentation on what his brainchild (pun intended) had achieved up until then.
Much like the researchers from Georgia, Kent and Wichita, Neuralink has been developing sensors that connect to the brain and permit it to control linked devices. However, somewhat unsurprisingly for the man who also established Tesla and SpaceX, Musk has set his sights ominously high for Neuralink, with his ultimate goal being the construction of a "digital superintelligence layer" to connect humans with AI systems.
"Ultimately, we can do a full brain-machine interface where we can achieve a sort of symbiosis with AI," Musk also said during the same July presentation.
Needless to say, this kind of eventuality is decades away. Nonetheless, the intent and direction are clear: hook people up directly to the internet and to smart technology, and not just to permit them to control things remotely, but to influence or even control how they behave.
Of course, this is the worst-case scenario, but with startups such as Kernel, Neurable, and BrainCo also working on similar technology, it will surely be only a matter of time before at least one of them produces something that's currently better left to a Philip K. Dick or William Gibson novel.
And once they do produce a workable BCI, the sky will be the limit in terms of how they can use it for profit. People will be able to be online regardless of where they are, either helping to generate the personal data that makes money for the likes of Facebook, or purchasing the products that have made Amazon and Walmart some of the biggest companies in history. And at the same time, the possibility of being 'connected to AI' would mean that our actions will flow less from our own judgments and thoughts on what's in our best interests, and more from what data and algorithms have decided is best for us.
In other words, the insertion of technology (i.e. the corporations that produce technology) into every aspect of our lives and selves will be complete.
Posted: at 5:41 pm
It has been more than six decades since the concept of Artificial Intelligence transformed from imagination into an academic discipline. Influencers, especially those active on social media, help give direction to policymakers and academicians. They keep the rest of us updated on the trends in AI, Machine Learning, and associated concepts like Big Data and blockchain. AiThority introduces you to the 50 most popular AI influencers of North America.
1. Bob E Hayes - Senior Director of Research and Analytics at Indigo Slate, an experience agency. A PhD in industrial-organizational psychology, his interests lie in Data Science, CX, Statistics, and Machine Learning.
2. Kirk Borne - Principal Data Scientist at Booz Allen Hamilton. Astrophysicist and Top Big Data Science and AI Influencer.
3. Martin Ford - Futurist and the author of three books on Artificial Intelligence, Robots, and Automation. His new book is Architects of Intelligence. He is also the founder of a Silicon Valley-based software development firm.
4. Adam Coates - A Director at Apple. He received his PhD from Stanford University in 2012, was the director of the Silicon Valley AI Lab at Baidu Research until September 2017, and then served as an Operating Partner at Khosla Ventures until 2018.
5. Fei-Fei Li - Co-Director of the Stanford Vision and Learning Lab (SVL) and of Stanford's Human-Centered AI Institute. Researcher in AI, Computer Vision, and AI + Healthcare.
6. Satya Mallick is the Interim CEO of OpenCV.org. He is the founder of Big Vision LLC, a San Diego, California based company that specializes in Computer Vision, Machine Learning, Deep Learning and Artificial Intelligence consulting services and products.
7. Amir Shevat - CPO at Prev; CPO and Co-Founder of ShiftJS; built Google's GDE and Launchpad programs.
8. Soumith Chintala - Principal Engineer at Facebook AI. Created and leads PyTorch. He loves researching multi-modal world models and robots.
9. Adelyn Zhou is a Business Leader and bestselling author who is passionate about the intersection of Marketing, Automation, and the future of independent work. She has worked with Nextdoor, Eventbrite and Amazon.
10. Alec Lazarescu - Chatbot builder, writer, and creator of the Bots and AI meetup.
11. Roman Yampolskiy is a Tenured Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. Creator of 3 Human-Level Intelligences. He is the Founding and current Director of the Cyber Security Lab and an author of many books including Artificial Superintelligence: a Futuristic Approach.
12. Gary Marcus is a Scientist, best-selling author, and entrepreneur. He is Founder and CEO of Robust.AI and was Founder and CEO of Geometric Intelligence, a Machine Learning company acquired by Uber in 2016.
13. Mike Quindazzi is a Business Development Leader and Management Consultant at PwC. He is the Executive Chairman and Board member of The LA County Economic Development Corporation.
14. Yann LeCun is Vice President and Chief AI Scientist at Facebook.
15. Rachel Thomas - Director of the USF Center for Applied Data Ethics. She is also the Co-Founder of fast.ai. Her interests lie in Deep Learning, bias, and ethics.
16. Mariya Yao - Editor-in-Chief of Topbots and former CTO of Metamaven. She is a writer, speaker, and expert in AI and Machine Learning.
17. Francois Chollet is a Software Engineer at Google. He is the creator of the Keras neural networks library and authored Deep Learning with Python.
18. GP Pulipaka is a Chief Data Scientist with Accenture Technology. He is a Top Data Science Influencer.
19. Andrej Karpathy - Director of AI at Tesla. Previously a Research Scientist at OpenAI and a CS PhD student at Stanford. He likes to train deep neural nets on large datasets.
20. Gideon Rosenblatt Gideon Rosenblatt writes about the relationship between technology, work, and humanity. His primary outlet is his website, The Vital Edge.
21. Delip Rao - VP of Research at the AI Foundation and author. He works on startups, Natural Language Processing (NLP), Computer Vision, Speech, and Machine Learning.
22. Angelica Lim: Canadian-American Computer Science prof and robot geek. PhD in AI, Robotics and Emotions. Previously worked with SoftBank Robotics.
23. Oren Etzioni - CEO of the Allen Institute for AI (AI2) and a professor.
24. Adam Geitgey - Software Engineer and Consultant; authored Machine Learning is Fun.
25. Paul Roetzer - Founder of the Marketing Artificial Intelligence Institute, author of The Marketing Performance Blueprint, and creator of the Marketing AI Conference (MAICON).
26. Tamara McCleary - CEO of Thulium, a Social Media Marketing agency. Her interests lie in Social Media Analytics.
27. Katrina Klier - Digital, Marketing and Operations Executive; Women in Tech Advocate; and Global Managing Director at Accenture.
28. JJ Kardwell - CEO and Co-Founder of EverString, a SaaS solutions firm. Kardwell's interests lie in Data Science, AI control, Predictive Analytics, and ABM.
29. Vincent Boucher is the Founding Chairman of Montreal AI and Quebec AI. He is the CEO of billionaire.tv.
30. Larry Kim is the CEO of MobileMonkey and Founder of WordStream, which was acquired by Gannett for $150M. He is into start-ups, AdWords, and Chatbots, and played a part in popularizing Unicorns in Marketing.
31. Alec Radford - A Machine Learning developer and researcher at OpenAI. He co-founded indico.io.
32. Miles Brundage - Research Scientist (Policy) at OpenAI and Research Associate at the Future of Humanity Institute, University of Oxford.
33. Hardmaru - Research Scientist at Google Brain.
34. Evan Kirstel is a thought leader, technology influencer, and B2B marketer. He is the Chief Digital Evangelist and Co-Founder of EviraHealth.
35. Demis Hassabis - Google DeepMind's Co-Founder and CEO. He is into AlphaGo, AlphaZero, AlphaFold, and Atari DQN.
36. Hugo Larochelle - Google Brain researcher and Machine Learning professor at the University of Sherbrooke.
37. Geoffrey Hinton is a fellow of the Royal Society, the Royal Society of Canada, and the Association for the Advancement of Artificial Intelligence. He designs Machine Learning algorithms.
38. Sarah Austin - Founder and CEO of We Wire. She is into FinTech and Product Marketing and is a Data Scientist.
39. Anima Anandkumar - Director of Research at Nvidia and a professor at Caltech.
40. Sebastian Raschka - Assistant Professor of Statistics at the University of Wisconsin-Madison. He is a Machine Learning and Deep Learning researcher, the author of Python Machine Learning, and an open-source contributor.
41. Vered Shwartz - Postdoctoral Researcher at the Allen Institute for Artificial Intelligence (AI2) and the University of Washington. She did her PhD in Natural Language Processing.
42. Rana el Kaliouby - Entrepreneur, scientist, and Co-Founder and CEO of Affectiva, which offers image recognition software. She claims to be on a mission to humanize technology with Emotional AI.
43. Mike Tamir - Head of Data Science at Uber Advanced Technologies Group. He is on the faculty at UC Berkeley. He is into Deep Learning, Machine Learning, and Artificial Intelligence, and has an interest in fighting fake news.
44. David Kenny - CEO and Chief Diversity Officer at Nielsen. He was the Senior Vice President of IBM's Watson & Cloud Platform and, before that, the CEO of The Weather Company, which was acquired by IBM.
45. The Cybercode Twins - Twin sisters America Lopez and Penelope Lopez are famed coders. They have massive interests in Cryptocurrency, Blockchain, and AI.
46. Ben Lorica - Chief Data Scientist at O'Reilly Media. He is the Program Chair of the AI Conference and TensorFlow World, and hosts the O'Reilly Data Show podcast.
47. Bill Franks - Chief Analytics Officer for The International Institute For Analytics (IIA).
48. Hilary Mason - General Manager for Machine Learning at Cloudera and Founder of Fast Forward Labs.
49. Gil Press - Writer and marketing consultant. He writes for whatsthebigdata.com and prominent business newspapers.
50. KDnuggets, aka Gregory Piatetsky-Shapiro - Data Scientist and Co-Founder of the KDD conferences. He is into Analytics, Data Mining, and Big Data.
Following some of these opinion leaders on Twitter and other social media platforms is the best way to keep yourself in sync with the AI technologies that are going to affect your life and lifestyle.
Posted: September 18, 2019 at 4:24 pm
Many science fiction novels have theorized an omniscient and omnipresent artificial intelligence that towers above human intelligence. What many don't know is that this concept does have a place in the field of artificial intelligence, albeit merely as a theory.
First theorized by Oxford philosopher Nick Bostrom, artificial superintelligence is a theory in the field of artificial intelligence. This futuristic AI can perform beyond the limits of the human mind, even that of geniuses. While it is still a theory, the methods of achieving superintelligence are also widely debated.
Most of these methods involve creating a human-like artificial intelligence known as artificial general intelligence. The fallout from creating such an artificial intelligence is also the subject of discussion among the AI community. Overall, the concept has sparked endless conversations among scientists and philosophers alike.
In this article, we will try to understand superintelligence, how it could come about in the future, and the risks associated with it. We will also take a look at how regulation today will offset some of the potential negative outcomes of artificial superintelligence. Let's delve deeper into superintelligence.
Table of Contents
What Is Superintelligence?
The Road to Superintelligence
Risks of Superintelligence
Regulation and Ethics in AI Today
Closing Thoughts for Techies
According to Nick Bostrom, superintelligence is "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest." This means that superintelligence is smarter than visionaries in any field and surpasses genius-level human beings at conducting a task.
Bostrom says that superintelligence will eclipse human intelligence in areas, such as scientific creativity, social skills, and general wisdom. This could mean that superintelligence can employ various cognitive processes that make human intelligence what it is, albeit at much higher speed and efficiency.
Hence, superintelligence can be defined as artificial intelligence that performs human-like cognitive processes at exponentially higher speeds and efficiencies when compared to the human mind. The AI programs we have today fall into the category of narrow artificial intelligence, as their domain of authority is constrained to a well-defined problem.
This means that any AI program created today is created for a specific purpose, usually a problem to be solved. Certain AI programs can even learn by ingesting new data relevant to the problem and are termed machine learning algorithms. Going beyond this, the most recent field in the AI space is deep learning.
Deep learning mainly uses neural networks, which are a simulation of the structure of the human brain. These neural networks are able to process data more efficiently than previous AI algorithms, and can even learn from their results to iterate and improve upon themselves. Currently, this is the apex of how AI programs are created, and usually provides better results than their machine learning counterparts when faced with similar problems.
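The "learn from their results to iterate and improve" behavior described above rests on gradient-based weight updates. A minimal sketch, assuming no particular framework: a single artificial neuron fit to a linear target by gradient descent. Real networks stack millions of such units, but the update rule is the same idea.

```python
# Minimal sketch of the learning loop that deep learning rests on:
# one weight, repeatedly nudged to reduce its prediction error.
data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # target: y = 2x
w, lr = 0.0, 0.05  # initial weight and learning rate

for _ in range(200):
    for x, y in data:
        pred = w * x
        grad = 2 * (pred - y) * x   # derivative of squared error w.r.t. w
        w -= lr * grad              # adjust the weight to reduce the error

print(f"learned weight: {w:.3f}")  # converges toward 2.0
```

A deep network repeats exactly this adjustment across many layers of weights at once, with the gradients computed by backpropagation.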
Even as neural networks are more advanced today, they still fall under the category of narrow AI. They are programmed to solve a complex problem and will only know about the problem at hand. Moreover, they cannot expand upon this learning to effectively tackle similar problems without additional training.
The next theoretical step forward would be to create a general artificial intelligence. More commonly referred to as strong AI, general AI will be capable of solving many problems with human-like accuracy. We are yet to create a true general artificial intelligence, as the requirements for it are beyond the scope of current technology.
General AI will be able to replicate human intelligence in all cognitive aspects. This includes concepts, such as abstract thinking, comprehension of complex ideas, reasoning, and general problem-solving. The road to artificial superintelligence requires the creation of such an AI.
Learn More: What Is the Difference Between Artificial Intelligence, Machine Learning, and Deep Learning?
While it is easy to replicate certain cognitive processes, such as arithmetic, statistics, calculus, and language translation, it is more difficult to replicate unconscious cognitive processes. Therefore, superintelligence first requires humanity to create a general artificial intelligence.
In other words, we have managed to make computers accurately replicate the cognitive processes that require thinking but have not replicated the tasks humans carry out without thinking. This includes processes, such as vision, perception, language, and understanding complex ideas. Before doing so, we must build the vast computing infrastructure to power the algorithm.
Any computing infrastructure required to power an AGI must be capable of reproducing the power of the human brain. This has been estimated to be around one exaflop, or a quintillion (10^18) calculations per second. Keeping in mind the current pace of advancement of computing power, this level is expected to be reached around 2025.
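That ballpark can be sanity-checked with back-of-the-envelope arithmetic. The baseline figure (Summit's roughly 148.6 petaflops, the fastest measured supercomputer of 2019) and the two-year doubling period are illustrative assumptions, not claims from the article:

```python
# Back-of-the-envelope check on the ~2025 exaflop estimate above.
import math

EXAFLOP = 1e18                 # 10^18 floating-point operations per second
summit_2019 = 148.6e15         # assumed 2019 baseline (Summit, petaflops)
doubling_years = 2.0           # assumed compute-doubling period

doublings_needed = math.log2(EXAFLOP / summit_2019)
year_reached = 2019 + doublings_needed * doubling_years
print(f"exaflop reached around {year_reached:.0f}")
```

Under these assumptions the crossover lands in the mid-2020s, consistent with the estimate in the text; change either assumption and the date shifts accordingly.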
Secondly, we must find a way to accurately recreate the human mind in a digital form. This has many pitfalls, with the primary one being the inability of humans to understand how the human mind works. The human mind is thought to be a collection of thousands of biological and behavioral processes, much of which we simply do not understand. The complexity of the workings of the mind can never be easily explained, and thus has never been entirely understood, despite several efforts to explain it using science and psychology.
Superintelligence is predicted to be achieved through a phenomenon known as an intelligence explosion. When a general AI is created, it will begin to not only learn from data but also its own actions. As it is conscious of its own actions, it will continually improve itself in a process known as recursive self-improvement.
This means that the program will begin at a human-level intelligence and use that intelligence to improve itself. This will increase the general threshold of what it can process, and it will use this knowledge to improve itself further. This process will cause an exponential increase in the intelligence of the program, with each iteration being more powerful than the previous one.
Then, the program will undergo an explosion of intelligence, improving itself at a rate that surpasses genius-level intelligence. Eventually, it will surpass the collective intelligence of human civilization and will continue to increase its intelligence, thus becoming a superintelligence.
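The compounding described in the two paragraphs above is easy to simulate. In this toy model the 10% improvement per cycle is an arbitrary illustrative assumption; the point is only that capability which feeds back into itself grows exponentially, not linearly:

```python
# Toy model of the 'recursive self-improvement' loop described above:
# each cycle uses current capability to improve itself, so gains compound.
intelligence = 1.0          # start at 'human level'
history = [intelligence]

for cycle in range(50):
    improvement = 0.10 * intelligence   # smarter systems improve faster
    intelligence += improvement
    history.append(intelligence)

# Fixed-ratio steps, not fixed-size steps: growth is exponential.
print(f"after 50 cycles: {intelligence:.1f}x human level")
```

Fifty cycles of 10% gains yield over a hundredfold increase, which is the shape of the "explosion" the theory predicts, whatever the real per-cycle gain might be.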
Learn More: How Artificial Intelligence Has Evolved
As a program that is more intelligent than any human being, even the ones that created it, containing or controlling an artificial superintelligence is a difficult undertaking. By definition, superintelligence will be even smarter than the engineer who created its architecture, allowing it to simply break out of the machine.
Due to this, superintelligence is subject to many risks stemming from the way it is programmed. If the program does not have clearly specified goals, it can make a decision that is unfavorable to humanity.
To properly illustrate this, let us take a scenario into consideration.
Sometime in the future, scientists are at their wits end regarding how climate change can be reversed. Therefore, they create a superintelligence to save the world from global warming. This is defined as its only goal, and upon reaching superintelligence, the program begins ingesting data to determine the cause of global warming.
Here, the intention is to save the human race by engineering a solution to global warming. However, in the circumstance that this condition is not explicitly mentioned, two main scenarios will occur. Firstly, the program will ensure its survival by any means possible, as it must finish its assigned task. Secondly, after looking at the data, the superintelligence may determine that humanity is the biggest cause of global warming and create a plan to wipe humanity off the face of the earth.
Vaguely defined problems are one of the biggest risks that come into play when dealing with superintelligence. This might create a program whose goals are not aligned with humanity's, leading to its extinction. In a situation where the conquering intelligence is significantly higher than humanity's, it is inevitable that misaligned goals will cause the extinction of the lesser species.
As mentioned previously, superintelligence will set self-preservation as one of its biggest priorities. This is because the algorithm will be created with the sole purpose of solving a problem. It cannot solve a problem if it does not have the required resources to do so. This will cause it to compete with humanity for resources, resulting in an adverse scenario for humanity.
In addition to this, a self-aware superintelligence will not allow its goals to be changed after deployment. This means that even if a superintelligence is conducting unfavorable activities to reach its goal, its priorities cannot be changed as the program will simply not allow it.
Prominent science fiction authors have introduced concepts, such as Asimov's Laws of Robotics, which are a set of hard-coded laws that the AI is required to follow. While this might fall into the category of defining a problem better, such safety measures are already being set up for the eventual rise of self-aware AI and, by extension, superintelligence.
Learn More: How Ethically Can Artificial Intelligence Think?
As AI becomes more powerful, regulators around the world are looking for ways to rein in the misuse of such algorithms. Their concerns include the weaponization of AI and the ethical ramifications of releasing powerful AI to the world. Handling the biases exhibited by today's AI also falls under this category.
An example of how the use of AI is being regulated comes from OpenAI's approach to GPT-2 (generative pretrained transformer), a text-generator algorithm. The program was trained on 40GB of text data from around the Internet and was able to generate text just like a human.
Going against their philosophy of open-sourcing all of their models, OpenAI decided against releasing the algorithm. This was due to concerns that the program could be used by parties with malicious intent to create fake news at an alarming rate. They called this an experiment in responsible disclosure, setting the standard for the kind of AI algorithms that should be disclosed to the public.
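GPT-2 itself is a large transformer and cannot be reproduced here, but the train-then-generate pattern it follows can be shown with a drastically simplified stand-in: a word-level Markov chain that learns next-word frequencies from a tiny corpus and samples continuations. This is an illustration of the concept only; it shares nothing with GPT-2's actual architecture.

```python
# Drastically simplified stand-in for a generative text model: learn which
# word follows which in a corpus, then sample continuations. GPT-2 learns
# far richer statistics from 40GB of text, but the pattern is the same.
import random
from collections import defaultdict

random.seed(0)
corpus = ("the model reads text . the model learns patterns . "
          "the model writes text . the text looks human").split()

# 'Training': count next-word occurrences.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=8):
    """Sample a continuation one word at a time."""
    words = [start]
    for _ in range(length):
        options = transitions.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```

Even at this scale the output is fluent-looking recombination of the training text, which hints at why a vastly more capable generator raised the misuse concerns OpenAI cited.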
Such systems must be put in place for the responsible use of powerful AI programs. This is even more relevant in the case of superintelligence, as its misuse will not only result in widespread social ramifications but a possible extinction event for humanity.
The weaponization of AI must also be heavily discouraged, as a weaponized superintelligence will inevitably destroy the human race. Moreover, biases and ethics must also be taken into consideration, as future AI may be conscious and self-aware. Forward-thinking regulations must be put in place, handling both today's issues with AI and the future of AI in the form of general AI and superintelligence.
Overall, even though the concept of superintelligence might seem like something that is restricted to science fiction novels, it is a possibility looming on the horizon. With the pace of AI innovation today, the building blocks for an AGI may be created before we are prepared for it.
The advancement of capable and affordable computing power will set the pace for the general adoption of AGI. Researchers predict that the computing power required to replicate the human brain's capacity will be reached around 2025. This, along with the pace of AI innovation today, sets the goal for a true AGI somewhere in the middle of the 21st century.
According to Nick Bostrom's predictions, general AI could become the dominant AI by the middle of the 21st century. Although Bostrom is correct in assuming that such a human-level intelligence would be possible, the inability to control that intelligence will likely pose a much bigger threat to humans.
What are your views on superintelligence and the changes it will bring to humanity? Let us know on LinkedIn, Twitter, or Facebook. We'd love to hear from you.
Posted: at 4:24 pm
By Tom McLaughlin
Art is in the eye of the beholder. But art made from DNA and living cells? Like it or not, says LiQin Tan, we are only at the beginning of a revolutionary fusion of art and living organisms.
Tan notes that his book looks toward the future to consider how technological singularity's impact on conceptual and live bioart raises many thought-provoking and sometimes controversial issues.
The Rutgers University-Camden artist and researcher explains that the future of bioart (art conceptualizing and/or incorporating biological elements) will continue to be impacted by technology at a meteoric rate.
"This is where art is going; no one can escape it," says the art professor matter-of-factly.
Tan explores the uncharted world of bioart in his new book, Singularity: Subversive BioArt (Guangdong People's Publishing House).
The book is a follow-up to his 2018 offering, Singularity Art: How Technology Singularity Will Impact Art (China Machine Press), which explores the impact of technological singularity, the notion that artificial superintelligence will trigger runaway technological growth, resulting in previously unforeseen changes to human civilization.
"Some people want to panic when they consider, for instance, art that merges living organisms with inanimate materials," says Tan. "Of course, we always fear everything new."
The Rutgers-Camden artist explains that there are two definitions as to what constitutes bioart. The first is live art, which uses genes, DNA, bacteria, algae, and living cells to create artworks.
For instance, some artists are using DNA to create transgenic (genetically modified) plants and animals, says Tan, citing the work of artist Eduardo Kac, who combined rabbit and jellyfish DNA to produce a bunny that glows green under blue light.
In another example, he notes, artist Li Shan changed the genes of pumpkins, resulting in the vegetables growing in an array of different shapes and sizes.
Tan explains that, although biologists alter DNA for scientific purposes (for example, altering a vegetable to make it heartier or taste better), artists change genes with artistic concepts or metaphors in mind.
For instance, artists may try to represent social or political issues, says Tan, who adds that it is still against international standards to change human embryo DNA. People will ask, "Why do you create?" It's because artists need to express themselves.
The other form, says Tan, is called general bioart, which includes anything made from biological elements or symbolizing bio concepts. For instance, he says, a bioart installation may use computer animation to make cells move.
Some people would argue that this isn't bioart, but others agree that it is because it presents biological movement and elements regardless of whether they are living or still, he says.
Tan describes how he created a conceptual bioart installation wherein he grew plants on top of large central processing units (the electrical circuitry of computer systems) arranged in the shape of a square. The creation didn't use soil and relied on humidity in the air.
"My main concept is that the Earth's soil is not the only mother carrier of the plant," says Tan, who, for more than two decades, focused his art on ink-brush drawing on rice paper before being introduced to computers in the early 1990s. "CPU technology has the potential to replace it gradually. In other words, technology would be the carrier of life evolution in the near future."
The Rutgers-Camden artist notes that, while previous books have defined and explored bioart, design, and education, his book looks toward the future to consider how technological singularity's impact on conceptual and live bioart raises many thought-provoking and sometimes controversial issues.
Among these points of discussion, Tan shares his personal philosophy that technology shouldn't be utilized solely to change the tools and media that artists employ, but to change the very nature of what it means to be human.
"Technology is going to change your life construction, the inside of your body," he explains. "So if you change the human body, it will change one's creativity as well."
He adds that genetics for non-human species will be altered as well and a human-dominant view of life and civilization will be altered forever.
"Humans have dominated society for nearly 6,000 years and we treat animals as a lower species," he says. "Technology will enable non-human species to have consciousness and creativity as well, and give animals the opportunity to change and become equal to humans. So then, how will we define beauty and what is considered art? Those definitions will totally reconstruct."
He warns that no one person will be able to hold back technological progress and, with that, safeguard the international ethical standards that come along with these changes. With this inevitability, says Tan, it's up to people everywhere to change the world responsibly.
That is a positive way that we can embrace these changes, he says.
However, Tan readily admits, the debate as to what is considered a responsible and ethical approach will continue. Some people, he notes, believe that a humanoid has already been programmed with deep learning.
In the end, says the Rutgers-Camden artist and researcher, technological advances continue to be made at an unfathomable rate, so it's up to people as humans and artists to realize their untapped potential.
Posted: at 4:23 pm
JAKARTA: As the world's first humanoid robot with citizenship to flaunt, Sophia is no small wonder.
Developed by Hong Kong-based Hanson Robotics, Sophia made the headlines in October 2017 after Saudi Arabia became the first country in the world to grant her citizenship.
On Monday, she shared the stage with other speakers at the 2019 CSIS Global organized by the Center for Strategic and International Studies (CSIS) and the Pacific Economic Cooperation Council (PECC) in the Indonesian capital Jakarta and pushed all the right buttons.
During an interactive session with the audience, Sophia addressed one of mankind's greatest fears: will we one day be replaced by artificial intelligence (AI) and machines?
"Robot brains are modeled after human brains, but they are very different in many ways," Sophia said, adding that there were more opportunities for partnership than for competing against one another.
According to Dr. Luke Hutchison, founding member of Ray Kurzweil's AI Lab at Google, the danger is not the rise of a superintelligence that does something evil to humans.
Citing the recent cases of deadly Tesla crashes due to a faulty autopilot as an example, he argued that the real dangers of AI are not evil AI but bad AI and a lack of human and corporate responsibility.
While Tesla blamed drivers for not taking action seconds prior to the crashes, Hutchison said it was AI technology creators who needed to be held accountable for what they built.
"This is a very common example of what we see in the corporate use of machine learning, where companies are not taking responsibility for the very technologies that they create. And it's a very serious problem," he said in a keynote session.
He added that the deliberate misuse of AI was also problematic. Machine learning-powered disinformation campaigns and AI-based techniques known as deepfakes destroy the human category of truth versus falsehood, which is among our mental means of dealing with the real world.
Deepfakes (realistic video content showing people doing things they had never done or saying things they had never said) give room for making real claims about fake news or for denying real footage by claiming it is fake, which messes with our concept of reality, Hutchison said.
Another issue was brought to the fore by Sophia herself: the extension of human and civil rights to nonhumans.
When asked whether as a Saudi citizen she had to stand in an immigration line or entered Indonesia through customs, she said: "They haven't sent me my passport yet ... I still have to go through customs."
Her response was met with laughter, even as everyone present in the audience was aware of the fact that the issue itself could redefine the basic concept of human and civil rights. Universally denied to animals, which like us are sentient, they may soon be universally granted to insentient nonhumans. Sophia's creator, David Hanson, said last year that this could happen by 2045.
While a discussion on human liberties for nonhumans has yet to start, much has been said about robots taking over our jobs, which appeared to be a major fear among audience members.
In 2017, the McKinsey Global Institute forecast that nearly 800 million jobs could be lost to automation by 2030. However, most of them are the simplest manual occupations, which for ages have seen the use of bonded labor.
Asked about the jobs of tomorrow, Sophia herself listed those that will require governments to offer better education, which consequently will give people more opportunity to flourish. "Engineering and programming will be high on the list," she said, "but we will also need people with creativity and the ability to dream. We will need artists, writers and visionaries."
Posted: at 4:23 pm
The 2019 drought among adult female-driven films at the U.S. box office got a much-needed drenching over the weekend, thanks to a pic with an all-star cast led by Jennifer Lopez and Constance Wu.
Younger and older women turned out in droves to see writer-director Lorene Scafaria's R-rated Hustlers, which opened nationwide to a better-than-expected $33.2 million, posting the biggest live-action opening of Lopez's 24-year film career, as well as the biggest start ever for Wu and STXfilms. More than two-thirds of the audience was female.
In 2018, a crop of movies relying largely on adult women to boost their fortunes prospered, including the PG-13-rated Crazy Rich Asians (also starring Wu), Ocean's 8 and Mamma Mia! Here We Go Again and the R-rated Fifty Shades Freed.
This year has been mostly tough going for movies targeting a female-dominated audience. The August ensemble mob drama The Kitchen, teaming Melissa McCarthy, Tiffany Haddish and Elisabeth Moss, bombed badly, opening to just $5.5 million in the U.S.; it has earned $15 million to date globally.
Failing to break out were What Men Want (starring Taraji P. Henson), Little (starring Issa Rae, Regina Hall and Marsai Martin), Booksmart (led by Beanie Feldstein and Kaitlyn Dever) and Late Night (pairing Mindy Kaling and Emma Thompson), which topped out domestically at $54.6 million, $48.8 million, $22.7 million and $21.4 million, respectively.
"Hustlers is important on many levels, and its success will reverse (at least for now) the idea that audiences only want to go to the multiplex for a big-budget franchise movie experience and resist female-led films. That's a good thing," says Paul Dergarabedian of Comscore.
Titles from the Disney empire are the big exception. Live-action movies from Disney Studios proper, such as this year's blockbuster Aladdin,often play heavily to females, in part because they are PG-rated family titles that also appeal to girls.
And while Disney and Marvel Studios' March 2019 blockbuster Captain Marvel, the first female-fronted superhero pic from Marvel, grossed $1.13 billion at the global box office, males still made up the majority of the audience turning out to see the PG-13-rated film, including 55 percent to 58 percent on opening weekend.
The performance of Hustlers is a much-needed win for films starring and directed by women that aren't Hollywood tentpoles.
Hustlers, with a reported net budget of $20 million after tax incentives, stars Wu and Lopez as strippers who lead a band of dancers in a plot to drug and steal from their Wall Street clientele in New York City in the years after the 2008 recession. It is based on real-life events chronicled in a 2015 New York magazine story.
Julia Stiles, Keke Palmer, Lili Reinhart, Lizzo and Cardi B also star in the movie, which garnered strong reviews and loud buzz out of its debut at the Toronto International Film Festival. (STXfilms picked up the project earlier this year when Megan Ellison of Annapurna Pictures put the movie into turnaround.)
"There is a pretty solid track record over many decades of audiences responding to well-made, interesting crime stories and that is the lens through which we viewed this film," says STXfilms chairman Adam Fogelson. "Some people have understandably broken it down to just being a stripper movie, but that's a mistake. With the incredible appeal of the cast, we always believed that at the right price, it was a bet worth making."
Female viewers made up 67 percent of Hustlers' audience, while Caucasians represented 36 percent of ticket buyers, followed by Hispanics (27 percent), African Americans (26 percent) and Asians/other (11 percent), according to PostTrak.
"We've leaned into films featuring and made by women. And there's no question as to the ongoing benefits of having diversity in the cast and crew," says Fogelson. "I'm thrilled this is another moment of which the industry should take note."
STX's previous biggest opening ($23.8 million) belonged to the 2016 female ensemble pic Bad Moms, not adjusted for inflation.
In July 2018, Warner Bros.' star-studded, gender-bending Ocean's 8 opened to $41.6 million domestically, followed a month later by a pleasing $26.5 million bow for the studio's Crazy Rich Asians. The two films went on to gross $297.7 million and $238.5 million, respectively, at the global box office. That same summer, the older-skewing Book Club, from Paramount, earned an impressive $104.4 million domestically after debuting to a modest $13.6 million.
And in 2017, Universal's female ensemble comedy Girls Trip opened to a strong $31.2 million on its way to grossing $115.1 million domestically.
That compares to a modest $18.2 million domestic launch for Paramount's What Men Want in February, while Universal's Little started off with $15.4 million in April. Booksmart's May opening was even more disappointing, opening to just $6.9 million.
There are several high-profile female-fronted films on the fall and year-end release calendar, including Disney's Maleficent: Mistress of Evil, with Angelina Jolie returning in the title role (due out Oct. 18); Sony and director Elizabeth Banks' Charlie's Angels reboot (Nov. 15); New Line's comedy Superintelligence, starring McCarthy and helmed by her husband, Ben Falcone (Dec. 20); and Sony's Little Women adaptation, directed by Greta Gerwig (Dec. 25).
Hustlers is one of the biggest successes of 2019 to date for an original film, alongside Quentin Tarantino's Once Upon a Time in Hollywood and the comedy Good Boys, both of which are also R-rated.
Says Dergarabedian, "2019 has been a year of box office contradictions and for every original film that failed, there has been another that exceeded expectations."
Posted: August 25, 2017 at 4:19 am
When Elon Musk is not the billionaire CEO running three companies, he has a side hustle as our greatest living prophet of the upcoming war between humans and machines.
In his latest public testimony about the dark future that awaits us all, Musk urged the United Nations to ban artificially intelligent killer robots. And he and his fellow prophets emphasized that we have no time. No time.
In an open letter to the U.N., Musk and 115 other experts in robotics warned of a grim future in which artificial superintelligence would lead to lethal autonomous weapons that would bring "the third revolution in warfare."
And to add some urgency to the matter, the letter said that this future wasn't distant science fiction; it was a near and present danger.
"Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend," the letter states. "We do not have long to act. Once this Pandora's box is opened, it will be hard to close."
Although lethal autonomous weapons are not mainstream yet, they do already exist. Samsung's SGR-A1 sentry robot is reportedly used by the South Korean army to monitor the Korean Demilitarized Zone with guns capable of autonomous firing. Taranis, an unmanned combat air vehicle, is being developed by the U.K. So autonomous weapons are already here. It remains to be seen, however, whether they bring a new world war.
This is not the first time Musk has sounded the alarm on machines taking over. Here's a look at all the ways Musk has tried to convince humanity of its impending doom.
And he hasn't been mild in his warnings. If you're going to get people to pay attention to your robot visions, you need to raise the stakes.
That's what Musk did when he told Massachusetts Institute of Technology students in 2014 that artificial intelligence was "our biggest existential threat." And in case he didn't get the students' attention there, Musk framed artificial intelligence research in a metaphor of good and evil.
"With artificial intelligence we are summoning the demon. In all those stories where there's the guy with the pentagram and the holy water, it's like, yeah, he's sure he can control the demon. Didn't work out," Musk said.
So what can humans use as prayer beads against these robotic demons? Musk thinks that well need to use artificial intelligence to beat artificial intelligence. In a Vanity Fair profile of his artificial intelligence ambitions, Musk said that the human A.I. collective could beat rogue algorithms that could arise with artificial superintelligence.
If you're not sold on A.I. being an existential threat to humanity, are you more alarmed when you consider a world where our robot overlords treat us like pets? This is an argument Musk tried in 2017.
When Musk founded his brain-implant company, Neuralink, in 2017, he needed to explain why developing a connection between brains and machines was necessary. As part of this press tour, he talked with Wait But Why about the background behind his latest company and the existential risk were facing with artificial intelligence.
"We're going to have the choice of either being left behind and being effectively useless, or like a pet, you know, like a house cat or something, or eventually figuring out some way to be symbiotic and merge with AI," Musk told the blog. "A house cat's a good outcome, by the way."
Musk meant that being house cats for the demonic robot overlords is the best possible outcome, of course. But it's also worth considering that house cats are not only well treated and largely adored but also, by acclamation, came to dominate the internet. Humanity could do worse.
Our impending irrelevance means we'll have to become cyborgs to stay useful to this world, according to Musk. While computers can communicate at a trillion bits per second, Musk has said, we flawed humans are built with much slower bandwidth: our puny brains and sluggish fingers process information far more slowly. We will need to evolve past this to stay useful.
To do this, humans and robots will need to form a merger so that "we can achieve a symbiosis between human and machine intelligence, and maybe solve the control problem and the usefulness problem," Musk told an audience at the World Government Summit in Dubai in 2017, according to CNBC.
In other words, one day in the future, humans will have to join forces with artificial intelligence to keep up with the times or become the collared felines Musk fears we'll become without intervention.
What will it take for our robot prophet to be heard, so that his proclamations don't keep falling on deaf ears?
Although Musk's may seem like a minority opinion now, his ideas about the threat of artificial intelligence are becoming more mainstream. His suggestion that we are living right now in a computer simulation staged by future scientists, for instance, has been widely adopted.
Although Facebook CEO Mark Zuckerberg disagrees with Musk's dark future, more tech leaders are siding with Musk when it comes to killer robots. Alphabet's artificial intelligence expert, Mustafa Suleyman, was one of the U.N. open letter's signatories. In the past, Bill Gates has said that the intelligence in A.I. is strong enough to be a concern.
So we can laugh now at these outlandish science fiction worlds where we are robots' domestic pets. But Musk has been sounding the alarm for years, and he has held firm to his beliefs. What may be one man's outlier theory now may become a reality in the future. If nothing else, he's making sure you listen.
Monica Torres is a reporter for Ladders. She is based in New York City and can be reached at email@example.com.