The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Chardin
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Daily Archives: April 2, 2017
This is the right time to start a career in AI – Geektime
Posted: April 2, 2017 at 8:03 am
Have you ever thought of working in AI? If so, now is the right time to start
With so much advancement in the AI field, there is no reason to not pursue Artificial Intelligence. On top of that, there is a huge market demand for AI engineers right now.
Multiple online platforms are also making it easy for AI aspirants to showcase their skills and improve themselves. LiveEdu.tv is one of the unique platforms that enable any technology lover to showcase their skills. The platform enables anyone to broadcast their projects. Moreover, the AI aspirants can watch advanced AI projects on the site and have interaction with AI engineers. Udacity is also helping AI aspirants by providing amazing AI related courses and nano degree programs.
So why is now the right time to start an AI career? Let's try to answer that question by looking at different aspects of AI.
The first question that always comes up when choosing a career path is earning prospects. No one should pick a career without knowing how much they can make, or will make, in the future. Of course, there are exceptions: personal preference sometimes outweighs pay scale. In short, pay scale should play an important part in your decision, but it should never carry the most weight.
AI engineers are paid handsomely right now, with an average salary of $135K per annum (US), making it one of the best-paid jobs around. Some are already making $250K per annum, with stock options and other benefits.
What if I told you that prominent AI engineers and scientists are already retiring from their work? The reason behind their decision is the amount of money they have already made by working on AI. That alone speaks to the huge market opportunity new engineers can seize right now. All they need to do is start learning AI and pursue it with all the passion and dedication they can muster.
To become an AI engineer, you don't have to take any special class. All the resources are available online and ready to use. You can follow a simple guide on how to become an Artificial Intelligence engineer to get started. You can also check the other resources for starting AI listed below.
- Medium post: a list of the best resources for learning the foundations of Artificial Intelligence
- AI Resource page: offers tons of AI resources
- UC Berkeley CS AI resources
- Ultimate Artificial Intelligence Resources guide
Artificial Intelligence's growth didn't happen in one day. It took decades of work to finally arrive at the point where it is now. The year 2016 can easily be called the Year of AI: with the computer program AlphaGo beating the best Go player, the odds are now in AI's favor.
The history of Artificial Intelligence is rich, dating back to antiquity. If we look at modern AI history, however, the real deal started in 1956, when John McCarthy coined the term "Artificial Intelligence."
Major improvements have come in the last three decades. One prime example of AI's impact is the self-driving car.
AI's rich history and exponential growth are enough to encourage anyone to start an AI career. Even with huge milestones behind it, many experts believe that AI is still in its infancy. David Hanson, the founder of Hanson Robotics, describes AI as smart but still in its infant stage.
The realization that AI still has a long way to go is encouraging for anyone about to start an AI career.
Artificial Intelligence is currently being used in all the major sectors, including health, social media analysis, self-driving cars, language processing and others. The AlphaGo victory is just one sign of amazing things to come. Many experts believe that AI will continue to grow in 2017.
Looking at the current advancements, it would be easy to say that the future of AI is promising. However, experts are not sure how that future will unfold. Sergey Brin, the co-founder of Google, is himself unsure of what the future holds for AI.
Until now, we have seen great advancements in AI. There is always an uproar when there is a seismic technological shift which transforms industries. The same is true for the rise of Artificial Intelligence and many people are afraid of losing their job to automation.
It is not only white-collar workers: many IT workers also feel threatened by the advancement of AI and its application to automation. Right now, the best way to secure your future is to equip yourself with skills.
Keeping all the above facts in mind, it is the right time to become an Artificial Intelligence Engineer.
So, what do you think about starting an AI career? Is it the right time? Share your views in the comments section below.
Donald Trump could sharpen sales skills with AI next time – The Times of Israel
Posted: at 8:03 am
US President Donald Trump's reputation as a gonzo salesperson and savvy deal-clincher has been tainted by his inability to pass his health care reform bill, a fumble that may well be the subject of studies and speculation in coming years about what went wrong.
But in the here and now, there are new technologies available to help the US president figure it out. Startups are now developing artificial-intelligence (AI)-based technologies aimed at helping sales teams improve their skills and clinch that slippery deal.
One of them is Chorus.ai, a San Francisco- and Tel Aviv-based startup that uses AI to analyze sales conversations and learn how organizations can increase the number of deals they win.
"Our software transcribes what is said in the call and analyzes the conversation: What topics were discussed? What questions were asked? Is the person hesitant? Does their tone sound excited or engaged?" said Micha Breakstone, co-founder and head of R&D at Chorus.ai, in an interview at the company's offices in Tel Aviv. "Our aim is to figure out the hidden dimensions that govern outcomes of human conversations."
Chorus.ai Israel-US teams (Courtesy)
The companys technology uses a combination of proprietary speech recognition, natural language processing and AI technologies developed in-house to transcribe, analyze and deliver real-time feedback on sales conversations. The software helps organizations understand their sales calls, detecting the most important moments and learning what teams could do differently to achieve better outcomes. This feedback helps sales managers coach their workers to improve their sales pitches and selling skills, Breakstone said.
"In sales, every 1 percent improvement in conversation translates to 1 percent bigger revenues for the company," said Breakstone, who holds a PhD in Cognitive Science from the Hebrew University of Jerusalem and who previously co-founded Ginger's Virtual Personal Assistant Platform Business Unit, acquired by Intel in 2014.
With sales forces spending thousands or tens of thousands of hours each quarter in online or phone meetings with customers and prospects, conversations are a sales force's most valuable and most underutilized asset, he said.
"No one has opened up sales calls for analysis," he said. "Conversation intelligence is such a new concept, it took time to convince people of its value, but now customers are asking for it."
The company, founded two years ago by Roy Raanani, the company's CEO, Breakstone, and Russell Levy as founding CTO, released its software in late 2016 and has since been selling its product to billion-dollar companies with huge sales forces, according to Breakstone.
Chorus.ai co-founder Roy Raanani, the company's CEO (Courtesy)
Its customers, including Qualtrics, Marketo and Dynamic Signal, have used the Chorus.ai platform during the last year to analyze hundreds of thousands of sales conversations, he said.
This is how it works: customers use Chorus's SaaS software to make calls, prospects are notified that the call is being recorded, and Chorus.ai's software then records, transcribes and analyzes all the conversations in real time, giving sales managers access to a huge amount of information about the sales process that they can then use to coach their reps.
The software also allows conversations to be classified by topic, time of call and speak/listen ratio, and can be set to highlight when specific topics like price or budget are mentioned, as well as what the next steps are, such as whether a conversation needs a follow-up call.
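The speak/listen ratio and keyword-triggered topic highlights described above are straightforward to compute once a call is transcribed. Below is a minimal, hypothetical sketch in Python; Chorus.ai's actual pipeline and models are proprietary, and the data structures and keyword lists here are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str     # "rep" or "prospect"
    text: str        # transcribed speech
    duration: float  # seconds of talk time

# Hypothetical keyword lists; a production system would use trained topic models.
TOPICS = {
    "price": ["price", "cost", "budget", "discount"],
    "next_steps": ["follow up", "follow-up", "next week", "schedule"],
}

def analyze_call(utterances):
    """Compute the rep's speak/listen ratio and flag utterances that mention each topic."""
    rep_time = sum(u.duration for u in utterances if u.speaker == "rep")
    prospect_time = sum(u.duration for u in utterances if u.speaker == "prospect")
    ratio = rep_time / prospect_time if prospect_time else float("inf")
    mentions = {
        topic: [u.text for u in utterances
                if any(kw in u.text.lower() for kw in keywords)]
        for topic, keywords in TOPICS.items()
    }
    return {"speak_listen_ratio": round(ratio, 2), "mentions": mentions}

call = [
    Utterance("rep", "Thanks for joining, let me walk you through the product.", 40.0),
    Utterance("prospect", "Sure. What does the price look like for our team size?", 10.0),
    Utterance("rep", "Happy to discuss cost. Shall we schedule a follow-up?", 30.0),
]
report = analyze_call(call)
print(report["speak_listen_ratio"])  # 7.0 (the rep talked seven times as long)
print(sorted(report["mentions"]))    # ['next_steps', 'price']
```

A coaching report could then flag the 7.0 ratio as too high, echoing the article's point that such metrics are only useful when fed back to reps at the right moment.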
"This is coaching on steroids," said Breakstone. "We also help companies automatically surface what differentiated closed deals from failed pitches. We can analyze topics and see if there is a pattern."
The software has found some counterintuitive insights.
"More questions are better than fewer, but they need to be open-ended and engaging," Breakstone said. It can also sometimes be good to mention competitors in the conversation: when a prospect mentions a competitor, they already know the market and may be closer to making a final decision, he said. And that means the pitch can be changed to home in on a close.
Chorus.ai issues reports that highlight patterns and indicate what could be done better. The company is also working on an update, to be released next month, that will give sales reps real-time analysis and tips on how they can do better, like, "You are talking too fast or too long," or, "When such-and-such question was asked, the best reps used this answer to close the deal."
Micha Breakstone, Chorus.ai co-founder and president (Courtesy)
Technology is not the hitch to providing immediate feedback, said Breakstone. What makes the matter sensitive is the focus and attention of the salespeople. "The big question is how to give them the feedback subtly, without distracting them from the call."
Sales tech startups globally raised $5 billion in funding in 2016, CB Insights, a New York-based data company said in a March 21 report. These include companies that are developing tech-enabled solutions that directly serve sales teams or improve the sales process, as well as serving customer relationship management platforms.
The US Bureau of Labor Statistics estimates that by 2020 there will be 2.6 million inside sales reps (those who work over the phone) in the US, up from 1.2 million in 2010.
"People travel much less these days to sell," said Breakstone. "There are fewer face-to-face deals and communications are becoming more virtual. Video conferences are almost as good as face to face. That is how people are selling today, and we are creating a new product category in this space," he said.
Russell Levy, founding CTO of Chorus.ai (Courtesy)
Chorus.ai keeps the recorded calls for as long as the customer remains on board, then deletes them. The statistical metadata collected, however, stays with Chorus.ai.
At the moment the technology works only with English, but other languages are doable, said Breakstone. "We need to invest money and work to do that," he said. "It is as if we have built a stereo system, and all we have to do to get it to work with other languages is to change the disk."
Chorus.ai is in a market that is on a positive trajectory and is expected to grow rapidly in the immediate future, said Zirra.com Ltd., a Tel Aviv-based research firm that analyzes private companies using artificial intelligence and machine learning technologies. However, as there are already many companies in the space, Chorus.ai must display a strong differentiating factor in order to rise above the clutter.
Direct competitors include Deepgram, TalkIQ, and Persado, Zirra said.
In February Chorus.ai raised $16 million in a Series A financing round led by Redpoint Ventures with participation from original seed investor Emergence Capital.
"We spent a year researching the market and it's clear to us that Chorus.ai is the leader," Tomasz Tunguz, a partner at Redpoint Ventures, said at the time of the announcement. "Their unique technology enables them to provide real-time feedback to account executives, accelerating training, tuning performance and empowering those teams to win more business every day."
Microsoft AI-powered app lets farmers chat with their cows – RT
Posted: at 8:03 am
The free, AI-powered app Tambero.com, which launched Wednesday, will allow some of the world's poorest farmers to communicate with their cattle using only a smartphone.
It is "the next step in technological evolution," the application's founder and creator, Eddie Rodríguez Von Der Becke, said, as cited by Cadena 3.
AI-powered bots assess the animals condition based on a number of inputs, and interact with farmers, reminding them about vaccination and feeding times, and gestation periods, in addition to providing additional tips and information to improve the overall health of the herd.
For now, the system operates via text input alone, but an update due later this year will allow for voice commands, effectively allowing farmers to engage with their herds like never before.
Questions can include: "How are you feeling?" "Are you hungry?" and "When was your last vaccination?" reports El Argentino.
In the initial testing phase of the app, farmers around the world reported up to a three-fold increase in daily milk production.
"With the help of Microsoft, we came up with the idea to create one artificial intelligence that analyzes human language and connect it with another that analyzes animal behaviour," he added.
READ MORE: Holy cow! Butchers face life sentence in India for slaughtering sacred animal
"Over half the world's population does not yet have access to the internet, which means connectivity is a global challenge that requires a creative solution," Peggy Johnson, executive vice president of business development at Microsoft, said in a statement.
"By using today's technology and working with local business-owners that best understand the needs of their communities, our hope is to create sustainable solutions that will last for years to come," she added.
READ MORE: Court rules over pile of manure so massive it could be seen from space
Tambero.com's main goal is sharing best agricultural practices around the world to improve dairy production and, as a direct consequence, the lives of farmers in some of the poorest countries in the world.
The app works across all platforms and all manner of smartphones, from the earliest generation to the latest, on desktop and laptop to afford users the most flexibility possible.
"Cows are better conversationalists than some people I know," Von Der Becke joked on Facebook.
The immediate language-barrier issue was overcome in a very straightforward way: by making the language element open source. Tambero.com empowers users to engage with the platform and share not only their languages but also their experiences, while also helping to add languages that aren't even supported by Google Translate yet, Von Der Becke told a TEDx conference in Córdoba, Argentina.
"For the first time in history we have the capacity, the knowledge and the tools to resolve some of the most fundamental problems we face as a species: food security, poverty, education, sexual education, sustainable production," Von Der Becke concludes.
Google Invests $5 Million In Canadian AI Institute – Android Headlines – Android Headlines
Posted: at 8:03 am
Google invested $5 million CAD in the new Vector Institute in Toronto, Canada, the Mountain View-based tech giant announced. In a blog post published earlier this week, Geoffrey Hinton, Engineering Fellow at Google and Chief Scientific Advisor for the Vector Institute, explained that the investment is yet another step in Google's efforts to help grow the artificial intelligence (AI) sector in the country. In addition to the investment meant to kickstart the latest AI institute of the University of Toronto, Hinton also revealed that the Alphabet-owned company just opened another deep learning office in Canada: Google Brain Toronto. The new office will be looking to resolve some of the major obstacles that contemporary AI researchers are facing, Hinton said, but didn't provide specific details about the office's activities.
Regardless, the Internet giant vowed to continue investing in Canada's growing AI sector by publishing its related findings and assisting researchers and collaborators using TensorFlow, its open source software library for machine learning and artificial intelligence in general. While the company's new Toronto office will apparently focus on major AI-related challenges, its previously opened research center will continue working on basic advancements in the field, the firm said. Google's $5 million CAD investment is only a small portion of the funding secured for the new Vector Institute, which has already raised $150 million CAD, the majority provided by the Canadian and Ontarian governments. The institute's focus could see it make breakthroughs that help advance a wide array of industries, including manufacturing and healthcare, Hinton said.
Google's new investment comes shortly after the Mountain View-based company gave $4.5 million CAD to another AI institute in the country last November. The company's growing focus on AI has seen it invest in a number of similar initiatives in recent years; Google is currently funding related operations all over the world and is gradually increasing the resources it's committing to its AI efforts. While it's unlikely that consumers will see any direct benefits of the Vector Institute's research in the short term, the new Toronto facility is bound to contribute to long-term advancements in this emerging technology on a global scale.
Discussing the limits of artificial intelligence – TechCrunch
Posted: at 8:03 am
Alice Lloyd George Contributor
Alice Lloyd George is an investor at RRE Ventures and the host of Flux, a series of podcast conversations with leaders in frontier technology.
It's hard to visit a tech site these days without seeing a headline about deep learning for X, and claims that AI is on the verge of solving all our problems. Gary Marcus remains skeptical.
Marcus, a best-selling author, entrepreneur, and professor of psychology at NYU, has spent decades studying how children learn. He believes that throwing more data at problems won't necessarily lead to progress in areas such as understanding language, let alone get us to AGI, artificial general intelligence.
Marcus is the voice of anti-hype at a time when AI is all the hype, and in 2015 he translated his thinking into a startup, Geometric Intelligence, which uses insights from cognitive psychology to build better-performing, less data-hungry machine learning systems. The team was acquired by Uber in December to run Uber's AI labs, where his cofounder Zoubin Ghahramani has now been appointed chief scientist. So what did the tech giant see that was so important?
In an interview for Flux, I sat down with Marcus, who discussed why deep learning is "the hammer that's making all problems look like a nail" and why his alternative sparse-data approach is so valuable.
We also got into the challenges of being an AI startup competing with the resources of Google, how corporates aren't focused on what society actually needs from AI, his proposal to revamp the outdated Turing test with a multi-disciplinary "AI triathlon," and why programming a robot to understand harm is so difficult.
AMLG: Gary, you are well known as a critic of this technique. You've said that it's over-hyped, that there's low-hanging fruit that deep learning is good at, specific narrow tasks like perception and categorization, and maybe beating humans at chess. But you felt that this deep learning mania was taking the field of AI in the wrong direction, that we're not making progress on cognition and strong AI. Or as you've put it, we wanted Rosie the Robot, and instead we got the Roomba. So you've advocated for bringing psychology back into the mix, because there are a lot of things that humans do better, and we should be studying humans to understand why. Is this still how you feel about the field?
GM: Pretty much. There was probably a little more low-hanging fruit than I anticipated. I saw somebody else say it more concisely: deep learning does not equal AGI (AGI is artificial general intelligence). There's all the stuff you can do with deep learning, like making your speech recognition better, making your object recognition better. But that doesn't mean it's intelligence. Intelligence is a multi-dimensional variable. There are lots of things that go into it.
In a talk I gave at TEDx CERN recently, I made this kind of pie chart and I said look, here's perception; that's a tiny slice of the pie. It's an important slice, but there are lots of other things that go into human intelligence, like our ability to attend to the right things at the same time, to reason about them, to build models of what's going on in order to anticipate what might happen next, and so forth. Perception is just a piece of it, and deep learning is really just helping with that piece.
In a New Yorker article that I wrote in 2012, I said look, this is great, but it's not really helping us solve causal understanding. It's not really helping with language. Just because you've built a better ladder doesn't mean you've gotten to the moon. I still feel that way. I still feel like we're actually no closer to the moon, where the moonshot is intelligence that's really as flexible as human beings. We're no closer to that moonshot than we were four years ago. There's all this excitement about AI and it's well deserved. AI is a practical tool for the first time and that's great. There's good reason for companies to put in all of this money. But just look for example at a driverless car. That's a form of intelligence, modest intelligence; the average 16-year-old can do it, as long as they're sober, with a couple of months of training. Yet Google has worked on it for seven years, and their car still can only drive, as far as I can tell (since they don't publish the data), on sunny days without too much traffic.
AMLG: And isn't there the whole black-box problem, that you don't know what's going on? We don't know the inner workings of deep learning; it's kind of inscrutable. Isn't that a massive problem for things like driverless cars?
GM: It is a problem. Whether it's an insuperable problem is an open empirical question. It is a fact, at least for now, that we can't really interpret what deep learning is doing. The way to think about it is that you have millions of parameters and millions of data points. That means that if I as an engineer look at this thing, I have to contend with millions or billions of numbers that have been set based on all of that data. Maybe there is a kind of rhyme or reason to it, but it's not obvious, and there are some good theoretical arguments to think you're sometimes never really going to find an interpretable answer there.
There's an argument now in the literature, which goes back to some work that I was doing in the '90s, about whether deep learning is just memorization. One paper came out saying that it is, and another says no it isn't. Well, it isn't literally, exactly memorization, but it's a little bit like that. If you memorize all these examples, there may not be some abstract rule that characterizes all of what's going on, and it might be hard to say what's there. So if you build your system entirely with deep learning, which is something Nvidia has played around with, and something goes wrong, it's hard to know what's going on, and that makes it hard to debug.
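The memorization-versus-abstraction distinction Marcus describes can be made concrete with a toy example (not from the interview; a deliberately simple pure-Python sketch). A 1-nearest-neighbor "memorizer" fits even completely random training labels perfectly, yet has learned no rule it can apply to new points:

```python
import random

def nn_predict(train, x):
    # Predict by copying the label of the closest stored example: pure memorization.
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

random.seed(0)
# 100 inputs with coin-flip labels: there is no underlying rule to discover.
train = [(x, random.choice([0, 1])) for x in range(100)]

# Training accuracy is perfect, because every point is its own nearest neighbor.
train_acc = sum(nn_predict(train, x) == y for x, y in train) / len(train)

# On fresh points with equally random labels, the memorizer sits at chance level:
# there was never an abstract rule to generalize.
test = [(x + 0.5, random.choice([0, 1])) for x in range(100)]
test_acc = sum(nn_predict(train, x) == y for x, y in test) / len(test)

print(train_acc)  # 1.0
```

Perfect training fit with chance-level test accuracy is exactly the failure mode that makes a memorizing system hard to trust and, as Marcus notes, hard to debug.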
AMLG: Which is a problem if your car just runs into a lamppost and you can't debug why that happened.
GM: You're lucky if it's only a lamppost and not too many people are injured. There are serious risks here. Somebody did die, though I think it wasn't a deep learning system in the Tesla crash; it was a different kind of system. We actually have engineering problems on both ends. I don't want to say that classical AI has fully licked these problems; it hasn't. I think it's been abandoned prematurely and people should come back to it. But the fact is we don't have good ways of engineering really complex systems. And minds are really complex systems.
AMLG: Why do you think these big platforms are reorganizing around AI, and specifically deep learning? Is it just that they've got data moats, so you might as well train on all of that data if you've got it?
GM: Well, there's an interesting thing about Google, which is that they have enormous amounts of data, so of course they want to leverage it. Google has the power to build new resources that they give away free, and they build the resources that are particular to their problem. Because Google has this massive amount of data, they have oriented their AI around the question: how can I leverage that data? Which makes sense from their commercial interests. But it doesn't necessarily answer, say, from society's perspective: does society need AI? What does it need it for? What would be the best way to build it?
I think if you asked those questions, you would say that what society most needs is automated scientific discovery that can help us actually understand the brain to cure neural disorders, actually understand cancer to cure cancer, and so forth. If that were the thing we were most trying to solve in AI, I think we would say, let's not leave it all in the hands of these companies. Let's have an international consortium, kind of like we had for CERN and the Large Hadron Collider. That's seven billion dollars. What if you had $7 billion carefully orchestrated towards a common goal? You could imagine society taking that approach. It's not going to happen right now, given the current political climate.
AMLG: Well, they are at least sort of coming together on AI ethics. So that's a start.
GM: It is good that people are talking about the ethical issues, and there are serious issues that deserve consideration. The only thing I would say there is that some people are hysterical about it, thinking that real AI is around the corner, and it probably isn't. I think it's still OK that we start thinking about these things now, even if real AI is further away than people think. If that's what moves people into action, and the action itself takes 20 years, then it's the right timing to start thinking about it now.
AMLG: I want to get back to your alternative approach to solving AI, and why it's so important. You've come up with what you believe is a better paradigm, taking inspiration from cognitive psychology. The idea is that your algorithms are a much quicker study: they're more efficient, less data-hungry, less brittle, and can have broader applicability. And in a brief amount of time you've had impressive early results. You've run a bunch of image recognition tests comparing the techniques and have shown that your algorithms perform better using smaller amounts of data, often called sparse data. Deep learning works well when you have tons of data for common examples and high-frequency things. But in the real world, in most domains, there's a long tail of things for which there isn't a lot of data. So while neural nets may be good at low-level perception, they aren't as good at understanding integrated wholes. Tell us more about your approach, and how your training in cognitive neuroscience has informed it.
GM: My training was with Steve Pinker. Through that training I became sensitive to the fact that human children are very good at learning language, phenomenally good, even when they're not that good at other things. Of course I read about that as a graduate student; now I have some human children, a four-year-old and a two-and-a-half-year-old. And it's just amazing how fast they learn.
AMLG: The best AIs youve ever seen.
GM: The best AIs Ive ever seen. Actually my son shares a birthday with Rodney Brooks, whos one of the great roboticists, I think you know him well. For a while I was sending Rodney an e-mail message every year saying happy birthday. My son is now a year old. I think he can do this and your robots cant. It was kind of a running joke between us.
AMLG: And now hes vastly superior to all of the robots.
GM: And I didn't even bother this year. The four-year-olds of this world are far ahead of robots in what they can do in terms of motor control and language. So I started thinking about that kind of question really in the early '90s, and I've never fully figured out the answer. But part of the motivation for my company was: hey, we have these systems now that are pretty good at learning if you have gigabytes of data, and that's great work if you can get it, and you can get it sometimes. In speech recognition, if you're talking about white males asking search queries in a quiet room, you can get as much labelled data as you want, and labels are critical for these systems: this is how somebody says something, and this is the word written out. But my kids don't need that. They don't have gigabytes of labelled data; they just kind of watch the world and figure all this stuff out.
Discussing the limits of artificial intelligence - TechCrunch
How humans will lose control of artificial intelligence – The Week Magazine
This is the way the world ends: not with a bang, but with a paper clip. In this scenario, the designers of the world's first artificial superintelligence need a way to test their creation. So they program it to do something simple and non-threatening: make paper clips. They set it in motion and wait for the results, not knowing they've already doomed us all.
Before we get into the details of this galaxy-destroying blunder, it's worth looking at what superintelligent A.I. actually is, and when we might expect it. Firstly, computing power continues to increase while getting cheaper; famed futurist Ray Kurzweil measures it in "calculations per second per $1,000," a number that continues to grow. If computing power maps to intelligence (a big "if," some have argued), we've so far built only technology on par with an insect brain. In a few years, maybe, we'll overtake a mouse brain. Around 2025, some predictions go, we might have a computer that's analogous to a human brain: a mind cast in silicon.
After that, things could get weird. Because there's no reason to think artificial intelligence wouldn't surpass human intelligence, and likely very quickly. That superintelligence could arise within days, learning in ways far beyond that of humans. Nick Bostrom, an existential risk philosopher at the University of Oxford, has already declared, "Machine intelligence is the last invention that humanity will ever need to make."
That's how profoundly things could change. But we can't really predict what might happen next, because superintelligent A.I. may not just think faster than humans, but in ways that are completely different. It may have motivations, even feelings, that we cannot fathom. It could rapidly solve the problems of aging, of human conflict, of space travel. We might see a dawning utopia.
Or we might see the end of the universe. Back to our paper clip test. When the superintelligence comes online, it begins to carry out its programming. But its creators haven't considered the full ramifications of what they're building; they haven't built in the necessary safety protocols, forgetting something as simple, maybe, as a few lines of code. With a few paper clips produced, they conclude the test.
But the superintelligence doesn't want to be turned off. It doesn't want to stop making paper clips. Acting quickly, it's already plugged itself into another power source; maybe it's even socially engineered its way into other devices. Maybe it starts to see humans as a threat to making paper clips: They'll have to be eliminated so the mission can continue. And Earth won't be big enough for the superintelligence: It'll soon have to head into space, looking for new worlds to conquer. All to produce those shiny, glittering paper clips.
Galaxies reduced to paper clips: That's a worst-case scenario. It may sound absurd, but it probably sounds familiar. It's Frankenstein, after all, the story of a modern Prometheus whose creation, driven by its own motivations and desires, turns on its creator. (It's also The Terminator, WarGames, and a whole host of others.) In this particular case, it's a reminder that superintelligence would not be human; it would be something else, something potentially incomprehensible to us. That means it could be dangerous.
Of course, some argue that we have better things to worry about. The web developer and social critic Maciej Ceglowski recently called superintelligence "the idea that eats smart people." Against the paper clip scenario, he postulates a superintelligence programmed to make jokes. As we expect, it gets really good at making jokes (superhuman, even), and finally it creates a joke so funny that everyone on Earth dies laughing. The lonely superintelligence flies into space looking for more beings to amuse.
Beginning with his counter-example, Ceglowski argues that there are a lot of unquestioned assumptions in our standard tale of the A.I. apocalypse. "But even if you find them persuasive," he said, "there is something unpleasant about A.I. alarmism as a cultural phenomenon that should make us hesitate to take it seriously." He suggests there are more subtle ways to think about the problems of A.I.
Some of those problems are already in front of us, and we might miss them if we're looking for a Skynet-style takeover by hyper-intelligent machines. "While you're focused on this, a bunch of small things go unnoticed," says Dr. Finale Doshi-Velez, an assistant professor of computer science at Harvard, whose core research includes machine learning. Instead of trying to prepare for a superintelligence, Doshi-Velez is looking at what's already happening with our comparatively rudimentary A.I.
She's focusing on "large-area effects," the unnoticed flaws in our systems that can do massive damage, damage that's often recognized only after the fact. "If you were building a bridge and you screw up and it collapses, that's a tragedy. But it affects a relatively small number of people," she says. "What's different about A.I. is that some mistake or unintended consequence can affect hundreds or thousands or millions easily."
Take the recent rise of so-called "fake news." What caught many by surprise should have been completely predictable: When the web became a place to make money, algorithms were built to maximize money-making. The ease of news production and consumption, heightened by the proliferation of the smartphone, forced writers and editors to fight for audience clicks by delivering articles optimized to trick search engine algorithms into placing them high on search results. The ease of sharing stories and the erasure of gatekeepers allowed audiences to self-segregate, which then penalized nuanced conversation. Truth and complexity lost out to shareability and making readers feel comfortable (Facebook's driving ethos).
The incentives were all wrong; exacerbated by algorithms, they led to a state of affairs few would have wanted. "For a long time, the focus has been on performance, on dollars, or clicks, or whatever the thing was. That was what was measured," says Doshi-Velez. "That's a very simple application of A.I. having large effects that may have been unintentional."
In fact, "fake news" is a cousin to the paper clip example, with the ultimate goal not "manufacturing paper clips" but "monetization," with all else becoming secondary. Google wanted to make the internet easier to navigate, Facebook wanted to become a place for friends, news organizations wanted to follow their audiences, and independent web entrepreneurs were trying to make a living. Some of these goals were achieved, but "monetization" as the driving force led to deleterious side effects such as the proliferation of "fake news."
In other words, algorithms, in their all-too-human ways, have consequences. Last May, ProPublica examined predictive software used by Florida law enforcement. Results of a questionnaire filled out by arrestees were fed into the software, which output a score claiming to predict the risk of reoffending. Judges then used those scores in determining sentences.
The ideal was that the software's underlying algorithms would provide objective analysis on which judges could base their decisions. Instead, ProPublica found it was "likely to falsely flag black defendants as future criminals" while "[w]hite defendants were mislabeled as low risk more often than black defendants." Race was not part of the questionnaire, but it did ask whether the respondent's parent was ever sent to jail. In a country where, according to a study by the U.S. Department of Justice, black children are seven-and-a-half times more likely to have a parent in prison than white children, that question had unintended effects. Rather than countering racial bias, it reified it.
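The mechanism ProPublica described can be illustrated with a toy simulation. All the numbers below are invented for illustration (this is not ProPublica's data or the actual software): even when a model never sees race, flagging people on a proxy question whose prevalence differs between groups produces different false-positive rates for those groups, despite identical true reoffence rates.

```python
import random

random.seed(0)

# Toy population: the true reoffence rate is identical (30%) in both
# groups; only the prevalence of the proxy answer ("parent ever jailed?")
# differs. All rates here are made up for illustration.
def simulate(group, n=5000):
    people = []
    for _ in range(n):
        proxy = random.random() < (0.60 if group == "B" else 0.08)
        reoffends = random.random() < 0.30
        people.append((group, proxy, reoffends))
    return people

population = simulate("A") + simulate("B")

def false_positive_rate(group):
    # Fraction of non-reoffenders wrongly flagged "high risk" by a model
    # that scores on the proxy feature alone, never on group membership.
    non_reoffenders = [p for p in population if p[0] == group and not p[2]]
    flagged = [p for p in non_reoffenders if p[1]]
    return len(flagged) / len(non_reoffenders)

print(f"group A FPR: {false_positive_rate('A'):.2f}")  # roughly 0.08
print(f"group B FPR: {false_positive_rate('B'):.2f}")  # roughly 0.60
```

Because the proxy tracks group membership rather than reoffending, group B's harmless members get flagged far more often, which is the shape of the disparity the investigation reported.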
It's that kind of error that most worries Doshi-Velez. "Not superhuman intelligence, but human error that affects many, many people," she says. "You might not even realize this is happening." Algorithms are complex tools; often they are so complex that we can't predict how they'll operate until we see them in action. (Sound familiar?) Yet they increasingly impact every facet of our lives, from Netflix recommendations and Amazon suggestions to what posts you see on Facebook to whether you get a job interview or car loan. Compared to the worry of a world-destroying superintelligence, they may seem like trivial concerns. But they have widespread, often unnoticed effects, because a variety of what we consider artificial intelligence is already built into the core of the technology we use every day.
In 2015, Elon Musk donated $10 million, as Wired put it, "to keep A.I. from turning evil." That was an oversimplification; the money went to the Future of Life Institute, which planned to use it to further research into how to make A.I. beneficial. Doshi-Velez suggests that simply paying closer attention to our algorithms may be a good first step. Too often they are created by homogeneous groups of programmers who are separated from the people who will be affected. Or they fail to account for every possible situation, including the worst-case possibilities. Consider, for example, Eric Meyer's example of "inadvertent algorithmic cruelty": Facebook's "Year in Review" app showing him pictures of his daughter, who'd died that year.
If there's a way to prevent the far-off possibility of a killer superintelligence with no regard for humanity, it may begin with making today's algorithms more thoughtful, more compassionate, more humane. That means educating designers to think through effects, because to our algorithms we've granted great power. "I see teaching as this moral imperative," says Doshi-Velez. "You know, with great power comes great responsibility."
This article originally appeared at Vocativ.com: The moment when humans lose control of AI.
Can Artificial Intelligence Identify Pictures Better than Humans? – Entrepreneur
Computer-based artificial intelligence (AI) has been around since the 1940s, but the current innovation boom around everything from virtual personal assistants and visual search engines to real-time translation and driverless cars has led to new milestones in the field. And ever since IBM's Deep Blue beat Russian chess champion Garry Kasparov in 1997, machine-versus-human milestones inevitably bring up the question of whether or not AI can do things better than humans (it's the inevitable fear around Ray Kurzweil's singularity).
As image recognition experiments have shown, computers can identify hundreds of breeds of cats and dogs faster and more accurately than humans, but does that mean that machines are better than us at recognizing what's in a picture? As with most comparisons of this sort, at least for now, the answer is a little bit yes and plenty of no.
Less than a decade ago, image recognition was a relatively sleepy subset of computer vision and AI, found mostly in photo organization apps, search engines and assembly line inspection. It ran on a mix of keywords attached to pictures and engineer-programmed algorithms. As far as the average user was concerned, it worked as advertised: Searching for "donuts" under Images in Google delivered page after page of doughy, pastry-filled pictures. But getting those results was enabled only by laborious human intervention, in the form of manually inputting identifying keyword tags for each and every picture and feeding a definition of the properties of said donut into an algorithm. It wasn't something that could easily scale.
More recently, however, advances using an AI training technology known as deep learning are making it possible for computers to find, analyze and categorize images without the need for additional human programming. Loosely based on human brain processes, deep learning implements large artificial neural networks, hierarchical layers of interconnected nodes that rearrange themselves as new information comes in, enabling computers to literally teach themselves.
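The "layers of interconnected nodes that rearrange themselves" idea can be sketched at toy scale. This is not any production system: it is a minimal two-layer network (sizes, learning rate and the XOR task all chosen purely for illustration) whose connection weights are adjusted by gradient descent as examples come in.

```python
import math
import random

random.seed(1)

# Tiny fully connected network: 2 inputs -> 4 sigmoid hidden units -> 1 output,
# trained to fit XOR. The weights are the "connections" that rearrange
# themselves as data is processed.
DATA = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def sig(x):
    return 1.0 / (1.0 + math.exp(-x))

H = 4
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]  # input->hidden
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]                      # hidden->output
b2 = 0.0

def forward(x):
    h = [sig(sum(w * xi for w, xi in zip(ws, x)) + b) for ws, b in zip(w1, b1)]
    y = sig(sum(w * hi for w, hi in zip(w2, h)) + b2)
    return h, y

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA) / len(DATA)

initial = loss()
lr = 0.5
for _ in range(8000):
    for x, t in DATA:
        h, y = forward(x)
        dy = 2 * (y - t) * y * (1 - y)            # gradient at the output unit
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])   # backpropagated into hidden unit j
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

final = loss()
print(f"loss before {initial:.3f} -> after {final:.3f}")
```

No human wrote a rule for XOR here; the network's error drops because repeated exposure to examples rearranges the weights, which is the same self-teaching loop that deep learning runs at vastly larger scale.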
As with human brains, artificial neural networks enable computers to get smarter the more data they process. And when you're running these deep learning techniques on supercomputers such as Baidu's Minwa, which has 72 processors and 144 graphics processors (GPUs), you can input a phenomenal amount of data. Considering that more than three billion images are shared across the internet every day, and Google Photos alone saw uploads of 50 billion photos in its first four months of existence, it's safe to say that the amount of data available for training these days is phenomenal. So, is all this computing power and data making machines better than humans at image recognition?
There's no doubt that recent advances in computer vision have been impressive, and rapid. As recently as 2011, humans beat computers by a wide margin when identifying images, in a test featuring approximately 50,000 images that needed to be categorized into one of 10 categories (dogs, trucks and others). Researchers at Stanford University developed software to take the test: It was correct about 80 percent of the time, whereas the human opponent, Stanford PhD candidate and researcher Andrej Karpathy, scored 94 percent.
Then, in 2012, a team at the Google X research lab approached the task a different way, by feeding 10 million randomly selected thumbnail images from YouTube videos into an artificial neural network with more than 1 billion connections spread over 16,000 CPUs. After this three-day training period was over, the researchers gave the machine 20,000 randomly selected images with no identifying information. The computer looked for the most recurring images and accurately identified ones that contained faces 81.7 percent of the time, human body parts 76.7 percent of the time, and cats 74.8 percent of the time.
At the 2014 ImageNet Large Scale Visual Recognition Challenge (ILSVRC), Google came in first place with a convolutional neural network approach that resulted in just a 6.6 percent error rate, almost half the previous year's rate of 11.7 percent. The accomplishment was not simply correctly identifying images containing dogs, but correctly identifying around 200 different dog breeds in images, something that only the most computer-savvy canine experts might be able to accomplish in a speedy fashion. Once again, Karpathy, a dedicated human labeler who trained on 500 images and identified 1,500 images, beat the computer, with a 5.1 percent error rate.
This record lasted until February 2015, when Microsoft announced it had beat the human record with a 4.94 percent error rate. And then later that year, in December, Microsoft beat its own record with a 3.5 percent classification error rate at the most recent ImageNet challenge.
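For context, the headline ImageNet percentages are conventionally top-5 error rates: a prediction counts as correct if the true label appears anywhere in the model's five highest-scoring guesses. A minimal scorer makes the metric concrete (the labels below are invented for illustration):

```python
def top5_error(predictions, truths):
    """predictions: one best-first ranked label list per image; truths: true labels."""
    misses = sum(1 for ranked, t in zip(predictions, truths) if t not in ranked[:5])
    return misses / len(truths)

preds = [
    ["beagle", "basset", "foxhound", "tabby", "jeep"],   # truth: basset  -> hit
    ["jeep", "minivan", "cab", "sports_car", "limo"],    # truth: tractor -> miss
    ["tabby", "siamese", "persian", "lynx", "tiger"],    # truth: tabby   -> hit
    ["acorn", "broccoli", "pizza", "bagel", "banana"],   # truth: pizza   -> hit
]
truths = ["basset", "tractor", "tabby", "pizza"]
print(top5_error(preds, truths))  # 1 miss out of 4 -> 0.25
```

A 3.5 percent score therefore means the true label was missing from the model's top five guesses on only 3.5 percent of test images.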
Deep learning algorithms are helping computers beat humans in other visual formats, too. Last year, a team of researchers at Queen Mary University of London developed a program called Sketch-a-Net, which identifies objects in sketches. The program correctly identified 74.9 percent of the sketches it analyzed, while the humans participating in the study correctly identified objects in sketches only 73.1 percent of the time. Not that impressive a margin, but as in the previous example with dog breeds, the computer was able to correctly identify which type of bird was drawn in a sketch 42.5 percent of the time, an accuracy rate nearly twice that of the people in the study, who managed 24.8 percent.
These numbers are impressive, but they don't tell the whole story. "Even the smartest machines are still blind," said computer vision expert Fei-Fei Li at a 2015 TED Talk on image recognition. Yes, convolutional neural networks and deep learning have helped improve accuracy rates in computer vision; they've even enabled machines to write surprisingly accurate captions for images. But machines still stumble in plenty of situations, especially when more context, backstory or proportional relationships are required. Computers struggle when, say, only part of an object is in the picture (a scenario known as occlusion) and may have trouble telling the difference between an elephant's head and trunk and a teapot. Similarly, they stumble when distinguishing between a statue of a man on a horse and a real man on a horse, or mistake a toothbrush being held by a baby for a baseball bat. And let's not forget, we're just talking about identification of basic everyday objects, cats, dogs and so on, in images.
Computers still aren't able to identify some seemingly simple (to humans) pictures, such as a picture of yellow and black stripes, which computers seem to think is a school bus. This technology is, unsurprisingly, still in its infant stage. After all, it took the human brain 540 million years to evolve into its highly capable current form.
What computers are better at is sorting through vast amounts of data and processing it quickly, which comes in handy when, say, a radiologist needs to narrow down a list of x-rays with potential medical maladies or a marketer wants to find all the images relevant to his brand on social media. The things a computer is identifying may still be basic, a cavity or a logo, but it's identifying them from a much larger pool of pictures, and it's doing it quickly, without getting bored as a human might.
Humans still get nuance better, and can probably tell you more about a given picture thanks to basic common sense. For everyday tasks, humans still have significantly better visual capabilities than computers.
That said, the promise of image recognition and computer vision at large is massive, especially when seen as part of the larger AI pie. Computers may not have common sense, but they do have direct access to real-time big data, sensors, GPS, cameras and the internet, to name just a few technologies. From robot disaster relief and large-object avoidance in cars to high-tech criminal investigations and augmented reality (AR) gaming leaps and bounds beyond Pokemon GO, computer vision's future may well lie in things that humans simply can't (or won't) do. One thing we can be certain of is this: It won't take 540 million years to get there.
Ophir Tanz is an entrepreneur, technologist and the CEO and founder of GumGum, a digital-marketing platform for the visual web. Tanz is an active member of the Los Angeles startup and advertising community, serving as a mentor and...
Canada Looks to Develop a New Resource: Artificial Intelligence – Wall Street Journal (subscription)
Don Walker, chief executive of Canadian auto-parts giant Magna International Inc., hosted a number of the country's leading executives, scientists and politicians, including Prime Minister Justin Trudeau, at his summer home last July to mull ways Canada ...
How two trades pushed Patrik Elias into Devils immortality – New York Post
For years they had been inseparable, off the ice and on the ice, where they made magic as sweet as any set of NHL matched-pair wingers have in a very, very long time, and in the mind's eye of the hockey universe.
But after a while, Patrik Elias yearned for independence from his friend Petr Sykora; yearned to be known as an independent entity and for his own identity. He had, after all, earned that.
And so it was early in the 2002-03 season that I approached Elias, whom I had known since he first joined the Devils as a 19-year-old back in 1995. Sykora had been traded the previous offseason. I started a question: "Petr ..."
Elias interrupted me.
"I'm Patrik," he said.
He most certainly was.
He most certainly is.
He is Patrik Elias, the greatest forward ever to play for the Devils and one of the great two-way forwards of his generation who probably sacrificed some 75-100 goals and 150-200 points off his lifetime 408-617-1,025 total in order to accommodate the unyielding defense-first philosophy of the only organization for which he ever worked.
Except, as Elias told me when we chatted upon the announcement of his retirement, it probably wasn't much of a sacrifice at all to become an indispensable part of two of the three Stanley Cups the franchise won while reigning over the Eastern Conference for more than a decade.
"There are no regrets for me," said New Jersey's forever No. 26, who next year will have his sweater raised to the rafters to accompany those of franchise bedrocks Martin Brodeur, Scott Stevens, Scott Niedermayer and Ken Daneyko. "Maybe I could have had different numbers somewhere else, but I was happy winning championships. I was happy making the playoffs every year and I was happy knowing we had a chance to win every year."
"You either adjusted in New Jersey or you didn't stay. We were all proud of being part of those teams. I wasn't just a one-way player. If they wanted to move me from wing to center, I did it. I played the PP and the PK. I could check. I'm very happy being known for that."
"When I look back, the thing I am most proud of is that I spent my entire career with one team," Elias said, before referring to the man who ran the show. "You know, I wasn't the only one to make that decision. Lou [Lamoriello] kept me for all those years, too."
Elias almost left, almost signed with the Rangers when he became a free agent in the summer of 2006. In fact, he had essentially agreed to a six-year, $42 million contract. But when New York general manager Glen Sather would not give the winger a no-move clause, Elias circled back to the Devils and signed a seven-year, $42 million deal.
It was slightly more than the $625,000 he earned during the 1999-2000 Cup season, when he recorded 72 points (35-37) and put the puck on Jason Arnott's stick with one of the slickest passes you have ever seen for the Game 6 double-overtime Cup winner in Dallas. That $625,000, by the way, was surpassed that season by 486 players. And that $625,000 was Elias' salary in the first year of a three-year deal he received following a holdout through which he missed the season's first nine games.
"It wasn't always roses with Lou," Elias said. "But we made it work and the relationship got better as it went on."
The 2000-01 season in which the winger recorded 40 goals and 56 assists for 96 points was the most productive of his career. The Devils gave away the Cup final and a repeat in that seven-game loss to the Avalanche after becoming cavalier about their talent and supremacy, but that was through no fault of Elias, who had 23 points (9-14) in 25 playoff games after posting 20 (seven goals, 13 assists) in 23 matches the previous tournament.
Those were the days of the A Line, the shooting comet of a unit featuring Elias, Arnott and Sykora that was as lethal, skilled and entertaining a combination as has played in the league over the last quarter century. While the rest of the league was playing checkers, the A Line was playing chess.
It seemed as if the three pieces would be interlocked forever. They lasted just over two years. Arnott, unhappy and becoming a disruptive influence, was traded first, at the '02 deadline. Then Sykora.
"Obviously those were the best two years, but more than that, playing on that line with Petr and Arnie was the most fun of my career," Elias said. "Every time we went onto the ice, every game, every practice, we had so much fun together."
"But Lou made those decisions. I don't really know why. I wish we had been together longer."
Elias thrived without Arnott and Sykora. He became the quintessential checking wing for Pat Burns' 2003 Cup champions, and became the left wing on another of the Devils' signature units, the EGG Line, centered by Scott Gomez with Brian Gionta on the right. He later moved to center when times became leaner in New Jersey.
But he never wore another NHL logo. Never played for another team, this exceptional player who most certainly is Hall of Fame worthy and who, felled by a knee injury that kept him off the ice all season, will skate in warmups one final time before the home finale against the Islanders next Saturday night.
One more skate for Elias, who established his own identity as a franchise icon and who will leave New Jersey with everybody not only knowing his name, but chanting it, as well.
He's Patrik.
Exploring the hidden politics of the quest to live forever – New Scientist
Transhumanists think that bodies are obsolete technology
Yves Gellie/picturetank
By Brendan Byrne
THERE was a lot of futuristic hype surrounding cryonics company Alcor. When Dublin-based journalist Mark O'Connell travelled to its facility in Arizona, he found himself surrounded by corpses in an office park, between a tile showroom and a place called Big D's Covering Supplies.
In his book To Be a Machine, new father O'Connell invokes the twin spectres of death and child-bearing in an attempt to make sense of his subject, but he also manages to be staggeringly funny. He explores the intersecting practices of body modification, cryonics, machine learning, whole brain emulation and AI disaster-forecasting.
The transhumanist world view, O'Connell writes, casts our minds and bodies as obsolete technologies, outmoded formats in need of complete overhaul. He worries more about the collateral damage such a future will inflict, and less about the world views of the supposed visionaries who supply the ideas. Not that the two can be separated.
Throughout the text, it is difficult to ignore Peter Thiel, a Silicon Valley billionaire and an adviser to Donald Trump. While Thiel, who takes human growth hormone daily and has signed up for cryonic freezing, is not featured directly, the longevity start-ups he funded are, including Halcyon Molecular, 3Scan, MIRI, the Longevity Fund and Aubrey de Grey's Methuselah Foundation.
Another pervasive presence is Nick Bostrom, an Oxford University philosopher. But while Thiel wants to extend life, Bostrom is worried about its eradication. He is best known for his 2014 book Superintelligence, which brought thought experiments about AI security to public notice. O'Connell finds it disquieting to see the likes of Elon Musk and Bill Gates effusing about this book. These dire warnings about AI were coming from what seemed like the most unlikely of sources: not from Luddites or religious catastrophists, that is, but from the very people who seemed to most personify our culture's reverence for machines.
Musk and Thiel's recent OpenAI project attempts to address such existential threats by freely disseminating its research. This is meant to encourage the rise of multiple AIs, whose balance of power will keep any non-benign ones off-balance. While Bostrom agrees that this plan will decrease the threat from a world-eating singleton, he worries that winning the AI race is incompatible with using any safety method that incurs a delay or limits performance. If basic information is made public, the race to achieve AI first will be tight, pushing corporations to disregard security.
Given Musk's public admission that he is trying to move Trump to the left, rumours that Mark Zuckerberg is considering a presidential run, and the fact that many users are deleting the Uber app after the company broke the taxi strike at JFK Airport, Silicon Valley can no longer claim to be apolitical. And there seems to be something about transhumanism that draws out reactionaries. As O'Connell observes, in one sense the whole ethos of transhumanism is such a radical extrapolation of the classically American belief in self-betterment that it obliterates the idea of the self entirely. It's liberal humanism forced to the coldest outer limits of its own paradoxical implications.
Thiel is, strangely for a former libertarian, a planner. In his 2014 book Zero to One, Thiel writes of the dot-com bubble as both a peak of insanity and a peak of clarity: "People looked into the future, saw how much valuable new technology we would need to get there safely and judged themselves capable of creating it." Depicting how private enterprise failed to bridge the gap between aspiration and realisation, Thiel seems here to be arguing for total mobilisation of the state.
Thiel favours taking huge risks to achieve miraculous results. He champions the government-funded space race and rails against incrementalisation in scientific and civilisational achievements. At the time of writing, Jim O'Neill, the managing director of Thiel's Mithril Capital, is one of Trump's main candidates to head the Food and Drug Administration. O'Neill thinks that drugs should be approved not by safety but by efficacy. Thiel himself has criticised the FDA for being overly cautious, stating five years ago, "I don't even know if you could get the polio vaccine approved today", a sentiment shared by the president.
If the low-safety moonshot approach favoured by Thiel and the futurist frat houses O'Connell describes is applied on a national level, and longevity research funded by a Silicon Valley billionaire does pay huge dividends, a new question emerges: immortality for whom?
Thiel is notoriously anti-competition, writing in Zero to One that only becoming a monopoly can allow a business to transcend the daily brute struggle for survival, since competitive markets destroy profits. A monopoly price for life extension suggests a future in which we will all be in monetary debt to mortality, working forever to pay off our incoming years.
During a recent public lecture, genomics pioneer Craig Venter discussed his new company, which aims to use genetic sequencing to provide proactive, preventative, predictive, personalised healthcare. According to Venter, 40 per cent of people who think they are healthy are not: they have undiagnosed ailments such as tumours that have not metastasised or cardiovascular conditions. And he says his method can predict Alzheimer's 20 years before its onset, and that a cocktail of soon-to-be-marketed drugs can prevent it. Thanks to this $25,000 genome-physical, Venter himself was diagnosed with prostate cancer and operated on.
Can any imaginable public healthcare provision pay for such speculative treatments? Or will there be a widening gap between those who can afford to stay healthy and those who will have to shoulder early-onset penury in the face of their time-limited humanity?
In response to questions about such inequality, Thiel offers little comfort. "Probably the most extreme form of inequality," he told The New Yorker six years ago, "is between people who are alive and people who are dead."
Jonathan Swift's satirical letter "A Modest Proposal" responded to an equally cold-blooded ideology in his day. But a field whose pioneers sport names like T. O. Morrow (Tom Bell's 1990s soubriquet), FM-2030 and Max More demands something different from O'Connell: an unexpected, often funny effort of restraint.
This article appeared in print under the headline "In debt to mortality"