The Prometheus League
Breaking News and Updates
Daily Archives: July 15, 2017
Artificial intelligence is going to change every aspect of …
Posted: July 15, 2017 at 11:13 pm
You've probably heard of artificial intelligence by now. It's the technology powering Siri, driverless cars, and that creepy Facebook feature that automatically tags your friends when you upload photos.
But what is AI, why are people talking about it now, and what will it mean for your everyday life? SunTrust recently released a report that breaks down how AI works, so let's tackle those questions one at a time.
What is AI?
AI is what people call computer programs that try to replicate how the human brain operates. For now, they can only replicate very specific tasks. One system can beat humans at the complicated and ancient board game called Go, for example. Lots of these AI systems are being developed, each really good at a specific task.
These AI systems all operate in basically the same way. Imagine a system that tries to identify whether a photo has a cat in it. For a human, this is fairly easy, but a computer has a hard time figuring it out. AI systems are unique because they are set up like human brains. You feed a cat photo in one end, and it bounces around a lot of different checkpoints until it comes out the other end with a yes or no answer, just as your view of a cat passes through all the neurons in your brain. AI is even talked about in terms of neurons and synapses, just like the human brain.
AI systems have to be trained, which is a process of adjusting these checkpoints to achieve better results. If one checkpoint determines whether there is hair in the photo, training the system means deciding how much the presence of hair should count toward deciding whether the photo contains a cat.
This training process takes a huge amount of computing power to fine-tune. The better a system is trained, the better its results, and the better your cat photo system will be able to determine whether there is a cat in a photo you show it. The huge amount of processing power required to run and train AI systems is what kept AI research relatively quiet until recently, which leads to the next question.
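The train-by-adjusting-checkpoints idea above can be sketched in a few lines of Python. This is a toy illustration, not a real image classifier: each photo is reduced to a handful of invented yes/no features standing in for the "checkpoints," and training repeatedly nudges each checkpoint's weight until the labeled examples come out right.

```python
import math
import random

def predict(weights, features):
    # Pass the photo's features through the weighted checkpoints and
    # squash the total into a score between 0 (no cat) and 1 (cat).
    score = sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-score))

def train(photos, labels, steps=2000, lr=0.5):
    # Start with small random weights, then repeatedly nudge each
    # checkpoint's weight in whatever direction shrinks the error
    # on the labeled examples.
    random.seed(0)
    weights = [random.uniform(-0.1, 0.1) for _ in photos[0]]
    for _ in range(steps):
        for features, label in zip(photos, labels):
            error = predict(weights, features) - label
            weights = [w - lr * error * f
                       for w, f in zip(weights, features)]
    return weights

# Invented features per photo: [has_fur, has_whiskers, has_wheels]
photos = [[1, 1, 0], [1, 0, 0], [0, 0, 1], [0, 1, 0]]
is_cat = [1, 1, 0, 0]

weights = train(photos, is_cat)
print(predict(weights, [1, 1, 0]))  # furry, whiskered photo: score near 1
print(predict(weights, [0, 0, 1]))  # photo with wheels: score near 0
```

After training, the "hair" checkpoint ends up with a large positive weight and the "wheels" checkpoint a large negative one, which is exactly the adjustment the paragraph above describes.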
Why are people talking about AI all the time now?
There is a famous AI contest where researchers pit computers against humans in a challenge to correctly identify photos. Humans usually are able to identify photos with about 95% accuracy in this contest, and in 2012, computers were able to identify about 74% of photos correctly, according to SunTrust's report. In 2015, computers reached 96% accuracy, officially beating humans for the first time. This was called the "big bang" of AI, according to SunTrust.
The big bang of AI was made possible by some fancy new algorithms, three specifically. These new algorithms were better ways of training AI systems, making them faster and cheaper to run.
AI systems require lots of real-world examples to be trained well, like lots of cat photos for example. These cat photos also had to be labeled as cat photos so the system knew when it got the right answer from its algorithms and checkpoints. The new algorithms that led to the big bang allowed AI systems to be trained with fewer examples that didn't have to be labeled as well as before. Collecting enough examples to train an AI system used to be really expensive, but was much cheaper after the big bang. Advances in processing power and cheap storage also helped move things along.
Since the big bang, there have been a number of huge strides in AI technology. Tesla, Google, Apple and many of the traditional car companies are training AI systems for autonomous driving. Google, Apple and Amazon are pioneering the first smart personal assistants. Some companies are even working on AI-driven healthcare solutions that could personalize treatment plans based on a patient's history, according to SunTrust.
What will AI mean for your life?
AI technology could be as simple as making your email smarter, but it could also extend your lifespan, take away your job, or end the need for human soldiers to fight the world's wars.
SunTrust says AI has the capability to change nearly every industry. The moves we are seeing now are just the beginning, the low-hanging fruit. Cities can become smarter, the TSA might scan your face as you pass through security, and doctors could give most of their consultations via your phone thanks to AI advancements.
SunTrust estimates the AI business will be about $47.25 billion by the year 2020. Nvidia, a large player in the AI space thanks to its GPU hardware and CUDA software platform, is a bit more conservative. It sees AI as only a $30 billion business, which is still four times the current size of Nvidia.
There is no doubt AI is a huge opportunity, and there are a few companies you should watch if you're an investor looking to enter the AI space, according to SunTrust.
One thing is for sure: AI is exciting, sometimes scary, but ultimately here to stay. We are just starting to see the implications of the technology, and the world is likely to change, for good and bad, because of artificial intelligence.
See the original post:
Artificial intelligence is going to change every aspect of ...
Posted in Artificial Intelligence
Comments Off on Artificial intelligence is going to change every aspect of …
Elon Musk Says Artificial Intelligence Is the ‘Greatest Risk We Face as a Civilization’ – Fortune
Posted: at 11:13 pm
Appearing before a meeting of the National Governors Association on Saturday, Tesla CEO Elon Musk described artificial intelligence as "the greatest risk we face as a civilization" and called for swift and decisive government intervention to oversee the technology's development.
"On the artificial intelligence front, I have access to the very most cutting-edge AI, and I think people should be really concerned about it," an unusually subdued Musk said in a question-and-answer session with Nevada governor Brian Sandoval.
Musk has long been vocal about the risks of AI. But his statements before the nation's governors were notable both for their dire severity and for his forceful call for government intervention.
"AI's a rare case where we need to be proactive in regulation, instead of reactive. Because by the time we are reactive with AI regulation, it's too late," he remarked. Musk then drew a contrast between AI and traditional targets for regulation, saying AI is "a fundamental risk to the existence of human civilization" in a way that car accidents, airplane crashes, faulty drugs, or bad food are not.
Those are strong words from a man occasionally associated with so-called cyberlibertarianism, a fervently anti-regulation ideology exemplified by the likes of Peter Thiel, who co-founded PayPal with Musk.
Musk went on to argue that broad government regulation was vital because companies are currently pressured to pursue advanced AI or risk irrelevance in the marketplace:
"That's where you need the regulators to come in and say, hey guys, you all need to just pause and make sure this is safe . . . You kind of need the regulators to do that for all the teams in the game. Otherwise the shareholders will be saying, why aren't you developing AI faster? Because your competitor is."
Part of Musk's worry stems from social destabilization and job loss. "When I say everything, the robots will do everything, bar nothing," he said.
But Musk's bigger concern has to do with AI that lives in the network, and which could be incentivized to harm humans. "[They] could start a war by doing fake news and spoofing email accounts and fake press releases, and just by manipulating information," he said. "The pen is mightier than the sword."
Musk outlined a hypothetical situation, for instance, in which an AI could pump up defense industry investments by using hacking and disinformation to trigger a war.
"I'm against overregulation for sure," Musk emphasized. "But man, I think we've got to get on that with AI, pronto."
Musk's comments on AI took up only a small part of the hour-long exchange. He also speculated about the future of driverless cars and space travel, and lamented that meeting the sky-high expectations surrounding him was "quite a difficult emotional hardship" and "a whole lot less fun than it may seem."
Visit link:
Elon Musk Says Artificial Intelligence Is the 'Greatest Risk We Face as a Civilization' - Fortune
Posted in Artificial Intelligence
Comments Off on Elon Musk Says Artificial Intelligence Is the ‘Greatest Risk We Face as a Civilization’ – Fortune
4 fears an AI developer has about artificial intelligence – MarketWatch – MarketWatch
Posted: at 11:13 pm
As an artificial intelligence researcher, I often come across the idea that many people are afraid of what AI might bring. It's perhaps unsurprising, given both history and the entertainment industry, that we might be afraid of a cybernetic takeover that forces us to live locked away, Matrix-like, as some sort of human battery.
And yet it is hard for me to look up from the evolutionary computer models I use to develop AI, to think about how the innocent virtual creatures on my screen might become the monsters of the future. Might I become "the destroyer of worlds," as Oppenheimer lamented after spearheading the construction of the first nuclear bomb?
I would take the fame, I suppose, but perhaps the critics are right. Maybe I shouldn't avoid asking: As an AI expert, what do I fear about artificial intelligence?
The HAL 9000 computer, dreamed up by science fiction author Arthur C. Clarke and brought to life by movie director Stanley Kubrick in "2001: A Space Odyssey," is a good example of a system that fails because of unintended consequences. In many complex systems, like the RMS Titanic, NASA's space shuttle and the Chernobyl nuclear power plant, engineers layer many different components together. The designers may have known well how each element worked individually, but didn't know enough about how they all worked together.
That resulted in systems that could never be completely understood, and could fail in unpredictable ways. In each disaster, whether sinking a ship, blowing up two shuttles or spreading radioactive contamination across Europe and Asia, a set of relatively small failures combined to create a catastrophe.
I can see how we could fall into the same trap in AI research. We look at the latest research from cognitive science, translate that into an algorithm and add it to an existing system. We try to engineer AI without understanding intelligence or cognition first.
Systems like IBM's Watson and Google's Alpha equip artificial neural networks with enormous computing power, and accomplish impressive feats. But if these machines make mistakes, they lose on Jeopardy! or don't defeat a Go master. These aren't world-changing consequences; indeed, the worst that might happen to a regular person as a result is losing some money betting on their success.
But as AI designs get even more complex and computer processors even faster, their skills will improve. That will lead us to give them more responsibility, even as the risk of unintended consequences rises. We know that to err is human, so it is likely impossible for us to create a truly safe system.
Read: Job of the future is robot psychologist
I'm not very concerned about unintended consequences in the types of AI I am developing, using an approach called neuroevolution. I create virtual environments and evolve digital creatures and their brains to solve increasingly complex tasks. The creatures' performance is evaluated; those that perform the best are selected to reproduce, making the next generation. Over many generations these machine-creatures evolve cognitive abilities.
Right now we are taking baby steps to evolve machines that can do simple navigation tasks, make simple decisions, or remember a couple of bits. But soon we will evolve machines that can execute more complex tasks and have much better general intelligence. Ultimately we hope to create human-level intelligence.
Along the way, we will find and eliminate errors and problems through the process of evolution. With each generation, the machines get better at handling the errors that occurred in previous generations. That increases the chances that we'll find unintended consequences in simulation, which can be eliminated before they ever enter the real world.
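The generational loop described above, evaluate the creatures, select the best, reproduce with variation, can be sketched as follows. This is a hypothetical minimal version: real neuroevolution evolves neural network weights and topologies inside simulated environments, while here each "creature" is just a list of numbers scored on an invented toy task.

```python
import random

def evolve(fitness, genome_len=8, pop_size=20, generations=60):
    # Start from a random population of "creatures".
    population = [[random.uniform(-1, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate every creature and keep the best performers as parents.
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[:pop_size // 4]
        # Next generation: mutated copies of the survivors.
        population = [[gene + random.gauss(0, 0.1)
                       for gene in random.choice(parents)]
                      for _ in range(pop_size)]
    return max(population, key=fitness)

# Toy task standing in for "solve the environment": evolve a genome
# whose values sum as close to 4 as possible.
random.seed(1)
best = evolve(lambda genome: -abs(sum(genome) - 4))
print(abs(sum(best) - 4))  # the error shrinks over the generations
```

Selection plus mutation is what weeds out the failing designs generation by generation, which is the error-elimination process the essay is describing.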
Another possibility that's farther down the line is using evolution to influence the ethics of artificial intelligence systems. It's likely that human ethics and morals, such as trustworthiness and altruism, are a result of our evolution and a factor in its continuation. We could set up our virtual environments to give evolutionary advantages to machines that demonstrate kindness, honesty and empathy. This might be a way to ensure that we develop more obedient servants or trustworthy companions and fewer ruthless killer robots.
While neuroevolution might reduce the likelihood of unintended consequences, it doesn't prevent misuse. But that is a moral question, not a scientific one. As a scientist, I must follow my obligation to the truth, reporting what I find in my experiments, whether I like the results or not. My focus isn't on determining whether I like or approve of something; it matters only that I can unveil it.
Read: 10 jobs robots already do better than you
Being a scientist doesn't absolve me of my humanity, though. I must, at some level, reconnect with my hopes and fears. As a moral and political being, I have to consider the potential implications of my work and its potential effects on society.
As researchers, and as a society, we have not yet come up with a clear idea of what we want AI to do or become. In part, of course, this is because we don't yet know what it's capable of. But we do need to decide what the desired outcome of advanced AI is.
One big area people are paying attention to is employment. Robots are already doing physical work like welding car parts together. One day soon they may also do cognitive tasks we once thought were uniquely human. Self-driving cars could replace taxi drivers; self-flying planes could replace pilots.
Instead of getting medical aid in an emergency room staffed by potentially overtired doctors, patients could get an examination and diagnosis from an expert system with instant access to all medical knowledge ever collected and get surgery performed by a tireless robot with a perfectly steady hand. Legal advice could come from an all-knowing legal database; investment advice could come from a market-prediction system.
Perhaps one day, all human jobs will be done by machines. Even my own job could be done faster, by a large number of machines tirelessly researching how to make even smarter machines.
In our current society, automation pushes people out of jobs, making the people who own the machines richer and everyone else poorer. That is not a scientific issue; it is a political and socioeconomic problem that we as a society must solve. My research won't change that, though my political self, together with the rest of humanity, may be able to create circumstances in which AI becomes broadly beneficial instead of increasing the discrepancy between the 1% and the rest of us.
Read: Two-thirds of jobs in this city could be automated by 2035
There is one last fear, embodied by HAL 9000, the Terminator and any number of other fictional superintelligences: If AI keeps improving until it surpasses human intelligence, will a superintelligence system (or more than one of them) find it no longer needs humans? How will we justify our existence in the face of a superintelligence that can do things humans could never do? Can we avoid being wiped off the face of the Earth by machines we helped create?
The key question in this scenario is: Why should a superintelligence keep us around?
I would argue that I am a good person who might have even helped to bring about the superintelligence itself. I would appeal to the compassion and empathy that the superintelligence has to keep me, a compassionate and empathetic person, alive. I would also argue that diversity has a value all in itself, and that the universe is so ridiculously large that humankind's existence in it probably doesn't matter at all.
But I don't speak for all humankind, and I find it hard to make a compelling argument for all of us. When I take a sharp look at us all together, there is a lot wrong: We hate each other. We wage war on each other. We do not distribute food, knowledge or medical aid equally. We pollute the planet. There are many good things in the world, but all the bad weakens our argument for being allowed to exist.
Fortunately, we need not justify our existence quite yet. We have some time, somewhere between 50 and 250 years, depending on how fast AI develops. As a species we can come together and come up with a good answer for why a superintelligence shouldn't just wipe us out. But that will be hard: Saying we embrace diversity and actually doing it are two different things, as are saying we want to save the planet and successfully doing so.
We all, individually and as a society, need to prepare for that nightmare scenario, using the time we have left to demonstrate why our creations should let us continue to exist. Or we can decide to believe that it will never happen, and stop worrying altogether. But regardless of the physical threats superintelligences may present, they also pose a political and economic danger. If we don't find a way to distribute our wealth better, we will have fueled capitalism with artificial intelligence laborers serving only the very few who possess all the means of production.
Now read: 5 ETFs that may let you profit from the next tech revolution
Arend Hintze is an assistant professor of integrative biology & computer science and engineering at Michigan State University. This first appeared on The Conversation as "What an artificial intelligence researcher fears about AI."
View post:
4 fears an AI developer has about artificial intelligence - MarketWatch - MarketWatch
Posted in Artificial Intelligence
Comments Off on 4 fears an AI developer has about artificial intelligence – MarketWatch – MarketWatch
What an Artificial Intelligence Researcher Fears about AI – Scientific … – Scientific American
Posted: at 11:13 pm
The following essay is reprinted with permission from The Conversation, an online publication covering the latest research.
This article was originally published on The Conversation. Read the original article.
Read more:
What an Artificial Intelligence Researcher Fears about AI - Scientific ... - Scientific American
Posted in Artificial Intelligence
Comments Off on What an Artificial Intelligence Researcher Fears about AI – Scientific … – Scientific American
Artificial intelligence can make America’s public sector great again – Recode
Posted: at 11:13 pm
Senator Maria Cantwell, D-Wash., just drafted forward-looking legislation that aims to establish a select committee of experts to advise agencies across the government on the economic impact of federal artificial intelligence.
The move is an early step toward formalizing the exploration of AI in a government context. But it could ultimately contribute to jump-starting AI-focused programs that help stimulate the United States economy, benefit citizens, uphold data security and privacy, and eventually ensure America is successful during the initial introduction of this important technology to U.S. consumers.
The presence of legislation could also lend legitimacy to the prospect of near-term government investment in AI innovation something that may even sway Treasury Secretary Steve Mnuchin and others away from their belief that the impact of AI wont be felt for years to come.
Indeed, other than a few economic impact and policy reports conducted by the Obama Administration led by former U.S. Chief Data Scientist DJ Patil and other tech-minded government leaders this is the first policy effort toward moving the U.S. public sector past acknowledging its significance, and toward fully embracing AI technology.
It's a tall order, one that requires Sen. Cantwell and her colleagues in the Senate to define AI for the federal government and to focus on policies that govern very diverse applications of the technology.
As an emerging technology, artificial intelligence means different things to different people. That's why I believe it's essential for the U.S. government to take the first step in defining what AI means in legislation.
AI meant for U.S. government use should be defined as a network of complementary technologies built with the ability to autonomously conduct, support or manage public sector activity across disciplines. All AI-driven government technology should secure and advance the country's interests. AI should not be formalized as a replacement or stopgap for standard government operations or personnel.
This is important because a central task of the committee will be to examine whether AI has displaced more jobs than it has created; with this definition in place, it will be able to make an accurate assessment.
Should the select committee succeed in establishing a federal policy, this will provide a useful benchmark to the private sector on the way that AI should be built and deployed, hopefully with ethical standards adopted from the start. This should include everything from the diversity of the people building the AI to the data it learns from. To add value from the beginning, the technology and the people engaging with it need to be held accountable for the outcomes of their work. This will take collaboration and employee-citizen engagement.
Public-sector AI use offers an opportunity for agencies to better serve America's diverse citizen population. AI could open up opportunities for citizens to work and engage with government processes and policies in a way that has never been possible before. New AI tools that include voice-activated processes could make areas of government accessible to people with learning, hearing and sight impairments who previously wouldn't have had that opportunity.
The myriad applications of AI-driven technology offer different benefits to departments throughout the government, from Homeland Security to the Office of Personnel Management to the Department of Transportation.
Once the government has a handle on AI and legislation is in place, it could eventually offer government agencies opportunities well beyond those in technology: filling talent and personnel gaps with technology that can perform and automate specific tasks, revamping citizen engagement through new communication portals, and synthesizing vital health, economic and public data securely. So, while the introduction of AI will inevitably lead to a situation where some jobs are replaced by technology, it will also foster a new sector and create jobs in its wake.
For now, businesses, entrepreneurs and developers around the world will continue to pioneer new AI-driven platforms, technologies and tools for use both in the home and the office, from live chat support software to voice-driven technology powering self-driving cars. The private sector is firmly driving the AI revolution, with Amazon, Apple, Facebook, IBM, Microsoft and other American companies leading the way. However, there is clearly room for the public sector to complement this innovation and for the government to provide the guide rails.
Personally, I've spent my career developing AI and bot technology. My first bot brought me candy from a tech-company cafe. My last will hopefully help save the world, to some extent. I think Sen. Cantwell's initiative will set America's public sector on a similarly ambitious path, bringing AI that helps people into the fold and elevating the U.S. as an important contributor to the technology's global development.
Kriti Sharma is the vice president of bots and AI at Sage Group, a global integrated accounting, payroll and payment systems provider. She is the creator of Pegg, the world's first accounting chatbot, with users in 135 countries. Sharma is a Fellow of the Royal Society of Arts, a Google Grace Hopper Scholar and a Government of India Young Leader in Science. She was recently named to Forbes' 30 Under 30 list. Reach her @sharma_kriti.
Continued here:
Artificial intelligence can make America's public sector great again - Recode
Posted in Artificial Intelligence
Comments Off on Artificial intelligence can make America’s public sector great again – Recode
The big problem with artificial intelligence – Axios
Posted: at 11:13 pm
In a discussion with Nevada Gov. Brian Sandoval, Musk also touched on several other topics:
On energy:
Musk noted that it would take only about 100 miles by 100 miles of solar panels to power the entire United States, and the batteries needed to store that energy would take only about one square mile. That said, he imagines the energy mix shifting to a large dose of rooftop solar, some power-plant solar, along with wind, hydro and nuclear power.
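Musk's area figure can be sanity-checked with back-of-envelope arithmetic. The demand and yield numbers below are rough public estimates, not figures from the article, so treat this as an order-of-magnitude sketch:

```python
# Rough sanity check of the solar-area claim. Both input figures are
# approximate assumptions, not numbers from the article.
us_avg_power_w = 450e9            # ~450 GW average US electricity demand
solar_yield_w_per_m2 = 20.0       # delivered solar power per square meter,
                                  # after capacity factor and panel efficiency

area_m2 = us_avg_power_w / solar_yield_w_per_m2
area_sq_miles = area_m2 / 2.59e6  # square meters per square mile
side_miles = area_sq_miles ** 0.5

print(round(area_sq_miles), round(side_miles))  # roughly 8,700 sq mi, ~93 mi per side
```

Under these assumptions the required footprint is on the order of 100 miles on a side, i.e. roughly 10,000 square miles, which matches the "100 miles by 100 miles" framing Musk has used; the battery footprint he cites is far smaller.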
"It's inevitable," Musk said, speaking of shifting to sustainable energy. "But it matters if it happens sooner or later."
As for those pushing some other type of fusion, Musk notes that the sun is a giant fusion reactor in the sky. "It's really reliable," he said. "It comes up every day. If it doesn't, we've got [other] problems."
On artificial intelligence:
Musk said it represents a real existential threat to humanity and a rare case where regulation needs to be proactive; if regulation is reactive, he said, it could come too late.
"In my opinion it is the biggest risk that we face as a civilization," he said.
No matter what, he said, "there will certainly be a lot of job disruption."
Robots will be able to do everything better than us, I mean all of us. I'm not sure exactly what to do about this. This is really like the scariest problem.
On regulation:
"It sure is important to get the rules right," Musk said. "Regulations are immortal. They never die unless somebody actually goes and kills them. A lot of times regulations can be put in place for all the right reasons but nobody goes back and kills them because they no longer make sense."
Musk also focused on the importance of incentives, saying whatever societies incentivize tends to be what happens. "It's economics 101," he said.
On what drives him:
On Tesla's stock price:
Musk said he has been on record several times as saying its stock price "is higher than we have any right to deserve," especially based on current and past performance. "The stock price obviously reflects a lot of optimism on where we will be in the future," he said. "Those expectations sometimes get out of control. I hate disappointing people; I am trying really hard to meet those expectations."
Musk also talked about Trump when answering a question from Axios at the event. More on that here.
Read this article:
Posted in Artificial Intelligence
Comments Off on The big problem with artificial intelligence – Axios
Artificial Intelligence ushers in the era of superhuman doctors – New Scientist
Posted: at 11:13 pm
By Kayt Sukel
The doctor's eyes flit from your face to her notes. "How long would you say that's been going on?" You think back: a few weeks, maybe longer? She marks it down. "Is it worse at certain times of day?" Tough to say; it comes and goes. She asks more questions before prodding you, listening to your heart, shining a light in your eyes. Minutes later, you have a diagnosis and a prescription. Only later do you remember that fall you had last month. Should you have mentioned it? Oops.
One in 10 medical diagnoses is wrong, according to the US Institute of Medicine. In primary care, one in 20 patients will get a wrong diagnosis. Such errors contribute to as many as 80,000 unnecessary deaths each year in the US alone.
These are worrying figures, driven by the complex nature of diagnosis, which can encompass incomplete information from patients, missed hand-offs between care providers, biases that cloud doctors' judgement, overworked staff, overbooked systems, and more. The process is riddled with opportunities for human error. This is why many want to use the constant and unflappable power of artificial intelligence to achieve more accurate diagnosis, prompt care and greater efficiency.
AI-driven diagnostic apps are already available. And it's not just Silicon Valley types swapping clinic visits for diagnosis via smartphone. The UK National Health Service (NHS) is trialling an AI-assisted app to see if it performs better than the existing telephone triage line. In the US and
Link:
Artificial Intelligence ushers in the era of superhuman doctors - New Scientist
Posted in Artificial Intelligence
Comments Off on Artificial Intelligence ushers in the era of superhuman doctors – New Scientist
Dangers of drugs – Side effects of long term use of sodium valproate – The Village Reporter and the Hometown Huddle
Posted: at 11:11 pm
The Village Reporter and the Hometown Huddle | Dangers of drugs - Side effects of long term use of sodium valproate
Read more:
Posted in Memetics
Comments Off on Dangers of drugs – Side effects of long term use of sodium valproate – The Village Reporter and the Hometown Huddle
Federer on verge of Wimbledon immortality – Yahoo Sports
Posted: at 11:11 pm
London (AFP) - Five years after his last Wimbledon triumph, Roger Federer can capture a record eighth All England Club title Sunday and become the tournament's oldest men's champion of the modern era.
With his 36th birthday fast approaching, the evergreen Swiss will comfortably succeed Arthur Ashe, who was almost 32 when he won in 1975, as Wimbledon's most senior champion.
Victory over Croatian giant Marin Cilic will also give him a 19th career Grand Slam title and second in three majors this year after sweeping to a fifth Australian Open in January following a six-month absence.
"I was hoping to be in good shape when the grass court season came around," said Federer who, for good measure, also pocketed back-to-back Masters at Indian Wells and Miami as well as a ninth Halle grass court crown.
"The first three, four months were just like a dream really. So this is something I was working towards, you know, Wimbledon, to be in good shape. I'm happy it's paying off here now."
Federer admits his form in 2017 has surprised even himself after he shut down his 2016 season to rest a knee injury in the aftermath of his brutal five-set semi-final loss at Wimbledon to Milos Raonic.
He has 30 wins and just two losses this year and he has reached his 11th Wimbledon final without dropping a set.
- 'Unbelievably excited' -
Sunday's match will be his 102nd at the tournament and his 29th final at the majors.
"It makes me really happy, making history here at Wimbledon. It's a big deal. I love this tournament," said Federer, who has been tied with Pete Sampras on seven Wimbledon titles since beating Andy Murray in the 2012 final.
"All my dreams came true here as a player. To have another chance to go for number eight now, be kind of so close now at this stage, is a great feeling.
"Yeah, unbelievably excited. I hope I can play one more good match. 11 finals here, all these records, it's great. I'm so close now."
While 'Big Four' rivals Murray, Novak Djokovic and Rafael Nadal failed to even make the semi-finals, Federer has been reborn.
He came into Wimbledon having radically pruned his playing schedule, skipping the entire clay court season.
Wimbledon is just his seventh event of the year; 28-year-old Cilic is in his 15th.
Federer, revelling in the spotlight of having played all his matches on Centre Court, has hardly been troubled on his way to the final.
He has lost serve just four times and spent four and a half hours less on court than Cilic.
Federer also boasts a 6-1 career record over Cilic, the 2014 US Open champion who has made his first Wimbledon final at the 11th attempt.
However, Cilic's game is made for grass and 12 months ago he led Federer by two sets to love and held three match points in an epic quarter-final which the Swiss superstar eventually claimed.
- 'Roger's home court' -
When Cilic won his only Slam in New York three years ago, he demolished Federer in straight sets in the semi-finals.
"I don't want to say it's more relaxed going into it because I have a good head-to-head record against Marin, even though the matches were extremely close," said Federer.
"But it's not like we've played against each other 30 times. You feel like you have to reinvent the wheel.
"It's more straightforward, in my opinion. I think that's nice in some ways. It's a nice change, but it doesn't make things easier."
Cilic is only the second Croatian man to reach the Wimbledon final after Goran Ivanisevic, his former coach, who swept to a memorable title victory in 2001.
A win on Sunday would also make him the first Wimbledon champion outside of Federer, Murray, Djokovic and Nadal since Lleyton Hewitt triumphed in 2002.
However, he has won only one of his last 12 matches against a top-five player at the Slams, though that win was over Federer in New York three years ago.
Cilic has fired 130 aces at Wimbledon this year and dropped just 10 service games.
"This is Roger's home court, the place where he feels the best and knows that he can play the best game," said Cilic.
"Obviously I'm going to look back, 12 months ago I was one point away from winning a match against him here. But it's still a big mountain to climb."
Federer's defeated semi-final opponent Tomas Berdych sees only one winner on Sunday.
"I don't see anything that would indicate Roger is getting older. He's just proving his greatness in our sport," said the Czech.
Read more:
Posted in Immortality
Comments Off on Federer on verge of Wimbledon immortality – Yahoo Sports
How Would the Human Body Respond to Carbonite Freezing? – Inverse
Posted: at 11:10 pm
In one of the most iconically frustrating scenes in all of modern cinema, Han Solo gets frozen in carbonite at the end of Star Wars: Episode V The Empire Strikes Back. The carbonite chamber fills with clouds of thick, white vapor as Han Solo, his face scrunched up in anxious anticipation, disappears in the carbonite gas. And while the feelings that famous scene stirs up are real, carbonite freezing is (currently) not.
But what if it were? Could our hero survive the freezing process? And if so, could he be successfully thawed? We spoke to cryonics expert Ben Best to find out. He hasn't seen Empire, but he says carbonite freezing seems similar in principle to cryonic preservation, in which human bodies are preserved at extremely low temperatures.
"That sounds very similar to what is actually being done in practice by cryonics organizations," Best, the former president and CEO of the Cryonics Institute, tells Inverse. Much like carbonite freezing, cryopreservation involves cooling a body from the outside. Unlike carbonite freezing, though, cryopreservation is a gradual process, involving some very specific precautions meant to help protect the sensitive tissues of the human body against the harm that can occur during freezing. In fact, Best doesn't even like to use the word freeze to describe cryonics.
"The patient is cooled down, and their blood is replaced," he explains. "The water in their body is actually replaced with a vitrification solution to prevent ice formation, so that the tissues harden like glass rather than freeze."
This process of vitrification is key to cryonics, allowing the human body to cool without experiencing the cell damage that can accompany crystallization. To put it crudely, think of freezer burn, but inside of you. Cryonics companies avoid freezing by replacing a patient's blood with a cryoprotectant, a liquid that will become viscous as it cools but won't form crystals that could damage the tender cells and tissues of the human body.
Here's the thing, though: The carbon freezing chamber in Cloud City didn't utilize any sort of cryoprotectants because, unlike in other cinematic depictions of cryogenic sleep, it wasn't made to preserve humans. Lando Calrissian's mining facility was set up to process tibanna gas and encase it in blocks of carbonite so it could be shipped safely. This highly reactive substance, used to power starship blasters, needed to be stabilized for transport but didn't share humanoids' unique biological needs. As such, it was fortunate that Han Solo survived freezing in the first place.
Darth Vader, who experienced carbonite freezing in his younger years, probably knew it was safe, but the fact remains that it definitely wasn't designed for living beings.
Neither is cryopreservation, though. "This process isn't currently applied to living people," says Best. "They have to be legally dead, as far as cryonics is concerned. There's some talk of doing it to a living person, but it's not reversible by current technology." Typically, a person is cryogenically preserved immediately after death. The hope is that science will advance to the point that eventually humans will find a cure for whatever ailment killed the person, whether it's cancer, congenital illnesses, or traumatic injuries. At that point, a patient could be reanimated and healed.
As far as reanimating a cryopreserved human, well, the hope is that scientists will find a way to do that too, as there is currently no way to safely warm human tissue back up. Even if human tissue is successfully vitrified without any crystals forming, crystals almost always form during warming. Best explains that part of the problem is that cryoprotectants are too toxic to use in sufficient quantities to fully protect human tissue. And while scientists are working on developing less toxic cryoprotectants, they're not quite there yet.
So even if Han Solo somehow survived carbonite freezing without his bodily fluids crystallizing and turning his body into a huge mass of destroyed cells, it is highly unlikely that he would be reanimated without suffering cell damage. Granted, crystallization could theoretically be avoided if a cryopreserved body was brought back up to temperature tens or hundreds of times faster than it was cooled. It also must be warmed uniformly, a huge challenge when dealing with a human body, which is made up of many types of tissues. So it's possible that the intensely bright light that emanates from the carbonite block during Han Solo's thawing is the byproduct of an advanced warming technology. But since the machine in which he was frozen isn't intended for humans, this seems highly unlikely.
Strangely, one of the most notable effects of Han Solo's hibernation sickness was blindness, whereas corneas are one of the only human organs that scientists actually have been able to successfully vitrify and warm.
So while it may come as little surprise that a space opera didn't quite hit the mark in terms of scientific accuracy, perhaps it's fitting that Han Solo, a pilot known for defying all odds, survived a procedure that should have killed him.
See the original post:
How Would the Human Body Respond to Carbonite Freezing? - Inverse
Posted in Cryonics
Comments Off on How Would the Human Body Respond to Carbonite Freezing? – Inverse