Breaking News and Updates
- Abolition Of Work
- Alternative Medicine
- Artificial Intelligence
- Atlas Shrugged
- Ayn Rand
- Basic Income Guarantee
- Chess Engines
- Cloud Computing
- Conscious Evolution
- Cosmic Heaven
- Designer Babies
- Donald Trump
- Ethical Egoism
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom of Speech
- Gene Medicine
- Genetic Engineering
- Germ Warfare
- Golden Rule
- Government Oppression
- High Seas
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Longevity
- Immortality Medicine
- Intentional Communities
- Life Extension
- Mars Colonization
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- New Utopia
- Personal Empowerment
- Political Correctness
- Politically Incorrect
- Post Human
- Post Humanism
- Private Islands
- Quantum Computing
- Quantum Physics
- Resource Based Economy
- Ron Paul
- Second Amendment
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Teilhard de Chardin
- The Singularity
- Tor Browser
- Transhuman News
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Zeitgeist Movement
The Evolutionary Perspective
Category Archives: Artificial Intelligence
Posted: August 25, 2017 at 4:07 am
Under President Obama's leadership, America continues to be the world's most innovative country, with the greatest potential to develop the industries of the future and harness science and technology to help address important challenges. Over the past 8 years, President Obama has relentlessly focused on building U.S. capacity in science and technology. This Thursday, President Obama will host the White House Frontiers Conference in Pittsburgh to imagine the Nation and the world in 50 years and beyond, and to explore America's potential to advance towards the frontiers that will make the world healthier, more prosperous, more equitable, and more secure.
Today, to ready the United States for a future in which Artificial Intelligence (AI) plays a growing role, the White House is releasing a report on future directions and considerations for AI called Preparing for the Future of Artificial Intelligence. This report surveys the current state of AI, its existing and potential applications, and the questions that progress in AI raises for society and public policy. The report also makes recommendations for specific further actions. A companion National Artificial Intelligence Research and Development Strategic Plan is also being released, laying out a strategic plan for Federally-funded research and development in AI.
Preparing for the Future of Artificial Intelligence details several policy opportunities raised by AI, including how the technology can be used to advance social good and improve government operations; how to adapt regulations that affect AI technologies, such as automated vehicles, in a way that encourages innovation while protecting the public; how to ensure that AI applications are fair, safe, and governable; and how to develop a skilled and diverse AI workforce.
The publication of this report follows a series of public-outreach activities spearheaded by the White House Office of Science and Technology Policy (OSTP) in 2016, which included five co-hosted public workshops held across the country, as well as a Request for Information (RFI) in June 2016 that received 161 responses. These activities helped inform the focus areas and recommendations included in the report.
Advances in AI technology hold incredible potential to help America stay on the cutting edge of innovation. Already, AI technologies have opened up new markets and new opportunities for progress in critical areas such as health, education, energy, and the environment. In recent years, machines have surpassed humans in the performance of certain specific tasks, such as some aspects of image recognition. Although it is very unlikely that machines will exhibit broadly-applicable intelligence comparable to or exceeding that of humans in the next 20 years, experts forecast that rapid progress in the field of specialized AI will continue, with machines reaching and exceeding human performance on an increasing number of tasks.
One of the most important issues raised by AI is its impact on jobs and the economy. The report recommends that the White House convene a study on automation and the economy, resulting in a follow-on public report that will be released by the end of this year.
In the coming years, AI will continue contributing to economic growth and will be a valuable tool for improving the world in fields as diverse as health care, transportation, the environment, criminal justice, and economic inclusion. The Administration believes that it is critical that industry, civil society, and government work together to develop the positive aspects of the technology, manage its risks and challenges, and ensure that everyone has the opportunity to help in building an AI-enhanced society and to participate in its benefits.
To read the Future of AI report, click here. And tune in for the White House Frontiers Conference on October 13 for more on the #FutureofAI, including discussions with leading experts on harnessing the potential of AI, including data science, machine learning, automation, and robotics, to engage and benefit all Americans. Watch the conference live and learn more at: http://www.frontiersconference.org.
Ed Felten is a Deputy U.S. Chief Technology Officer in the White House Office of Science and Technology Policy. Terah Lyons is a Policy Advisor to the U.S. Chief Technology Officer in the White House Office of Science and Technology Policy.
Posted: at 4:07 am
This post was adapted from a presentation at an AI Now symposium held on July 10 at the MIT Media Lab. AI Now is a new initiative working, in partnership with the ACLU, to explore the social and economic implications of artificial intelligence.
It seems to me that this is an auspicious moment for a conversation about rights and liberties in an automated world, for at least two reasons.
The first is that there's still time to get this right. We can still have a substantial impact on the legal and policy debates that will shape development and deployment of automated technologies in our everyday lives.
The second reason is Donald Trump. The democratic stress test of the Trump presidency has gotten everyone's attention. It's now much harder to believe, as Eric Schmidt once assured us, that technology will solve all the world's problems. Technologists who have grown used to saying that they have no interest in politics have realized, I believe, that politics is very interested in them.
By contrast, consider how, over the last two decades, the internet came to become the engine of a surveillance economy.
Silicon Valley's apostles of innovation managed to exempt the internet economy from the standard consumer protections provided by other industrialized democracies by arguing successfully that it was too early for government regulation: It would stifle innovation. In almost the same breath, they told us that it was also too late for regulation: It would break the internet.
And by the time significant numbers of people came to understand that maybe they hadn't gotten such a good deal, the dominant business model had become so entrenched that meaningful reforms will now require Herculean political efforts.
When we place innovation within or atop a normative hierarchy, we end up with a world that reflects private interests rather than public values.
So if we shouldn't just trust the technologists, and the corporations and governments that employ the vast majority of them, then what should be our north star?
Liberty, equality, and fairness are the defining values of a constitutional democracy. Each is threatened by increased automation unconstrained by strong legal protections.
Liberty is threatened when the architecture of surveillance that we've already constructed is trained, or trains itself, to track us comprehensively and to draw conclusions based on our public behavior patterns.
Equality is threatened when automated decision-making mirrors the unequal world that we already live in, replicating biased outcomes under a cloak of technological impartiality.
And basic fairness, what lawyers call due process, is threatened when enormously consequential decisions affecting our lives (whether we'll be released from prison, or approved for a home loan, or offered a job) are generated by proprietary systems that don't allow us to scrutinize their methodologies and meaningfully push back against unjust outcomes.
Since my own work is on surveillance, I'm going to devote my limited time to that issue.
When we think about the interplay between automated technologies and our surveillance society, what kinds of harms to core values should we be principally concerned about?
Let me mention just a few.
When we program our surveillance systems to identify suspicious behaviors, what will be our metrics for defining "suspicious"?
This is a brochure about the 8 signs of terrorism that I picked up in an upstate New York rest area. (My personal favorite is number 7: "Putting people into position and moving them around without actually committing a terrorist act.")
How smart can our smart cameras be if the humans programming them are this dumb?
And of course, this means that many people are going to be logged into systems that will, in turn, subject them to coercive state interventions.
But we shouldn't just be concerned about false positives. If we worry only about how error-prone these systems are, then more accurate surveillance systems will be seen as the solution to the problem.
I'm at least as worried about a world in which all of my public movements are tracked, logged, and analyzed accurately.
Bruce Schneier likes to say: "Think about how you feel when a police car is driving alongside you. Now imagine feeling that way all the time."
There's a very real risk, as my colleague Jay Stanley has warned, that pervasive automated surveillance will:
"turn us into quivering, neurotic beings living in a psychologically oppressive world in which we're constantly aware that our every smallest move is being charted, measured, and evaluated against the like actions of millions of other people, and then used to judge us in unpredictable ways."
I also worry that in our eagerness to make the world quantifiable, we may find ourselves offering the wrong answers to the wrong questions.
The wrong answers because extremely remote events like terrorism don't track accurately into hard predictive categories.
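The statistical reason is the base-rate problem: when the event being predicted is extremely rare, even a very accurate detector flags mostly innocent people. A back-of-the-envelope sketch (all numbers here are invented for illustration, not drawn from any real system):

```python
# Illustrative base-rate calculation: why screening for extremely rare
# events yields mostly false positives, even with an accurate detector.
# Every number below is hypothetical, chosen only to make the point.

population = 330_000_000      # people screened
threat_rate = 1e-6            # 1 in a million is actually a threat
sensitivity = 0.99            # detector flags 99% of real threats
false_positive_rate = 0.01    # detector wrongly flags 1% of innocents

actual_threats = population * threat_rate
flagged_real = actual_threats * sensitivity
flagged_innocent = (population - actual_threats) * false_positive_rate

# Precision: the chance that a flagged person is actually a threat.
precision = flagged_real / (flagged_real + flagged_innocent)
print(f"People flagged: {flagged_real + flagged_innocent:,.0f}")
print(f"Chance a flagged person is a real threat: {precision:.4%}")
```

With these made-up numbers, millions of people are flagged and roughly one flag in ten thousand points at a real threat, which is why "hard predictive categories" fail for remote events.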
And the wrong question because it doesn't even matter what the color is: Once we adopt this threat-level framework, we say that terrorism is an issue of paramount national importance, even though that is a highly questionable proposition.
The question becomes "how alarmed should we be?" rather than "should we be alarmed at all?"
And once we're trapped in this framework, the only remaining question will be how accurate and effective our surveillance machinery is, not whether we should be constructing and deploying it in the first place.
If we're serious about protecting liberty, equality, and fairness in a world of rapid technological change, we have to recognize that in some contexts, inefficiencies can be a feature, not a bug.
Consider these words written over 200 years ago. The Bill of Rights is an anti-efficiency manifesto. It was created to add friction to the exercise of state power.
The Fourth Amendment: Government can't effect a search or seizure without a warrant supported by probable cause of wrongdoing.
The Fifth Amendment: Government can't force people to be witnesses against themselves; it can't take their freedom or their property without fair process; it doesn't get two bites at the apple.
The Sixth Amendment: Everyone gets a lawyer, and a public trial by jury, and can confront any evidence against them.
The Eighth Amendment: Punishments can't be cruel, and bail can't be excessive.
This document reflects a very deep mistrust of aggregated power.
If we want to preserve our fundamental human rights in the world that aggregated computing power is going to create, I would suggest that mistrust should remain one of our touchstones.
Doc.ai’s Ethereum Blockchain-Based Medical Solutions Bring Artificial Intelligence To Healthcare – ETHNews
Posted: at 4:07 am
The Ethereum blockchain will be used to power deep learning artificial intelligence bots that can answer patient inquiries.
On August 24, 2017, artificial intelligence (AI) and blockchain startup doc.ai Incorporated released details of its language processing platform that timestamps datasets using an Ethereum blockchain-based system and AI tools.
The project is the result of collaboration between developers from Stanford and Cambridge Universities. According to the announcement, doc.ai can improve patient care by “creating the most advanced natural language dialog system that generates insights from combined medical data.” The platform is built on the Ethereum blockchain, allowing for decentralized timestamping of datasets, and makes use of AI that is capable of deep learning on the edge of networks (a term for data processing near the data source on the network, thus reducing communication bandwidth between sensors and the datacenter) or on a mobile device. The AI can provide a 24-hour resource to patients by answering their questions specific to their personal health data and their physician’s analysis. In addition, the AI uses the cumulative information gathered to learn about the patient’s needs and to customize itself accordingly.
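The announcement does not specify doc.ai's actual implementation, but the general idea behind decentralized timestamping of datasets can be sketched: publish only a cryptographic digest of the data, never the raw medical records, together with a timestamp that could be anchored on-chain. The function names and record fields below are hypothetical illustrations:

```python
import hashlib
import json
import time

def dataset_fingerprint(records):
    """Hash a dataset deterministically. Only this digest (never the
    raw medical data) would be published for timestamping."""
    canonical = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def timestamp_entry(records):
    """Build the kind of entry a timestamping service might anchor
    on a blockchain: a digest plus a UTC timestamp, no patient data."""
    return {
        "sha256": dataset_fingerprint(records),
        "timestamp": int(time.time()),
    }

labs = [{"patient": "anon-1", "hemoglobin_g_dl": 13.5}]
entry = timestamp_entry(labs)

# The same records always yield the same digest, so anyone holding the
# data can later verify it against the anchored entry.
assert entry["sha256"] == dataset_fingerprint(labs)
```

The design choice worth noting is that verification is one-way: the digest proves the data existed unaltered at the recorded time, but reveals nothing about its contents.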
doc.ai plans to introduce, over the next year, three natural language processing models called Robo-Genomics, Robo-Hematology, and Robo-Anatomics, which will be made available to the medical industry. As described on doc.ai's website, Robo-Genomics will provide users with decision support. Robo-Hematology is designed to answer any question on 400+ blood biomarkers. Robo-Anatomics uses a patent-pending Selfie2BMI module that uses a Deep Neural Network to predict a number of anatomic features from a photo of a face.
Walter De Brouwer, founder and chief executive officer of doc.ai, commented on the attributes of the platform:
“We are making it possible for lab tests to converse directly with patients by leveraging advanced artificial intelligence, medical data forensics, and the decentralized blockchain. We envision extensive possibilities for the use of this technology by doctors, patients, and medical institutions.”
De Brouwer also explained to ETHNews doc.ai's motivation for using blockchain technology.
In an announcement on July 24, 2017, Deloitte Life Sciences and Healthcare and doc.ai said they will be working together to test the use of Robo-Hematology. Deloitte's Rajeev Ronanki commented that "Platforms like these open new possibilities for patients and medical organizations by providing more personalized, intelligent healthcare. We are excited to collaborate with doc.ai and to be at the forefront of this technology."
The transition to AI and the Ethereum blockchain as a backbone for patient care is a significant step forward for the nascent technologies. "We are very excited to bring these three worlds together in one: AI, blockchain and healthcare," said De Brouwer.
Jeremy Nation is a writer living in Los Angeles with interests in technology, human rights, and cuisine. He is a full time staff writer for ETHNews and holds value in Ether.
Posted: at 4:07 am
CB Insights is putting the numbers behind what industry insiders have already been noticing: Artificial intelligence is hot in healthcare right now.
From a provider standpoint, many are just beginning to explore the possibilities and see how such capabilities can fit into the care delivery setting. Many providers are looking into patient readmissions as one area for a use case.
However, due to the infancy of the current clinical use cases, artificial intelligence receives a fair amount of skepticism in the healthcare space. For one, “artificial intelligence” has become a catch-all shorthand for some disparate topics such as predictive analytics and machine learning. CB defined artificial intelligence in the space as “startups leveraging machine learning algorithms to reduce drug discovery times, provide virtual assistance to patients or improve the accuracy of medical imaging and diagnostic procedures, among other applications.”
Another issue that adds to the skepticism is the potential costs of new technology. Providers have felt burned before because of high-cost EHR systems that helped contribute to administrative burden across physician offices and health systems nationwide.
Still, companies are making a play for the space, as it’s a market that’s expected to grow.
"[Hospitals] are very excited about [artificial intelligence] and are actually very bold about it, which is surprising because hospital systems don't tend to be usually bold. But they're making investments," James Golden, managing director of PwC Health Advisory, told Healthcare Dive at HIMSS17 in February. "This stuff is coming. It's coming fast. It's being viewed as a research project. In the next few years, it is not going to be a research project."
Posted: at 4:07 am
As Artificial Intelligence (AI) and Machine Learning (ML) get set to take a giant leap in improving day-to-day life, the key is to democratise these new-age tools for all and benefit the communities of developers, users and enterprise customers, a top Google executive said here on Wednesday.
The concept of AI and ML came into existence long ago, but with the vast availability of data today, sectors like healthcare, banking and retail are adopting the technologies at a faster pace than before.
Google, a pioneer in AI, has been focusing on four key components (computing, algorithms, data and expertise) to organise all the data and make it accessible.
"What it entails to democratise AI, we focus on these four core components. Computing is the backbone of AI technology. Google as a company has always been at the forefront of computing AI," Fei-Fei Li, Chief Scientist of Google Cloud AI and ML, told reporters during a media interaction here.
"We want to make it all accessible to our customers," added Li, also Professor of Computer Science at Stanford University in the US.
Earlier this year, Google announced the second-generation Tensor Processing Units (TPUs) (now called the Cloud TPU) at the annual Google I/O event in the US.
"We announced the Cloud TPU, the second generation of our processing unit, and our intention is to make it available via Google Cloud," the top executive added.
The company offers computing power, including graphics processing units (GPUs), central processing units (CPUs) and tensor processing units (TPUs), to power machine learning.
Shazam is one such app that uses GPUs on Google Cloud.
The application uses GPUs to match snippets of user audio fingerprints against their catalogue of over 40 million songs.
That means when a user Shazams a song, the algorithm uses GPUs to search the database until it finds a match for the audio snippet the person has recorded on the phone.
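The matching step described above boils down to looking up fingerprints of the recorded snippet in an index built over the catalog. As a toy illustration only (Shazam's real algorithm hashes spectrogram peaks and is far more sophisticated; the fingerprints below are invented tuples):

```python
# Toy sketch of audio-fingerprint matching: each song contributes
# fingerprints to an index; a snippet's fingerprints vote for songs.
# Fingerprints here are invented placeholders, not real audio hashes.

catalog_index = {
    ("peak-a", "peak-b"): "Song One",
    ("peak-c", "peak-d"): "Song Two",
}

def match_snippet(snippet_fingerprints):
    """Return the catalog song sharing the most fingerprints with the
    recorded snippet, or None if nothing matches."""
    votes = {}
    for fp in snippet_fingerprints:
        song = catalog_index.get(fp)
        if song is not None:
            votes[song] = votes.get(song, 0) + 1
    return max(votes, key=votes.get) if votes else None

# One fingerprint matches the index, one is noise from the recording.
print(match_snippet([("peak-a", "peak-b"), ("peak-x", "peak-y")]))
```

Voting over many fingerprints is what makes the lookup robust to noise: a few spurious fingerprints from a phone recording cannot outvote a genuine match.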
Li, however, said that AI still remains among the most complex and new fields.
"To make it accessible for businesses and customers, they are required to gain access to the right tools, whether it is an ML library like TensorFlow or tapping into pre-trained models via API," Li added.
Posted: August 22, 2017 at 11:58 pm
Versive thinks its AI platform can help solve security problems. (Versive Photo)
If you're working on a security startup in 2017, you're more than likely applying artificial intelligence or machine learning techniques to automate threat detection and other time-consuming security tasks. After a few years as a financial services company, five-year-old Versive has joined that parade, and has raised $12.7 million in new funding to tackle corporate security.
Seattle-based Versive started life as Context Relevant, and has now raised $54.7 million in total funding, which is a lot for a company reorganizing itself around a new mission. Versive adopted its new name and new identity as a security-focused company in May, and its existing investors are giving it some more runway to make its AI-driven security approach work at scale.
The company enlisted legendary white-hat hacker and security expert Mudge Zatko, who is currently working for Stripe, to help it architect its approach toward using AI to solve security problems, said Justin Baker, senior director of marketing for Versive, based in downtown Seattle. "What we've been looking for are patterns of malicious behavior that can be used to help security professionals understand the true nature of threats on their networks," he said.
Chief information security officers (CISOs) are drowning in security alerts, and a lot of those alerts are bogus yet still take time to evaluate and dismiss, Baker said. Versives technology learns how potential customers are handling current and future threats and helps them figure out which alerts are worthy of a response, which saves time, money, and aggravation if working correctly.
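The article describes this triage only at a high level, and Versive's actual models are not public. As an illustration of the general idea, triage can be thought of as scoring each alert by indicators of malicious behavior and surfacing only the highest-scoring ones; the indicator names and weights below are entirely hypothetical:

```python
# Hypothetical illustration of alert triage (not Versive's model):
# weight indicators of malicious behavior, score each alert, and
# surface only the alerts most worth an analyst's limited time.

WEIGHTS = {
    "lateral_movement": 5.0,
    "data_staging": 4.0,
    "odd_hours_login": 1.5,
    "failed_login": 0.5,
}

def score(alert):
    """Sum the weights of the indicators present on an alert."""
    return sum(WEIGHTS.get(ind, 0.0) for ind in alert["indicators"])

def triage(alerts, top_n=2):
    """Return the top_n highest-scoring alerts, riskiest first."""
    return sorted(alerts, key=score, reverse=True)[:top_n]

alerts = [
    {"id": 1, "indicators": ["failed_login"]},
    {"id": 2, "indicators": ["lateral_movement", "data_staging"]},
    {"id": 3, "indicators": ["odd_hours_login", "failed_login"]},
]
print([a["id"] for a in triage(alerts)])  # riskiest alert ids first
```

In a real system the weights would be learned from how analysts resolved past alerts rather than hand-set, which is the "learning" part of the pitch; the payoff is the same either way, fewer bogus alerts consuming analyst time.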
The internet might be a dangerous neighborhood, but those CISOs are having trouble putting more cops on the beat: there is a staggering number of unfilled security jobs because companies are finding it very hard to recruit properly trained talent and retain stars once they figure it all out. Security technologies that make it easier to do the job with fewer people are extremely hot right now, and dozens of startups are working on products and services for this market.
Versive has around 60 employees at the moment, and plans to expand sales and marketing as it ramps up product development, Baker said. Investors include Goldman Sachs, Madrona Venture Group, Formation 8, Vulcan Capital, and Mark Leslie.
Posted: at 11:58 pm
For the last half-decade, the most exciting, contentious, and downright awe-inspiring topic in technology has been artificial intelligence. Titans and geniuses have lauded AI's potential for change, glorifying its application in nearly every industry imaginable. Such praise, however, is also met with tantamount disapproval from similar influencers and self-made billionaires, not to mention a good part of Hollywood's recent sci-fi flicks. AI is a phenomenon that will never go down easy: intelligence and consciousness are prerogatives of the living, and the inevitability of their existence in machines is hard to fathom, even with all those doomsday-scenario movies and books.
On that note, however, it is nonetheless a certainty we must come to accept and, most importantly, understand. I'm here to discuss the implications of AI in two major areas: medicine and finance. Often regarded as the two pillars of any nation's stable infrastructure, the industries are indispensable. The people that work in them, however, are far from irreplaceable, and it's only a matter of time before automation makes its presence known.
Let's begin with perhaps the most revolutionary change: the automated diagnosis and treatment of illnesses. A doctor is one of humanity's greatest professions. You heal others and are well compensated for your work. That being said, modern medicine and the healthcare infrastructure within which it lies has much room for improvement. IBM's artificial intelligence machine, Watson, is now as good as a professional radiologist when it comes to diagnosis, and it's also been compiling billions of medical images (30 billion to be exact) to aid in specialized treatment for image-heavy fields like pathology and dermatology.
Fields like cardiology are also being overhauled with the advent of artificial intelligence. It used to take doctors nearly an hour to quantify the amount of blood transported with each heart contraction, and it now takes only 15 seconds using the tools we've discussed. With these computers in major hospitals and clinics, doctors can process almost 260 million images a day in their respective fields; this means finding skin cancers, blood clots, and infections, all with unprecedented speed and accuracy, not to mention billions of dollars saved in research and maintenance.
Next up: the hustling and overtly traditional offices of Wall Street (until now). If you don't listen to me, at least recognize that almost 15,000 startups already exist that are working to actively disrupt finance. They are creating computer-generated trading and investment models that blow those crafted by the error-prone hubris of their human counterparts out of the water. Bridgewater Associates, one of the world's largest hedge funds, is already cutting some of its staff in favor of AI-driven models, and enterprises like Sentient, Wealthfront, Two Sigma, and so many more have already made this transition. They shed the silk suits and comb-overs for scrappy engineers and piles of graphics cards and server racks. The result? Billions of dollars made with fewer people, greater certainty, and much more comfortable work attire.
So the real question to ask is: where do we go from here? Stopping the development of these machines is pointless. They will come to exist, and they will undoubtedly do many of our jobs better than we can; the solution, however, lies in regulation and a hard-nosed dose of checks and balances. Forty percent of U.S. jobs could be swallowed by artificial intelligence machines by the early 2030s, and if we aren't careful about how we assign such professions, and the degree to which we automate them, we are looking at an incredibly serious domestic threat. Get very excited about what AI can do for us, and start thinking very deeply about how it can integrate with humans, lest we invite utter anarchy.
Posted: at 11:58 pm
Sexism is so deeply ingrained in the way we think about the world, we've actually passed it on to our computers, according to a new University of Virginia (UVA) and University of Washington study. Artificial intelligence (AI) is more likely to label people who are cooking, shopping, and cleaning as women, and people who are playing sports, coaching, and shooting as men.
UVA computer science professor Vicente Ordóñez got the idea for the experiment when he noticed that his image-recognition software was associating photos of kitchens with women. After training software using two photo collections that researchers use to create image-recognition software, including one supported by Facebook and Microsoft, he and his colleagues found that not only do these collections contain gender bias, but they also multiply that bias when they pass it on to the software. The program these photo sets produced actually labeled a man a "woman" because he was standing by a stove.
This isn’t the only evidence we have that technology contains biases. In addition to image-recognition software, software that analyzes writing and speech also reflects hidden assumptions about gender, according to a study published earlier this year in Science. The researchers analyzed how computers interpreted words from Google News and a 840 billion-word data set used by computer scientists, and they found that machines linked male and man with STEM fields and woman and female with chores. The problem wasn’t just with gender either: Stereotypically white names were more likely to be associated positive words like happy and gift. Another study published last summer found that when software based on Google News was asked, “Man is to computer programmer as woman is to X, ” it responded with “homemaker.”
Of course, computers don’t make up these associations out of nowhere. They’re reflecting our own biases back to us. But when they pick those beliefs up, these can take on a life of their own. The snafu that Google’s image software made in 2015, when it mislabeled black people as gorillas , demonstrates this. Google image searches are another example: Search for hand and you get mostly white ones, while girl yields sexy photos and boy yields kids.
This tendency becomes even more problematic when AI is used to create robots that interact with people. Mark Yatskar, a researcher at the Allen Institute for Artificial Intelligence and an author of the new study, told Wired he could imagine a scenario where a robot asks a woman if she wants help with the dishes while handing a man a beer. “This could work to not only reinforce existing social biases but actually make them worse,” he said.
The way artificial intelligence identifies words and images is based on the way people use them, so in order to promote a more egalitarian world, engineers would have to intervene in the creation of the software. And that's a possibility many are considering. Eric Horvitz, director of Microsoft Research, told Wired that Microsoft has a committee for this. "I and Microsoft as a whole celebrate efforts identifying and addressing bias and gaps in data sets and systems created out of them," he said. "It's a really important question: when should we change reality to make our systems perform in an aspirational way?"
Posted: August 20, 2017 at 6:16 pm
Earlier this month, tech moguls Elon Musk and Mark Zuckerberg debated the pros and cons of artificial intelligence from different corners of the internet. While SpaceX's CEO is more of an alarmist, insisting that we should approach AI with caution and that it poses a fundamental existential risk, Facebook's founder leans toward a more optimistic future, dismissing doomsday scenarios in favor of AI helping us build a brighter future.
I now agree with Zuckerberg's sunnier outlook, but I didn't always.
Beginning my career as an engineer, I was interested in AI, but I was torn about whether advancements would go too far too fast. As a mother with three kids entering their teens, I was also worried that AI would disrupt the future of my children's education, work, and daily life. But then something happened that forced me into the affirmative.
Imagine for a moment that you are a pathologist and your job is to scroll through 1,000 photos every 30 minutes, looking for one tiny outlier on a single photo. You're racing the clock to find a microscopic needle in a massive data haystack.
Now, imagine that a woman's life depends on it. Mine.
This is the nearly impossible task that pathologists are tasked with every day. Treating the 250,000 women in the US who will be diagnosed with breast cancer this year, each medical worker must analyze an immense amount of cell tissue to identify if their patient's cancer has spread. Limited by time and resources, they often get it wrong; a recent study found that pathologists accurately detect tumors only 73.2% of the time.
In 2011 I found a lump in my breast. Both my family doctor and I were confident that it was a Fibroadenoma, a common noncancerous (benign) breast lump, but she recommended I get a mammogram to make sure. While the original lump was indeed a Fibroadenoma, the mammogram uncovered two unknown spots. My journey into the unknown started here.
Since AI imaging was not available at the time, I had to rely solely on human analysis. The next four years were a blur of ultrasounds, biopsies, and surgeries. My well-intentioned network of doctors and specialists were not able to diagnose or treat what turned out to be a rare form of cancer, and repeatedly attempted to remove my recurring tumors through surgery.
After four more tumors, five more biopsies, and two more operations, I was heading toward a double mastectomy and terrified at the prospect of the cancer spreading to my lungs or brain.
I knew something needed to change. In 2015, I was introduced to a medical physicist who decided to take a different approach, using big data and a machine-learning algorithm to spot my tumors and treat my cancer with radiation therapy. While I was nervous about leaving my therapy up to this new technology, it, combined with the right medical knowledge, was able to stop the growth of my tumors. I'm now two years cancer-free.
I was thankful for the AI that saved my life, but then that very same algorithm changed my son's potential career path.
The positive impact of machine learning is often overshadowed by the doom-and-gloom of automation. Fearing for their own jobs and their children's future, people often choose to focus on the potential negative repercussions of AI rather than the positive changes it can bring to society.
After seeing what this radiation treatment was able to do for me, my son applied to a university program in radiology technology to explore a career path in medical radiation. He had met countless radiology technicians throughout my years of treatment and was excited to start his training in a specialized program. However, during his application process, the program was cancelled: he was told it was because there were no longer enough jobs in the radiology industry to warrant the program's continuation. Many positions had been lost to automation, just like the technology and machine learning that helped me in my battle with cancer.
This was a difficult period for both my son and me: the very thing that had saved my life prevented him from following the path he had planned. He had to rethink his education mid-application, when it was too late to apply for anything else, and he was worried that his backup plans would fall through.
He's now pursuing a future in biophysics rather than medical radiation, starting with an undergraduate degree in integrated sciences. In retrospect, we both realize that the experience forced him to rethink his career and unexpectedly opened up his thinking about which research areas will have the greatest impact on people's lives.
Although some medical professionals will lose their jobs to AI, the life-saving benefits to patients will be magnificent. Beyond cancer detection and treatment, medical professionals are using machine learning to improve their practice in many ways. For instance, Atomwise applies AI to fuel drug discovery, Deep Genomics uses machine learning to help pharmaceutical companies develop genetic medicines, and Analytics 4 Life leverages AI to better detect coronary artery disease.
While not all transitions from automated roles will be as easy as my son's pivot to a different scientific field, I believe that AI has the potential to shape our future careers in a positive way, even helping us find jobs that make us happier and more productive.
As this technology rapidly develops, the future is clear: AI will be an integral part of our lives and bring massive changes to our society. It's time to stop debating (looking at you, Musk and Zuckerberg) and start accepting AI for what it is: both the good and the bad.
Throughout the years, I've found myself on both sides of the equation, arguing both for and against the advancement of AI. But it's time to stop taking a selective view of AI, choosing to incorporate it into our lives only when convenient. We must create solutions that mitigate AI's negative impact and maximize its positive potential. Key stakeholders (governments, corporations, technologists, and more) need to create policies, join forces, and dedicate themselves to this effort.
And we're seeing great progress. AT&T recently began retraining thousands of employees to keep up with technology advances, and Google recently dedicated millions of dollars to preparing people for an AI-dominated workforce. I'm hopeful that these initiatives will allow us to focus on all the good that AI can do for our world and open our eyes to the lives it can potentially save.
One day, yours just might depend on it, too.
Posted: at 6:16 pm
Have a question? Ask Siri. Want to order a pizza? Amazon Echo is there for you. There are AI technologies to help us park our car, polish our photos, and automate our time.
The technology hasn't been as widespread in K-12 education, but that could soon change. Recent research predicts that AI in education could grow by more than 47 percent by 2021.
Artificial Intelligence can be an excellent supplement for the work of the teacher.
Personalized Tutors for Students
Anyone who has spent time around children understands that they learn at different rates. Some students are also auditory or visual learners. Personalized AI tutors within the classroom could offer an alternative method for teaching students the fundamentals.
The technology is here and is being improved upon. IBM and Microsoft are working on classroom applications. Other examples of tutoring AI include Thinkster Math, Carnegie Learning, and Third Space Learning. Essentially, AI can help with the basic skills so that teachers can focus on the more advanced, creativity-based topics, at least until the technology is more developed.
It's conceivable that at some point we could have AI programs that serve as companions to students throughout their K-12 education, collecting data on each student and offering custom solutions along the way.
A New Kind of Teacher's Assistant
AI might be able to help the teacher with classroom tasks as well. Since artificial intelligence is good at handling repetitive tasks, it could be an asset for grading. Teachers can use any newfound time to better interact with students.
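As a toy illustration of the kind of repetitive task that is easy to hand off to software, here is a minimal answer-key grader (the quiz data and scoring rule are invented for the example, not drawn from any real grading product):

```python
# Toy sketch of automating a repetitive grading task: comparing
# multiple-choice responses against an answer key, so the teacher
# can spend the reclaimed time interacting with students.

def grade(answer_key, responses):
    """Return the fraction of responses that match the answer key."""
    correct = sum(1 for k, r in zip(answer_key, responses) if k == r)
    return correct / len(answer_key)

key = ["b", "c", "a", "d"]
student = ["b", "c", "d", "d"]
print(grade(key, student))  # → 0.75
```

Real classroom AI aims at far harder targets (essays, partial credit), but the payoff is the same: the mechanical comparison happens instantly, at any scale.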
More Customized Lessons
Teachers could conceivably use AI to create more customized lessons for their students. The information for lessons can be compiled in a more personalized way than what appears inside the course textbook.
Helping Students Learn
Traditionally, the K-12 system in the United States has been designed to prepare students for manufacturing work, or to help them develop the skills they'll need as they select a career and stay with one employer for many years.
The reality is that the economy and the workplace are changing. AI can help prepare our grade school students for jobs that dont yet exist. Bringing AI into the classroom can be an innovative tool to help teach students the skills they will need for the future.
Matt Brennan is a marketing copywriter, occasional parenting writer, and journalist in the Chicago area. He is also the author of Write Right-Sell Now.
Photo credit: Getty Images