The Prometheus League
Breaking News and Updates
- Abolition Of Work
- Ai
- Alt-right
- Alternative Medicine
- Antifa
- Artificial General Intelligence
- Artificial Intelligence
- Artificial Super Intelligence
- Ascension
- Astronomy
- Atheism
- Atheist
- Atlas Shrugged
- Automation
- Ayn Rand
- Bahamas
- Bankruptcy
- Basic Income Guarantee
- Big Tech
- Bitcoin
- Black Lives Matter
- Blackjack
- Boca Chica Texas
- Brexit
- Caribbean
- Casino
- Casino Affiliate
- Cbd Oil
- Censorship
- Cf
- Chess Engines
- Childfree
- Cloning
- Cloud Computing
- Conscious Evolution
- Corona Virus
- Cosmic Heaven
- Covid-19
- Cryonics
- Cryptocurrency
- Cyberpunk
- Darwinism
- Democrat
- Designer Babies
- DNA
- Donald Trump
- Eczema
- Elon Musk
- Entheogens
- Ethical Egoism
- Eugenic Concepts
- Eugenics
- Euthanasia
- Evolution
- Extropian
- Extropianism
- Extropy
- Fake News
- Federalism
- Federalist
- Fifth Amendment
- Financial Independence
- First Amendment
- Fiscal Freedom
- Food Supplements
- Fourth Amendment
- Free Speech
- Freedom
- Freedom of Speech
- Futurism
- Futurist
- Gambling
- Gene Medicine
- Genetic Engineering
- Genome
- Germ Warfare
- Golden Rule
- Government Oppression
- Hedonism
- High Seas
- History
- Hubble Telescope
- Human Genetic Engineering
- Human Genetics
- Human Immortality
- Human Longevity
- Illuminati
- Immortality
- Immortality Medicine
- Intentional Communities
- Jacinda Ardern
- Jitsi
- Jordan Peterson
- Las Vegas
- Liberal
- Libertarian
- Libertarianism
- Liberty
- Life Extension
- Macau
- Marie Byrd Land
- Mars
- Mars Colonization
- Mars Colony
- Memetics
- Micronations
- Mind Uploading
- Minerva Reefs
- Modern Satanism
- Moon Colonization
- Nanotech
- National Vanguard
- NATO
- Neo-eugenics
- Neurohacking
- Neurotechnology
- New Utopia
- New Zealand
- Nihilism
- Nootropics
- NSA
- Oceania
- Offshore
- Olympics
- Online Casino
- Online Gambling
- Pantheism
- Personal Empowerment
- Poker
- Political Correctness
- Politically Incorrect
- Polygamy
- Populism
- Post Human
- Post Humanism
- Posthuman
- Posthumanism
- Private Islands
- Progress
- Proud Boys
- Psoriasis
- Psychedelics
- Putin
- Quantum Computing
- Quantum Physics
- Rationalism
- Republican
- Resource Based Economy
- Robotics
- Rockall
- Ron Paul
- Roulette
- Russia
- Sealand
- Seasteading
- Second Amendment
- Seychelles
- Singularitarianism
- Singularity
- Socio-economic Collapse
- Space Exploration
- Space Station
- Space Travel
- Spacex
- Sports Betting
- Sportsbook
- Superintelligence
- Survivalism
- Talmud
- Technology
- Teilhard De Charden
- Terraforming Mars
- The Singularity
- Tms
- Tor Browser
- Trance
- Transhuman
- Transhuman News
- Transhumanism
- Transhumanist
- Transtopian
- Transtopianism
- Ukraine
- Uncategorized
- Vaping
- Victimless Crimes
- Virtual Reality
- Wage Slavery
- War On Drugs
- Waveland
- Ww3
- Yahoo
- Zeitgeist Movement
- Prometheism
- Forbidden Fruit
- The Evolutionary Perspective
Daily Archives: June 19, 2017
World’s Most Powerful Particle Collider Taps AI to Expose Hack Attacks – Scientific American
Posted: June 19, 2017 at 7:17 pm
Thousands of scientists worldwide tap into CERN's computer networks each day in their quest to better understand the fundamental structure of the universe. Unfortunately, they are not the only ones who want a piece of this vast pool of computing power, which serves the world's largest particle physics laboratory. The hundreds of thousands of computers in CERN's grid are also a prime target for hackers who want to hijack those resources to make money or attack other computer systems. But rather than engaging in a perpetual game of hide-and-seek with these cyber intruders via conventional security systems, CERN scientists are turning to artificial intelligence to help them outsmart their online opponents.
Current detection systems typically spot attacks on networks by scanning incoming data for known viruses and other types of malicious code. But these systems are relatively useless against new and unfamiliar threats. Given how quickly malware changes these days, CERN is developing new systems that use machine learning to recognize and report abnormal network traffic to an administrator. For example, a system might learn to flag traffic that requires an uncharacteristically large amount of bandwidth, uses the incorrect procedure when it tries to enter the network (much like using the wrong secret knock on a door) or seeks network access via an unauthorized port (essentially trying to get in through a door that is off-limits).
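As a rough, hypothetical illustration of the kind of traffic screening described above, the sketch below trains an unsupervised anomaly detector on a few per-connection features and flags outliers for review. It is not CERN's system; the features, toy data, and choice of scikit-learn's IsolationForest are assumptions made for illustration.

```python
# Hypothetical sketch: flag unusual network connections with an unsupervised model.
# Features and library choice (scikit-learn) are illustrative assumptions, not CERN's design.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [bytes_transferred, destination_port, handshake_ok (1/0)]
normal_traffic = np.array([
    [1_200, 443, 1],
    [2_500, 443, 1],
    [900, 80, 1],
    [1_800, 22, 1],
])

detector = IsolationForest(contamination=0.05, random_state=0)
detector.fit(normal_traffic)  # learn what "normal" connections look like

new_connections = np.array([
    [1_500, 443, 1],        # looks ordinary
    [9_000_000, 31337, 0],  # huge transfer, odd port, failed handshake
])

for features, label in zip(new_connections, detector.predict(new_connections)):
    if label == -1:  # IsolationForest marks outliers with -1
        print("ALERT: suspicious connection", features)
```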
CERN's cybersecurity department is training its AI software to learn the difference between normal and dubious behavior on the network, and to then alert staff via phone text, e-mail or computer message of any potential threat. The system could even be automated to shut down suspicious activity on its own, says Andres Gomez, lead author of a paper describing the new cybersecurity framework.
CERN (the French acronym for the European Organization for Nuclear Research lab, which sits on the Franco-Swiss border) is opting for this new approach to protect a computer grid used by more than 8,000 physicists to quickly access and analyze large volumes of data produced by the Large Hadron Collider (LHC). The LHC's main job is to collide atomic particles at high speed so that scientists can study how particles interact. Particle detectors and other scientific instruments within the LHC gather information about these collisions, and CERN makes it available to laboratories and universities worldwide for use in their own research projects.
The LHC is expected to generate a total of about 50 petabytes of data (equal to 15 million high-definition movies) in 2017 alone, and demands more computing power and data storage than CERN itself can provide. In anticipation of that type of growth, the laboratory in 2002 created its Worldwide LHC Computing Grid, which connects computers from more than 170 research facilities across more than 40 countries. CERN's computer network functions somewhat like an electrical grid, which relies on a network of generating stations that create and deliver electricity as needed to a particular community of homes and businesses. In CERN's case the community consists of research labs that require varying amounts of computing resources, based on the type of work they are doing at any given time.
One of the biggest challenges to defending a computer grid is the fact that security cannot interfere with the sharing of processing power and data storage. Scientists from labs in different parts of the world might end up accessing the same computers to do their research if demand on the grid is high or if their projects are similar. CERN also has to worry about whether the computers of the scientists connecting into the grid are free of viruses and other malicious software that could enter and spread quickly due to all the sharing. A virus might, for example, allow hackers to take over parts of the grid and use those computers either to generate digital currency known as bitcoins or to launch cyber attacks against other computers. "In normal situations, antivirus programs try to keep intrusions out of a single machine," Gomez says. "In the grid we have to protect hundreds of thousands of machines that already allow researchers outside CERN to use a variety of software programs they need for their different experiments. The magnitude of the data you can collect and the very distributed environment make intrusion detection on [a] grid far more complex," he says.
Jarno Niemelä, a senior security researcher at F-Secure, a company that designs antivirus and computer security systems, says CERN's use of machine learning to train its network defenses will give the lab much-needed flexibility in protecting its grid, especially when searching for new threats. Still, artificially intelligent intrusion detection is not without risks, and one of the biggest is whether Gomez and his team can develop machine-learning algorithms that can tell the difference between normal and harmful activity on the network without raising a lot of false alarms, Niemelä says.
CERN's AI cybersecurity upgrades are still in the early stages and will be rolled out over time. The first test will be protecting the portion of the grid used by ALICE (A Large Ion Collider Experiment), a key LHC project to study the collisions of lead nuclei. If tests on ALICE are successful, CERN's machine learning-based security could then be used to defend parts of the grid used by the institution's six other detector experiments.
Excerpt from:
World's Most Powerful Particle Collider Taps AI to Expose Hack Attacks - Scientific American
Posted in Ai
Comments Off on World’s Most Powerful Particle Collider Taps AI to Expose Hack Attacks – Scientific American
AI is our best weapon against terrorist propaganda – The Next Web – TNW
Posted: at 7:17 pm
In the past four months alone, there have been three separate terrorist attacks across the UK (and possibly a fourth reported just today), and that's after implementing efforts that the Defense Secretary claimed helped in thwarting 12 other incidents there in the previous year.
That spells a massive challenge for companies investing in curbing the spread of terrorist propaganda on the web. And although it'd most certainly be impossible to stamp out the threat across the globe, it's clear that we can do a lot more to tackle it right now.
Last week, we looked at some steps that Facebook is taking to wipe out content promoting and sympathizing with terrorists' causes, which involve the use of AI and relying on reports from users, as well as the skills of a team of 150 experts to identify and take down hate-filled posts before they spread across the social network.
Now, Google has detailed the measures it's implementing in this regard as well. Similar to Facebook, it's targeting hateful content with machine learning-based systems that can sniff it out, and also working with human reviewers and NGOs in an attempt to introduce a nuanced approach to censoring extremist media.
The trouble is, battling terrorism isn't what these companies are solely about; they're concerned about growing their user bases and increasing revenue. The measures they presently implement will help sanitize their platforms so they're more easily marketable as a safe place to consume content, socialize and shop.
Meanwhile, the people who spread propaganda online dedicate their waking hours to finding ways to get their message out to the world. They can, and will, continue to innovate so as to stay ahead of the curve.
Ultimately, what's needed is a way to reduce the effectiveness of this propaganda. There are a host of reasons why people are susceptible to radicalization, and those may be far beyond the scope of the likes of Facebook to tackle.
AI is already being used to identify content that human response teams review and take down. But I believe that its greater purpose could be to identify people who are exposed to terrorist propaganda and are at risk of being radicalized. To that end, there's hope in the form of measures that Google is working on. In the case of its video platform YouTube, the company explained in a blog post:
Building on our successful Creators for Change programme promoting YouTube voices against hate and radicalisation, we are working with Jigsaw to implement the Redirect Method more broadly across Europe.
This promising approach harnesses the power of targeted online advertising to reach potential ISIS recruits, and redirects them towards anti-terrorist videos that can change their minds about joining. In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages.
In March, Facebook began testing algorithms that could detect warning signs of users in the US suffering from depression and possibly contemplating self-harm and suicide. To do this, it looks at whether people are frequently posting messages describing personal pain and sorrow, or if several responses from their friends read along the lines of, "Are you okay?" The company then contacts at-risk users to suggest channels they can seek out for help with their condition.
I imagine that similar tools could be developed to identify people who might be vulnerable to becoming radicalized, perhaps by analyzing the content of the posts they share and consume, as well as the networks of people and groups they engage with.
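As a purely illustrative sketch of the kind of content analysis imagined here (not a system Facebook or Google has described), the snippet below trains a basic text classifier on labelled example posts and queues high-scoring new posts for human review. The toy posts, labels, and threshold are invented.

```python
# Hypothetical sketch: flag posts for human review with a simple text classifier.
# Training data, labels and threshold are invented; real systems are far more involved.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_posts = [
    "join us and fight for the cause",      # flagged example (toy)
    "great recipe for dinner tonight",      # benign example (toy)
    "they deserve what is coming to them",  # flagged example (toy)
    "looking forward to the weekend hike",  # benign example (toy)
]
train_labels = [1, 0, 1, 0]  # 1 = queue for review, 0 = benign

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_posts, train_labels)

new_posts = ["who wants to hike this weekend?", "the cause needs fighters, join us"]
for post, score in zip(new_posts, model.predict_proba(new_posts)[:, 1]):
    if score > 0.5:
        print(f"Queue for human review ({score:.2f}): {post}")
```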
The ideas spread by terrorists are only as powerful as they are widely accepted. It looks like we'll constantly find ourselves trying to outpace measures to spread propaganda, but what might be of more help is a way to reach out to people who are processing these ideas, accepting them as truth and altering the course their lives are taking. With enough data, it's possible that AI could be of help, but in the end, we'll need humans to talk to humans in order to fix what's broken in our society.
Naturally, the question of privacy will crop up at this point, and it's one that we'll have to ponder before giving up our rights, but it's certainly worth exploring our options if we're indeed serious about quelling the spread of terrorism across the globe.
Read next: How secure is your favorite messaging app?
More here:
AI is our best weapon against terrorist propaganda - The Next Web - TNW
Posted in Ai
Comments Off on AI is our best weapon against terrorist propaganda – The Next Web – TNW
Google advances AI with ‘one model to learn them all’ – VentureBeat
Posted: at 7:17 pm
Google quietly released an academic paper that could provide a blueprint for the future of machine learning. Called "One Model to Learn Them All," it lays out a template for how to create a single machine learning model that can address multiple tasks well.
The MultiModel, as the Google researchers call it, was trained on a variety of tasks, including translation, language parsing, speech recognition, image recognition, and object detection. While its results don't show radical improvements over existing approaches, they illustrate that training a machine learning system on a variety of tasks could help boost its overall performance.
For example, the MultiModel improved its accuracy on machine translation, speech, and parsing tasks when trained on all of the operations it was capable of, compared to when the model was just trained on one operation.
Google's paper could provide a template for the development of future machine learning systems that are more broadly applicable, and potentially more accurate, than the narrow solutions that populate much of the market today. What's more, these techniques (or those they spawn) could help reduce the amount of training data needed to create a viable machine learning algorithm.
That's because the team's results show that when the MultiModel is trained on all the tasks it's capable of, its accuracy improves on tasks with less training data. That's important, since it can be difficult to accumulate a sizable enough set of training data in some domains.
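To make the multi-task idea concrete, here is a minimal sketch of a network with one shared encoder and two task-specific heads, trained by summing the per-task losses so that every task's gradient updates the shared layers. This is an illustration of the general technique only, not Google's MultiModel; the layer sizes, toy data, and PyTorch framing are assumptions.

```python
# Minimal multi-task sketch (not Google's MultiModel): a shared encoder feeds
# two task-specific heads, and gradients from both tasks update the shared layers.
import torch
import torch.nn as nn

class TwoTaskNet(nn.Module):
    def __init__(self, in_dim=32, hidden=64, classes_a=10, classes_b=5):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.head_a = nn.Linear(hidden, classes_a)  # e.g. a parsing-like task
        self.head_b = nn.Linear(hidden, classes_b)  # e.g. a translation-like task

    def forward(self, x):
        h = self.shared(x)
        return self.head_a(h), self.head_b(h)

model = TwoTaskNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Toy random batches stand in for real task data.
x = torch.randn(16, 32)
y_a = torch.randint(0, 10, (16,))
y_b = torch.randint(0, 5, (16,))

for step in range(100):
    out_a, out_b = model(x)
    loss = loss_fn(out_a, y_a) + loss_fn(out_b, y_b)  # joint training signal
    opt.zero_grad()
    loss.backward()
    opt.step()
```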
However, Google doesn't claim to have a master algorithm that can learn everything at once. As its name implies, the MultiModel network includes systems that are tailor-made to address different challenges, along with systems that help direct input to those expert algorithms. This research does show that the approach Google took could be useful for future development of similar systems that address different domains.
It's also worth noting that there's plenty more testing to be done. Google's results haven't been verified, and it's hard to know how well this research generalizes to other fields. The Google Brain team has released the MultiModel code as part of the TensorFlow open source project, so other people can experiment with it and find out.
Google also has some clear paths to improvement. The team pointed out that they didn't spend a lot of time optimizing some of the system's fixed parameters (known as hyperparameters in machine learning speak), and going through more extensive tweaking could help improve accuracy in the future.
Updated 10:45: This story initially said that there was not a timetable for releasing the MultiModel code under an open source license. The code was released last week. This story has been updated to note that and include a link to the repository.
Here is the original post:
Google advances AI with 'one model to learn them all' - VentureBeat
Posted in Ai
Comments Off on Google advances AI with ‘one model to learn them all’ – VentureBeat
3 ways AI is already impacting ecommerce – VentureBeat
Posted: at 7:17 pm
Advances in artificial intelligence and deep learning have changed our lives. We are already using it even without realizing it: AI helps to power Google's search engine, Tesla's self-driving cars, Apple's voice assistant, and Amazon's shopping recommendations.
The impact of artificial intelligence in retail and ecommerce is also growing. While ecommerce giants like Amazon, Walmart, and eBay have used these capabilities behind the scenes for years, ecommerce entrepreneurs can now also do the same. Algorithmic technology and AI can be incredibly helpful tools to grow sales and optimize various aspects of ecommerce operation, from pricing to demand planning.
Here are the three most crucial applications for this tech.
Today's online retail industry is rapidly changing and presenting new challenges to ecommerce startups. The markets have become increasingly competitive, to the point where the price of each individual product must change frequently in response to market dynamics. Therefore, even for an online merchant with a couple hundred SKUs, continuous adjustment of prices quickly becomes a challenge.
Repricing merchandise strategically is particularly crucial on Amazon, where sellers constantly compete to land the Amazon Buy Box, a coveted spot that essentially guarantees its winner vast sales. To select products for placement, Amazon uses sophisticated algorithms to assess merchants' performance metrics such as ratings, reviews, shipping, pricing, and quality of service. For these reasons, optimal pricing of merchandise on Amazon requires sellers to go much deeper than guesstimates.
AI solves this problem by repricing merchandise using complex learning algorithms that continuously assess the market dynamics and changes in competitive environment.
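As a toy illustration of algorithmic repricing (not Feedvisor's product or Amazon's Buy Box logic), the sketch below nudges a price toward the lowest observed competitor while respecting a margin floor; the rule and numbers are invented.

```python
# Toy repricing rule (illustrative only): undercut the lowest competitor slightly,
# but never drop below a configured floor price.
from dataclasses import dataclass

@dataclass
class Listing:
    cost: float          # unit cost
    floor_margin: float  # minimum acceptable margin, e.g. 0.15 = 15%
    current_price: float

def reprice(listing: Listing, competitor_prices: list[float]) -> float:
    floor = listing.cost * (1 + listing.floor_margin)
    if not competitor_prices:
        return listing.current_price            # no signal, keep price
    target = min(competitor_prices) - 0.01      # undercut by one cent
    return round(max(target, floor), 2)         # never go below the floor

item = Listing(cost=8.00, floor_margin=0.15, current_price=12.99)
print(reprice(item, [12.49, 13.10, 11.95]))  # -> 11.94
```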
Managing inventory availability across channels is one of the biggest worries for ecommerce businesses. Being out of stock is a nightmare scenario, as it takes days to replenish products and can heavily affect merchants' revenues. On the other hand, overstocking increases business risks and capital requirements.
The problem with forecasting inventory velocity in a rapidly changing market is that both demand and competition change quite frequently. In such markets, a hindsight perspective, traditionally implemented with the help of BI technology, is no longer sufficient. In order to reach operational efficiency, retailers must employ accurate demand forecasting and predictive analytics.
Artificial intelligence and learning algorithms can help with order velocity forecasting. They can identify key factors that affect the velocity of orders, and monitor the factors' impact to accurately model velocity and inventory requirements. The beauty of learning systems is that they get smarter over time, enabling merchants to accurately predict their inventory needs.
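Here is a minimal sketch of learned order-velocity forecasting, assuming lagged weekly sales as the only features and a gradient-boosted regressor as the model; it illustrates the idea rather than any vendor's forecasting system.

```python
# Illustrative order-velocity forecast: predict next week's units from recent weeks.
# Feature design and model choice are assumptions for the sketch, not a product.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

weekly_units = np.array([120, 135, 150, 160, 170, 185, 200, 210, 230, 245, 260, 280])

def make_lagged(series, n_lags=3):
    X, y = [], []
    for i in range(n_lags, len(series)):
        X.append(series[i - n_lags:i])  # the previous n_lags weeks
        y.append(series[i])             # the week to predict
    return np.array(X), np.array(y)

X, y = make_lagged(weekly_units)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

next_week = model.predict(weekly_units[-3:].reshape(1, -1))[0]
print(f"Forecast for next week: {next_week:.0f} units")
```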
The other key aspect of retailer operation is managing the assortment of products, that is, which products to keep selling, which products to add, and which products to discontinue. Like inventory planning, assortment planning requires a good amount of forecasting. Merchants need to monitor market trends and changes in demand to understand the competitiveness of products.
Although a person can analyze the past performance of products and categories, accurate forecasting requires a sophisticated algorithmic model. It must assess the relationships across products, influences of various events, and impact of competition and pricing.
Giants like Amazon and Walmart constantly monitor their product assortment and have a team of data scientists dedicated to this task. For the first time, these advanced capabilities are now available to ecommerce startups, thanks to advances in AI and algorithmic technology.
The beauty of online commerce is that it is completely digitalized. All the data from the operations, the market, and the competition can be consolidated and analyzed. It can be examined historically and now, with the help of AI technology, forecasted as well.
Now is the time for ecommerce businesses to get smarter and reach operational excellence. Logistics used to be the core competency of retail; today, algorithms constantly crunch data, predict market trends, and respond to market changes in real time. Such advancements are only possible because of AI.
Victor Rosenman is the founder and CEO of Feedvisor, an Algo-Commerce company that helps online retailers make business-critical decisions in real time.
Excerpt from:
Posted in Ai
Comments Off on 3 ways AI is already impacting ecommerce – VentureBeat
Where Artificial Intelligence Will Pay Off Most in Health Care – Fortune
Posted: at 7:17 pm
Of all the places where artificial intelligence is gaining a foothold, nowhere is the impact likely to be as great, at least in the near term, as in healthcare. A new report from Accenture Consulting, entitled "Artificial Intelligence: Healthcare's New Nervous System," projects the market for health-related AI to grow at a compound annual growth rate of 40% through 2021, to $6.6 billion, from around $600 million in 2014.
In that regard, the Accenture report, authored by senior managing director Matthew Collier and colleagues, echoes earlier assessments of the market. A comprehensive research briefing last September by CB Insights tech analyst Deepashri Varadharajan, for example, which tracked AI startups across industries from 2012 through the fall of 2016, showed healthcare dominating every other sector, from security and finance to sales & marketing. Varadharajan calculated there were 188 deals across various healthcare segments from Jan. 2012 to Sept. 2016, worth an aggregate $1.5 billion in global equity funding.
But the Accenture report suggests, and I think smartly, that the biggest returns on investment for healthcare AI are likely to come from areas where the density (and dollar value) of deals isn't that substantial right now. In terms of startup and deal volume, for instance, two hotshot areas have been medical imaging & diagnostics and drug discovery. Accenture's analysis, though, points to 10 other AI applications that may return more bang for the buck.
Top of the list of investments that will likely pay for themselves (and then some) is robot-assisted surgery, Accenture says. "Cognitive robotics can integrate information from pre-op medical records with real-time operating metrics to physically guide and enhance the physician's instrument precision," explain the report's authors. "The technology incorporates data from actual surgical experiences to inform new, improved techniques and insights." The consultants estimate that the use of such surgical technology, which includes machine learning and other forms of AI, will result not only in better outcomes but also in a 21 percent reduction in the length of patient hospital stays. They estimate such smart robotic surgery will return $40 billion in value, or potential annual benefits, by 2026.
The second valuable use of AI, they project, will come from virtual nursing assistant applications ($20 billion in value), which, in theory, will save money by letting medical providers remotely assess a patient's symptoms and lessen the number of unnecessary patient visits. Next in line are intelligent applications for administrative workflow (worth $18 billion), fraud detection ($17 billion), and, fascinatingly, dosage error reduction ($16 billion).
"As these, and other AI applications gain more experience in the field, their ability to learn and act will continually lead to improvements in precision, efficiency and outcomes," say the authors. It's a compelling argument.
This essay appears in today's edition of the Fortune Brainstorm Health Daily. Get it delivered straight to your inbox.
See the original post here:
Where Artificial Intelligence Will Pay Off Most in Health Care - Fortune
Posted in Artificial Intelligence
Comments Off on Where Artificial Intelligence Will Pay Off Most in Health Care – Fortune
Becoming One Of Tomorrow’s Unicorns In The World Of Artificial Intelligence – Forbes
Posted: at 7:17 pm
Everyone is buzzing about the impact of AI on work, and many leaders feel insecure about what it will mean in terms of their own career development and roles. Deep learning, machine learning, automation and robotics are creating a seismic shift across ...
Go here to read the rest:
Becoming One Of Tomorrow's Unicorns In The World Of Artificial Intelligence - Forbes
Posted in Artificial Intelligence
Comments Off on Becoming One Of Tomorrow’s Unicorns In The World Of Artificial Intelligence – Forbes
Artificial intelligence and the coming health revolution – Phys.Org
Posted: at 7:17 pm
June 19, 2017, by Rob Lever
[Image caption: Artificial intelligence can improve health care by analyzing data from apps, smartphones and wearable technology]
Your next doctor could very well be a bot. And bots, or automated programs, are likely to play a key role in finding cures for some of the most difficult-to-treat diseases and conditions.
Artificial intelligence is rapidly moving into health care, led by some of the biggest technology companies and emerging startups using it to diagnose and respond to a raft of conditions.
Consider these examples:
California researchers detected cardiac arrhythmia with 97 percent accuracy on wearers of an Apple Watch with the AI-based Cardiogram application, opening up early treatment options to avert strokes.
Scientists from Harvard and the University of Vermont developed a machine learning tool, a type of AI that enables computers to learn without being explicitly programmed, to better identify depression by studying Instagram posts, suggesting "new avenues for early screening and detection of mental illness."
Researchers from Britain's University of Nottingham created an algorithm that predicted heart attacks better than doctors using conventional guidelines.
While technology has always played a role in medical care, a wave of investment from Silicon Valley and a flood of data from connected devices appear to be spurring innovation.
"I think a tipping point was when Apple released its Research Kit," said Forrester Research analyst Kate McCarthy, referring to a program letting Apple users enable data from their daily activities to be used in medical studies.
McCarthy said advances in artificial intelligence have opened up new possibilities for "personalized medicine" adapted to individual genetics.
"We now have an environment where people can weave through clinical research at a speed you could never do before," she said.
Predictive analytics
AI is better known in the tech field for uses such as autonomous driving, or defeating experts in the board game Go.
But it can also be used to glean new insights from existing data such as electronic health records and lab tests, says Narges Razavian, a professor at New York University's Langone School of Medicine who led a research project on predictive analytics for more than 100 medical conditions.
"Our work is looking at trends and trying to predict (disease) six months into the future, to be able to act before things get worse," Razavian said.
NYU researchers analyzed medical and lab records to accurately predict the onset of dozens of diseases and conditions including type 2 diabetes, heart or kidney failure and stroke. The project developed software now used at NYU which may be deployed at other medical facilities.
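The following is a purely illustrative sketch of record-based risk prediction of the sort described above, not NYU's software: a logistic regression over a handful of made-up lab features, where the label marks whether a condition appeared within the following six months.

```python
# Illustrative risk model on synthetic "lab record" features; not NYU's system.
# Columns: [age, fasting_glucose, systolic_bp, bmi]; label: condition onset within 6 months.
import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.array([
    [45, 95, 120, 24],
    [62, 140, 150, 31],
    [38, 88, 115, 22],
    [70, 160, 160, 33],
    [55, 130, 145, 29],
    [30, 85, 110, 21],
])
y = np.array([0, 1, 0, 1, 1, 0])

model = LogisticRegression(max_iter=1000).fit(X, y)

new_patient = np.array([[58, 135, 148, 30]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated 6-month onset risk: {risk:.0%}")
```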
Google's DeepMind division is using artificial intelligence to help doctors analyze tissue samples to determine the likelihood that breast and other cancers will spread, and develop the best radiotherapy treatments.
Microsoft, Intel and other tech giants are also working with researchers to sort through data with AI to better understand and treat lung, breast and other types of cancer.
Google parent Alphabet's life sciences unit Verily has joined Apple in releasing a smartwatch for studies including one to identify patterns in the progression of Parkinson's disease. Amazon meanwhile offers medical advice through applications on its voice-activated artificial assistant Alexa.
IBM has been focusing on these issues with its Watson Health unit, which uses "cognitive computing" to help understand cancer and other diseases.
When IBM's Watson computing system won the TV game show Jeopardy in 2011, "there were a lot of folks in health care who said that is the same process doctors use when they try to understand health care," said Anil Jain, chief medical officer of Watson Health.
Systems like Watson, he said, "are able to connect all the disparate pieces of information" from medical journals and other sources "in a much more accelerated way."
"Cognitive computing may not find a cure on day one, but it can help understand people's behavior and habits" and their impact on disease, Jain said.
It's not just major tech companies moving into health.
Research firm CB Insights this year identified 106 digital health startups applying machine learning and predictive analytics "to reduce drug discovery times, provide virtual assistance to patients, and diagnose ailments by processing medical images."
Maryland-based startup Insilico Medicine uses so-called "deep learning" to shorten drug testing and approval times, down from the current 10 to 15 years.
"We can take 10,000 compounds and narrow that down to 10 to find the most promising ones," said Insilico's Qingsong Zhu.
Insilico is working on drugs for amyotrophic lateral sclerosis (ALS), cancer and age-related diseases, aiming to develop personalized treatments.
Finding depression
Artificial intelligence is also increasingly seen as a means for detecting depression and other mental illnesses, by spotting patterns that may not be obvious, even to professionals.
A research paper by Florida State University's Jessica Ribeiro found it can predict with 80 to 90 percent accuracy whether someone will attempt suicide as far off as two years into the future.
Facebook uses AI as part of a test project to prevent suicides by analyzing social network posts.
And San Francisco's Woebot Labs this month debuted on Facebook Messenger what it dubs the first chatbot offering "cognitive behavioral therapy" online, partly as a way to reach people wary of the social stigma of seeking mental health care.
New technologies are also offering hope for rare diseases.
Boston-based startup FDNA uses facial recognition technology matched against a database associated with over 8,000 rare diseases and genetic disorders, sharing data and insights with medical centers in 129 countries via its Face2Gene application.
Cautious optimism
Lynda Chin, vice chancellor and chief innovation officer at the University of Texas System, said she sees "a lot of excitement around these tools" but that technology alone is unlikely to translate into wide-scale health benefits.
One problem, Chin said, is that data from sources as disparate as medical records and Fitbits is difficult to access due to privacy and other regulations.
More important, she said, is integrating data in health care delivery where doctors may be unaware of what's available or how to use new tools.
"Just having the analytics and data get you to step one," said Chin. "It's not just about putting an app on the app store."
© 2017 AFP
The rest is here:
Artificial intelligence and the coming health revolution - Phys.Org
Posted in Artificial Intelligence
Comments Off on Artificial intelligence and the coming health revolution – Phys.Org
Putting (machine) learning and (artificial) intelligence to work – The Register
Posted: at 7:17 pm
MCubed Blue sky thinking is great, but if you're interested in what machine learning and AI means for your business right now, you should really join us at MCubed London in October.
If you're just beginning to examine what machine learning, AI and advanced analytics can do for your organisation - or your competitors - we'll be covering the technologies and techniques that every business needs to know.
But we'll also be going deep on practice, with speakers from companies like Ocado, OpenTable and ASOS as well as experts who've worked with real businesses to get projects up and running.
And of course, well be taking a close-up look at specific technologies and techniques, such as TensorFlow or Graph Analysis, in advanced conference sessions, and our optional day three workshops.
Throughout, our aim is to show you how you can apply tools and methodologies to allow your business or organisation to take advantage of ML, AI and advanced analytics to solve the problems you face today, as well as prepare you for tomorrow.
None of this happens in a vacuum of course, so we'll also be looking at the organisational, ethical and legal implications of rolling out these technologies. And yes, we will be taking a look at robotics and driverless cars and whacking great lasers.
It's a mind- and business-expanding lineup, and you'll be pleased to know this all takes place at 30 Euston Square in Central London between October 9 and 11.
As well as being easy to get to, this is simply a really pleasant environment in which to enjoy the presentations, and discuss them on the sidelines with your fellow attendees and the speakers. Of course, we'll ensure there's plenty of top-notch food and drink to fuel you through the formal and less formal parts of the programme.
Tickets will be limited, so if you want to ensure your place, head over to our website and snap up your early-bird ticket now.
Read the rest here:
Putting (machine) learning and (artificial) intelligence to work - The Register
Posted in Artificial Intelligence
Comments Off on Putting (machine) learning and (artificial) intelligence to work – The Register
For NVIDIA, Gaming Is the Story Now, but Artificial Intelligence Is the Future – Motley Fool
Posted: at 7:17 pm
NVIDIA (NASDAQ:NVDA) stock has returned a scorching 225% over the one-year period through June 15. Investors have been enthused by the chipmaker's strong financial performance across its four target market platforms: gaming, data center, professional visualization, and automotive.
Gaming currently accounts for the largest percentage of revenue for the graphics chip specialist, but artificial intelligence (AI) is the future for the company -- and that's a great thing for investors because the burgeoning AI market is widely predicted to be beyond humongous.
Image source: Getty Images.
Here's how NVIDIA's business broke out in its most recently reported quarter, Q1 of fiscal 2018.
Platform | Fiscal Q1 2018 Revenue | Percentage of Revenue
Gaming | $1.027 billion | 53%
Data center | $409 million | 21.1%
Professional visualization | $205 million | 10.6%
Auto | $140 million | 7.2%
OEM and IP* (not target platforms) | $156 million | 8.1%
Total | $1.937 billion | 100%
Data source: NVIDIA. YOY = year over year. *OEM and IP = original equipment manufacturers and intellectual property.
NVIDIA's gaming business has some seasonality, with the fourth quarter of each fiscal year getting a boost from the holidays. That means the gaming business is somewhat more important even than the 53% figure above suggests. In Q4 fiscal 2017 and the full fiscal year, gaming accounted for 62% and 58.8%, respectively, of the company's revenue.
(NVIDIA doesn't break out operating income or any other form of earnings by platform, so we don't know the relative profitability of these platforms.)
Here's how fast each of NVIDIA's platforms grew in fiscal Q1 2018.
Platform | Revenue Growth (YOY)
Gaming | 49%
Data center | 186%
Professional visualization | 8%
Auto | 24%
OEM and IP | (10%)
Data source: NVIDIA. YOY = year over year.
Data center revenue nearly tripled year over year last quarter, making the platform NVIDIA's most powerful growth engine. Since it now accounts for just 21% of NVIDIA's revenue, it might take a while for it to pass gaming, but it's on track to do so.
Here's how quickly the platform has grown as a percentage of NVIDIA's business:
Period | Data Center's Percentage of Total Revenue
Q1 Fiscal 2018 | 21.1%
Q1 Fiscal 2017 | 11%
Q1 Fiscal 2016 | 7.6%
Data source: NVIDIA.
In just two years, the data center segment has grown from just 7.6% of NVIDIA's total quarterly revenue to more than 21%. That phenomenal growth is being fueled by demand for NVIDIA's graphics processing unit-based deep-learning approach to artificial intelligence. On last quarter's earnings call, CFO Colette Kress said:
Driving growth was demand from cloud-service providers and enterprises building training clusters for web services, plus strong gains in high-performance computing, GRID graphics visualization and our DGX-1 AI supercomputer. ...
All of the world's major Internet and cloud service providers now use NVIDIA Tesla-based GPU [graphics processing unit] accelerators: AWS, Facebook, Google, IBM, and Microsoft, as well as Alibaba, Baidu, and Tencent.
Autonomous cars are emerging as a major growth driver for NVIDIA. Image source: Getty Images.
Revenue from the automotive platform jumped 24% year over year in Q1, accounting for 7.2% of NVIDIA's total. Auto revenue has traditionally come from sales of Tegra processors for automakers' infotainment systems. In the last year, this platform has begun to profit from the technological shift toward driverless cars, which is in the early stages and promises to be both massive and long. Fully autonomous vehicles are expected to be legal on public roads across the United States within a decade.
A year ago, NVIDIA began shipping its DRIVE PX 2 AI car platform, which is a supercomputer for processing and interpreting the scads of data taken in by cameras, lidar, radar, and other sensors about the surroundings of semi-autonomous and fully autonomous cars. More than 225 automakers, suppliers, and other entities have started developing autonomous driving systems using it. Moreover, the company recently announced that the world's No. 1 automaker, Toyota, will use the DRIVE PX 2 platform to power its autonomous driving systems on vehicles slated for market introduction.
To wrap up, as Kress put it on the Q1 earnings call: "AI has quickly emerged as the single most powerful force in technology. And at the center of AI are NVIDIA GPUs."
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Beth McKenna has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), Baidu, Facebook, and Nvidia. The Motley Fool has a disclosure policy.
Excerpt from:
For NVIDIA, Gaming Is the Story Now, but Artificial Intelligence Is the Future - Motley Fool
Posted in Artificial Intelligence
Comments Off on For NVIDIA, Gaming Is the Story Now, but Artificial Intelligence Is the Future – Motley Fool
Artificial intelligence and privacy engineering: Why it matters NOW – ZDNet
Posted: at 7:17 pm
As artificial intelligence proliferates, companies and governments are aggregating enormous data sets to feed their AI initiatives.
Although privacy is not a new concept in computing, the growth of aggregated data magnifies privacy challenges and leads to extreme ethical risks such as unintentionally building biased AI systems, among many others.
Privacy and artificial intelligence are both complex topics. There are no easy or simple answers because solutions lie at the shifting and conflicted intersection of technology, commercial profit, public policy, and even individual and cultural attitudes.
Given this complexity, I invited two brilliant people to share their thoughts in a CXOTALK conversation on privacy and AI. Watch the video embedded above to participate in the entire discussion, which was Episode 229 of CXOTALK.
Michelle Dennedy is the Chief Privacy Officer at Cisco. She is an attorney, author of the book The Privacy Engineer's Manifesto, and one of the world's most respected experts on privacy engineering.
David Bray is Chief Ventures Officer at the National Geospatial-Intelligence Agency. Previously, he was an Eisenhower Fellow and Chief Information Officer at the Federal Communications Commission. David is one of the foremost change agents in the US federal government.
Here are edited excerpts from the conversation. You can read the entire transcript at the CXOTALK site.
Michelle Dennedy: Privacy by Design is a policy concept that was hanging around for ten years in the networks and coming out of Ontario, Canada with a woman named Ann Cavoukian, who was Ontario's commissioner at the time.
But in 2010, we introduced the concept at the Data Commissioner's Conference in Jerusalem, and over 120 different countries agreed we should contemplate privacy in the build, in the design. That means not just the technical tools you buy and consume, [but] how you operationalize, how you run your business; how you organize around your business.
And, getting down to business on my side of the world, privacy engineering is using the techniques of the technical, the social, the procedural, the training tools that we have available, and in the most basic sense of engineering to say, "What are the routinized systems? What are the frameworks? What are the techniques that we use to mobilize privacy-enhancing technologies that exist today, and look across the processing lifecycle to build in and solve for privacy challenges?"
And I'll double-click on the word "privacy." Privacy, in the functional sense, is the authorized processing of personally-identifiable data using fair, moral, legal, and ethical standards. So, we bring down each one of those things and say, "What are the functionalized tools that we can use to promote that whole panoply and complicated movement of personally-identifiable information across networks with all of these other factors built in?" [It's] if I can change the fabric down here, and our teams can build this in and make it as routinized and invisible, then the rest of the world can work on the more nuanced layers that are also difficult and challenging.
David Bray: What Michelle said about building beyond and thinking about networks gets to where we're at today, now in 2017. It's not just about individual machines making correlations; it's about different data feeds streaming in from different networks where you might make a correlation that the individual has not given consent to with [...] personally identifiable information.
For AI, it is just sort of the next layer of that. We've gone from individual machines, networks, to now we have something that is looking for patterns at an unprecedented capability, that at the end of the day, it still goes back to what is coming from what the individual has given consent to? What is being handed off by those machines? What are those data streams?
One of the things I learned when I was in Australia as well as in Taiwan as an Eisenhower Fellow; it's a question about, "What can we do to separate this setting of our privacy permissions and what we want to be done with our data, from where the data is stored?" Because right now, we have this more simplistic model of, "We co-locate on the same platform," and then maybe you get an end-user agreement that's thirty or forty pages long, and you don't read it. Either accept, or you don't accept; if you don't accept, you won't get the service, and there's no opportunity to say, "I'm willing to have it used in this context, but not these contexts." And I think that means AI is going to raise questions about the context of when we need to start using these data streams.
Michelle Dennedy: We wrote a book a couple of years ago called "The Privacy Engineer's Manifesto," and in the manifesto, the techniques that we used are based on really foundational computer science.
Before we called it "computer science" we used to call it "statistics and math." But even thinking about geometric proof, nothing happens without context. And so, the thought that you have one tool that is appropriate for everything has simply never worked in engineering. You wouldn't build a bridge with just nails and not use hammers. You wouldn't think about putting something in the jungle that was built the same way as a structure that you would build in Arizona.
So, thinking about use-cases and contexts with human data, and creating human experiences, is everything. And it makes a lot of sense. If you think about how we're regulated primarily in the U.S., we'll leave the bankers off for a moment because they're different agencies, but the Federal Communications Commission, the Federal Trade Commission; so, we're thinking about commercial interests; we're thinking about communication. And communication is wildly imperfect why? Because it's humans doing all the communicating!
So, any time you talk about something that is as human and humane as processing information that impacts the lives and cultures and commerce of people, you're going to have to really over-rotate on context. That doesn't mean everyone gets a specialty thing, but it doesn't mean that everyone gets a car in any color that they want so long as it's black.
David Bray: And I want to amplify what Michelle is saying. When I arrived at the FCC in late 2013, we were paying for people to volunteer what their broadband speeds were in certain, select areas because we wanted to see that they were getting the broadband speed that they were promised. And that cost the government money, and it took a lot of work, and so we effectively wanted to roll up an app that could allow people to crowdsource and if they wanted to, see what their score was and share it voluntarily with the FCC. Recognizing that if I stood up and said, "Hi! I'm with the U.S. government! Would you like to have an app [...] for your broadband connection?" Maybe not that successful.
But using the principles that you said about privacy engineering and privacy design, one, we made the app open source so people could look at the code. Two, we made it so that, when we designed the code, it didn't capture your IP address, and it didn't know who you were in a five-mile-radius. So, it gave some fuzziness to your actual, specific location, but it was still good enough for informing whether or not broadband speed is as desired.
And once we did that; also, our terms and conditions were only two pages long; which, again, we dropped the gauntlet and said, "When was the last time you agreed to anything on the internet that was only two pages long?" Rolling that out, as a result, ended up being the fourth most-downloaded app behind Google Chrome because there were people that looked at the code and said, "Yea, verily, they have privacy by design."
And so, I think that this principle of privacy by design is making the recognition that one, it's not just encryption but then two, it's not just the legalese. Can you show something that gives people trust; that what you're doing with their data is explicitly what they have given consent to? That, to me, is what's needed for AI [which] is, can we do that same thing which shows you what's being done with your data, and gives you an opportunity to weigh in on whether you want it or not?
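To illustrate the sort of location fuzzing Bray describes above (reporting a useful but deliberately imprecise position and never logging the IP address), here is a minimal sketch; the grid size and field names are assumptions, not the FCC app's actual code.

```python
# Illustrative privacy-by-design measurement report: coarsen location, omit IP.
# Grid size (~5 miles) and field names are assumptions for this sketch.

def fuzz_coordinate(value: float, grid_deg: float = 0.07) -> float:
    """Snap a latitude/longitude to a coarse grid (~0.07 deg is roughly 5 miles)."""
    return round(round(value / grid_deg) * grid_deg, 4)

def build_report(lat: float, lon: float, mbps_down: float, mbps_up: float) -> dict:
    return {
        "lat": fuzz_coordinate(lat),
        "lon": fuzz_coordinate(lon),
        "mbps_down": mbps_down,
        "mbps_up": mbps_up,
        # deliberately no IP address, device ID, or account identifier
    }

print(build_report(38.8977, -77.0365, mbps_down=42.3, mbps_up=11.8))
```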
David Bray: So, I'll give the simple answer which is "Yes." And now I'll go beyond that.
So, shifting back to first what Michelle said, I think it is great to unpack that AI is many different things. It's not a monolithic thing, and it's worth deciding are we talking about simply machine learning at speed? Are we talking about neural networks? This matters because five years ago, ten years ago, fifteen years ago, the sheer amount of data that was available to you was nowhere near what it is right now, and let alone what it will be in five years.
If we're right now at about 20 billion networked devices on the face of the planet relative to 7.3 billion human beings, estimates are at between 75 and 300 billion devices in less than five years. And so, I think we're beginning to have these heightened concerns about ethics and the security of data. To Scott's question: because it's just simply we are instrumenting ourselves, we are instrumenting our cars, our bodies, our homes, and this raises huge amounts of questions about what the machines might make of this data stream. It's also just the sheer processing capability. I mean, the ability to do petaflops and now exaflops and beyond, I mean, that was just not present ten years ago.
So, with that said, the question of security. It's security, but also we may need a new word. I heard in Scandinavia, they talk about integrity and being integral. It's really about the integrity of that data: Have you given consent to having it used for a particular purpose? So, I think AI could play a role in making sense of whether data is processed securely.
Because the whole challenge is right now, for most of the processing we have to decrypt it at some point to start to make sense of it and re-encrypt it again. But also, is it being treated with integrity and integral to the individual? Has the individual given consent?
And so, one of the things raised when I was in conversations in Taiwan is the question, "Well, couldn't we simply have an open-source AI, where we give our permission and our consent to the AI to have our data be used for certain purposes?" For example, it might say, "Okay, well I understand you have a data set served with this platform, this other platform over here, and this platform over here. Are you willing to have that data be brought together to improve your housekeeping?" And you might say "no." He says, "Okay. But would you be willing to do it if your heart rate drops below a certain level and you're in a car accident?" And you might say "yes."
And so, the only way I think we could ever possibly do context is not going down a series of checklists and trying to check all possible scenarios. It is going to have to be a machine that can talk to us and have conversations about what we do and do not want to have done with our data.
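As a toy sketch of the context-based consent Bray describes (the housekeeping versus car-accident example above), permissions can be stored per purpose and checked before data streams are combined. The contexts, rule format, and API below are invented for illustration and are not tied to any real platform.

```python
# Hypothetical consent model: permissions are granted per purpose/context,
# not as a blanket accept-all agreement. Contexts and rules are invented.
from dataclasses import dataclass, field

@dataclass
class ConsentPolicy:
    # purpose -> allowed? e.g. {"housekeeping": False, "medical_emergency": True}
    grants: dict[str, bool] = field(default_factory=dict)

    def allows(self, purpose: str) -> bool:
        return self.grants.get(purpose, False)  # default deny

def combine_streams(streams: list[str], purpose: str, policy: ConsentPolicy) -> list[str]:
    if not policy.allows(purpose):
        raise PermissionError(f"No consent to combine data for: {purpose}")
    return streams  # placeholder for the actual correlation work

policy = ConsentPolicy(grants={"housekeeping": False, "medical_emergency": True})
print(combine_streams(["heart_rate", "car_telemetry"], "medical_emergency", policy))
# combine_streams([...], "housekeeping", policy) would raise PermissionError
```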
Michelle Dennedy: Madeleine Clare Elish wrote a paper called "Moral Crumple Zones," and I just love even the visual of it. If you think about cars and what we know about humans driving cars, they smash into each other in certain known ways. And the way that we've gotten better and lowered fatalities of known car crashes is using physics and geometry to design a cavity in various parts of the car where there's nothing there that's going to explode or catch fire, etc. as an impact crumple zone. So all the force and the energy goes away from the passenger and into the physical crumple zone of the car.
Madeleine is working on exactly what we're talking about. We don't know when it's unconscious or unintentional bias because it's unconscious or unintentional bias. But, we can design-in ethical crumple zones, where we're having things like testing for feeding, just like we do with sandboxing or we do with dummy data before we go live in other types of IT systems. We can decide to use AI technology and add in known issues for retraining that database.
I'll give you Watson as an example. Watson isn't a thing. Watson is a brand. The way that the Watson computer beat Jeopardy contestants is by learning Wikipedia. So, by processing mass quantities of stated data, you know, given whatever levels of authenticity that pattern on.
What Watson cannot do is selectively forget. So, your brain and your neural network are better at forgetting data and ignoring data than it is for processing data. We're trying to make our computer simulate a brain, except that brains are good at forgetting. AI is not good at that, yet. So, you can put the tax code, which would fill three ballrooms if you print it out on paper. You can feed it into an AI type of dataset, and you can train it in what are the known amounts of money someone should pay in a given context?
What you can't do, and what I think would be fascinating if we did do, is if we could wrangle the data of all the cheaters. What are the most common cheats? How do we cheat? And we know the ones that get caught, but more importantly, how do [...] get caught? That's the stuff where I think you need to design in a moral and ethical crumple zone and say, "How do people actively use systems?"
The concept of the ghost in the machine: how do machines that are well-trained with data over time experience degradation? Either they're not pulling from datasets because the equipment is simply ... You know, they're not reading tape drives anymore, or it's not being fed from fresh data, or we're not deleting old data. There are a lot of different techniques here that I think have yet to be deployed at scale that I think we need to consider before we're overly relying [on AI], without human checks and balances, and processed checks and balances.
David Bray: I think it's going to have to be a staged approach. As a starting point, you almost need to have the equivalent of a human ombudsman - a series of people looking at what the machine is doing relative to the data that was fed in.
And you can do this in multiple contexts. It could just be internal to the company, and it's just making sure that what the machine is being fed is not leading it to decisions that are atrocious or erroneous.
Or, if you want to gain public trust, share some of the data, and share some of the outcomes but abstract anything that's associated with any one individual and just say, "These types of people applied for loans. These types of loans were awarded," so can make sure that the machine is not hinging on some bias that we don't know about.
Longer-term, though, you've got to write that ombudsman. We need to be able to engineer an AI to serve as an ombudsman for the AI itself.
So really, what I'd see is not just AI as just one, monolithic system, it may be one that's making the decisions, and then another that's serving as the Jiminy Cricket that says, "This doesn't make sense. These people are cheating," and it's pointing out those flaws in the system as well. So, we need the equivalent of a Jiminy Cricket for AI.
CXOTALK brings you the world's most innovative business leaders, authors, and analysts for in-depth discussion unavailable anywhere else. Enjoy all our episodes and download the podcast from iTunes and Spreaker.
Read the original post:
Artificial intelligence and privacy engineering: Why it matters NOW - ZDNet
Posted in Artificial Intelligence
Comments Off on Artificial intelligence and privacy engineering: Why it matters NOW – ZDNet