The Prometheus League
Monthly Archives: May 2020
How Machine Learning Is Redefining The Healthcare Industry – Small Business Trends
Posted: May 4, 2020 at 11:09 pm
The global healthcare industry is booming. As per recent research, it is expected to cross the $2 trillion mark this year, despite the sluggish economic outlook and global trade tensions. Human beings, in general, are living longer and healthier lives.
There is increased awareness about living organ donation. Robots are being used for gallbladder removals, hip replacements, and kidney transplants. Early diagnosis of skin cancers with minimum human error is a reality. Breast reconstructive surgeries have enabled breast cancer survivors to partake in rebuilding their glands.
All these jobs were unthinkable sixty years ago. Now is an exciting time for the global health care sector as it progresses along its journey for the future.
However, as the worldwide population of 7.7 billion is likely to reach 8.5 billion by 2030, meeting health needs could be a challenge. That is where significant advancements in machine learning (ML) can help identify infection risks, improve the accuracy of diagnostics, and design personalized treatment plans.
Source: Deloitte Insights 2020 Global Health Care Outlook
In many cases, this technology can even enhance workflow efficiency in hospitals. The possibilities are endless and exciting, which brings us to an essential segment of the article:
Do you understand the concept of the LACE index?
Designed in Ontario in 2004, it identifies patients who are at risk of readmission or death within 30 days of being discharged from the hospital. The calculation is based on four factors: length of stay in the hospital, acuity of admission, comorbidities (concurrent diseases), and emergency room visits.
The LACE index is widely accepted as a quality-of-care barometer and is a well-known application of machine learning. Using patients' past health records, it helps predict their future state of health, enabling medical professionals to allocate resources in time to reduce mortality.
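To make the mechanics concrete, here is a minimal sketch of a LACE score calculator in Python. The point thresholds follow the commonly published LACE scoring rules rather than anything specified in the article, so treat the cut-offs as illustrative.

```python
# A minimal LACE score sketch; thresholds follow the commonly published
# scoring rules and are illustrative, not taken from the article above.

def lace_score(length_of_stay_days, acute_admission, charlson_index, ed_visits_6mo):
    # L: points for length of stay in hospital
    if length_of_stay_days < 1:
        l_points = 0
    elif length_of_stay_days <= 3:
        l_points = length_of_stay_days
    elif length_of_stay_days <= 6:
        l_points = 4
    elif length_of_stay_days <= 13:
        l_points = 5
    else:
        l_points = 7

    # A: acuity of admission (admitted through the emergency department)
    a_points = 3 if acute_admission else 0

    # C: comorbidities, summarised by a Charlson comorbidity index
    c_points = charlson_index if charlson_index < 4 else 5

    # E: emergency room visits in the previous six months (capped at 4)
    e_points = min(ed_visits_6mo, 4)

    return l_points + a_points + c_points + e_points


# Example: a 5-day acute admission, Charlson index 2, one prior ED visit.
print(lace_score(5, True, 2, 1))  # -> 10, conventionally flagged as high risk
```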
This technological advancement has started to lay the foundation for closer collaboration among industry stakeholders, affordable and less invasive surgery options, holistic therapies, and new care delivery models. Here are five examples of current and emerging ML innovations:
From the initial screening of drug compounds to calculating the success rate of a specific medicine based on patients' physiological factors, the Knight Cancer Institute in Oregon and Microsoft's Project Hanover are currently applying this technology to personalize drug combinations for treating blood cancer.
Machine learning has also given birth to new methodologies such as precision medicine and next-generation sequencing that can ensure a drug has the right effect on patients. For example, today, medical professionals can develop algorithms to understand disease processes and design innovative treatments for ailments like Type 2 diabetes.
Signing up volunteers for clinical trials is not easy. Many filters have to be applied to see who is fit for the study. With machine learning, collecting patient data such as past medical records, psychological behavior, family health history, and more is easy.
In addition, the technology is also used to monitor the biological metrics of the volunteers and the possible harm of the clinical trials in the long run. With such compelling data in hand, medical professionals can reduce the trial period, thereby reducing overall costs and increasing experiment effectiveness.
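As a sketch of the screening step, the pandas filter below applies a handful of eligibility criteria to a table of patient records. The column names and criteria are hypothetical examples, not drawn from any particular trial.

```python
# A minimal sketch of screening trial candidates with pandas.
# Column names and eligibility criteria are hypothetical examples.
import pandas as pd

patients = pd.DataFrame({
    "patient_id": [101, 102, 103, 104],
    "age":        [54,  67,  43,  71],
    "hba1c":      [7.9, 6.1, 8.4, 9.2],      # long-term blood glucose marker
    "on_insulin": [False, False, True, False],
    "prior_cardiac_event": [False, True, False, False],
})

eligible = patients[
    (patients["age"].between(40, 70)) &
    (patients["hba1c"] >= 7.0) &          # poorly controlled Type 2 diabetes
    (~patients["on_insulin"]) &           # exclude insulin-dependent patients
    (~patients["prior_cardiac_event"])    # exclude prior cardiac events
]

print(eligible["patient_id"].tolist())   # -> [101]
```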
Every human body functions differently. Reactions to a food item, medicine, or season differ. That is why we have allergies. When such is the case, why is customizing treatment options based on the patient's medical data still such an odd thought?
Machine learning helps medical professionals determine the risk for each patient based on their symptoms, past medical records, and family history, using micro-bio sensors. These minute gadgets monitor patient health and flag abnormalities without bias, enabling more sophisticated ways of measuring health.
Cisco reports that machine-to-machine connections in global healthcare are growing at a 30% CAGR, the highest rate of any industry.
Machine learning is mainly used to mine and analyze patient data to find patterns and support the diagnosis of many medical conditions, one of them being skin cancer.
Over 5.4 million people in the US are diagnosed with this disease annually. Unfortunately, diagnosis is a visual and time-consuming process. It relies on long clinical screenings comprising a biopsy, dermoscopy, and histopathological examination.
But machine learning changes all that. Moleanalyzer, an Australia-based AI software application, calculates and compares the size, diameter, and structure of the moles. It enables the user to take pictures at predefined intervals to help differentiate between benign and malignant lesions on the skin.
The analysis lets oncologists confirm their skin cancer diagnosis using evaluation techniques combined with ML, and they can start treatment faster than usual. Where experts correctly identified malignant skin tumors only 86.6% of the time, Moleanalyzer successfully detected 95%.
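For readers curious what sits underneath tools of this kind, here is a generic transfer-learning sketch for a benign-versus-malignant image classifier. It is not Moleanalyzer's implementation, which is proprietary; the directory paths, image size and model choice are placeholder assumptions.

```python
# A generic transfer-learning sketch for lesion images; NOT Moleanalyzer's
# actual method. Paths and hyperparameters are illustrative assumptions.
import tensorflow as tf

IMG_SIZE = (224, 224)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "lesions/train", image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "lesions/val", image_size=IMG_SIZE, batch_size=32, label_mode="binary")

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # keep the pretrained features frozen at first

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # map pixels to [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),      # P(malignant)
])

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])
model.fit(train_ds, validation_data=val_ds, epochs=5)
```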
Healthcare providers ideally have to submit reports to the government with the necessary records of patients treated at their hospitals.
Compliance policies are continually evolving, which is why it is even more critical to check that hospital sites are compliant and functioning within legal boundaries. With machine learning, it is easy to collect data from different sources, using different methods, and format it correctly.
"For data managers, comparing patient data from various clinics to ensure they are compliant could be an overwhelming process. Machine learning helps gather, compare, and maintain that data as per the standards laid down by the government," informs Dr. Nick Oberheiden, Founder and Attorney, Oberheiden P.C.
The healthcare industry is steadily transforming through innovative technologies like AI and ML. The latter will soon get integrated into practice as a diagnostic aid, particularly in primary care. It plays a crucial role in shaping a predictive, personalized, and preventive future, making treating people a breeze. What are your thoughts?
Machine Learning Engineer: Challenges and Changes Facing the Profession – Dice Insights
Posted: at 11:09 pm
Last year, the fastest-growing job title in the world was that of the machine learning (ML) engineer, and this looks set to continue for the foreseeable future. According to Indeed, the average base salary of an ML engineer in the US is $146,085, and the number of machine learning engineer openings grew by 344% between 2015 and 2018. Machine learning engineers dominate the job postings around artificial intelligence (A.I.), with 94% of job advertisements that contain AI or ML terminology targeting machine learning engineers specifically.
This demonstrates that organizations understand how profound an effect machine learning promises to have on businesses and society. AI and ML are predicted to drive a Fourth Industrial Revolution that will see vast improvements in global productivity and open up new avenues for innovation; by 2030, it's predicted that the global economy will be $15.7 trillion richer solely because of developments from these technologies.
The scale of demand for machine learning engineers is also unsurprising given how complex the role is. The goal of machine learning engineers is to deploy and manage machine learning models (which process and learn from the patterns and structures in vast quantities of data) in applications running in production, to unlock real business value while ensuring compliance with corporate governance standards.
To do this, machine learning engineers have to sit at the intersection of three complex disciplines. The first discipline is data science, which is where the theoretical models that inform machine learning are created; the second discipline is DevOps, which focuses on the infrastructure and processes for scaling the operationalization of applications; and the third is software engineering, which is needed to make scalable and reliable code to run machine learning programs.
It's the fact that machine learning engineers have to be at ease in the language of data science, software engineering, and DevOps that makes them so scarce, and their value to organizations so great. A machine learning engineer has to have a deep skill-set; they must know multiple programming languages, have a very strong grasp of mathematics, and be able to understand and apply theoretical topics in computer science and statistics. They have to be comfortable with taking state-of-the-art models, which may only work in a specialized environment, and converting them into robust and scalable systems that are fit for a business environment.
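To ground the "into production" half of that description, here is a minimal sketch of wrapping a trained model in an HTTP prediction service. Flask, joblib, the model file name and the feature list are all assumptions for illustration, not a prescribed stack.

```python
# A minimal sketch of serving a trained model over HTTP.
# "churn_model.joblib" and the feature names are placeholder assumptions.
from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)
model = joblib.load("churn_model.joblib")  # e.g. a fitted scikit-learn pipeline

FEATURES = ["tenure_months", "monthly_spend", "support_tickets"]

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    try:
        row = [[float(payload[f]) for f in FEATURES]]
    except (KeyError, ValueError) as exc:
        return jsonify({"error": f"bad input: {exc}"}), 400
    score = float(model.predict_proba(row)[0][1])
    return jsonify({"churn_probability": score})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

A real deployment would add schema validation, logging, model versioning and monitoring, which is exactly where the overlap with software engineering and DevOps comes from.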
As a burgeoning occupation, the role of a machine learning engineer is constantly evolving. The tools and capabilities that these engineers have in 2020 are radically different from those they had available in 2015, and this is set to continue to evolve as the specialism matures. One of the best ways to understand what the role of a machine learning engineer means to an organization is to look at the challenges they face in practice, and how those challenges evolve over time.
Four major challenges that every machine learning engineer has to deal with are data provenance, good data, reproducibility, and model monitoring.
Across a model's development and deployment lifecycle, there's interaction between a variety of systems and teams. This results in a highly complex chain of data from a variety of sources. At the same time, there is a greater demand than ever for data to be audited, and for there to be a clear lineage of its organizational uses. This is increasingly a priority for regulators, with financial regulators now demanding that all machine learning data be stored for seven years for auditing purposes.
This not only makes the data and metadata used in models more complex, but it also makes the interactions between the constituent pieces of data far more complex. This means machine learning engineers need to put the right infrastructure in place to ensure the right data and metadata are accessible, all while making sure they are properly organized.
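A minimal sketch of what that lineage infrastructure can look like at its simplest: hash every input dataset and append a record of which run consumed it to an audit log. Real systems use dedicated metadata stores or feature platforms; the file names below are placeholders.

```python
# A minimal data-lineage sketch: hash each input dataset and log which
# model run consumed it. File names are placeholder assumptions.
import hashlib, json, datetime, pathlib

def sha256_of(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def log_lineage(run_id, inputs, log_file="lineage_log.jsonl"):
    record = {
        "run_id": run_id,
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "inputs": [
            {"path": str(p), "sha256": sha256_of(p),
             "bytes": pathlib.Path(p).stat().st_size}
            for p in inputs
        ],
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")

log_lineage("fraud-model-2020-05-04", ["transactions.csv", "customers.csv"])
```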
In 2016, it was estimated that the US alone lost $3.1 trillion to bad data: data that's improperly formatted, duplicated, or incomplete. People and businesses across all sectors lose time and money because of this, but in a job that requires building and running accurate models reliant on input data, these issues can seriously jeopardize projects.
IBM estimates that around 80 percent of a data scientist's time is spent finding, cleaning up, and organizing the data they put into their models. Over time, however, increasingly sophisticated error and anomaly detection programs will likely be used to comb through datasets and screen out information that is incomplete or inaccurate.
This means that, as time goes on and machine learning capabilities continue to develop, we'll see machine learning engineers gain more tools to clean up the information their programs use, and thus be able to spend more of their time building the ML programs themselves.
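As a sketch of that kind of automated screening, the snippet below flags duplicates, missing values and out-of-range readings before data reaches a model. The column names, input file and valid range are illustrative assumptions.

```python
# A minimal data-quality screen: flag duplicates, missing values and
# out-of-range readings. Column names and ranges are assumptions.
import pandas as pd

def screen(df):
    report = {
        "duplicate_rows": int(df.duplicated().sum()),
        "rows_with_missing_values": int(df.isna().any(axis=1).sum()),
        # simple range check: negative or absurdly large transaction amounts
        "out_of_range_amount": int((~df["amount"].between(0, 1_000_000)).sum()),
    }
    clean = (df.drop_duplicates()
               .dropna()
               .loc[lambda d: d["amount"].between(0, 1_000_000)])
    return clean, report

raw = pd.read_csv("transactions.csv")   # placeholder input file
clean, report = screen(raw)
print(report)
```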
Reproducibility is often defined as the ability to keep a snapshot of the state of a specific machine learning model and to reproduce the same experiment with exactly the same results, regardless of time and location. This involves a great deal of complexity, given that machine learning requires reproducibility of three components: 1) code, 2) artifacts, and 3) data. If any one of these changes, the result will change.
To add to this complexity, it's also necessary to keep reproducibility of entire pipelines that may consist of two or more of these atomic steps, which introduces an exponential level of complexity. For machine learning, reproducibility is important because it lets engineers and data scientists know that the results of a model can be relied upon when it is deployed live, as they will be the same whether the model is run today or in two years.
Designing infrastructure for machine learning that is reproducible is a huge challenge. It will continue to be a thorn in the side of machine learning engineers for many years to come. One thing that may make this easier in coming years is the rise of universally accepted frameworks for machine learning test environments, which will provide a consistent barometer for engineers to measure their efforts against.
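A minimal sketch of the idea, assuming a simple single-file setup: pin the random seeds and record a manifest of the three components named above (code, data and artifacts) so a run can be checked and repeated later. Production teams typically reach for tools like MLflow or DVC rather than hand-rolled manifests.

```python
# A minimal reproducibility sketch: pin seeds and record a manifest of
# code, data and artifacts. Paths are placeholder assumptions.
import hashlib, json, platform, random
import numpy as np

def file_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def snapshot(seed, code_path, data_path, artifact_path, out="run_manifest.json"):
    # pin randomness so the experiment can be re-run with identical results
    random.seed(seed)
    np.random.seed(seed)

    manifest = {
        "seed": seed,
        "python": platform.python_version(),
        "numpy": np.__version__,
        "code_sha256": file_hash(code_path),
        "data_sha256": file_hash(data_path),
        "artifact_sha256": file_hash(artifact_path),
    }
    with open(out, "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest

print(snapshot(42, "train.py", "train.csv", "model.joblib"))
```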
It's easy to forget that the lifecycle of a machine learning model only begins when it's deployed to production. Consequently, a machine learning engineer not only needs to do the work of coding, testing, and deploying a model; they'll also have to develop the right tools to monitor it.
The production environment of a model can often throw up scenarios the machine learning engineer didn't anticipate when they were creating it. Without monitoring and intervention after deployment, it's likely that a model can end up being rendered dysfunctional or produce skewed results by unexpected data. Without accurate monitoring, results can often slowly drift away from what is expected due to input data becoming misaligned with the data a model was trained with, producing less and less effective or logical results.
Adversarial attacks on models, often far more sophisticated than tweets aimed at a chatbot, are of increasing concern, and it is clear that monitoring by machine learning engineers is needed to stop a model being rendered counterproductive by unexpected data. As more machine learning models are deployed, and as more economic output becomes dependent upon these models, this challenge is only going to grow in prominence for machine learning engineers going forward.
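A minimal sketch of drift monitoring, under the assumption that you keep a sample of the training distribution around: compare a live feature against it with a two-sample Kolmogorov-Smirnov test and raise an alert when they diverge. The threshold and sample sizes are illustrative choices, not universal constants.

```python
# A minimal drift-monitoring sketch using a two-sample KS test.
# Threshold and sample sizes are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(train_values, live_values, p_threshold=0.01):
    stat, p_value = ks_2samp(train_values, live_values)
    return {"ks_statistic": float(stat),
            "p_value": float(p_value),
            "drift_suspected": p_value < p_threshold}

rng = np.random.default_rng(0)
train = rng.normal(loc=50.0, scale=10.0, size=5_000)    # training feature sample
live = rng.normal(loc=58.0, scale=10.0, size=1_000)     # shifted in production

print(drift_alert(train, live))   # drift_suspected: True
```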
One of the most exciting things about the role of the machine learning engineer is that it's a job that's still being defined, and one that still faces so many open problems. That means machine learning engineers get the thrill of working in a constantly changing field that deals with cutting-edge problems.
Challenges such as data quality may be problems we can make major progress on in the coming years. Other challenges, such as monitoring, look set to become more pressing in the more immediate future. Given the constant flux of machine learning engineering as an occupation, it's little wonder that curiosity and an innovative mindset are essential qualities for this relatively new profession.
Alex Housley is CEO of Seldon.
Tackling climate change with machine learning: Covid-19 and the energy transition – pv magazine International
Posted: at 11:09 pm
The effect the coronavirus pandemic is having on energy systems and environmental policy in Europe was discussed at a recent machine learning and climate change workshop, along with the help artificial intelligence can offer to those planning electricity access in Africa.
The impact of Covid-19 on the energy system was discussed in an online climate change workshop that also considered how machine learning can help electricity planning in Africa.
This year's International Conference on Learning Representations event included a workshop held by the Climate Change AI group of academics and artificial intelligence industry representatives, which considered how machine learning can help tackle climate change.
Bjarne Steffen, senior researcher at the energy politics group at ETH Zürich, shared his insights at the workshop on how Covid-19 and the accompanying economic crisis are affecting recently introduced green policies. "The crisis hit at a time when energy policies were experiencing increasing momentum towards climate action, especially in Europe," said Steffen, who added that the coronavirus pandemic has cast into doubt the implementation of such progressive policies.
The academic said there was a risk of overreacting to the public health crisis, as far as progress towards climate change goals was concerned.
Lobbying
"Many interest groups from carbon-intensive industries are pushing to remove the emissions trading system and other green policies," said Steffen. "In cases where those policies are having a serious impact on carbon-emitting industries, governments should offer temporary waivers during this temporary crisis, instead of overhauling the regulatory structure."
However, the ETH Zürich researcher said any temptation to impose environmental conditions on bail-outs for carbon-intensive industries should be resisted. "While it is tempting to push a green agenda in the relief packages, tying short-term environmental conditions to bail-outs is impractical, given the uncertainty in how long this crisis will last," he said. "It is better to include provisions that will give more control over future decisions to decarbonize industries, such as the government taking equity shares in companies."
Steffen shared with pv magazine readers an article published in Joule which articulates his arguments about how Covid-19 could affect the energy transition.
Covid-19 in the U.K.
The electricity system in the U.K. is also being affected by Covid-19, according to Jack Kelly, founder of London-based, not-for-profit, greenhouse gas emission reduction research laboratory Open Climate Fix.
"The crisis has reduced overall electricity use in the U.K.," said Kelly. "Residential use has increased but this has not offset reductions in commercial and industrial loads."
Steve Wallace, a power system manager at British electricity system operator National Grid ESO, recently told U.K. broadcaster the BBC that electricity demand has fallen 15-20% across the U.K. The National Grid ESO blog has stated the fall-off makes managing grid functions such as voltage regulation more challenging.
Open Climate Fix's Kelly noted that even events such as a nationally coordinated round of applause for key workers were followed by a dramatic surge in demand, stating: "On April 16, the National Grid saw a nearly 1 GW spike in electricity demand over 10 minutes, after everyone finished clapping for healthcare workers and went about the rest of their evenings."
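Spotting that kind of short-lived anomaly in a demand series is a standard time-series task. The sketch below uses synthetic five-minute demand data and flags readings that jump far above a rolling baseline; the numbers and threshold are illustrative, not National Grid's actual monitoring.

```python
# A minimal demand-spike detector on synthetic five-minute data.
# Magnitudes and the 800 MW threshold are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
idx = pd.date_range("2020-04-16 19:00", periods=288, freq="5min")
demand = pd.Series(30_000 + rng.normal(0, 200, len(idx)), index=idx)  # MW
demand.iloc[130:133] += 1_000   # inject a roughly 1 GW spike

baseline = demand.rolling("2h", min_periods=12).median()
spikes = demand[(demand - baseline) > 800]   # flag jumps of ~0.8 GW or more

print(spikes.round(0))          # the three injected readings are flagged
```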
Climate Change AI workshop panelists also discussed the impact machine learning could have on improving electricity planning in Africa. The Electricity Growth and Use in Developing Economies (e-Guide) initiative, funded by fossil fuel philanthropic organization the Rockefeller Foundation, aims to use data to improve the planning and operation of electricity systems in developing countries.
E-Guide members Nathan Williams, an assistant professor at the Rochester Institute of Technology (RIT) in New York state, and Simone Fobi, a PhD student at Columbia University in NYC, spoke about their work at the Climate Change AI workshop, which closed on Thursday. Williams emphasized the importance of demand prediction, saying: "Uncertainty around current and future electricity consumption leads to inefficient planning. The weak link for energy planning tools is the poor quality of demand data."
Fobi said: "We are trying to use machine learning to make use of lower-quality data and still be able to make strong predictions."
The market maturity of individual solar home systems and PV mini-grids in Africa means more complex electrification plan modeling is required.
Modeling
"When we are doing [electricity] access planning, we are trying to figure out where the demand will be and how much demand will exist so we can propose the right technology," added Fobi. "This makes demand estimation crucial to efficient planning."
Unlike many traditional modeling approaches, machine learning is scalable and transferable. Rochester's Williams has been using data from nations such as Kenya, which are more advanced in their electrification efforts, to train machine learning models to make predictions to guide electrification efforts in countries which are not as far down the track.
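A minimal sketch of the transfer idea described above: fit a consumption model on a data-rich market and apply it where metered data is scarce. The features, data files and model choice are illustrative assumptions, not the e-Guide team's actual pipeline.

```python
# A minimal cross-country demand-prediction sketch; features, files and
# the model choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

FEATURES = ["households", "nightlight_radiance", "distance_to_grid_km",
            "mean_income_usd"]

# Data-rich market (e.g. well-metered connections) used for training.
kenya = pd.read_csv("kenya_sites.csv")
model = GradientBoostingRegressor(random_state=0)
model.fit(kenya[FEATURES], kenya["monthly_kwh"])

# Apply the trained model to candidate sites in a country with little
# consumption data, to guide where and what to build.
target = pd.read_csv("target_country_sites.csv")
target["predicted_monthly_kwh"] = model.predict(target[FEATURES])
print(target[["site_id", "predicted_monthly_kwh"]].head())
```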
Williams also discussed work being undertaken by e-Guide members at the Colorado School of Mines, which uses nighttime satellite imagery and machine learning to assess the reliability of grid infrastructure in India.
Rural power
Another e-Guide project, led by Jay Taneja at the University of Massachusetts, Amherst, and co-funded by the Energy and Economic Growth program on development spending based at Berkeley, uses satellite imagery to identify productive uses of electricity in rural areas by detecting pollution signals from diesel irrigation pumps.
Though good quality data is often not readily available for Africa, Williams added, it does exist.
"We have spent years developing trusting relationships with utilities," said the RIT academic. "Once our partners realize the value proposition we can offer, they are enthusiastic about sharing their data ... We can't do machine learning without high-quality data and this requires that organizations can effectively collect, organize, store and work with data. Data can transform the electricity sector but capacity building is crucial."
By Dustin Zubke
SoftSmile Reveals New Aligner Software Solution Using Biomechanics and Advanced Machine Learning Algorithms at Annual Meeting of American Association…
Posted: at 11:09 pm
"We have long been perfecting our technology to provide orthodontists with the tools they need to fully utilize their skills, lower the cost of the treatment for the patients and usher in the next revolution in orthodontics," said Khamzat Asabaev, Co-Founder and CEO. "We are confident that the proprietary technology and algorithms integrated in our software will help to transform orthodontists' practices and such problems as uncertainty in the result, high cost and lack of accessible treatment will vanish. We believe that SoftSmile is the solution orthodontists and patients have been waiting for."
The SoftSmile aligner software helps to create a 3-D model of the orthodontic treatment plan, including the revolutionary visualization of teeth roots and movement of the lower jaw during the treatment. It models optimized teeth movement and suggests, along with the knowledge and skill from the orthodontist, the exact number of aligners required for achieving the optimized result. The software is enhanced by AI, which generates key aspects of a treatment plan. Machine learning algorithms automate the time-consuming processes and facilitate the final setup of aligned teeth based on orthodontic measurements, allowing for the industry's most precise software for modelling a digitally optimized treatment plan.
Following the creation of the treatment plan, SoftSmile's software automates 3-D printing for orthodontists, making the entire manufacturing process smooth, intuitive and straightforward. SoftSmile is currently working with the pioneering 3D printing company Carbon to integrate software and manufacturing so that distributed dental printing will become accessible to a wider group of orthodontists.
"I was very impressed when I first started using the SoftSmile technology in my practice that specializes in the application of lingual braces," said Roberto Stradi, DDS MSc in Caserta, Italy. "The learning curve for this technology was fairly minimal and the software was completely customizable, allowing me to remain in control and produce a successful result for my patients."
The SoftSmile team started developing their aligner software technology in mid-2019. Their digital platform is based on years of work and research by SoftSmile's co-founders, experienced orthodontists and engineers, including the pioneers who first printed lingual braces with a 3-D printer and who devoted years to building the best solutions for orthodontists. Over 100 of the most acclaimed orthodontists from around the world have tested the technology developed by the SoftSmile team, treating over 1,000 people outside of the U.S. In the United States, SoftSmile is backed by the Columbia Start Up Lab and cooperates with Columbia University in general.
To date, the SoftSmile team has been granted 12 U.S. patents for their innovative technologies (over 10 additional U.S. patents are pending). Within the upcoming months SoftSmile will be submitting to the U.S. Food and Drug Administration a Section 510(k) premarket notification of intent to market its software. Ahead of the notification, orthodontists can request to use the SoftSmile software in their practice on a trial basis.
About SoftSmile, Inc.
SoftSmile is a New York-based innovation company that helps orthodontists to deliver custom, high-quality and affordable treatment to their patients. Established in 2019, SoftSmile designs and develops an advanced, AI-driven orthodontic software package that applies cutting-edge machine learning technology with sound biomechanical and mathematical principles in a user-friendly interface, giving orthodontists the ability to create aligners or lingual braces treatment plans in-house. Precise and high-quality aligners and braces are created using robust 3-D printing technologies that can be produced on-site at an orthodontist's office or provided by SoftSmile. These products give orthodontists unparalleled control and precision of the treatment they deliver to their patients.
For additional information about SoftSmile, Inc. please visit https://www.softsmile.com/.
SOURCE SoftSmile
Microsoft: This is how to protect your machine-learning applications – TechRepublic
Posted: at 11:09 pm
Understanding failures and attacks can help us build safer AI applications.
Modern machine learning (ML) has become an important tool in a very short time. We're using ML models across our organisations, either rolling our own in R and Python, using tools like TensorFlow to learn and explore our data, or building on cloud- and container-hosted services like Azure's Cognitive Services. It's a technology that helps predict maintenance schedules, spots fraud and damaged parts, and parses our speech, responding in a flexible way.
The models that drive our ML applications are incredibly complex, training neural networks on large data sets. But there's a big problem: they're hard to explain or understand. Why does a model parse a red blob with white text as a stop sign and not a soft drink advert? It's that complexity which hides the underlying risks that are baked into our models, and the possible attacks that can severely disrupt the business processes and services we're building using those very models.
It's easy to imagine an attack on a self-driving car that could make it ignore stop signs, simply by changing a few details on the sign, or a facial recognition system that would detect a pixelated bandanna as Brad Pitt. These adversarial attacks take advantage of the ML models, guiding them to respond in a way that's not how they're intended to operate, distorting the input data by changing the physical inputs.
Microsoft is thinking a lot about how to protect machine learning systems. They're key to its future -- from tools being built into Office, to its Azure cloud-scale services, and managing its own and your networks, even delivering security services through ML-powered tools like Azure Sentinel. With so much investment riding on its machine-learning services, it's no wonder that many of Microsoft's presentations at the RSA security conference focused on understanding the security issues with ML and on how to protect machine-learning systems.
Attacks on machine-learning systems need access to the models used, so you need to keep your models private. That goes for small models that might be helping run your production lines as much as the massive models that drive the likes of Google, Bing and Facebook. If I get access to your model, I can work out how to affect it, either looking for the right data to feed it that will poison the results, or finding a way past the model to get the results I want.
Much of this work has been published in a paper in conjunction with the Berkman Klein Center, on failure modes in machine learning. As the paper points out, a lot of work has been done in finding ways to attack machine learning, but not much on how to defend it. We need to build a credible set of defences around machine learning's neural networks, in much the same way as we protect our physical and virtual network infrastructures.
Attacks on ML systems are failures of the underlying models. They are responding in unexpected, and possibly detrimental ways. We need to understand what the failure modes of machine-learning systems are, and then understand how we can respond to those failures. The paper talks about two failure modes: intentional failures, where an attacker deliberately subverts a system, and unintentional failures, where there's an unsafe element in the ML model being used that appears correct but delivers bad outcomes.
By understanding the failure modes we can build threat models and apply them to our ML-based applications and services, and then respond to those threats and defend our new applications.
The paper suggests 11 different attack classifications, many of which get around our standard defence models. It's possible to compromise a machine-learning system without needing access to the underlying software and hardware, so standard authorisation techniques can't protect ML-based systems and we need to consider alternative approaches.
What are these attacks? The first, perturbation attacks, modify queries to change the response to one the attackers desire. That's matched by poisoning attacks, which achieve the same result by contaminating the training data. Machine-learning models often include important intellectual property, and some attacks like model inversion aim to extract that data. Similarly, a membership inference attack will try to determine whether specific data was in the initial training set. Closely related is the concept of model stealing, using queries to extract the model.
Other attacks include reprogramming the system around the ML model, so that either results or inputs are changed. Closely related are adversarial attacks that change physical objects, adding duct tape to signs to confuse navigation or using specially printed bandanas to disrupt facial-recognition systems. Some attacks depend on the provider: a malicious provider can extract training data from customer systems. They can add backdoors to systems, or compromise models as they're downloaded.
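To make one of those categories concrete, here is a minimal sketch of a confidence-based membership inference test against a scikit-learn-style classifier. It is a simplified illustration of the attack class, not a technique from the Microsoft paper, and the threshold is something an attacker would have to calibrate.

```python
# A minimal membership-inference sketch: unusually high confidence on a
# candidate record is (weak) evidence it was in the training set.
# The fixed threshold is an illustrative assumption.
import numpy as np

def likely_training_member(model, record, threshold=0.95):
    # `model` is any fitted classifier exposing predict_proba (scikit-learn style);
    # `record` is a 1-D numpy array of features.
    confidence = np.max(model.predict_proba(record.reshape(1, -1)))
    return confidence >= threshold

# In practice an attacker would calibrate `threshold` using shadow models
# trained on data with known membership, rather than picking a fixed number.
```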
While many of these attacks are new and targeted specifically at machine-learning systems, they are still computer systems and applications, and are vulnerable to existing exploits and techniques, allowing attackers to use familiar approaches to disrupt ML applications.
It's a long list of attack types, but understanding what's possible allows us to think about the threats our applications face. More importantly they provide an opportunity to think about defences and how we protect machine-learning systems: building better, more secure training sets, locking down ML platforms, and controlling access to inputs and outputs, working with trusted applications and services.
Attacks are not the only risk: we must be aware of unintended failures -- problems that come from the algorithms we use or from how we've designed and tested our ML systems. We need to understand how reinforcement learning systems behave, how systems respond in different environments, if there are natural adversarial effects, or how changing inputs can change results.
If we're to defend machine-learning applications, we need to ensure that they have been tested as fully as possible, in as many conditions as possible. The apocryphal stories of early machine-learning systems that identified trees instead of tanks, because all the training images were of tanks under trees, are a sign that these aren't new problems, and that we need to be careful about how we train, test, and deploy machine learning. We can only defend against intentional attacks if we know that we've protected ourselves and our systems from mistakes we've made. The old adage "test, test, and test again" is key to building secure and safe machine learning -- even when we're using pre-built models and service APIs.
Machine Learning Engineers Will Not Exist In 10 Years – Machine Learning Times – machine learning & data science news – The Predictive Analytics…
Posted: at 11:09 pm
Originally published in Medium, April 28, 2020
The landscape is evolving quickly. Machine Learning will transition to a commonplace part of every Software Engineer's toolkit.
In every field we get specialized roles in the early days, replaced by the commonplace role over time. It seems like this is another case of just that.
Let's unpack.
Machine Learning Engineer as a role is a consequence of the massive hype fueling buzzwords like AI and Data Science in the enterprise. In the early days of Machine Learning, it was a very necessary role. And it commanded a nice little pay bump for many! But Machine Learning Engineer has taken on many different personalities depending on who you ask.
The purists among us say a Machine Learning Engineer is someone who takes models out of the lab and into production. They scale Machine Learning systems, turn reference implementations into production-ready software, and oftentimes cross over into Data Engineering. They're typically strong programmers who also have some fundamental knowledge of the models they work with.
But this sounds a lot like a normal software engineer.
Ask some of the top tech companies what Machine Learning Engineer means to them and you might get 10 different answers from 10 survey participants. This should be unsurprising. This is a relatively young role, and the folks posting these jobs are managers, oftentimes with many decades of experience, who don't have the time (or will) to understand the space.
Tecton.ai Launches with New Data Platform to Make Machine Learning Accessible to Every Company – insideBIGDATA
Posted: at 11:08 pm
Tecton.ai emerged from stealth and formally launched with its data platform for machine learning. Tecton enables data scientists to turn raw data into production-ready features, the predictive signals that feed machine learning models. Tecton is in private beta with paying customers, including a Fortune 50 company.
Tecton.ai also announced $25 million in seed and Series A funding co-led by Andreessen Horowitz and Sequoia. Both Martin Casado, general partner at Andreessen Horowitz, and Matt Miller, partner at Sequoia, have joined the board.
Tecton.ai founders Mike Del Balso (CEO), Kevin Stumpf (CTO) and Jeremy Hermann (VP of Engineering) worked together at Uber when the company was struggling to build and deploy new machine learning models, so they created Uber's Michelangelo machine learning platform. Michelangelo was instrumental in scaling Uber's operations to thousands of production models serving millions of transactions per second in just a few years, and today it supports a myriad of use cases ranging from generating marketplace forecasts, calculating ETAs and automating fraud detection.
Del Balso, Stumpf and Hermann went on to found Tecton.ai to solve the data challenges that are the biggest impediment to deploying machine learning in the enterprise today. Enterprises are already generating vast amounts of data, but the problem is how to harness and refine this data into predictive signals that power machine learning models. Engineering teams end up spending the majority of their time building bespoke data pipelines for each new project. These custom pipelines are complex, brittle, expensive and often redundant. The end result is that 78% of new projects never get deployed, and 96% of projects encounter challenges with data quality and quantity(1).
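For readers unfamiliar with the term, a "feature pipeline" is the code that turns raw events into the aggregated signals a model consumes. The sketch below shows the flavor of that work with pandas; the column names and file are illustrative, and this is not Tecton's API.

```python
# A minimal feature-pipeline sketch: aggregate raw transaction events into
# per-user features. Column names and the input file are assumptions.
import pandas as pd

events = pd.read_csv("transactions.csv", parse_dates=["timestamp"])

# Per-user features over a trailing 7-day window ending at a cutoff date.
cutoff = pd.Timestamp("2020-05-01")
window = events[(events["timestamp"] > cutoff - pd.Timedelta(days=7)) &
                (events["timestamp"] <= cutoff)]

features = (window.groupby("user_id")
                  .agg(txn_count_7d=("amount", "size"),
                       txn_total_7d=("amount", "sum"),
                       txn_max_7d=("amount", "max"))
                  .reset_index())
print(features.head())
```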
"Data problems all too often cause last-mile delivery issues for machine learning projects," said Mike Del Balso, Tecton.ai co-founder and CEO. "With Tecton, there is no last mile. We created Tecton to empower data science teams to take control of their data and focus on building models, not pipelines. With Tecton, organizations can deliver impact with machine learning quickly, reliably and at scale."
Tecton.ai has assembled a world-class engineering team that has deep experience building machine learning infrastructure for industry leaders such as Google, Facebook, Airbnb and Uber. Tecton is the industry's first data platform that has been designed specifically to support the requirements of operational machine learning. It empowers data scientists to build great features, serve them to production quickly and reliably, and do it at scale.
Tecton makes the delivery of machine learning data predictable for every company.
"The ability to manage data and extract insights from it is catalyzing the next wave of business transformation," said Martin Casado, general partner at Andreessen Horowitz. "The Tecton team has been on the forefront of this change with a long history of machine learning/AI and data at Google, Facebook and Airbnb and building the machine learning platform at Uber. We're very excited to be partnering with Mike, Kevin, Jeremy and the Tecton team to bring this expertise to the rest of the industry."
"The founders of Tecton built a platform within Uber that took machine learning from a bespoke research effort to the core of how the company operated day-to-day," said Matt Miller, partner at Sequoia. "They started Tecton to democratize machine learning across the enterprise. We believe their platform for machine learning will drive a Cambrian explosion within their customers, empowering them to drive their business operations with this powerful technology paradigm, unlocking countless opportunities. We were thrilled to partner with Tecton along with a16z at the seed and now again at the Series A. We believe Tecton has the potential to be one of the most transformational enterprise companies of this decade."
Rise in the demand for Machine Learning & AI skills in the post-COVID world – Times of India
Posted: at 11:08 pm
The world has seen an unprecedented challenge and is battling this invisible enemy with all its might. The novel coronavirus spread has left global economies holding on to strands, businesses impacted and most people locked down. But while the physical world has come to a drastic halt or slow-down, the digital world is blooming. And in addition to understanding the possibilities of home workspaces, companies are finally understanding the scope of Machine Learning and Artificial Intelligence. A trend that was already garnering all the attention in recent years, ML & AI have particularly taken centre-stage as more and more brands realise the possibilities of these tools. According to a research report released in February, demand for data engineers was up 50% and demand for data scientists was up 32% in 2019 compared to the prior year. Not only is machine learning being used by researchers to tackle this global pandemic, but it is also being seen as an essential tool in building a world post-COVID.
This pandemic is being fought on the basis of numbers and data. This is the key reason that has driven people's interest in Machine Learning. It helps us in collecting, analysing and understanding a vast quantity of data. Combined with the power of Artificial Intelligence, Machine Learning has the power to help with an early understanding of problems and quick resolutions. In recent times, ML & AI are being used by doctors and medical personnel to track the virus, identify potential patients and even analyse the possible cures available. Even in the current economic crisis, jobs in data science and machine learning have been least affected. All these factors indicate that machine learning and artificial intelligence are here to stay. And this is the key reason that data science is an area you can particularly focus on in this lockdown.
The capabilities of Machine Learning and Data Sciences
One of the key reasons that a number of people have been able to shift to working from home without much hassle has to be the use of ML & AI by businesses. This shift has also motivated many businesses, both small-scale and large-scale, to re-evaluate their functioning. With companies already announcing plans to look at a more robust working mechanism, which involves less office space and more detailed and structured online working systems, the focus on Machine Learning is bound to increase considerably.
The Current Possibilities
The world of data science has been coming out stronger during this lockdown and the interest and importance given to the subject are on the rise. AI-powered mechanics and operations have already made it easier to manage various spaces with lower risks, and this trend of turning to AI is bound to increase in the coming years. This is the reason that being educated in this field can improve your skills in this segment. If you are someone who has always been intrigued by data sciences and machine learning, or are already working in this field and looking for ways to accelerate your career, there are various courses that you can turn to. With the increased free time that staying at home has given us, you can begin an additional degree to pad up your resume, learn some cutting-edge concepts and gain access to industry experts.
Start learning more about Machine Learning & AI
If you are wondering where to begin this journey of learning, a leading online education service provider, upGrad, has curated programs that would suit you! From Data Sciences to in-depth learnings in AI, there are multiple programs on their website that cover various domains. The PG Diploma in Machine Learning and AI, in particular, has a brilliant curriculum that will help you progress in the field of Machine Learning and Artificial Intelligence. A carefully crafted program from IIIT Bangalore which offers 450+ hours of learning with more than 10 practical hands-on capstone projects, this program has been designed to help people get a deeper understanding of real-life problems in the field.
Understanding the PG Diploma in Machine Learning & AI
This 1-year program at upGrad has been articulated especially for working professionals who are looking for a career push. The curriculum consists of 30+ Case Studies and Assignments and 25+ Industry Mentorship Sessions, which help you to understand everything you need to know about this field. This program has the perfect balance between the practical exposure required to instil better management and problem-solving skills as well as the theoretical knowledge that will sharpen your skills in this category. The program also gives learners an IIIT Bangalore Alumni Status and Job Placement Assistance with Top Firms on successful completion.
AI, machine learning and automation in cybersecurity: The time is now – GCN.com
Posted: at 11:08 pm
INDUSTRY INSIGHT
The cybersecurity skills shortage continues to plague organizations across regions, markets and sectors, and the government sector is no exception. According to (ISC)2, there are only enough cybersecurity pros to fill about 60% of the jobs that are currently open -- which means the workforce will need to grow by roughly 145% to just meet the current global demand.
The Government Accountability Office states that the federal government needs a qualified, well-trained cybersecurity workforce to protect vital IT systems, and one senior cybersecurity official at the Department of Homeland Security has described the talent gap as a national security issue. The scarcity of such workers is one reason why securing federal systems is on GAO's High Risk list. Given this situation, chief information security officers who are looking for ways to make their existing resources more effective can make great use of automation and artificial intelligence to supplement and enhance their workforce.
The overall challenge landscape
Results of our survey, Making Tough Choices: How CISOs Manage Escalating Threats and Limited Resources, show that CISOs currently devote 36% of their budgets to response and 33% to prevention. However, as security needs change, many CISOs are looking to shift budget away from prevention without reducing its effectiveness. An optimal budget would reduce spend on prevention and increase spending on detection and response to 33% and 40% of the security budget, respectively. This shift would give security teams the speed and flexibility they need to react quickly in the face of a threat from cybercriminals who are outpacing agencies' defensive capabilities. When breaches are inevitable, it is important to stop as many as possible at the point of intrusion, but it is even more important to detect and respond to them before they can do serious damage.
One challenge to matching the speed of today's cyberattacks is that CISOs have limited personnel and budget resources. To overcome these obstacles and attain the detection and response speeds necessary for effective cybersecurity, CISOs must take advantage of AI, machine learning and automation. These technologies will help close gaps by correlating threat intelligence and coordinating responses at machine speed. Government agencies will be able to develop a self-defending security system capable of analyzing large volumes of data, detecting threats, reconfiguring devices and responding to threats without human intervention.
The unique challenges
Federal agencies deal with a number of challenges unique to the public sector, including the age and complexity of IT systems as well as the challenges of the government budget cycle. IT teams for government agencies aren't just protecting intellectual property or credit card numbers; they are also tasked with protecting citizens' sensitive data and national security secrets.
Charged with this duty but constrained by limited resources, IT leaders must weigh the risks of cyber threats against the daily demands of keeping networks up and running. This balancing act becomes more difficult as agencies migrate to the cloud, adopt internet-of-things devices and transition to software-defined networks that have no perimeter. These changes mean government networks are expanding their attack surface with no additional -- or even fewer -- defensive resources. It's part of the reason why the Verizon Data Breach Investigations Report found that government agencies were subjected to more security incidents and more breaches than any other sector last year.
To change that dynamic, the typical government set-up of siloed systems must be replaced with a unified platform that can provide wider and more granular network visibility and more rapid and automated response.
How AI and automation can help
The keys to making a unified platform work are AI and automation technologies. Because organizations cannot keep pace with the growing volume of threats by manual detection and response, they need to leverage AI/ML and automation to fill these gaps. AI-driven solutions can learn what normal behavior looks like in order to detect anomalous behavior. For instance, many employees typically access a specific kind of data or only log on at certain times. If an employee's account starts to show activity outside of these normal parameters, an AI/ML-based solution can detect these anomalies and can inspect or quarantine the affected device or user account until it is determined to be safe or mitigating action can be taken.
If the device is infected with malware or is otherwise acting maliciously, that AI-based tool can also issue automated responses. Making these tactical tasks the responsibility of AI-driven solutions frees security teams to work on more strategic problems, develop threat intelligence or focus on more difficult tasks such as detecting unknown threats.
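A minimal sketch of that behavioral-anomaly idea: learn what "normal" account activity looks like and flag departures from it. The features and the contamination rate are illustrative assumptions, not a description of any particular vendor's product.

```python
# A minimal behavioral anomaly detector; features and parameters are
# illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Normal behavior: logins during office hours, modest data access volumes.
normal = np.column_stack([
    rng.normal(13, 2, 2_000),        # login hour of day
    rng.normal(200, 50, 2_000),      # MB of data accessed per session
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new sessions: one typical, one at 3 a.m. pulling far more data.
sessions = np.array([[14.0, 180.0],
                     [3.0, 2_500.0]])
print(detector.predict(sessions))    # 1 = normal, -1 = flagged as anomalous
```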
IT teams at government agencies that want to implement AI and automation must be sure the solution they choose can scale and operate at machine speeds to keep up with the growing complexity and speed of the threat. In selecting a solution, IT managers must take time to ensure solutions have been developed using AI best practices and training techniques and that they are powered by best-in-class threat intelligence, security research and analytics technology. Data should be collected from a variety of nodes -- both globally and within the local IT environment -- to glean the most accurate and actionable information for supporting a security strategy.
Time is of the essence
Government agencies are experiencing more cyberattacks than ever before, at a time when the nation is facing a 40% cybersecurity skills talent shortage. Time is of the essence in defending a network, but time is what under-resourced and over-tasked government IT teams typically lack. As attacks come more rapidly and adapt to the evolving IT environment and new vulnerabilities, AI/ML and automation are rapidly becoming necessities. Solutions built from the ground up with these technologies will help government CISOs counter and potentially get ahead of today's sophisticated attacks.
About the Author
Jim Richberg is a Fortinet field CISO focused on the U.S. public sector.
African AI Platform To Host Webinar On How Machine Learning Can Be Used To Fight COVID-19 – Technology Zimbabwe
Posted: at 11:08 pm
Zindi, the data science platform that connects data scientists with people who need their problems solved, is hosting a free webinar titled "From Models to Medical Care: in conversation with epidemiologists on the scientific frontlines".
The webinar will see three epidemiologists (Prof Wim Delva, Dr Brooke Nichols and Dr Elaine Nsoesie) discuss how machine learning models are put into practice in the fight against COVID-19.
Zindi's data scientist Johno Whitaker will also discuss machine learning approaches from one of their recently held competitions, and there will be a question and answer session.
For data scientists and those in healthcare, this will be an insightful webinar to tune into. The Zoom webinar will be hosted on the 5th of May from 5 PM to 7 PM.