Artificial intelligence and machine learning continue to gain a foothold in our everyday lives. Whether for complex tasks like computer vision and natural language processing, or something as basic as an online chatbot, their popularity shows no signs of slowing. Companies have also started to explore deep learning, an advanced subset of machine learning that takes its inspiration from how the human brain works by applying deep neural networks. Unlike traditional machine learning, deep learning can train directly on raw data, requiring little to no human intervention.
Recent research from analyst firm Gartner noted that the number of companies implementing AI technology has increased by around 270 per cent over the past four years. The return on investment is unmistakable, as so many industries have started to adopt the technology. However, even with this significant progress, and given the nature of AI, that same once-helpful technology could fall into the wrong hands and be used to inflict damage on a company or its end users.
This ongoing battle, pitting AI used for good against AI used for malicious purposes, may not be playing out in front of our eyes yet, but it's not far off. Thankfully, implementing malicious AI at any scale is still cost prohibitive and requires tools and skills not readily available on the market. But knowing that it could become reality one day means companies should start preparing early for what lies ahead.
Here's a look at what that could look like, and what companies can do now to weather the storm.
When malware uses AI algorithms as an integral part of its business logic, it learns from its situation and gets smarter at evading detection. Unlike typical malware, which is a single program running on a server, for example, AI-based malware can shift and change its behaviour quickly, adjusting its evasion techniques as needed when it senses something is wrong or detects a threat to its own systems. It's a capability that most companies simply aren't prepared for yet.
One example of situational awareness in AI-based malware came from Black Hat 2018. Created by IBM Security, DeepLocker is a proof-of-concept malware that conceals an encrypted ransomware payload and can autonomously decide which computer to attack based on a facial recognition algorithm. And as the researchers noted, it's designed to be stealthy.
The highly targeted malware hides itself in unsuspecting applications, evading detection by most antivirus scanning programs until it has identified its target victim. Once the target is identified through several indicators, including facial feature recognition, audio, location or system-level features, the AI algorithm unlocks the malware and launches the attack. According to the researchers, IBM created it to demonstrate how open-source AI tools could be combined with straightforward evasion techniques to build targeted, evasive and highly effective malware.
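The unlocking mechanism described above can be illustrated with a minimal, hedged sketch of the underlying idea (sometimes called environmental keying): the payload is stored encrypted, and the decryption key is derived from attributes observed in the target environment, so the payload only unlocks when those exact attributes are present. Everything here, including the attribute names and the toy XOR "cipher", is hypothetical and stands in for the real cryptography and recognition models.

```python
import hashlib

def derive_key(attributes: list[str]) -> bytes:
    """Derive a 32-byte key by hashing the observed environment attributes."""
    digest = hashlib.sha256("|".join(sorted(attributes)).encode())
    return digest.digest()

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'cipher' -- a stand-in for real encryption in this sketch."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# The author locks the payload against the intended target's attributes.
target_attributes = ["face:match", "geo:office-lan", "host:WORKSTATION-7"]
payload = b"benign placeholder payload"
locked = xor_bytes(payload, derive_key(target_attributes))

# On a non-target machine the derived key differs, so unlocking fails.
wrong = xor_bytes(locked, derive_key(["face:nomatch", "geo:home", "host:LAPTOP-2"]))
assert wrong != payload

# Only when the observed attributes match does the payload decrypt.
unlocked = xor_bytes(locked, derive_key(target_attributes))
assert unlocked == payload
```

Because the key is derived from the environment rather than stored in the binary, a static scanner sees only ciphertext, which is why the researchers stressed behaviour-based detection over signature scanning.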
The amplified efficiency of AI means that once a system is trained and deployed, malicious AI can attack a far greater number of devices and networks more quickly and cheaply than a malevolent human actor.
And while the researchers also noted that they haven't seen anything like DeepLocker in the wild yet, the technology they used to create it is readily available, as are the malware techniques they employed. Only time will tell whether something like it will emerge; that is, if it hasn't already.
Companies can guard against malware like this by fighting fire with fire, using cybersecurity solutions based on deep learning, the most advanced form of AI. It's not enough to install a firewall or a basic antivirus system; companies need systems that can detect AI-based malware and take the necessary steps to prevent harm, and then go one step further to achieve longer-term detection and pre-emptively stop continued damage. That is a necessary task in a future that includes AI-based malware.
Another harmful scenario arises when malicious AI-based algorithms are used to hinder benign AI algorithms, turning the same algorithms and techniques used in traditional machine learning against them.
Rather than providing any helpful functionality, the malware breaches the useful algorithm and manipulates it, either taking over its functionality or repurposing it for malicious ends.
One example comes from researchers studying adversarial machine learning. They investigated how self-driving cars process street signs, and whether the technology could be manipulated. While most self-driving cars can read street signs and act accordingly, the researchers tricked the technology into misreading a sign, in this case mistaking a stop sign for a speed limit sign, through a simple alteration that the vehicle's onboard systems couldn't detect as harmful. Stepping back to look at the implications, it means the technology available today in self-driving cars could be exploited to cause collisions, with potentially deadly outcomes.
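The stop-sign result follows the standard adversarial-example recipe: nudge each input feature a small, bounded amount in the direction that most increases the classifier's error. A minimal sketch of that idea against a hypothetical linear classifier, using the fast gradient sign method; the weights, input, and class labels are invented purely for illustration, and the toy dimensions exaggerate the perturbation that on real images would be imperceptible.

```python
import numpy as np

# Hypothetical linear classifier: scores = W @ x, prediction = argmax.
# Class 0 stands in for "stop sign", class 1 for "speed limit".
n = 100
w = np.linspace(-1.0, 1.0, n)          # arbitrary fixed weight pattern
W = np.stack([w, -w])                  # two opposing classes

x = 0.05 * w                           # an input the model reads as class 0
assert int(np.argmax(W @ x)) == 0

# Fast gradient sign method: step each feature in the direction that
# raises class 1's score relative to class 0's. For a linear model that
# gradient is simply W[1] - W[0].
epsilon = 0.06
x_adv = x + epsilon * np.sign(W[1] - W[0])

assert int(np.argmax(W @ x_adv)) == 1          # now misread as "speed limit"
assert np.max(np.abs(x_adv - x)) <= epsilon    # bounded, uniform perturbation
```

The key property is that the perturbation is capped per feature, so the altered input stays close to the original while the prediction flips.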
Adversarial learning can also be applied to subvert and confuse computer vision algorithms, natural language processing (NLP) and malware classifiers, tricking the technology into seeing something else. The process typically injects malicious data into benign data streams with the intent of overwhelming or blocking legitimate data. A related example is a Distributed Denial of Service (DDoS) attack, in which a server is deliberately flooded with data and internet traffic, disrupting normal traffic or service to that server and effectively bringing it down.
To block the harmful effects of this technology, companies need a system that understands when an algorithm is benign and working properly, versus one that has been tampered with. It's not only about protecting systems and the overall functionality of the tech; it could mean protecting lives, as the stop sign example shows. This is where advanced AI becomes necessary, providing the analysis capabilities to understand and identify when something is amiss.
This type of attack occurs when malware runs on the victim's endpoint while AI-based algorithms on the server side facilitate the attack. A command and control server, which an attacker uses to send and receive information from systems compromised by malware, can control any number of functions.
Consider, for example, malware that steals data and information, which it then uploads to a command and control server. Once the upload is complete, an additional algorithm identifies the relevant details, e.g. credit card numbers, passwords and the like, and passes them on to the server and ultimately to the attacker on the other end. Through the use of AI, the malware can be executed en masse, without requiring any human intervention, and disseminated on a large scale to encompass thousands of victims.
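The kind of automated data-sifting pass described above can be sketched with a Luhn-check scan, the same pattern-matching logic a defensive data-loss-prevention tool uses to flag card numbers in outbound traffic. This is a minimal illustration of the technique, not anyone's actual implementation; the sample string is invented and the card number is a well-known dummy test value, not a real card.

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum: the standard validity test for payment card numbers."""
    digits = [int(d) for d in number][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Flag 13-16 digit runs that pass the Luhn check."""
    candidates = re.findall(r"\b\d{13,16}\b", text)
    return [c for c in candidates if luhn_valid(c)]

sample = "order=1234 card=4111111111111111 phone=5551234567"
print(find_card_numbers(sample))  # ['4111111111111111']
```

The checksum filter is what separates card numbers from other digit runs, which is why the phone number and order ID above are not flagged.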
One recent example Deep Instinct researchers uncovered was ServHelper. A new variant of the ServHelper malware uses an Excel 4.0 macro dropper, a legacy mechanism still supported by Microsoft Office, and an executable payload signed with a valid digital signature. ServHelper can receive several types of commands from its command and control server, including downloading a file, entering sleep mode, and even a self-kill function that removes the malware from the infected machine. This is a classic example of hacker groups using increasingly sophisticated methods, such as valid certificates, to propagate malware and launch cyberattacks.
As with the other scenarios, it's not enough to put up a firewall and hope for the best. Companies need to think holistically and protect all of an organisation's endpoints and devices, from Windows machines and servers through to other platforms such as Mac, Android and iOS. An AI-based solution can help by constantly learning what is and isn't malicious, helping its human counterparts act once it has identified, and ideally stopped, the harmful malware from spreading and damaging systems further.
Companies are just beginning to grasp that AI and machine learning can not only power customer-facing technology but also help create stronger defences against a future of AI-enabled attacks. While widespread malware using AI might still be a few years away, companies can prepare themselves now against the attacks of the future.
By using these technologies to spot trends and patterns in behaviour now, companies can better prepare themselves for a future in which AI is employed against them. One way to ensure a technological advantage over any potential AI-based threat is a deep learning-based approach, which fights malicious AI with friendly AI.
Unlike other forms of antivirus, which remain static once implemented, deep learning is highly scalable. This is especially important because AI-based malware can grow and change constantly, while deep learning can scale to hundreds of millions of training samples: as the training dataset grows, the deep learning neural network continuously improves its ability to detect anomalies, no matter what the future brings. It's truly fighting AI with AI.
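The claim that detection sharpens as the baseline of "normal" grows can be made concrete with a toy anomaly detector: model normal behaviour from training samples and flag inputs that deviate from it. This z-score sketch is a deliberately simple stand-in for a deep learning pipeline, and the traffic figures are invented for illustration.

```python
import statistics

def anomaly_score(baseline: list[float], value: float) -> float:
    """Standard deviations between value and the baseline mean (z-score)."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return abs(value - mean) / stdev

# Baseline: e.g. kilobytes-per-minute of outbound traffic under normal load.
normal_traffic = [100.0, 102.0, 98.0, 101.0, 99.0, 100.0, 103.0, 97.0]

print(anomaly_score(normal_traffic, 101.0))  # 0.5: an ordinary reading
print(anomaly_score(normal_traffic, 250.0))  # 75.0: an exfiltration-like spike
```

The more baseline samples the detector sees, the tighter its estimate of normal becomes, which is the same property, at toy scale, that makes large training datasets valuable to a deep learning detector.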
Nadav Maman, CTO and co-founder, Deep Instinct