This article is part of a VB special issue. Read the full series here: AI and Security.
The cybersecurity skills shortage is well documented, but the gap seems to be widening. The 2019 Cybersecurity Workforce Study produced by nonprofit (ISC)² looked at the cybersecurity workforce in 11 markets. The report found that while 2.8 million people currently work in cybersecurity roles, an additional 4 million were needed, a third more than the previous year, due to a global surge in hiring demand.
As companies battle a growing array of external and internal threats, artificial intelligence (AI), machine learning (ML), and automation are playing increasingly large roles in plugging that workforce gap. But to what degree can machines support and enhance cybersecurity teams, and do they or will they negate the need for human personnel?
These questions permeate most industries, but the cost of cybercrime to companies, governments, and individuals is rising precipitously. Studies indicate that the impact of cyberattacks could hit a heady $6 trillion by 2021. And the costs are not only financial. As companies harness and harvest data from billions of individuals, countless high-profile data breaches have made privacy a top concern. Reputations and, in some cases, people's lives are on the line.
Against that backdrop, the market for software to protect against cyberattacks is also growing. The global cybersecurity market was reportedly worth $133 billion in 2018, and that could double by 2024. The current value of the AI-focused cybersecurity market, specifically, is pegged at around $9 billion, and it could reach $38 billion over the next six years.
We checked in with key people from across the technology spectrum to see how the cybersecurity industry is addressing the talent shortage and the role AI, ML, and automation can play in these efforts.
"I think the concern around the cybersecurity skills gap and workforce shortfall is a temporary artifact of large companies scrambling to try to recruit more people to perform the same types of commodity cybersecurity activities, for example monitoring security logs and patching vulnerabilities," said Shuman Ghosemajumder, a former Googler who most recently served as chief technology officer at cybersecurity unicorn Shape Security.
Ghosemajumder compares this to "undifferentiated heavy lifting," a term coined by Amazon's Jeff Bezos to describe the traditional time-consuming IT tasks companies carry out that are important but don't contribute a great deal to the broader mission. Bezos was referring to situations like developers spending 70% of their time working on servers and hosting, something Amazon sought to address with Amazon Web Services (AWS).
Similar patterns could emerge in the cybersecurity realm, according to Ghosemajumder.
"Any time companies are engaged in undifferentiated heavy lifting, that points to the need for a more consolidated, services-based approach," he said. "The industry has been moving in that direction, and that helps significantly with the workforce shortfall: companies won't need to have such large cybersecurity teams over time, and they won't be competing for the exact same skills against one another."
Ghosemajumder was dubbed the "click fraud czar" during a seven-year stint at Google that ended in 2010. He developed automated techniques and systems to combat automated (and human-assisted) click fraud, in which bad actors fraudulently click on pay-per-click (PPC) ads to increase site revenue or drain advertisers' budgets. Manually reviewing billions of transactions on a daily basis would be impossible, which is why automated tools are so important. It's not about combating a workforce shortfall per se; it's about scaling security to a level that would be impossible with humans alone.
Ghosemajumder said the most notable evolution he witnessed with regard to AI and ML was in offline non-real-time detection.
"We would zoom out and analyze the traffic of an AdSense site, or thousands of AdSense sites, over a longer time period, and anomalies and patterns would emerge [that] indicated attempts to create click fraud or impression fraud," he continued. "AI and ML were first hugely beneficial, and then became absolutely essential, in finding that activity at scale so that our teams could determine and take appropriate action in a timely fashion. And even taking appropriate action was a fully automated process most of the time."
In 2012, Ghosemajumder joined Shape Security, which reached a $1 billion valuation late last year and was gearing up for an IPO. Instead, networking giant F5 came along last month and bought the company for $1 billion, with Ghosemajumder now serving as F5's global head of AI.
Shape Security focuses on helping big businesses (e.g., banks) prevent various types of fraud, such as imitation attacks in which bots attempt to access people's accounts through credential stuffing. The term, coined by Shape Security cofounder Sumit Agarwal, refers to attempts to log into someone's account using large lists of stolen usernames and passwords.
This is another example of how automation is increasingly being used to combat automation. Many cyberattacks center on automated techniques that prod online systems until they find a way in. For example, an attacker may have an arsenal of stolen credit card details, but it would take too long to test each one manually. Instead, the attacker performs a check once and then trains a bot to carry out the same approach on other card details until they have discovered which ones are usable.
Just as its relatively easy to carry out large-scale cyberattacks through imitation and automation, Shape Security uses automation to detect such attacks. Working across websites, mobile apps, and any API endpoint, Shape Security taps historical data, machine learning, and artificial intelligence to figure out whether a user is real, employing signals such as keystrokes, mouse movements, and system configuration details. If the software detects what it believes to be a bot logging into an account, it blocks the attempt.
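The kind of signal-based scoring described above can be sketched in a few lines. This is purely an illustrative toy: the feature names and thresholds are invented for the example and are not Shape Security's actual model, which draws on far more signals and machine-learned weights.

```python
def bot_score(session):
    """Return a score in [0, 1]; higher means more bot-like."""
    score = 0.0
    # Humans type with variable rhythm; near-constant keystroke
    # intervals are a classic automation tell.
    if session["keystroke_interval_stddev_ms"] < 5:
        score += 0.4
    # Real mouse movement is curved and noisy; zero recorded movement
    # before a form submit suggests scripted input.
    if session["mouse_path_points"] == 0:
        score += 0.4
    # Headless browsers often report unusual screen dimensions.
    if session["screen_height"] == 0:
        score += 0.2
    return min(score, 1.0)

human = {"keystroke_interval_stddev_ms": 85,
         "mouse_path_points": 240, "screen_height": 1080}
bot = {"keystroke_interval_stddev_ms": 0,
       "mouse_path_points": 0, "screen_height": 0}

assert bot_score(human) == 0.0
assert bot_score(bot) == 1.0
```

A real system would combine hundreds of such signals and learn the weights from labeled traffic rather than hardcoding them.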
While were now firmly in an era of machine versus machine cyberwarfare, the process has been underway for many years.
"Automation was used 20-plus years ago to start to generate vast quantities of email spam, and machine learning was used to identify it and mitigate it," Ghosemajumder explained. "[Good actors and bad actors] are both automating as much as they can, building up DevOps infrastructure and utilizing AI techniques to try to outsmart the other. It's an endless cat-and-mouse game, and it's only going to incorporate more AI approaches on both sides over time."
To fully understand the state of play in AI-powered security, its worth stressing that cybersecurity spans many industries and disciplines. According to Ghosemajumder, fraud and abuse are far more mature in their use of AI and ML than approaches like vulnerability searching.
"One of the reasons for this is that the problems that are being solved in those areas [fraud and abuse] are very different from problems like identifying vulnerabilities," Ghosemajumder said. "They are problems of scale, as opposed to problems of binary vulnerability. In other words, nobody is trying to build systems that are 100% fraud proof, because fraud or abuse is often manifested by allowed or legitimate actions occurring with malicious or undesirable intent. You can rarely identify intent with infallible accuracy, but you can do a good job of identifying patterns and anomalies when those actions occur over a large enough number of transactions. So the goal of fraud and abuse detection is to limit fraud and abuse to extremely low levels, as opposed to making a single fraud or abuse transaction impossible."
Machine learning is particularly useful in such situations, where "the haystack you're looking for needles in," as Ghosemajumder puts it, is vast and requires real-time monitoring 24/7.
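As a rough illustration of the needle-in-a-haystack idea, even a simple robust outlier test can pick out a day of anomalous click volume from otherwise normal traffic. The data and threshold below are invented for the example; production fraud systems use far richer models over many more dimensions.

```python
import statistics

def flag_anomalies(daily_clicks, threshold=3.5):
    """Flag indices whose click volume is a robust outlier.

    Uses the median absolute deviation (MAD) rather than mean/stddev,
    so a single huge spike doesn't mask itself by inflating the spread.
    """
    median = statistics.median(daily_clicks)
    mad = statistics.median(abs(c - median) for c in daily_clicks)
    if mad == 0:
        return []  # no variation at all; nothing to flag
    # 0.6745 rescales MAD to be comparable to a standard deviation.
    return [i for i, c in enumerate(daily_clicks)
            if 0.6745 * abs(c - median) / mad > threshold]

clicks = [1020, 980, 1050, 990, 1010, 9800, 1000]
print(flag_anomalies(clicks))  # -> [5], the day with ~10x normal volume
```

The same pattern, score every observation against a baseline and surface only the extremes, is what lets a small team review billions of transactions.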
Curiously, another reason AI and ML evolved more quickly in the fraud and abuse realm may come down to industry culture. Fraud and abuse detection wasn't always associated with cybersecurity; those spheres once operated separately inside most organizations. But with the rise of credential stuffing and other attacks, cybersecurity teams became increasingly involved.
"Traditionally, fraud and abuse teams have been very practical about using whatever works, and success could be measured in percentages of improvement in fraud and abuse rates," Ghosemajumder said. "Cybersecurity teams, on the other hand, have often approached problems in a more theoretical way, since the vulnerabilities they were trying to discover and protect against would rarely be exploited in their environment in ways they could observe. As a result, fraud and abuse teams started using AI and ML more than 10 years ago, while cybersecurity teams have only recently started adopting AI- and ML-based solutions in earnest."
For now, it seems many companies use AI as an extra line of defense to help them spot anomalies and weaknesses, with humans on hand to make the final call. But there are hard limits to how many calls humans are able to make in a given day, which is why the greatest benefit of cybersecurity teams using AI and humans in tandem could simply be to ensure that machines improve over time.
"The optimal point is often to use AI and automation to keep humans making the maximum number of final calls every day, no more, but also no less," Ghosemajumder noted. "That way you get the maximum benefit from human judgment to help train and improve your AI models."
Scalability is a theme that permeates any discussion of the role of AI and ML in giving cybersecurity teams sufficient resources. As one of the world's biggest technology companies, Facebook knows this only too well.
Dan Gurfinkel is a security engineering manager at Facebook, supporting a product security team that is responsible for code and design reviews, scaling security systems to automatically detect vulnerabilities, and addressing security threats in various applications. In Gurfinkel's experience at Facebook, the cybersecurity workforce shortfall is real and worsening, but things could improve as educational institutions adapt their offerings.
"The demand for security professionals, and the open security roles, are rising sharply, often faster than the available pool of talent," Gurfinkel told VentureBeat. "That's due in part to colleges and universities just starting to offer courses and certification in security. We've seen that new graduates are getting more knowledgeable year over year on security best practices and have strong coding skills."
But is the skills shortage really more pronounced in cybersecurity than in other fields? After all, the tech talent shortage spans everything from software engineering to AI. In Gurfinkel's estimation, the shortfall in cybersecurity is indeed more noticeable than in other technical fields, like software engineering.
"In general, I've found the number of software engineering candidates is often much larger than those who are specialized in security, or have a special expertise within security, such as incident response or computer emergency response [CERT]," he said.
Its also worth remembering that cybersecurity is a big field requiring a vast range of skill sets and experience.
"For mid-level and management roles, in particular, sometimes the candidate pool can be smaller for those who have more than five years of experience working in security," Gurfinkel added. "Security is a growing field that's becoming more popular, so I would expect that to change in the future."
Facebook is another great example of how AI, ML, and automation are being used not so much to overcome gaps in the workforce as to enable security on a scale that would otherwise be impossible. With billions of users across Facebook, Messenger, Instagram, and WhatsApp, the sheer size and reach of the company's software make it virtually impossible for humans alone to keep its applications secure. Thus, AI and automated tools become less about plugging workforce gaps and more about enabling the company to keep on top of bugs and other security issues. This is also evident across Facebook's broader platform, with the social networking giant using AI to automate myriad processes, from detecting illegal content to assisting with translations.
Facebook also has a record of open-sourcing AI technology it builds in-house, such as Sapienz, a dynamic analysis tool that automates software testing in a runtime environment. In August 2019, Facebook announced Zoncolan, a static analysis tool that can scan the company's 100 million lines of code in less than 30 minutes to catch bugs and prevent security issues from arising in the first place. It effectively helps developers avoid introducing vulnerabilities into Facebook's codebase and detect emerging issues, which, according to Facebook, would take months or years to do manually.
"Most of our work as security engineers is used to scale the detection of security vulnerabilities," Gurfinkel continued. "We spend time writing secure frameworks to prevent software engineers from introducing bugs in our code. We also write static and dynamic analysis tools, such as Zoncolan, to prevent security vulnerabilities earlier in the development phase."
In 2018, Facebook said Zoncolan helped identify and triage well over 1,000 critical security issues that required immediate action. Nearly half of the issues were flagged directly to the code author without requiring a security engineer.
This not only demonstrates how essential automation is in large codebases, but also illustrates how it can empower software developers to manage bugs and vulnerabilities themselves, thus lightening security teams' workloads.
It also serves as a reminder that humans are still integral to the process, and likely will be long into the future, even as their roles evolve.
"When it comes to security, no company can rely solely on automation," Gurfinkel said. "Manual and human analysis is always required, be it via security reviews, partnering with product teams to help design a more secure product, or collaborating with security researchers who report security issues to us through our bug bounty program."
According to Gurfinkel, static analysis tools, that is, tools used early in the development process, before the code is executed, are particularly useful for identifying standard web security bugs, such as OWASP's top 10 vulnerabilities, as they can surface straightforward issues that need to be addressed immediately. This frees up human personnel to tackle higher-priority issues.
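To make the idea of static analysis concrete, here is a toy rule in the spirit of such tools (though vastly simpler than something like Zoncolan): it parses source code without executing it and flags string constants assigned to suspicious variable names. The name list is an assumption for illustration; real analyzers track data flow across functions and files.

```python
import ast

SUSPECT_NAMES = {"password", "secret", "api_key", "token"}

def find_hardcoded_secrets(source):
    """Scan Python source (without running it) for likely hardcoded
    secrets, returning (line_number, variable_name) pairs."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if (isinstance(target, ast.Name)
                        and target.id.lower() in SUSPECT_NAMES
                        and isinstance(node.value, ast.Constant)
                        and isinstance(node.value.value, str)):
                    findings.append((node.lineno, target.id))
    return findings

sample = 'api_key = "sk-live-1234"\nretries = 3\npassword = "hunter2"\n'
print(find_hardcoded_secrets(sample))  # -> [(1, 'api_key'), (3, 'password')]
```

Because the check runs on the syntax tree rather than the running program, it can be wired into code review and catch such issues before the code ever ships.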
"While these tools help get things on our radar quickly, we need human analysis to make decisions on how we should address issues and come up with solutions for product design," Gurfinkel added.
As BlackBerry has transitioned from phonemaker to enterprise software provider, cybersecurity has become a major focus for the Canadian tech titan, largely enabled by AI and automation. Last year, the company shelled out $1.4 billion to buy AI-powered cybersecurity platform Cylance. BlackBerry also recently launched a new cybersecurity research and development (R&D) business unit that will focus on AI and internet of things (IoT) projects.
BlackBerry is currently in the process of integrating Cylance's offerings into its core products, including its Unified Endpoint Management (UEM) platform, which protects enterprise mobile devices, and, more recently, its QNX platform, which safeguards connected cars. With Cylance in tow, BlackBerry will enable carmakers and fleet operators to automatically verify drivers, address security threats, and issue software patches. This integration leans on BlackBerry's CylancePersona, which can identify drivers in real time by comparing them with a historical driving profile. It looks at things like steering, braking, and acceleration patterns to figure out who is behind the wheel.
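Conceptually, comparing a live driver to a stored profile can be as simple as measuring the distance between behavioral feature vectors. The features, values, and threshold below are hypothetical, invented for illustration; BlackBerry has not published CylancePersona's actual method.

```python
import math

def matches_profile(profile, sample, threshold=1.0):
    """Compare normalized driving features (e.g., braking hardness,
    steering smoothness, acceleration aggressiveness) and decide
    whether the live sample plausibly belongs to the profiled driver."""
    distance = math.sqrt(sum((profile[k] - sample[k]) ** 2 for k in profile))
    return distance <= threshold

owner_profile = {"braking": 0.3, "steering": 0.6, "acceleration": 0.4}
same_driver = {"braking": 0.35, "steering": 0.55, "acceleration": 0.45}
different = {"braking": 0.9, "steering": 0.1, "acceleration": 0.95}

assert matches_profile(owner_profile, same_driver)        # near the profile
assert not matches_profile(owner_profile, different, 0.5) # far from it
```

A production system would build the profile statistically from hours of telemetry and account for context (road type, weather) rather than using a fixed threshold.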
This could be used in multiple safety and security scenarios, and BlackBerry envisages the underlying driving pattern data also being used by commercial fleets to detect driver fatigue, enabling remote operators to contact the driver and determine whether they need to pull off the road.
Moreover, with autonomous vehicles gearing up for prime time, safety is an issue of paramount importance, and one companies like BlackBerry are eager to capitalize on.
Back in 2016, BlackBerry launched the Autonomous Vehicles Innovation Centre (AVIC) to advance technology innovation for connected and autonomous vehicles. The company has since struck some notable partnerships, including with Chinese tech titan Baidu to integrate QNX with Baidu's autonomous driving platform. Even though BlackBerry CEO John Chen believes autonomous cars won't be on public roads for at least a decade, the company still has to plan for that future.
Here again, the conversation comes back to cybersecurity and the tools and workforce needed to maintain it. Much as Facebook is scaling its internal security setup, BlackBerry is promising its business customers it can scale cybersecurity, improve safety, and enable services that would not be possible without automation.
"AI and automation are more about scalability, as opposed to plugging specific skills gaps," BlackBerry CTO Charles Eagan told VentureBeat. "AI is also about adding new value for customers and enabling innovations that were previously not possible. For example, AI is going to be needed to secure an autonomous vehicle, and in this case it isn't about scalability but rather about unlocking new value."
Similarly, AI-powered tools promise to free up cybersecurity professionals to focus on other parts of their job.
"If we remove 99% of the cyberthreats automatically, we can spend much more quality time and energy looking to provide security in deeper and more elaborate areas," Eagan continued. "The previous model of chasing AV (antivirus) patterns would never scale to today's demands. The efficiencies introduced by quality, preventative AI are needed simply to keep up with the demand and prepare for the future."
AI-related technologies are ultimately better than humans at tackling certain problems, such as combing through large data sets, spotting patterns, and automating tasks. But people also have skills that are pretty difficult for machines to top.
"The human is involved in more complex tasks that require experience, context, critical thinking, and judgement," Eagan said. "The leading-edge new attacks will always require humans to triage and look for areas where machine learning can be applied. AI is very good at quantifying similarities and differences, and therefore identifying novelties. Humans, on the other hand, are better at dealing with novelties, where they can combine experience and analytical thinking to respond to a situation that has not been seen before."
Even with this symbiosis between humans and machines, the cybersecurity workforce shortfall is increasing largely due to factors such as spreading internet connectivity, escalating security issues, growing privacy concerns, and subsequent demand spikes. And the talent pool, while expanding in absolute terms, simply cant keep up with demand, which is why more needs to be done from an education and training perspective.
"As the awareness of security increases, the shortage is felt more acutely," Eagan said. "We as an industry need to move quickly to attack this issue on all fronts, a big part of which is sparking interest in the field at a young age, in the hope that by the time these same young people start looking at the next stage in their education, they gravitate to the higher education institutions out there that offer cybersecurity as a dedicated discipline."
For all the noise BlackBerry has been making about its investments in AI and security, it is also investing in the human element. It offers consulting services that include cybersecurity training courses, and it recently launched a campaign to draw more women into cybersecurity through a partnership with the Girl Guides of Canada.
Similar programs include the U.S. Cyber Challenge (USCC), operated by Washington, D.C.-based nonprofit Center for Strategic and International Studies (CSIS), which is designed to significantly reduce the shortage in the cyber workforce by delivering programs to identify and recruit a new generation of cybersecurity professionals. This includes running competitions and cyber summer camps through partnerships with high schools, colleges, and universities.
Efforts to nurture interest in cybersecurity from a young age are already underway, but there is simultaneously a growing awareness that higher education programs geared toward putting people in technical security positions arent where they need to be.
According to a 2018 report from the U.S. Departments of Homeland Security and Commerce, employers are expressing increasing concern about the relevance of certain cybersecurity-related education programs in meeting the real needs of their organizations, with educational attainment serving as a proxy for actual applicable knowledge, skills, and abilities (KSAs). "For certain work roles, a bachelor's degree in a cybersecurity field may or may not be the best indicator of an applicant's qualifications," the report noted. "The study team found many concerns regarding the need to better align education requirements with employers' cybersecurity needs and how important it is for educational institutions to engage constantly with industry."
Moreover, the report surfaced concerns that some higher education cybersecurity courses concentrated purely on technical knowledge and skills, with not enough emphasis on soft skills, such as strategic thinking, problem solving, communications, team building, and ethics. Notably, the report also found that some of the courses focused too much on theory and too little on practical application.
For companies seeking personnel with practical experience, a better option could be upskilling: ensuring that existing security workers are brought up to date on the latest developments in the security threat landscape. With that in mind, Immersive Labs, which recently raised $40 million from big-name investors including Goldman Sachs, has set out to help companies upskill their existing cybersecurity workers through gamification.
Immersive Labs was founded in 2017 by James Hadley, a former cybersecurity instructor for the U.K.'s Government Communications Headquarters (GCHQ), the country's intelligence and security unit. The platform is designed to help companies engage their security workforce in practical exercises, which may involve threat hunting or reverse-engineering malware, all from a standard web browser. Immersive Labs is all about using real-world examples to keep things relatable and current.
While much of the conversation around AI seems to fall into the humans-versus-machines debate, that isn't helpful when we're talking about threats on a massive scale. This is where Hadley thinks Immersive Labs fills a gap: it's all about helping people find essential roles alongside the automated tools used by many modern cybersecurity teams.
"AI is indeed playing a bigger role in the security field, as it is in many others, but it's categorically not a binary choice between human and machine," Hadley told VentureBeat in an interview last year. "AI can lift, push, pull, and calculate, but it takes people to invent, contextualize, and make decisions based on morals. Businesses have the greatest success when professionals and technologies operate cohesively. AI can enhance security, just as [AI] can be weaponized, but we must never lose sight of the need to upskill ourselves."
Other companies have invested in upskilling workers with a proficiency in various technical areas. Cisco, for example, launched a $10 million scholarship to help people retrain for specific security disciplines. Shape Security's Ghosemajumder picked up on this, noting that some companies are looking to retrain technical minds for a new field of expertise.
"Many companies are not trying to hire cybersecurity talent at all, but instead find interested developers, often within the company, and train them to be cybersecurity professionals if they are interested, which many are these days," Ghosemajumder explained.
There is clearly a desire to get more people trained in cybersecurity, but one industry veteran thinks other factors limit the available talent pool before the training process even begins. Winn Schwartau is founder of the Security Awareness Company and author of several books, most recently Analogue Network Security, in which he addresses internet security with a mathematically based approach to provable security.
According to Schwartau, there is a prevailing misconception about who makes a good cybersecurity professional. Referring to his own experience applying for positions with big tech companies back in the day, Schwartau said he was turned down for trivial reasons: once for being color-blind, and another time for not wanting to wear a suit. Things might not be quite the same as they were in the 1970s, but Schwartau attributes at least some of today's cybersecurity workforce problem to bias about who should be working in the field.
"In 2012, when then-Secretary of Homeland Security Janet Napolitano said, 'We can't find good cybersecurity people,' I said, 'That's crap, that's just not true,'" Schwartau explained. "What you mean is you can't find lily-white, perfect people who have never done anything wrong, who meet your myopic standards of normal, and who don't smoke weed. No wonder you can't find talent. But the worst part is, we don't have great training grounds for the numbers of people who want in to security. Training is expensive, and we are training on the wrong topics."
Will the shortfall get worse? "Much worse, especially as anthro-cyber-kinetic (human, computer, physical) systems are proliferating," Schwartau continued. "Without a strong engineering background, the [software folks] don't get the hardware, and the [hardware folks] don't get the AI, and no one understands dynamic feedback systems. It's going to get a whole lot worse."
Schwartau isn't alone in his belief that the cybersecurity workforce gap is something of an artificial construct. Fredrick Lee has held senior security positions at several high-profile tech companies over the past decade, including Twilio, NetSuite, Square, and Gusto, and he also thinks the skills shortage is more of a creativity problem in hiring.
"To close the existing talent gap and attract more candidates to the field, we need to do more to uncover potential applicants from varied backgrounds and skill sets, instead of searching for nonexistent 'unicorn' candidates: people with slews of certifications, long tenures in the industry, and specialized skills in not one, but several, tech stacks and disciplines," he said.
What Lee advocates is dropping what he calls the "secret handshake society" mindset, which promotes a lack of diversity in the workforce by deterring potential new entrants.
Schwartau is also a vocal critic of AI on numerous grounds, one being the lack of explainability. "Algorithms may give different results on different occasions to resolve the same problem, without explaining why. We need to have a mechanism to hold them accountable for their decisions, which also means we need to know how they make decisions," he said.
While many companies deploy AI as an extra line of defense to help them spot threats and weaknesses, Schwartau fears that removing the checks and balances human beings provide could lead to serious problems down the line.
"Humans are lazy, and we like automation," he said. "I worry about false positives in an automated response system that can falsely indict a person or another system. I worry about the 'We have AI, let the AI handle it' mindset from vendors and C-suiters who are far out of their element. I worry that we will have increasing faith in AI over time. I worry we will migrate to these systems and not design a graceful degradation fallback capability to where we are now."
Beyond issues of blind faith, companies could also be swept up by the hype and hoodwinked into buying inferior AI products that dont do what they claim to.
"My biggest fear about AI as a cybersecurity defense in the short term is that many companies will waste time by trying half-baked solutions using AI merely as a marketing buzzword, and when the products don't deliver results, the companies will conclude that AI/ML itself as an approach doesn't work for the problem, when in fact they just used a poor product," Schwartau said. "Companies should focus on efficacy first, rather than looking for products that have certain buzzwords. After all, there are rules-based systems, in cybersecurity and other domains, that can outperform badly constructed AI systems."
It's worth looking at the role that rules-based automated tools, where AI isn't part of the picture, play in plugging the cybersecurity skills gap. After all, the end goal is ultimately the same. Not enough humans to do the job? Here's some technology that can fill the void.
Dublin-based Tines is one company that's setting out to help enterprise security teams automate repetitive workflows.
For context, most big companies employ a team of security professionals to detect and respond to cyberattacks, typically aided by automated tools such as firewalls and antivirus software. However, these tools create a lot of false alarms and noise, so people need to be standing by to dig in more deeply. With Tines, security personnel can prebuild what the company calls "automation stories." These can be configured to carry out a number of steps after an alert is triggered, doing things like running threat intelligence searches or scanning for sensitive data in GitHub source code, such as passwords and API keys. The repository owner or on-call engineer can then be alerted automatically (e.g., through email or Slack).
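A Tines automation story chains steps like these together through a visual interface; the sketch below approximates one such flow in plain Python. The secret patterns and routing logic are illustrative assumptions, not Tines' implementation.

```python
import re

# Patterns that suggest leaked credentials in source code. The AWS key
# shape is a well-known public format; the password pattern is a crude
# heuristic for illustration.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                  # AWS access key ID shape
    re.compile(r"(?i)password\s*=\s*['\"][^'\"]+['\"]"),
]

def run_story(alert):
    """When an alert fires, scan the attached code snippet for secrets
    and decide who to notify."""
    hits = [p.pattern for p in SECRET_PATTERNS
            if p.search(alert["code_snippet"])]
    if hits:
        # In a real deployment this step would post to Slack or email
        # the repository owner; here we just return the routing decision.
        return {"notify": alert["repo_owner"], "matched": hits}
    return {"notify": None, "matched": []}

alert = {"repo_owner": "oncall-security",
         "code_snippet": 'db_password = "s3cr3t"'}
result = run_story(alert)
assert result["notify"] == "oncall-security"
```

Each run is deterministic and fully inspectable: the rules that fired are right there in the result, which is exactly the transparency argument Hinchy makes for rules-based automation.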
In short, Tines saves a lot of repetitive manual labor, leaving security personnel to work on more important tasks or go home at a reasonable hour. This is a key point, given that burnout can exacerbate the talent shortfall, either through illness or staff jumping ship.
Tines CEO and cofounder Eoin Hinchy told VentureBeat that "79% of security teams are overwhelmed by the volume of alerts they receive. [And] security teams are spending more and more time performing repetitive manual tasks."
In terms of real-world efficacy, Tines claims that one of its Fortune 500 customers saves the equivalent of 70 security analyst hours a week through a single automation story that automates the collection and enrichment of antivirus alerts.
"This kind of time-saving is not unusual for Tines customers and is material when you consider that most Tines customers will have about a dozen automation stories providing similar time-savings," Hinchy continued.
Tines also helps bolster companies' cybersecurity capabilities by empowering non-coding members of the team. Anyone, including security analysts, can create their own automations (similar to IFTTT) through a drag-and-drop interface, without relying on additional engineering resources. "We believe that users on the front line, with no development experience, should be able to automate any workflow," Hinchy said.
Hinchy also touched on a key issue that could make manually configured automation more appealing than AI in some cases: explainability. As Schwartau noted, a human worker can explain why they carried out a particular task the way they did, or arrived at a certain conclusion, but AI algorithms can't. Rules-based automated tools, on the other hand, just do what their operator tells them to; there is no black box here.
"Our customers really care about transparency when implementing automation. They want to know exactly why Tines took a particular decision in order to develop trust in the platform," Hinchy added. "The black box nature of AI and ML is not conducive to this."
Other platforms that help alleviate cybersecurity teams' workload include London-based Snyk, which last month raised $150 million at a $1 billion valuation for an AI platform that helps developers find and fix vulnerabilities in their open source code.
"With Snyk, security teams offer guidance, policies, and expertise, but the vast majority of work is done by the development teams themselves," Snyk cofounder and president Guy Podjarny told VentureBeat. "This is a core part of how we see dev-first security: security teams modeling themselves after DevOps, becoming a center of excellence building tools and practices to help developers secure applications as they build it, at their pace. We believe this is the only way to truly scale security, address the security talent shortage, and improve the security state of your applications."
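At its core, a dependency vulnerability check of the kind Podjarny describes compares a project's pinned package versions against an advisory database. The sketch below uses an invented in-memory advisory table; Snyk's real database and matching logic are far richer (version ranges, transitive dependencies, suggested fixes).

```python
# Illustrative sketch of open-source dependency auditing: match pinned
# versions against known advisories. The advisory data is invented for
# the example and does not reflect any real vulnerability database.

ADVISORIES = {  # hypothetical: package name -> set of vulnerable versions
    "leftpad": {"1.0.0", "1.0.1"},
    "fastjson": {"2.3.1"},
}

def audit(requirements: dict) -> list:
    """Return human-readable findings for vulnerable pinned versions."""
    findings = []
    for package, version in requirements.items():
        if version in ADVISORIES.get(package, set()):
            findings.append(f"{package}=={version} has a known advisory")
    return findings

# Developers run this check themselves, at their pace, without waiting
# on the security team.
print(audit({"leftpad": "1.0.1", "requests": "2.31.0"}))
```

The "dev-first" point is that this check runs inside the developer's own workflow (editor, CI, pull request), with the security team curating the advisory data and policy rather than performing every review by hand.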
The importance of AI, ML, and automation in cybersecurity is clear, but it's often less about plugging skills gaps than it is about enabling cybersecurity teams to provide real-time protection to billions of end users. With bad actors scaling their attacks through automation, companies need to adopt a similar approach.
But humans are a vital part of the cybersecurity process, and AI and other automated tools enable them to do their jobs better while focusing on more complex or pressing tasks that require experience, critical thinking, moral considerations, and judgment calls. Moreover, threats are constantly growing and evolving, which will require more people to manage and build the AI systems in the first place.
"The sheer number of cybersecurity threats out there far exceeds the current solution space," BlackBerry's Eagan said. "We will always need automation and more cybersecurity professionals creating that automation. Security is a cat-and-mouse game, and currently more money is spent in threat development than in protection and defense."
Companies also need to be wary of AI as a marketing buzzword. Rather than choosing a poor product that either doesn't do what it promises or is a bad fit for the job, they can turn to simpler automated systems.
"For me, machines and automation will act as a mechanism to enhance the efficiency and effectiveness of teams," Tines' Hinchy said. "Machines and humans will work together, with machines doing more of the repetitive, routine tasks, freeing up valuable human resources to innovate and be creative."
Statistical and anecdotal evidence tends to converge around the idea of the cybersecurity workforce gap, but there is general optimism that the situation will correct itself in time through "a continued shift toward a more consolidated, services-based cybersecurity approach," as Ghosemajumder put it, as well as by improving education for young people and upskilling and retraining existing workers.
"The workforce is getting larger in absolute terms," Ghosemajumder said. "There is greater interest in cybersecurity and more people going into it, in the workforce as well as in schools, than ever before. When I studied computer science, there were no mandatory security courses. When I studied management, there were no cybersecurity courses at all. Now, cybersecurity is one of the most popular subjects in computer science programs, and is taught in most leading business schools and law schools."
*Post updated 02/12/20 to clarify that Zoncolan has not been open-sourced.