Easily the hottest topic in computer science of the last 20 years, artificial intelligence is considered the next significant stratum of technology as mankind endeavors to have machines do our thinking for us. It seems that every walk of life is being targeted for some application of AI, with advancements in the space being announced daily.
The cybersecurity industry is primed for the power of artificial intelligence: it promises the bulk processing capability to efficiently handle the astounding mass of security data collected every second, and to learn and make faster analytical decisions about threats and response tactics. In fact, many vendors in the security tooling space already claim to do just that.
Exactly how and to what extent AI is currently being leveraged in cybersecurity is an interesting study in the focused application of machine learning. While the term “AI” seems to be overused in the industry at present, the real-world benefits of artificial intelligence techniques are lending new power to threat detection.
Artificial intelligence, today—really
Our colloquial understanding of artificial intelligence is born mostly of Hollywood tropes, the kind that power androids with seemingly humanlike intelligence. Hopefully, anyone reading this article has an understanding of AI more firmly rooted in science and engineering. How does a computer functionally replicate human cognition and decision making to a degree worthy of the term "intelligence"? While leaps are being made daily, we're not quite there yet for practical application.
A more practical definition of artificial intelligence would describe comprehensive and complex conditional logic programmed to automatically learn from data in some purposeful way. AI encompasses a set of technologies, including computer vision and image recognition, machine learning, and natural language processing, among other disciplines; these often work in concert, simulating the neural networks of the human brain to determine relationships and meaning from inputs. Most applications of AI today process data, huge amounts of data, within the scope of some pre-programmed model. One subset of AI, machine learning, is a methodology that allows software to become better and more accurate at its designed purpose by algorithmically improving its output over a large number of iterations.
Machine learning (ML) falls into two categories: supervised and unsupervised. Supervised ML trains on labeled examples and is tightly constrained in both its input data models and expected output, making it a useful tool for very specific purposes. Unsupervised ML, on the other hand, works without labels or much predetermined modeling of inputs, allowing it to surface unknown but hopefully insightful outputs. One example is grouping and aggregating an unknown dataset; an unsupervised machine learning algorithm might find useful structure that data analysts didn't know existed.
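To make the unsupervised case concrete, here is a minimal sketch of grouping unlabeled endpoint telemetry with a tiny k-means clustering routine. The event data and its interpretation are entirely hypothetical, and real tooling would use a mature library (such as scikit-learn) with far richer features; this only illustrates the idea of structure emerging from unlabeled data.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: assign each point to its nearest centroid,
    recompute centroids as cluster means, and repeat."""
    # Deterministic, naive initialization (assumes k=2 for this sketch).
    centroids = [points[0], points[-1]]
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: (p[0] - centroids[i][0]) ** 2
                            + (p[1] - centroids[i][1]) ** 2,
            )
            clusters[nearest].append(p)
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = (
                    sum(x for x, _ in members) / len(members),
                    sum(y for _, y in members) / len(members),
                )
    return centroids, clusters

# Hypothetical, unlabeled telemetry: (failed logins per hour, MB transferred)
events = [(1, 5), (2, 6), (1, 7), (2, 5),   # typical workstation behavior
          (40, 3), (42, 4), (38, 2)]        # a burst of failed logins
centroids, clusters = kmeans(events, k=2)
for c, members in zip(centroids, clusters):
    print(f"centroid={c} size={len(members)}")
# → centroid=(1.5, 5.75) size=4
# → centroid=(40.0, 3.0) size=3
```

No one told the algorithm which hosts were "normal"; the split into a quiet group and a noisy group falls out of the data itself, which is exactly the kind of insight analysts hope unsupervised ML will surface.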
Exploring how AI can improve cybersecurity tooling
In terms of AI application, the cybersecurity industry has a clear motive: to protect people, and the assets and data they produce, from damaging attacks by threat actors. The industry frequently employs metaphors of warfare (defense, offense, the threat landscape) because they're fitting. Most modern attacks are motivated by financial gain or by espionage campaigns run by nation-states.
Like the 1980s G.I. Joe cartoon that imprinted kids with the credo "knowing is half the battle," information and intelligence about cyber adversaries are critical assets for both sides of the fence. The industry's primary defensive strategy has been to collect massive amounts of data from multiple sources, including device inventories, endpoint (workstation and server) telemetry such as installed applications and running processes, and inbound and outbound network traffic. Analyzing this data can reveal evidence of successful or attempted attacks, either in real time or retrospectively.
Today, a nearly inconceivable volume of data is collected every minute of every hour by tools designed specifically to analyze and correlate disparate data and detect potentially malicious activity. In many cases, detection is accomplished using traditional data analysis methods supported by human-led expertise—analysts reviewing computer-guided results to confirm their accuracy and advising on the best next step.
Artificial intelligence could revolutionize the efficacy of this data analysis in numerous ways: saving money by automating rote tasks that otherwise demand human attention, handling complex anomaly detection and filtering, and supporting the in-depth investigation of security incidents. Machine learning in particular excels at finding correlations in data, often surfacing the anomalies that signal suspicious activity.
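As a simple illustration of statistical anomaly detection (not any particular vendor's method), the sketch below flags hosts whose outbound-connection counts sit far from the mean. The data and threshold are hypothetical, and production tools use far more sophisticated models, but the principle of scoring deviation from a baseline is the same.

```python
import statistics

def zscore_outliers(values, threshold=2.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)  # population standard deviation
    return [v for v in values if stdev and abs(v - mean) / stdev > threshold]

# Hypothetical outbound-connection counts per host over one hour.
counts = [12, 15, 11, 14, 13, 12, 16, 900]  # one host beaconing heavily
print(zscore_outliers(counts))  # → [900]
```

A rule this crude would drown an analyst in false positives at scale; the promise of ML is learning per-host, per-time-of-day baselines so that "far from normal" is judged in context.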
The reality of AI in modern cybersecurity tools
If a new capability from a cybersecurity software vendor claims “Artificial Intelligence” as a supporting technology, feel free to be skeptical. Today, the term is used liberally to describe any kind of advanced processing. There is often a major rift between engineering, product management, and marketing departments in terms of what a product actually does and what’s on the label. At this point in time, machine learning is an extremely useful application of AI because data is our greatest resource in detection of cyberattacks. It’s not AI in the sense that it can autonomously learn and make decisions without human guidance, but it is incredibly valuable nonetheless.
Here’s an example from an unnamed XDR vendor describing their product:
Not all XDR solutions are created equal. For instance, an AI-driven XDR offering leverages artificial intelligence and machine learning to scale and bring efficiency to their detection and response efforts. These capabilities enable security teams to quickly understand the entire malicious operation from root cause across every affected device and user.
This is a good description of the current limits of AI in cybersecurity, summed up in a product's feature list: a tool that handles big data and develops useful, actionable information for analysts more efficiently.
What’s the downside to products erroneously touting AI?
While AI in security tooling is exciting, expectations should be tempered for any single technology. Buying into the marketing hype of AI can instill a false sense of security. This is particularly dangerous for small and medium-sized businesses (SMBs) with a defined, often strict budget for security tools: they need a swift solution, and they need it to be as comprehensive as possible. Marketing claims for AI-enhanced security tools can overpromise in an attempt to be everything in a single package.
Ultimately, an effective security posture is the result of a mature program, one specifically developed for an organization. This is true even with the latest tools at our disposal, like AI.
SMBs must get beyond the idea of a magic-bullet product; while understandably appealing, such products aren't a replacement for good strategy. Here's an example: the Center for Internet Security (CIS) publishes a list of 18 controls, a prioritized set of tactics to "mitigate the most prevalent cyber-attacks against systems and networks […] mapped to and referenced by multiple legal, regulatory, and policy frameworks." This list is considered the recommended starting point for maturing a security program, and it ranks malware defenses at #10, after asset inventory, data protection, vulnerability management, and audit logging.
And guess what ranks #2 in security tooling spending? Malware defenses. Malware defense tooling, the very area where AI-powered detection is focused, is routinely the loudest message we encounter in the market.
So, what if we can’t rely on AI?
The messaging-driven promises of AI change nothing. The course to an effective security posture is the same.
The trick is to dismiss the idea that a product's "AI" provides true intelligence. Many EDR, SIEM, and now XDR products are strong tools when taken for their actual capabilities and specific security-boosting functionality. The AI features of these tools are powerful, but their scopes are specific.
Businesses looking to improve their cybersecurity program can begin by implementing a standardized control framework such as those from CIS or SANS, lists compiled by researchers working outside the interests of tooling vendors. Implement CIS controls 7 and 8, then circle back to controls 1 and 2.
Endpoint vulnerabilities are responsible for the lion's share of cyberattacks today; building a program to deal with them effectively and quickly should be a business's primary focus. Additionally, good vulnerability and patching programs depend on accurate asset inventory, which brings us back to CIS controls 1 and 2.
This is the most effective strategy as we know it. Products utilizing AI will be more powerful than those that do not, but like any tool they will perform a prescribed function and not provide the full solution.
What’s required for AI to gain efficacy in cybersecurity?
We’ll get there, in time—but it may take longer than marketing campaigns would have us believe. Thankfully, other applications of artificial intelligence have proven to be successful.
Consider its application in games like chess: it was more than 25 years ago that Garry Kasparov was first beaten by Deep Blue, an IBM supercomputer that had been in development for over 11 years. Modern applications are even more impressive and are being developed at a much quicker pace. Google's AlphaZero project reportedly learned the game and defeated the reigning computer world champion in just 4 hours. Notably, that champion was Stockfish 9, itself a piece of software.
It’s key to acknowledge that games are a good example of well-defined scenarios with known data models and tightly constrained actions—an application where AI excels. The more defined and automated a task in security, the better AI applications perform. If we can build an AlphaZero to learn specific, well-defined security problems, it could change cybersecurity forever.
What’s next for now?
Despite promising news headlines on computing power increasing at exponential multiples, Hollywood artificial intelligence is still a dream—at least for the cybersecurity industry. But strategically applied contemporary AI techniques will make our tools immensely more powerful.
Security effectiveness and program maturity continue to be achieved through practice and time, led by human experts implementing control frameworks and working with tooling that's similarly architected and orchestrated by clever humans.
There is no simple solution—no superior intelligence that can yet handle the myriad dimensions of cyber threats on our behalf—but if we stay the established course for security maturity and technological innovation, it’s likely we’ll eventually see real artificial intelligence as an effective addition to cybersecurity tools. For now, let’s focus on building comprehensive cybersecurity programs that last, with an eye on the horizon. We’ve got to remain at least one step ahead of threat actors.
Source
This article was originally penned by Adam Gray, Chief Technology Officer for Novacoast, on behalf of Pillr for the Ai4 2022 conference.