In February 2024, an employee at a multinational engineering firm was the target of an AI-driven phishing campaign. The employee was manipulated during a live video call by AI deepfakes of senior executives at the company, including the CFO. Convinced they were talking to real people, the employee executed 15 transactions totaling approximately HK$200 million (about $25 million USD) to criminal-owned bank accounts. The incident highlights the growing threat posed by AI-driven cyber-attacks. We have all heard about how cybercriminals could use AI to drive attacks, but it was always a hypothetical scenario; now it isn’t. As a college student, I have come across my fair share of AI-generated content, mostly memes, and it is shocking how rapidly AI-generated content is starting to look “real,” especially to those who are older or have little experience with technology or AI. This is a newer and more sophisticated form of social engineering, and we need to address it. But how?
How Is AI Used in Cyberspace?
- Threat Detection
AI can analyze vast amounts of data in real time to detect anomalies that indicate a cyber-attack. SIEMs have already been using this kind of technology for threat detection, but AI is making it more advanced by helping to catch subtler attack signatures as well as novel attacks. AI can also observe user behavior through UBA (User Behavior Analytics) models to spot subtle signs of insider threats or compromised credentials, flagging and blocking potential indicators of compromise. Companies like Darktrace use AI to detect threats that traditional firewalls and antivirus software might consider “normal,” decreasing the chance of false negatives.
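To make this concrete, here is a minimal sketch of the kind of anomaly detection a UBA-style tool might perform. It uses scikit-learn’s IsolationForest on made-up login features; the feature names, numbers, and thresholds are purely illustrative, not any vendor’s actual model.

```python
# Toy UBA-style anomaly detection: learn "normal" login behavior,
# then flag sessions that deviate from it.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated baseline: [login_hour, MB_downloaded, failed_logins] per session
normal_sessions = np.column_stack([
    rng.normal(10, 2, 500),   # most logins happen mid-morning
    rng.normal(50, 15, 500),  # modest data transfer
    rng.poisson(0.2, 500),    # almost no failed logins
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# New activity: a 3 a.m. login pulling 5 GB after 6 failed attempts
suspicious = np.array([[3, 5000, 6]])
print("anomaly score:", round(model.decision_function(suspicious)[0], 3))  # lower = more anomalous
print("flag for review" if model.predict(suspicious)[0] == -1 else "looks normal")
```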
- Incident Response Automation
When a breach is detected, an IPS (Intrusion Prevention System) can mitigate or potentially eliminate the risk in real time. By incorporating AI into incident response, companies can reduce the amount of manual intervention needed. For example, AI can logically quarantine affected devices on its own, saving time and money and potentially preventing data loss during critical attacks.
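As a simplified sketch (not any vendor’s actual playbook), an automated containment step might look something like the code below. The alert format is an assumption, and the containment action assumes a Linux gateway the responder controls; real SOAR platforms talk to EDR and firewall APIs and usually include approval gates.

```python
# Minimal auto-containment sketch: block a suspected-compromised host
# when a high-confidence alert comes in, otherwise hand off to a human.
import subprocess

def quarantine_host(ip: str, dry_run: bool = True) -> None:
    """Drop all forwarded traffic from the host (illustrative iptables rule)."""
    cmd = ["iptables", "-I", "FORWARD", "-s", ip, "-j", "DROP"]
    if dry_run:
        print("would run:", " ".join(cmd))
    else:
        subprocess.run(cmd, check=True)  # requires root on the gateway

def handle_alert(alert: dict) -> None:
    # Only auto-contain when the detector is confident; otherwise page an analyst.
    if alert["severity"] == "critical" and alert["confidence"] >= 0.9:
        quarantine_host(alert["source_ip"])
        print(f"quarantined {alert['source_ip']} ({alert['rule']})")
    else:
        print("forwarded to analyst queue:", alert["rule"])

handle_alert({"rule": "ransomware-beacon", "severity": "critical",
              "confidence": 0.97, "source_ip": "10.0.4.23"})
```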
- Phishing and Spam Detection
AI models can be trained on large collections of known phishing and spam messages, including AI-generated ones, and stop similar messages from reaching your inbox. Google’s Gmail reportedly blocks around 100 million phishing emails daily using AI. But AI isn’t limited to spam detection. Take the Hong Kong employee as an example: if an AI model trained on real footage and known deepfakes had been in place, it could have used facial recognition to flag the call as a deepfake and voice pattern analysis to verify the executives’ identities, potentially preventing the attack and saving the company millions in losses.
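Under the hood, a basic spam or phishing filter is often just a text classifier. Here is a bare-bones, illustrative sketch using scikit-learn with a tiny hand-written dataset; real filters like Gmail’s learn from vastly larger datasets and many non-text signals.

```python
# Toy phishing/spam text classifier: TF-IDF features + logistic regression.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

emails = [
    "Your account is locked, verify your password here immediately",
    "Urgent: wire transfer needed, reply with bank details",
    "You won a lottery prize, send your personal info to claim it",
    "Meeting moved to 3pm, agenda attached",
    "Here are the lecture notes from today's class",
    "Lunch on Friday? The new place downtown looks good",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing/spam, 0 = legitimate

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, labels)

test = "Please verify your password to avoid account suspension"
print("phishing probability:", round(clf.predict_proba([test])[0][1], 2))
```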
How Are Attackers and Cybercriminals Using AI?
- AI Deepfakes
The Hong Kong incident is just one of many ways threat actors can use AI in social engineering. Deepfakes can also be used for extortion, where an attacker creates a deepfake video of a victim and demands money under the threat of releasing it publicly. Deepfakes can also fuel digital smear campaigns against political opponents, discrediting them and skewing public opinion.
- Phishing Using AI
Attackers can use AI to generate personalized, grammatically sound emails that are much harder to spot as phishing. Tools like ChatGPT can be misused to write fake job offers or password reset emails. Attackers also target older people and children, who are more susceptible to such content on the Internet, by asking for personal information in exchange for lottery winnings or in-game currency.
- Attacks on AI systems
Ironically, attackers can also use AI to attack AI systems. Hackers can poison the training data of AI threat detection tools with mislabeled examples (false positives and false negatives), decreasing the likelihood that a real cyber-attack is detected. Chatbots such as ChatGPT and Gemini can likewise be fed false information, corrupting their knowledge bases and their responses.
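To see why this matters, here is a small, purely illustrative sketch (scikit-learn on synthetic data, not any real product’s pipeline) of how mislabeled training data can quietly lower a detector’s ability to catch real attacks. The dataset, model, and poisoning rate are all assumptions chosen only to show the effect.

```python
# Toy training-data poisoning demo: relabel many "malicious" training samples
# as "benign" and compare how many real attacks each model still catches.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score

# label 1 = malicious event, label 0 = benign event
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison: flip 60% of the malicious training labels to benign
rng = np.random.default_rng(0)
mal_idx = np.where(y_train == 1)[0]
flip = rng.choice(mal_idx, size=int(0.6 * len(mal_idx)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flip] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Recall on the malicious class = share of real attacks still detected
print("clean model catches   :", recall_score(y_test, clean.predict(X_test)))
print("poisoned model catches:", recall_score(y_test, poisoned.predict(X_test)))
```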
These examples are just a small portion of the impact and use of AI in cybersecurity and cyber-attacks. This dual use has created what is referred to as the “digital arms race.” As defenders are building smarter tools, attackers are using AI to adapt even faster. The question is, what are governments, lawmakers, and organizations doing to make AI trustworthy, ethical, and resilient to manipulation?
The Evolution of Cybersecurity Frameworks
Cybersecurity frameworks, such as the NIST Cybersecurity Framework (CSF), ISO/IEC 27001, and the CIS Controls, have served as the backbone for building secure digital environments. Every cybersecurity college course or online certification will teach you about these frameworks and how important they are. They guide organizations in identifying vulnerabilities, protecting critical assets, detecting threats, responding to incidents, and recovering from cyber-attacks.
These frameworks, however, don’t need to be completely rewritten from the ground up because of AI; they need to be reshaped and updated in their implementation. For instance, the “Detect” and “Respond” functions of the NIST CSF can be enhanced by using machine learning algorithms that analyze network behavior in real-time. Instead of relying solely on pre-defined rules or manual monitoring, AI can now flag anomalies in milliseconds, adapt to threats, and even initiate automated responses without human intervention. This kind of speed and adaptability is something these static frameworks need, and it’s pushing both industry and policymakers to rethink how these guidelines must evolve.
Still, deepfakes, zero-day AI attacks, and automated phishing campaigns create complex challenges that existing frameworks were never designed to handle. As a result, there is a growing demand for AI-aware cybersecurity standards and frameworks that not only use AI but also defend against it. Moving forward, that means embedding bias detection, model transparency, and accountability directly into cybersecurity policy.
In short, AI is no longer just a tool that fits into cybersecurity frameworks; it’s becoming a force that is redefining how those frameworks operate and evolve. As organizations increasingly use AI and AI tools, they will need to rethink compliance, retrain their workforce, and restructure their digital defenses to align with the capabilities and the risks that AI introduces.
Legislation and Governance around AI
AI is rapidly becoming a key part of our day-to-day lives, and lawmakers are starting to catch up. But why are AI laws so important? Left unchecked and unregulated, AI can spiral out of control and become very dangerous. Think of it as a powerful force that, if set loose, could do untold damage to systems and people across the world. Think about how it could affect sensitive areas like personal data, surveillance, law enforcement, and even military systems.
This has, in turn, raised questions about the ethical and moral use of AI. Who controls these AI systems? Who keeps the people who own them in check? Where is AI being used that we don’t know about? Who takes responsibility when something goes wrong? This brings us to the laws and policies surrounding AI.
- EU Artificial Intelligence Act (August 2025)
It is the first comprehensive law to regulate AI across the entire European Union. It classifies AI systems into four risk levels: unacceptable, high, limited, and minimal. High-risk systems must meet strict criteria related to cybersecurity, transparency, and governance. They must also be robust, accurate, secure, and resilient to cyber threats such as data poisoning and tampering, and they require incident reporting and technical documentation. These systems must be designed with security in mind, with risk management across their lifecycle and fallback mechanisms. The act is on track to become a global model, much as the GDPR (General Data Protection Regulation) did for data privacy.
- U.S. Executive Order on Safe, Secure, and Trustworthy AI (October 2023)
This executive order contains federal directives on AI safety and security within the United States. It requires developers of the most powerful AI systems to share their safety and red-team test results with the U.S. government before releasing their models publicly. New standards will also be developed to protect against the use of AI in engineering dangerous biological materials, through federal funding conditions and screening of biological synthesis. Agencies such as the Department of Homeland Security and the Department of Commerce must create standards for AI safety, including watermarking AI-generated content and guidelines for the use of AI in critical infrastructure. A new board will oversee the use of AI in sectors such as the power grid and finance, and a National Security Memorandum will direct further action on AI and its security.
- United Nations Artificial Intelligence Resolution (March 2024)
The United Nations passed its first-ever global resolution on AI, supported by 123 countries, including the U.S., China, and EU member states. The goal of the resolution was to promote the safe, secure, and trustworthy use of AI worldwide. It urged nations to develop risk-based regulations that consider human rights, cybersecurity, and overall transparency, and encouraged governments to integrate cyber resilience and threat detection into their national AI strategies. The resolution also aimed to foster global cooperation to help developing countries with less AI infrastructure.
Such laws are only the beginning of a future shaped by artificial intelligence. As time goes on, laws and regulations will become more comprehensive and detailed in response to ever-growing cyber threats.
Conclusion
All these laws and regulations are aimed at companies and governments, but what should we, as members of society, do about this AI buzz? Well, you don’t need to be an expert in AI and ML, or a cybersecurity student (like me), to understand what AI is and what good and bad can be done with it. We have all seen how AI is used in movies like Mission Impossible, and even though most of it is fiction, some of it is slowly becoming reality.
As time goes by, AI is becoming a part of our lives, whether we like it or not, and the best thing to do is adapt. It should all start with educating yourself about AI and keeping up to date with global trends in technology, even if through social media. Understand how deepfakes work and what they look like so that if someone tries to attempt a social engineering attack on you, it is easier to identify what is real and what is not. Inform the people around you, your grandparents, even your parents, and everyone else who has limited exposure to technology, so that they do not become victims of AI-driven phishing attacks.
We’re entering a world where AI will play a major role in everything. The question isn’t whether or when it is coming; it’s whether we’re ready for it. As students, we are the future cybersecurity analysts, policymakers, lawmakers, and leaders, and therefore we should educate ourselves about this new, rapidly developing double-edged sword: artificial intelligence.







