By Marcelina Horrillo Husillos, Journalist and Correspondent at The European Business Review
The Quest of Governments, Institutions, and Businesses for Regulation
AI has become a powerful tool in the geopolitical arena, where state-sponsored attacks are increasingly common. In the third quarter of 2024, the average number of weekly cyber-attacks per organization climbed to an all-time high of 1,876, a staggering 75% increase over the same period in 2023 and a 15% rise from the previous quarter. The surge underscores a broader trend: cyber threats are becoming both more frequent and more sophisticated. And these attacks are not just about financial gain; they are being used as weapons in global politics, disrupting economies and undermining democratic institutions.
The NATO Cooperative Cyber Defence Centre of Excellence (CCDCOE) has called for international policy governing the use of AI in cyber warfare. Without global cooperation, the AI arms race could escalate, leading to further destabilization.
Artificial intelligence (AI) is rapidly transforming industries and reshaping the global economy, yet this surge in AI integration comes with an alarming downside: the growing risk of AI-driven cyberattacks. From autonomous hacking systems to sophisticated phishing scams, AI is not only empowering businesses—it’s also arming cybercriminals.
The critical question we must address is: Can current cybersecurity strategies evolve fast enough to confront this challenge? And equally important, what role should governments and institutions play in protecting individuals from this unprecedented threat?
AI: Both a Defender and an Attacker
AI offers tremendous opportunities to enhance cybersecurity. Advanced AI-powered systems can now detect anomalies in real time, respond to threats faster than any human, and predict potential vulnerabilities before they can be exploited. According to a Capgemini report, 69% of organizations believe AI is necessary to respond to cybersecurity threats.
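To make the defensive side concrete, here is a minimal sketch of the real-time anomaly detection such systems perform, assuming network flows have already been reduced to a few numeric features. The feature names, scales, and contamination rate are illustrative assumptions, not drawn from any particular product.

```python
# Minimal anomaly-detection sketch: train an Isolation Forest on
# historical (assumed mostly benign) traffic, then flag new flows
# whose feature pattern looks unlike anything seen before.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" flows: [bytes_sent, duration_s, packet_count]
normal_flows = rng.normal(loc=[5_000, 2.0, 40],
                          scale=[1_500, 0.5, 10],
                          size=(1_000, 3))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_flows)

# Score new flows as they arrive; -1 marks an anomaly worth a closer look
new_flows = np.array([
    [5_200, 2.1, 38],       # resembles ordinary traffic
    [900_000, 0.2, 9_000],  # burst typical of exfiltration or a bot
])
for flow, label in zip(new_flows, detector.predict(new_flows)):
    print(flow, "-> ANOMALY" if label == -1 else "-> normal")
```

The value of this pattern is speed: the model scores each flow in microseconds, which is what lets AI-based defenses react faster than any human analyst could.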
However, there is a dark side to this technology. Cybercriminals are increasingly using AI to automate attacks, refine their techniques, and carry out large-scale breaches. Phishing is the clearest example: attacks grow more frequent and sophisticated each year, and 2024 saw an alarming 856% increase in malicious emails. This dramatic rise is fueled by technological advancement, with cybercriminals using generative AI tools to produce convincing phishing emails and malicious code at scale. These are no ordinary phishing attempts: AI can craft highly personalized, convincing messages that easily bypass traditional detection methods, making phishing a far more formidable challenge for individuals and organizations alike.
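A toy sketch shows why traditional detection struggles here: a legacy keyword filter catches a crude, generic lure but waves through a personalized, AI-crafted one. Every phrase and message below is invented for illustration.

```python
# Toy legacy phishing filter: flag emails containing known scam phrases.
SUSPICIOUS_PHRASES = ["verify your account", "urgent action required", "click here"]

def keyword_filter(email_body: str) -> bool:
    """Return True if the email trips the legacy keyword filter."""
    body = email_body.lower()
    return any(phrase in body for phrase in SUSPICIOUS_PHRASES)

generic = "URGENT ACTION REQUIRED: click here to verify your account."
ai_crafted = ("Hi Dana, following up on Tuesday's board pack. The revised "
              "Q3 figures are in the portal; could you sign off today?")

print(keyword_filter(generic))     # True: the crude attempt is caught
print(keyword_filter(ai_crafted))  # False: the personalized lure sails through
```

Nothing in the second message matches a known scam phrase, yet it is the more dangerous email. That is exactly the gap generative AI exploits.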
Worse still, attackers can turn AI against itself through adversarial attacks, in which AI models are manipulated into behaving incorrectly. Attackers have fed carefully crafted inputs to AI systems, causing them to misclassify threats, overlook vulnerabilities, or even grant unauthorized access.
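The mechanics are easiest to see on a toy model. The sketch below assumes a simple linear "malware score" classifier with invented weights; real attacks target far more complex models, but the principle of small, deliberate input changes flipping the verdict is the same.

```python
# Toy evasion attack on a linear classifier (FGSM-style intuition):
# nudge each feature against the sign of its weight, within a small
# budget, so a malicious sample scores as benign.
import numpy as np

w = np.array([2.0, -1.5, 3.0])   # hypothetical learned weights
b = -1.0

def p_malicious(x):
    """Sigmoid of the linear score: probability the sample is malicious."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.8, 0.3, 0.6])    # a sample the model correctly flags
print(f"before attack: p(malicious) = {p_malicious(x):.2f}")      # ~0.88

epsilon = 0.4                    # the attacker's perturbation budget
x_adv = x - epsilon * np.sign(w) # push each feature toward "benign"
print(f"after attack:  p(malicious) = {p_malicious(x_adv):.2f}")  # ~0.34
```

With a modest perturbation, the same underlying sample now scores as probably benign, which is how a probed or poisoned model can be made to overlook a threat.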
Can Cybersecurity Keep Up?
Current cybersecurity frameworks may already be struggling to keep pace with AI-enhanced attacks. Traditional methods like firewalls, anti-virus programs, and manual threat detection cannot effectively counteract the scale, speed, and sophistication of AI-driven breaches. The sheer volume of data AI can process means that a targeted attack can happen in seconds, and often without raising any immediate alarms.
“From my hands-on experience in the cybersecurity domain, one case stands out: a global financial services firm that experienced a breach due to an AI-powered botnet. The botnet, which was designed to infiltrate their network using adaptive learning, bypassed traditional security protocols. Within 12 hours, it had compromised sensitive financial data across five continents. Had the firm been equipped with AI-enhanced cybersecurity systems, the breach may have been detected in real time, rather than weeks later,” states Dr. Amr Kenawy, Executive Director, Growth at Cyberteq.
The Human Element in AI Cybersecurity
Despite the sophistication of AI tools, human expertise remains indispensable. Cybersecurity professionals, especially those specializing in adversarial AI, are crucial in developing countermeasures to AI-driven attacks. Practitioners who have spent years on the front lines of cybersecurity have seen firsthand how AI systems can be deceived by bad actors, and how important human intervention is in mitigating the damage. In one notable incident, an AI-based fraud detection system was bypassed by a carefully designed adversarial attack; only manual inspection by experienced professionals identified the breach and prevented further loss.
In fact, research from MIT found that a hybrid approach combining human oversight with AI resulted in a 22% increase in cybersecurity effectiveness compared to AI alone. This underscores the importance of training cybersecurity teams to collaborate with AI tools rather than relying on automation entirely.
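In practice, that hybrid approach often takes the form of confidence-based triage: the model acts on its most certain verdicts and escalates everything else to an analyst. The sketch below illustrates the pattern; the thresholds and alert fields are illustrative assumptions, not taken from the MIT research.

```python
# Human-in-the-loop triage: automate only the high-confidence verdicts
# and route uncertain alerts to a human analyst queue.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    score: float  # model's estimated probability the alert is a real threat

AUTO_BLOCK = 0.95    # confident enough to act without a human
AUTO_DISMISS = 0.05  # confident enough to close without a human

def triage(alert: Alert) -> str:
    if alert.score >= AUTO_BLOCK:
        return "auto-block"
    if alert.score <= AUTO_DISMISS:
        return "auto-dismiss"
    return "escalate-to-analyst"  # human oversight for the gray zone

for alert in [Alert("10.0.0.7", 0.99), Alert("10.0.0.8", 0.62), Alert("10.0.0.9", 0.01)]:
    print(alert.source_ip, "->", triage(alert))
```

Keeping humans in the gray zone is what defeats adversarial inputs like the fraud-detection bypass described above: an attack engineered to fool the model still has to fool the analyst.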
Governments and Institutions: The Guardians of Cybersecurity
With AI-driven cyberattacks growing in complexity, governments and institutions must play a proactive role in protecting citizens and critical infrastructure. But what should this role look like?
1. Regulation and Standardization
Governments must set clear AI safety standards; so far, few have done so, and this leaves a significant gap in global preparedness. We need enforceable regulations around the ethical use of AI, mandatory cybersecurity protocols for businesses, and greater transparency in how AI systems are developed.
For instance, the European Union's AI Act, agreed in late 2023 and formally adopted in 2024, requires companies to undergo rigorous testing of AI models to ensure their safety, particularly in high-risk applications. Governments worldwide must adopt similar legislation.
2. Public-Private Partnerships
A coordinated response between government agencies and the private sector is vital to combating AI threats. The SolarWinds attack, discovered in late 2020, which compromised multiple U.S. federal agencies, highlighted the need for stronger collaboration between the public and private sectors. Governments must lead initiatives where threat intelligence is shared seamlessly, allowing companies to act quickly on vulnerabilities; a minimal sketch of what that sharing looks like in practice follows.
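Seamless sharing depends on machine-readable formats. Below is a minimal sketch of a threat indicator expressed as a STIX 2.1-style JSON object, the lingua franca of many public-private sharing programs; the ID, IP address, and campaign name are invented for illustration.

```python
# Build a STIX 2.1-style indicator in plain JSON so any participant in
# a sharing program (e.g., via a TAXII server or an ISAC feed) can
# ingest and act on it automatically.
import json
import uuid
from datetime import datetime, timezone

now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.%fZ")

indicator = {
    "type": "indicator",
    "spec_version": "2.1",
    "id": f"indicator--{uuid.uuid4()}",
    "created": now,
    "modified": now,
    "name": "C2 address seen in AI-driven botnet campaign",
    "pattern": "[ipv4-addr:value = '203.0.113.42']",
    "pattern_type": "stix",
    "valid_from": now,
}

print(json.dumps(indicator, indent=2))
```

Once one firm publishes such an indicator, every other participant can block the address within minutes rather than weeks, which is the whole point of the partnership model.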
3. Investment in AI-Driven Cyber Defense
Governments should invest heavily in AI-enhanced cybersecurity technologies. The global cybersecurity market is projected to reach $366 billion by 2028, with AI playing a key role in this growth. By providing grants and incentives to companies developing AI-based defense systems, governments can ensure that their nation’s infrastructure remains secure against AI-driven threats.
4. Education and Public Awareness
Cybersecurity literacy is essential for individuals and businesses alike. One of the biggest challenges we face is public ignorance about AI threats. Google’s Threat Analysis Group reported that over 60% of phishing victims had never heard of AI-generated phishing attacks. Governments must lead educational campaigns that not only inform the public about these emerging threats but also provide actionable advice on how to protect against them.
Governance, Risk, and Compliance: A Critical Pillar in AI Cybersecurity
In the face of AI-driven cyber threats, Governance, Risk, and Compliance (GRC) frameworks are more critical than ever. Effective GRC ensures that organizations not only comply with regulatory requirements but also manage risks proactively. With the rise of AI, these frameworks must evolve to address new challenges.
AI brings significant risk to data integrity, privacy, and security, and organizations must be vigilant in aligning their AI usage with ethical standards and legal requirements.
According to Deloitte's second annual “State of Ethics and Trust in Technology” report, many companies are beginning to test or use generative AI, yet more than half (56%) of respondents do not know, or are unsure, whether their organizations have ethical standards guiding its use. Regulatory exposure is another major concern: improperly governed AI systems can easily breach privacy laws such as the GDPR or CCPA, and AI models used in sensitive industries such as healthcare or finance face particularly close regulatory scrutiny.
Poor GRC practices, by contrast, can lead to AI models unknowingly violating compliance standards, resulting in both financial penalties and reputational damage. Implementing robust GRC measures, including continuous risk assessments and regular audits of AI systems, is essential to ensuring these technologies operate within a safe and lawful framework.
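Some of those controls can be automated. As a simple illustration, the sketch below screens text for obvious personal data before it is sent to an external AI service; the two regex patterns are illustrative assumptions, and a production control would rely on a dedicated PII-detection service covering far more categories.

```python
# Minimal GRC-style gate: block prompts containing obvious PII from
# leaving the compliance boundary (e.g., before calling an AI API).
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def audit_prompt(text: str) -> list[str]:
    """Return the PII categories found; an empty list means the text may pass."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize the complaint from jane.doe@example.com, SSN 123-45-6789."
violations = audit_prompt(prompt)
if violations:
    print("Blocked: prompt contains", ", ".join(violations))
```

A log of every such block also doubles as audit evidence, turning a compliance requirement into a routine, inspectable control.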
Governments and institutions must work together to update compliance requirements to cover the unique risks AI introduces. Failing to address these concerns could leave organizations exposed not only to cyberattacks but also to severe legal and financial consequences.
A Collective Response to an Evolving Threat
AI is both an opportunity and a threat in the world of cybersecurity. On one side, it enhances our defenses and gives us the tools to anticipate attacks. On the other, it empowers adversaries to launch more sophisticated and far-reaching breaches. Governments, institutions, and businesses must come together, combining AI with human expertise and implementing strong regulations to protect individuals from this growing menace.
As we enter this new era of AI-powered threats, the future of cybersecurity will depend not only on technology but also on global collaboration, regulation, and awareness. The stakes are higher than ever—and failure to act could leave us all vulnerable.