I just spent a significant part of my Master’s degree building a tool to automate security auditing. My Python script can query an entire AWS environment in minutes, checking thousands of configurations against regulations like DORA (the EU’s Digital Operational Resilience Act). It can find an exposed port or a misconfigured S3 bucket far faster and more reliably than any human.
So, it might seem strange for me to now argue for the limits of the very thing I worked so hard to create.
But building this tool gave me a profound appreciation not for what automation can do, but for what it cannot. We are racing to automate GRC (governance, risk, and compliance) and security operations, and for good reason. Research shows that traditional, manual audits are insufficient for detecting real-time risks in dynamic cloud environments, often failing to provide the continuous validation that regulations now demand (Karakasilioti, 2024). Yet, in our rush to convert every compliance check into a line of code, we risk forgetting that the most critical security decisions are not binary.
The future of security isn’t a world without human auditors; it’s a world where human auditors are freed from robotic work to do what only they can. Here’s where human judgment remains undefeated.
1. Understanding Intent and Business Context #
My scanner can tell you that an IAM role has `"Action": "*"` and `"Resource": "*"` permissions. It will flag this, correctly, as a high-risk configuration.
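For illustration, a check of that shape is easy to write. The sketch below is a simplified version of the idea, not my scanner’s actual code: it inspects customer-managed policies via boto3 (assuming read-only IAM credentials), and the function names are my own.

```python
import boto3

def _as_list(value):
    """IAM policy fields can be a single string or a list; normalise to a list."""
    return value if isinstance(value, list) else [value]

def find_wildcard_policies():
    """Flag customer-managed IAM policies that allow every action on every resource."""
    iam = boto3.client("iam")
    findings = []
    for page in iam.get_paginator("list_policies").paginate(Scope="Local"):
        for policy in page["Policies"]:
            document = iam.get_policy_version(
                PolicyArn=policy["Arn"],
                VersionId=policy["DefaultVersionId"],
            )["PolicyVersion"]["Document"]  # boto3 parses the policy JSON into a dict
            statements = document.get("Statement", [])
            if isinstance(statements, dict):  # a single statement may not be list-wrapped
                statements = [statements]
            for stmt in statements:
                if (
                    stmt.get("Effect") == "Allow"
                    and "*" in _as_list(stmt.get("Action", []))
                    and "*" in _as_list(stmt.get("Resource", []))
                ):
                    findings.append(policy["Arn"])
    return findings
```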
What my scanner cannot tell you is why that policy exists.
- Is it a forgotten remnant from a hasty development sprint? (A finding to be fixed immediately).
- Or is it a temporary, highly-monitored role created for a critical data migration that has been documented and approved by the CISO? (An accepted, managed risk).
An automated tool sees a deviation from a baseline. A human expert sees a potential story. As research highlights, a misconfiguration flagged by a scanner does not always translate to a direct compliance violation, requiring further contextual analysis and validation (Prakash et al., 2024). The tool flags the “what,” but only a human can truly investigate the “why” by asking about business processes, change requests, and intent. This ability to fuse technical data with business context is the difference between generating noise and generating actionable intelligence.
2. Identifying Novel Threats and “Unknown Unknowns” #
AI and machine learning models are exceptionally good at pattern recognition. We can train them on petabytes of log data to spot the known signatures of malware or the typical patterns of a brute-force attack.
The problem is that our adversaries are creative. They are constantly developing new attack vectors and TTPs (Tactics, Techniques, and Procedures) that have no historical precedent. An AI model is fundamentally limited by its training data; it cannot identify a threat it has never been taught to recognize.
This is where human intuition, creativity, and lateral thinking come into play. A seasoned threat hunter can connect disparate, weak signals that an algorithm would dismiss as statistical noise. They might notice a minor performance anomaly, a strange login time, and a small, unexpected data egress. Individually, these are non-events. A human, drawing on experience, can piece them together to uncover a novel attack campaign. AI is brilliant at finding the needle in the haystack; human intuition is what tells you to check a different haystack entirely.
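To make the “statistical noise” point concrete, here is a toy sketch with entirely made-up scores and thresholds (not a real detector): each weak signal falls below a per-signal alert threshold, which is exactly how rule-by-rule automation dismisses the clues a human would connect.

```python
# Toy numbers only: three weak signals scored independently against one
# per-signal alert threshold. None crosses it, so a rule-by-rule engine
# stays silent -- even though the combination might be the start of a story.
ALERT_THRESHOLD = 0.8

signals = {
    "performance_anomaly": 0.35,  # minor, easily blamed on a noisy neighbour
    "odd_login_time": 0.40,       # unusual hour, but not impossible
    "small_data_egress": 0.30,    # a few hundred megabytes, well under any cap
}

alerts = {name: score >= ALERT_THRESHOLD for name, score in signals.items()}
print(alerts)  # every value is False: nothing escalates on its own

# A threat hunter asks a different question: do these weak signals share a
# host, an identity, a time window? That is a judgement about narrative,
# not about any single score -- and it is the part that resists automation.
```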
3. Navigating Strategic Ambiguity and Risk Appetite #
Regulations like DORA or standards like ISO 27001 are written to be comprehensive, but they are not always prescriptive. They contain phrases like “appropriate measures” and “sound security.”
What is “appropriate” for a global bank is different from what is “appropriate” for a small FinTech startup. This is the domain of risk appetite. A business might make a conscious, strategic decision to accept a certain level of risk because the cost of mitigation is far greater than the potential impact, or because it enables a key competitive advantage.
An automated scanner is a purist. It has no concept of risk appetite. It will flag every single deviation from its predefined “perfect” state. A human advisor, however, can participate in a strategic conversation. They can present the findings from the automated tool and then ask, “Given our business goals, what is our tolerance for this specific risk? Should we fix it, or should we formally accept, document, and monitor it?”
This ability to navigate ambiguity and make judgment calls about acceptable risk is a fundamentally human, strategic function that separates a compliance checker from a true security advisor.
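The closest automation can get is to carry that human decision forward. Below is a minimal sketch of a risk-acceptance register a scanner could consult before raising a finding; the data model and field names are my own illustration, not part of any standard or of my tool.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RiskAcceptance:
    """A formally documented, human-approved decision to accept a finding."""
    finding_id: str    # e.g. "IAM-WILDCARD-ADMIN-ROLE" (illustrative ID scheme)
    approved_by: str   # the accountable owner, e.g. the CISO
    rationale: str     # the business context no scanner can infer on its own
    review_by: date    # accepted risk still has an expiry date

def triage(finding_id: str, register: list[RiskAcceptance], today: date) -> str:
    """Route a finding according to the documented risk appetite."""
    for acceptance in register:
        if acceptance.finding_id == finding_id:
            if today <= acceptance.review_by:
                return "accepted risk: keep monitoring, do not page anyone"
            return "acceptance expired: escalate for human review"
    return "unreviewed finding: escalate for human review"
```

Every entry in that register is authored by a human; the automation only honours, and re-checks the expiry of, a decision it could never have made itself.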
The Future is a Cyborg Auditor, Not an Automaton #
The goal of automation should not be to replace the human security professional, but to augment them. Research increasingly points toward hybrid security models that combine automated testing with expert analysis as the most comprehensive approach to security coverage (Xiong & Bu, 2024). This ideal is a “cyborg” model, where machine and human work in a symbiotic loop (sketched in code after this list):
- Automation does the grunt work: It continuously scans the vast landscape, handles the millions of mundane checks, and filters out the noise.
- The machine escalates: It presents a prioritized, contextualized list of the most complex, ambiguous, or novel anomalies to the human expert.
- The human investigates and decides: Freed from tedious tasks, the human applies their deep contextual knowledge, intuition, and strategic thinking to the problems that matter most.
- The human teaches the machine: The outcomes of these human-led investigations are fed back into the system, fine-tuning the models to make them smarter for the next iteration.
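Put together, the loop looks roughly like the sketch below. Every function here is a toy stand-in (random findings, a fixed escalation cut-off) meant only to show the shape of the cycle, not a real pipeline.

```python
import random

def scan_environment(num_checks: int = 1000) -> list[dict]:
    """Stand-in for the scanner: emit findings, each with a raw risk score."""
    return [{"id": f"finding-{i}", "risk": random.random()} for i in range(num_checks)]

def cyborg_audit_cycle(accepted_ids: set[str], labelled_history: list[dict]) -> list[dict]:
    """One pass of the human-machine loop (toy stand-ins throughout)."""
    # 1. Automation does the grunt work: scan everything, drop accepted risks.
    findings = [f for f in scan_environment() if f["id"] not in accepted_ids]

    # 2. The machine escalates: only the highest-risk slice reaches a person.
    escalated = sorted(findings, key=lambda f: f["risk"], reverse=True)[:20]

    # 3. The human investigates and decides (represented here by a placeholder verdict).
    decisions = [{**f, "human_verdict": "needs fix"} for f in escalated]

    # 4. The human teaches the machine: verdicts become labelled data for retraining.
    labelled_history.extend(decisions)
    return decisions
```

The design choice that matters is step 4: human verdicts are not just outcomes, they are the feedback that makes the next automated pass sharper.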
I am proud of the scanner I built. It is a powerful tool that can make organizations more secure. But its greatest strength is that it allows us to stop wasting our best human minds on problems a machine can solve, and instead aim their formidable judgment at the problems a machine cannot.
References #
- Karakasilioti, G.M., 2024. Supporting the digital operational resilience of the financial sector: the EU’s DORA Digital Operational Resilience Act. Available at: https://dione.lib.unipi.gr/xmlui/handle/unipi/16273.
- Prakash, S., Malaiyappan, J.N.A., Thirunavukkarasu, K. and Devan, M., 2024. Achieving regulatory compliance in cloud computing through ML. AIJMR-Advanced International Journal of Multidisciplinary Research, 2(2).
- Xiong, Y. and Bu, H., 2024, October. Adaptive Security Policy Modeling for Intelligent Financial Systems. In 2024 8th International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC) (pp. 843-848). IEEE.