Organizations can enhance their preparedness, response, and recovery by leveraging agentic AI for cybersecurity. AI agents enable organizations to predict and address vulnerabilities by:
- monitoring the digital landscape 24/7
- detecting anomalies
- responding to threats more quickly than humans can
For example, AppSec AI agents like Aptori can integrate into your IDE and CI/CD pipeline to run automated pentests that check whether your APIs are free of vulnerabilities.
Examples of AI agents in cybersecurity
- Tier 1 agents are responsible for the initial detection and triage of a potential security threat.
- Tier 2 agents are responsible for taking actions like:
- isolating affected systems
- removing malware
- patching vulnerabilities
- restoring compromised data
- Tier 3 agents are responsible for leveraging security tools for threat hunting and in-depth analysis (a minimal sketch of how the tiers fit together follows this list). These agents often have capabilities like:
- automated threat detection
- complex vulnerability scanning
- pentesting
- malware analysis
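To make the tiering concrete, here is a minimal Python sketch of how a Tier 1 triage agent might route an alert and escalate to Tier 2 containment actions. Every name, threshold, and action here is an illustrative assumption, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    event_type: str
    severity: int  # 1 (low) .. 10 (critical); scale is an assumption

def tier1_triage(alert: Alert) -> str:
    """Tier 1: initial detection and triage -> route the alert."""
    if alert.severity >= 8:
        return "escalate_tier2"
    if alert.severity >= 4:
        return "investigate"
    return "log_only"

def tier2_respond(alert: Alert) -> list[str]:
    """Tier 2: containment actions (stubs for isolation/malware removal)."""
    actions = [f"isolate_host({alert.source_ip})"]
    if alert.event_type == "malware":
        actions.append("remove_malware")
    return actions

alert = Alert(source_ip="10.0.0.7", event_type="malware", severity=9)
if tier1_triage(alert) == "escalate_tier2":
    print(tier2_respond(alert))
```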
Agentic AI and security operations (SecOps)
Security operations (SecOps) is a collaborative approach between IT security and IT operations teams focused on proactively identifying, detecting, and responding to cyber threats.
The problem: SecOps teams face serious alert fatigue: they deal with vast data from diverse systems and rapidly evolving threats while navigating complex organizational structures and compliance requirements.
How can agentic AI help: AI is especially effective at “reasoning tasks” such as analyzing alerts, conducting predictive research, and synthesizing data from tools.
Thus, AI agents in SecOps can help automate tasks that require real-time analysis and decision-making, such as phishing detection, malware analysis, credential-breach response, lateral-movement detection, and incident response.
For example, these tools can be trained on MITRE ATT&CK knowledge bases to mimic the expertise of human analysts, or use incident response playbooks (sketched after this list) to:
- enrich alerts
- detect impacted systems
- isolate/triage infected systems
- create incident reports
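One way to picture such a playbook is as an ordered series of steps an agent walks through on a shared context. The sketch below is a hypothetical Python outline under that assumption; the step names, fields, and the reputation check are all stand-ins for real enrichment services:

```python
# Minimal playbook sketch: each step enriches a shared alert context.
def enrich_alert(ctx):
    # Stand-in for a real reputation lookup (203.0.113.0/24 is a docs range).
    ctx["reputation"] = "malicious" if ctx["src_ip"].startswith("203.") else "unknown"
    return ctx

def detect_impacted_systems(ctx):
    ctx["impacted"] = [ctx.get("host", "unknown-host")]
    return ctx

def isolate_systems(ctx):
    ctx["actions"] = [f"isolate {h}" for h in ctx["impacted"]]
    return ctx

def create_report(ctx):
    ctx["report"] = (f"{ctx['type']} alert from {ctx['src_ip']} "
                     f"({ctx['reputation']}); actions: {ctx['actions']}")
    return ctx

PLAYBOOK = [enrich_alert, detect_impacted_systems, isolate_systems, create_report]

ctx = {"type": "phishing", "src_ip": "203.0.113.5", "host": "laptop-42"}
for step in PLAYBOOK:
    ctx = step(ctx)
print(ctx["report"])
```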
Real-life use cases: Agentic AI in SecOps
1. Triage and investigation
Agentic AI handles security alerts before they reach human analysts, automating the triage and investigation process by imitating human SOC workflows and decision-making. In initial triage and investigation, AI agents can leverage the following (a minimal sketch follows the list):
Alert deduplication: Identifying duplicate events to reduce noise.
Alerts grouping: Clustering alerts related to a specific asset (e.g., endpoint, server).
Alert enrichment: Adding critical context for more effective investigations, including:
- IOC (indicator of compromise) enrichment:
- Checking whether an IP address is on a blacklist
- Comparing file hashes against malware databases
- Machine enrichment: providing data about affected systems
- Account enrichment: providing data about user identities
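The deduplication, grouping, and enrichment steps above can be illustrated with a short Python sketch. The fingerprinting approach and the in-memory blacklist are simplifying assumptions; a real agent would call threat-intelligence services:

```python
import hashlib
from collections import defaultdict

alerts = [
    {"asset": "web-01", "rule": "port-scan", "src": "198.51.100.9"},
    {"asset": "web-01", "rule": "port-scan", "src": "198.51.100.9"},  # duplicate
    {"asset": "db-02", "rule": "failed-login", "src": "198.51.100.9"},
]

# Deduplication: fingerprint each alert and keep only the first occurrence.
seen, unique = set(), []
for a in alerts:
    fp = hashlib.sha256(repr(sorted(a.items())).encode()).hexdigest()
    if fp not in seen:
        seen.add(fp)
        unique.append(a)

# Grouping: cluster the remaining alerts by affected asset.
groups = defaultdict(list)
for a in unique:
    groups[a["asset"]].append(a)

# Enrichment: attach IOC context (the blacklist set is a stand-in here).
BLACKLIST = {"198.51.100.9"}
for a in unique:
    a["ioc_blacklisted"] = a["src"] in BLACKLIST

print(dict(groups))
```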
Real-life case study: AI agents leveraging triage and investigation
Challenges — A digital insurance company serving over 2 million customers faced issues handling large volumes of claims and managing policies efficiently.
The company’s early security configuration required manual alert management, which was resource-intensive. This created several challenges, including:
- High volume of security alerts: As the volume of security alerts increased, the SOC team struggled to investigate each one manually.
- Time-consuming processes: Manually investigating each alert required significant work from the SOC team. Analysts had to filter through large amounts of data to detect potential risks.
- Need for continuous 24/7 monitoring: Maintaining 24/7 surveillance with a human-only team was challenging and costly.
Solutions and outcome: The company deployed a cybersecurity AI agent and integrated this agent with existing systems like AWS, Google Workspace, and Okta. The following outcomes have been achieved:
- Reducing the manual burden allowed SOC analysts to prioritize higher-value tasks.
- Continuous monitoring ensured no alerts were missed, providing a higher level of vigilance than a human-only team could maintain.
- Detailed investigation reports provided a granular level of analysis, increasing visibility into indicators of compromise (IOCs).
- Reduction in false positives improved accuracy in threat detection, allowing the team to focus on major risks.
2. Adaptive threat hunting
Agentic AI can be used in cybersecurity systems to detect and respond to threats in real time. For example, these agents can identify unusual network behavior and autonomously isolate impacted devices to prevent a compromise, without human intervention.
While threat hunting, AI agents take several actions (see the sketch after this list), including:
Decomposing the alert:
- Indicator classification: Categorizing the alerts into various types of indicators:
- Atomic indicators: Basic elements like IP addresses, domain names, email addresses, and file hashes.
- Computed indicators: Information derived from data analysis, such as malware file sizes or encoded strings.
- Behavioral indicators: Patterns of behavior, including tactics, techniques, and procedures (TTPs) employed by threat actors.
Searching for atomic (e.g., IP addresses) and computed (e.g., file hashes) indicators:
- Creating queries to search historical data across SIEMs, or other relevant tools for the identified IOCs.
- Accessing numerous systems and querying all relevant platforms simultaneously to collect data from many sources.
Analyzing behavioral indicators:
- Mapping behavioral indicators to known tactics, techniques, and procedures using frameworks like MITRE ATT&CK.
- Searching historical alerts and data across connected systems.
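As a rough illustration of decomposing an alert and generating searches, the Python sketch below classifies an indicator by type and builds a SIEM-style query for it. The classification rules and query syntax are illustrative assumptions, not a specific SIEM's query language:

```python
import re

def classify_indicator(value: str) -> str:
    """Classify an indicator as atomic, computed, or behavioral (simplified)."""
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", value):
        return "atomic"          # e.g., an IP address
    if re.fullmatch(r"[0-9a-f]{64}", value):
        return "computed"        # e.g., a SHA-256 file hash
    return "behavioral"          # e.g., a TTP description

def build_query(indicator: str, kind: str) -> str:
    """Translate an indicator into a SIEM-style search (illustrative syntax)."""
    field = {"atomic": "src_ip", "computed": "file_hash"}.get(kind, "ttp")
    return f'search index=security {field}="{indicator}" earliest=-30d'

for ioc in ["203.0.113.77", "a" * 64, "lateral movement via SMB"]:
    kind = classify_indicator(ioc)
    print(kind, "->", build_query(ioc, kind))
```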
Real-life case study: AI agents leveraging threat hunting
The University of Kansas Health System, one of the Midwest’s largest medical providers, serves almost 2.5 million patients across three hospitals.
Challenges — The University of Kansas Health System had difficulty coordinating incident response. Some of the key challenges included:
- Lack of visibility: Distributed systems and tools made it challenging to mitigate threats across the entire attack surface.
- Limited incident response: No centralized or standardized process for response caused poor coordination between teams.
- Employee resource constraints: A small team of employees managed the entire incident response workload, leading to overextension and burnout.
Solutions and outcome: The University of Kansas Health System implemented a security platform with agentic AI capabilities to improve visibility and automate incident response and threat hunting. The following outcomes have been achieved:
- Visibility across systems increased by over 98%.
- Detection coverage improved by 110% within six months.
- Automated incident response processes filtered and resolved 74,826 out of 75,000 alerts, escalating only 174 for manual review.
- True positives among escalated alerts totaled 38, reducing noise and enabling focused responses.
3. Response actions
Generating infrastructure as code: Using code to manage and provision computing resources instead of manual processes (see the sketch after this list); examples include:
- Generating OpenTofu and Pulumi templates for remediation, ready for DevOps review.
- Configuring components like operating systems, middleware, and applications.
Performing endpoint actions: Entering a response action command in the console's input area.
Security controls: Updating blocklists or firewall rules as new security incidents emerge.
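To make the first and last items concrete, here is a hedged Python sketch of an agent that updates a blocklist and stages an OpenTofu/HCL change for DevOps review rather than applying it directly. The resource names, file path, and the CIDR are illustrative assumptions:

```python
def generate_opentofu_block_rule(malicious_cidr: str) -> str:
    """Emit an OpenTofu/HCL snippet that denies a CIDR; written for DevOps
    review, not applied automatically. Resource names are illustrative."""
    return f'''
resource "aws_network_acl_rule" "agent_block" {{
  network_acl_id = var.acl_id
  rule_number    = 100
  protocol       = "-1"
  rule_action    = "deny"
  cidr_block     = "{malicious_cidr}"
}}
'''.strip()

blocklist: set[str] = set()

def update_blocklist(indicator: str) -> None:
    """Security-control update: record the indicator and stage an IaC change."""
    blocklist.add(indicator)
    with open("remediation.tf", "w") as f:
        f.write(generate_opentofu_block_rule(indicator + "/32"))

update_blocklist("203.0.113.99")
print(open("remediation.tf").read())
```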
Real-life case study: AI agents leveraging response actions
Challenges — APi Group, a contracting and distribution organization, faced challenges managing IT security across acquisitions as part of its growth strategy:
- Diverse technology stacks: Acquired companies came with varied and often incompatible IT security technology stacks (alongside the Microsoft E5 security suite).
- Visibility across the ecosystem: The company’s expanding attack surface from acquisitions created blind spots.
Solutions and outcome: To address the above challenges, APi Group implemented ReliaQuest’s agentic AI platform to enhance threat detection for its Microsoft environments. The following outcomes have been achieved:
- Reduced response times by 52% through automation and integrated playbooks.
- Achieved a 47% increase in visibility across Microsoft 365, Cisco, and Palo Alto stacks.
- Expanded MITRE ATT&CK coverage by 275%, enabling better prioritization of resources.
Agentic AI and application security (AppSec)
Application security entails protecting apps during their full lifecycle, which covers design, development, deployment, and continuous maintenance.
The problem: As hosted apps have become key revenue drivers for public-scale enterprises, their security has grown in importance, creating recent trends such as:
- Wide adoption of cloud and SaaS applications has moved security earlier in the SDLC to minimize risks before they reach production.
- With the increase in cloud-native development, more workloads have migrated to third-party platforms such as AWS, leaving apps’ attack surface more exposed to vulnerabilities.
As the attack surface has grown, attackers have developed new and inventive methods of compromising apps.
How can agentic AI help: Agentic AI can help enhance AppSec by integrating into and automating various stages of the application lifecycle, including monitoring your CI/CD pipelines or automating your penetration testing.
Real-life use cases: Agentic AI in AppSec
4. Risk identification
Agentic AI serves as a vigilant sentinel, continuously analyzing your environment for threats and potential vulnerabilities in applications and code bases. AI agents can execute external and internal discovery to identify threats (see the sketch after these lists):
External discovery:
- Storing and classifying data about your apps and APIs.
- Scanning for exposed web servers.
- Discovering open ports on internet-facing IP addresses.
Internal discovery:
- Evaluating runtime configurations, identifying issues, and prioritizing them.
- API accessibility & functionality visualization
- App-API visualization and usage
- Agentless AWS & Azure API workload monitoring
- App traffic volume & pattern analysis
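A bare-bones version of external discovery might look like the Python sketch below: walk an asset inventory and probe each host for open ports. The inventory, hostnames, and port list are illustrative assumptions, and such probes should only run against assets you own or are authorized to scan:

```python
import socket

def discover_open_ports(host: str, ports=(22, 80, 443, 8080)) -> list[int]:
    """External discovery: probe a host for open ports (scan only assets you own)."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Inventory of internet-facing assets the agent stores and classifies
# (hostnames are placeholders).
inventory = {"app": ["example.com"], "api": ["api.example.com"]}
for kind, hosts in inventory.items():
    for h in hosts:
        print(kind, h, discover_open_ports(h))
```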
Real-life tool example: Tools like Ghost integrate into CI/CD pipelines to provide continuous visibility and risk analysis during application development.
5. Application test creation and adaptation
AI agents generate tests automatically based on user interactions with the application. As testers or developers use the tool to capture test cases, the AI monitors them and creates test scripts.
If the application’s UI changes (for example, an element’s ID or the layout changes), the AI agent can detect these changes and adapt the test scripts to avoid failures.
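The "self-healing" idea can be sketched in a few lines of Python: when the recorded element ID no longer exists, fall back to more stable attributes. The page model is a plain dict here as a simplifying assumption; a real agent would query the live DOM through a driver such as Selenium:

```python
# Minimal self-healing locator sketch; element data is mocked.
page_elements = [
    {"id": "btn-submit-v2", "text": "Submit", "role": "button"},  # ID was renamed
    {"id": "btn-cancel", "text": "Cancel", "role": "button"},
]

def find_element(primary_id: str, fallback: dict) -> dict | None:
    # Try the recorded locator first.
    for el in page_elements:
        if el["id"] == primary_id:
            return el
    # Heal: match on stable attributes (text + role) and report the fix.
    for el in page_elements:
        if all(el.get(k) == v for k, v in fallback.items()):
            print(f"healed locator: '{primary_id}' -> '{el['id']}'")
            return el
    return None

# Recorded test step whose element ID no longer exists after a UI change.
element = find_element("btn-submit", {"text": "Submit", "role": "button"})
assert element is not None and element["role"] == "button"
```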
6. Dynamic application test execution
Agentic AI continuously executes tests in varied contexts (e.g., across multiple browsers and devices) without human interaction. The AI agents can schedule tests and analyze application behavior autonomously to ensure complete testing coverage.
They can also dynamically customize test parameters, such as varying user data inputs or changing network conditions, to allow for a more thorough application analysis.
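One way to realize this parameter sweeping is a cartesian product over browsers, network profiles, and inputs, as in the Python sketch below. The test body is a stub and the profiles are assumptions; a real runner would drive actual browsers:

```python
import itertools

browsers = ["chromium", "firefox", "webkit"]
network_profiles = {"fast": 0.0, "slow-3g": 2.0}   # simulated extra latency (s)
user_inputs = [{"qty": 1}, {"qty": 9999}]          # normal vs. boundary value

def run_checkout_test(browser: str, latency: float, data: dict) -> bool:
    """Stand-in for a real UI test; browser and latency would shape a real run."""
    return data["qty"] < 10000  # pretend the app rejects huge quantities

# The agent sweeps every combination autonomously instead of a human picking a few.
for browser, (profile, latency), data in itertools.product(
        browsers, network_profiles.items(), user_inputs):
    ok = run_checkout_test(browser, latency, data)
    print(f"{browser:8} {profile:7} qty={data['qty']:<5} -> {'PASS' if ok else 'FAIL'}")
```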
7. Autonomous reporting and predictive suggestions
AI agents can examine application testing data autonomously, identifying failure patterns and determining root causes.
For example, if numerous tests fail due to the same problem, the AI agent combines the findings and highlights the underlying issue to the development team.
Based on previous test data, the AI agents can predict potential future failures and recommend application testing methodologies to address these issues.
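The "combine failures with the same root cause" step can be approximated by normalizing error messages into signatures and counting clusters, as in this small Python sketch (the failure messages and normalization rule are illustrative assumptions):

```python
from collections import Counter
import re

failures = [
    "TimeoutError: element #pay-btn not found",
    "TimeoutError: element #pay-btn not found",
    "AssertionError: expected 200, got 500 on /api/cart",
    "TimeoutError: element #pay-btn not found",
]

def signature(message: str) -> str:
    """Normalize a failure message so equivalent errors cluster together."""
    kind, detail = message.split(":", 1)
    return kind + ":" + re.sub(r"\d+", "<n>", detail)

clusters = Counter(signature(m) for m in failures)
root_cause, count = clusters.most_common(1)[0]
print(f"underlying issue ({count}/{len(failures)} failures): {root_cause}")
```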
8. Autonomous remediation
Agentic AI automates the remediation process. For example, if the AI agent detects that certain tests are redundant or do not adequately cover specific risks, it can optimize the test suite by removing unrelated tests and prioritizing those that focus on more relevant areas.
The AI agent can also detect when a test fails due to minor errors (such as a minor UI change) and “remediate” the test script to comply with the revised application, eliminating false positives and requiring less manual involvement.
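The redundancy-pruning idea can be sketched as dropping any test whose covered code paths are fully subsumed by another test. Coverage sets are mocked here as an assumption; a real agent would pull them from a coverage tool:

```python
# Minimal test-suite optimization sketch; coverage data is mocked.
suite = {
    "test_login":     {"auth.login", "auth.session"},
    "test_login_dup": {"auth.login"},                 # subsumed by test_login
    "test_checkout":  {"cart.add", "cart.pay"},
}

def prune(suite: dict[str, set[str]]) -> dict[str, set[str]]:
    kept: dict[str, set[str]] = {}
    # Consider the broadest tests first so narrower duplicates get dropped.
    for name, cov in sorted(suite.items(), key=lambda kv: -len(kv[1])):
        if not any(cov <= other for other in kept.values()):
            kept[name] = cov
    return kept

print(sorted(prune(suite)))  # ['test_checkout', 'test_login']
```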
9. Automated pentesting
Agentic AI automates the penetration testing process, including the identification of vulnerabilities, generation of attack plans, and their execution. Some key practices of AI agents in pentesting initiatives include the following (a probe sketch follows the list):
Real-time adversary simulation:
- Conducting simulations like network, application, and social engineering attacks.
- Executing penetration tests such as DAST (dynamic application security testing).
Reconnaissance:
- Scanning the internet, including the deep, dark, and surface web, to detect exposed IT assets (e.g., open ports, misconfigured cloud buckets).
- Integrating OSINT (open-source intelligence) and threat intelligence to map attack surfaces.
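A single DAST-style check can illustrate the execution side: send a benign marker to a request parameter and see whether the response reflects it unescaped, a common reflected-XSS signal. The target URL is hypothetical, and such probes belong only on applications you are authorized to test:

```python
import urllib.parse
import urllib.request

MARKER = "<agentprobe-42>"  # benign, unique payload to look for in the response

def probe_reflected_xss(base_url: str, param: str) -> bool:
    """DAST-style check: is the payload reflected unescaped in the response?
    Only run against applications you are authorized to test."""
    url = f"{base_url}?{urllib.parse.urlencode({param: MARKER})}"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return MARKER in resp.read().decode(errors="replace")
    except OSError:
        return False  # unreachable host or network error: treat as no finding

# Hypothetical target from the agent's reconnaissance inventory.
if probe_reflected_xss("http://testapp.example/search", "q"):
    print("possible reflected XSS: payload echoed without encoding")
```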
Real-life tool example: Tools like FireCompass provide semantic testing for APIs, creating tailored attack scenarios that automate pentesting efforts.
4 benefits of Agentic AI for security teams
By implementing an agentic AI strategy, SOCs may gain tremendous benefits in terms of operational efficiency and team morale. Here are four major benefits of this technology:
- Finding more attacks: Agentic AI evaluates each alert, connects data from several sources, and conducts extensive investigations. This allows SOCs to identify detection signals that indicate real attacks, exposing dangers that would otherwise go undetected.
- Reducing mean time to response (MTTR): By minimizing the manual bottleneck of triage and investigation, Agentic AI accelerates remediation, reducing MTTR.
- Increasing productivity: Agentic AI allows for the review of each security alert, which would be difficult for human analysts to perform on a large scale. This relieves analysts of repetitive jobs, allowing them to focus on more complicated security initiatives and strategic work.
- Enhancing analyst retention: Agentic AI improves analyst morale and retention by performing routine triage and investigation work, transforming the role of SOC analysts. Instead of performing tedious, repetitive tasks, analysts can concentrate on evaluating reports and pursuing high-value initiatives. This shift increases job satisfaction, which helps retain skilled analysts and improves overall productivity.
Challenges of agentic AI in cybersecurity
1. Lack of transparency and interpretability
- Opaque decision-making: AI-driven security operations and systems can be difficult to interpret, especially when they modify security policies or make decisions on their own. Test engineers and developers may struggle to comprehend why certain actions were taken or to confirm the AI’s decisions.
- Trust and reliability: Without explicit explanations, it might be difficult for teams to trust the AI’s recommendations or revisions, leading to resistance to implementing agentic AI solutions.
2. Data quality concerns
- Data reliance: AI agents need diverse data to learn how to perform actions effectively. Insufficient or biased data can result in false actions or incorrect forecasts.
- Edge cases in system configurations: If an organization’s IT infrastructure includes bespoke configurations or rare software combinations, an AI agent may misinterpret normal behaviors as anomalies or fail to detect genuine threats.
3. Maintaining reliability
- False positives and negatives: Agentic AI can incorrectly classify data related to SecOps or AppSec, resulting in false positives (reporting bugs when none exist) or false negatives (failing to detect actual issues). These errors may compromise trust in the system and require manual intervention to validate results.
- Adaptability problems: Although agentic AI is designed to adapt to changes, certain complex or unexpected changes in the application (for example, major UI redesigns or backend architecture changes) may still cause security operations to fail, necessitating human intervention to update the AI’s models.
4. Complexity of implementation
- Difficulty in secure API integration: AI agents frequently interface with external systems, so protecting those APIs is critical. Measures such as API tokenization and validation help ensure a reliable interaction.
- Training and deployment: Agentic AI models should be trained on large datasets and diverse scenarios to be effective, which can be resource-intensive and time-consuming.
5. Human oversight requirements
- Continuous monitoring: While agentic AI aims to reduce human involvement, it still requires monitoring and maintenance to ensure that it functions properly. Security teams need to verify the AI’s results, adjust models as needed, and get involved when the AI encounters complex or unexpected scenarios.
- Highly skilled personnel requirements: Managing agentic AI necessitates expertise in AI, machine learning, or application security. Organizations may have difficulty finding or training staff with the required skills.
What is Agentic AI: The path from LLMs
Agentic AI, also known as autonomous AI or self-directed AI, refers to artificial intelligence systems that can operate autonomously to achieve specific goals.
Unlike traditional AI systems, which require human input and guidance, agentic AI systems can make decisions, take actions, and learn from their experiences without ongoing human interaction.
This is an important shift from the current most typical application of AI, which frequently involves LLMs and humans interacting with AI via prompts.
- LLMs specialize in processing and generating language or ideas based on user prompts. They use techniques like:
- Prompt engineering: writing instructions that guide AI models to produce specific responses.
- Retrieval-augmented generation (RAG): improving the accuracy of generative AI models with facts fetched from external sources.
- AI agents, by contrast, are action-oriented systems. They autonomously perform tasks such as scanning networks to find unusual activity or managing workflows with minimal human oversight (a minimal agent-loop sketch follows).
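The structural difference can be shown as a loop: instead of one prompt-in/text-out call, an agent repeatedly plans a step, invokes a tool, and observes the result until the goal is met. In this Python sketch, plan_next_step() is a stand-in for the LLM reasoning call, and the tools are trivial stubs:

```python
# Minimal agent-loop sketch; all tool and function names are illustrative.
def scan_network() -> dict:
    return {"anomaly": True, "host": "10.0.0.7"}

def isolate_host(host: str) -> str:
    return f"{host} isolated"

TOOLS = {"scan_network": scan_network, "isolate_host": isolate_host}

def plan_next_step(state: dict):
    """Stand-in for the LLM 'reasoning' step: decide the next action."""
    if "scan" not in state:
        return ("scan_network", {})
    if state["scan"]["anomaly"] and "isolation" not in state:
        return ("isolate_host", {"host": state["scan"]["host"]})
    return None  # goal reached

state: dict = {}
while (step := plan_next_step(state)) is not None:
    tool, args = step
    result = TOOLS[tool](**args)
    state["scan" if tool == "scan_network" else "isolation"] = result
    print(f"{tool} -> {result}")
```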
For more: Agentic AI: 5 steps from chatbots to secure enterprise AI agents.
Agentic AI for cybersecurity
In cybersecurity, agentic AI functions as an autonomous decision-maker capable of monitoring networks and analyzing data to take proactive security measures against threats.
Unlike traditional security systems that depend on pre-defined rules and manual interventions—often too slow or narrow to address modern threats—agentic AI leverages its ability to learn dynamically from its environment. It can take responsive actions, automate software development processes, or automate pentesting.
This autonomy allows agentic AI to respond to attacks more effectively than human-controlled systems, providing enhanced agility.