But some organisations have been hesitant to adopt AI for security.
Firstly, AI models carry novel security vulnerabilities of their own, such as data poisoning or adversarial attacks, in which small, hard-to-detect input changes cause the model to malfunction or let threats slip past. Secondly, many models lack transparency and explainability, making it difficult for human security teams to understand why the AI made a decision or failed to spot a threat.
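To make the adversarial-attack idea concrete, here is a minimal, hypothetical sketch (not from the article, and far simpler than a production detector): a toy linear "threat score" classifier, and a small perturbation in the direction of the model's weights, in the spirit of fast-gradient-sign attacks, that flips its verdict.

```python
# Toy linear classifier: score = dot(weights, features); score >= 0 means
# "malicious". The model, weights, and feature values are all illustrative.

def score(weights, features):
    return sum(w * x for w, x in zip(weights, features))

weights = [0.9, -0.4, 0.7]   # toy model parameters
sample  = [0.2, 0.5, 0.1]    # flagged as malicious: score comes out positive

eps = 0.05                   # small perturbation budget per feature

def sign(v):
    return 1.0 if v > 0 else -1.0

# Nudge each feature slightly against the model's weights -- for a linear
# model, the weights are the gradient, so this is the steepest way to
# lower the score while keeping every change tiny.
adversarial = [x - eps * sign(w) for x, w in zip(sample, weights)]

print(score(weights, sample))       # positive -> detected as malicious
print(score(weights, adversarial))  # negative -> same input, now evades detection
```

Each feature moved by only 0.05, yet the verdict flipped; real attacks apply the same idea to deep models via their gradients, which is why the changes can be practically undetectable to humans.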
Furthermore, there are significant implementation challenges, including the high cost of development, a major shortage of AI-skilled employees to manage and oversee the systems, and concerns about regulatory compliance when large volumes of sensitive data are used to train the models.
Despite these barriers, AI is already being deployed by security teams for use cases like autonomous threat detection and response, advanced threat hunting, automated incident investigation, real-time fraud protection, and more.
So, how is agentic AI working overtime to help security analysts build a more resilient security posture? In the latest episode of AI can do what now?!, Anas Khatri, security solutions architect at Elastic, explains how agentic AI can address skills shortages, reduce alert fatigue, and enhance existing AI security tools. Find out how agentic AI security tools fundamentally shift security operations by combining multiple key components.
- How to protect data regardless of where it resides
- Best practices for effective security logging
- How to build a robust defence against ransomware and safeguard your data