Deployment of Explainable AI for Transparent Threat Analysis and Decision Support in Enterprise Application Security Frameworks

Abstract

This paper investigates the integration of Explainable Artificial Intelligence (XAI) techniques within enterprise application security frameworks to enhance transparency in threat analysis and decision-making processes. As cyber threats become increasingly sophisticated, traditional AI-driven security solutions often act as black boxes, limiting understanding of their outputs. Deploying XAI addresses this challenge by providing interpretable insights into threat detection and response mechanisms, thereby improving trust, accountability, and strategic security management. The study explores methods for embedding XAI in security workflows, evaluates their impact on operational efficacy, and discusses implications for compliance and governance in enterprise environments. By facilitating transparent decision support, XAI empowers stakeholders to make informed choices that strengthen overall security posture.
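The transparency the abstract describes can be illustrated with a minimal sketch: a threat score decomposed into per-feature contributions, so an analyst sees not just a verdict but the signals that drove it. The feature names and weights below are hypothetical, chosen only to show the additive-explanation pattern that XAI methods such as feature attribution provide.

```python
# Hypothetical feature weights for an illustrative threat-scoring model.
# Real deployments would learn these and explain them with an XAI method
# (e.g., feature attribution); this sketch only shows the output shape.
WEIGHTS = {
    "failed_logins": 0.4,
    "off_hours_access": 0.25,
    "new_geolocation": 0.2,
    "privilege_escalation": 0.15,
}

def explain_threat_score(event: dict) -> dict:
    """Return the total score plus each feature's contribution,
    so an analyst can see why an event was flagged."""
    contributions = {
        feature: WEIGHTS[feature] * float(event.get(feature, 0))
        for feature in WEIGHTS
    }
    return {
        "score": sum(contributions.values()),
        "contributions": contributions,
    }

# Example event: one failed-login signal and one new-geolocation signal.
event = {"failed_logins": 1, "new_geolocation": 1}
result = explain_threat_score(event)
# The contributions dict shows which signals produced the score,
# which is the interpretable output an analyst or auditor needs.
```

An opaque classifier would return only `result["score"]`; exposing `result["contributions"]` is what makes the decision reviewable for trust, accountability, and compliance purposes.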
