New Study: Nearly Half of Companies Exclude Cybersecurity Teams When Developing, Onboarding and Implementing AI Solutions

ISACA research shows automating threat detection/response and endpoint security are the most popular applications of AI for security operations

SCHAUMBURG, Ill.–(BUSINESS WIRE)–Only 35 percent of cybersecurity professionals or teams are involved in the development of policy governing the use of AI technology in their enterprise, and nearly half (45 percent) report no involvement in the development, onboarding, or implementation of AI solutions, according to the recently released 2024 State of Cybersecurity survey report from ISACA, a global professional association advancing trust in technology.


In response to new questions in the annual study, which is sponsored by Adobe and reflects feedback from more than 1,800 cybersecurity professionals on topics related to the cybersecurity workforce and threat landscape, security teams reported that they are primarily using AI for:

  • Automating threat detection/response (28 percent)
  • Endpoint security (27 percent)
  • Automating routine security tasks (24 percent)
  • Fraud detection (13 percent)

“In light of cybersecurity staffing issues and increased stress among professionals in the face of a complex threat landscape, AI’s potential to automate and streamline certain tasks and lighten workloads is certainly worth exploring,” says Jon Brandt, ISACA Director, Professional Practices and Innovation. “But cybersecurity leaders cannot singularly focus on AI’s role in security operations. It is imperative that the security function be involved in the development, onboarding and implementation of any AI solution within their enterprise, including existing products that later receive AI capabilities.”

Exploring the Latest AI Developments

In addition to the 2024 State of Cybersecurity survey report findings on AI, ISACA has been developing AI resources to help cybersecurity and other digital trust professionals navigate this transformational technology:

  • EU AI Act white paper: Enterprises need to be aware of the timeline and action items involved with the EU AI Act, which sets requirements for certain AI systems used in the European Union and bans certain AI uses, with most provisions applying beginning 2 August 2026. ISACA’s new white paper, Understanding the EU AI Act: Requirements and Next Steps, recommends some key steps, including instituting audits and traceability, adapting existing cybersecurity and privacy policies and programs, and designating an AI lead who can be tasked with tracking AI tools in use and the enterprise’s broader approach to AI.
  • Authentication in the deepfake era: Cybersecurity professionals should be aware of both the advantages and risks of AI-driven adaptive authentication, says the new ISACA resource, Examining Authentication in the Deepfake Era. While AI can strengthen security when used in adaptive authentication systems that adjust to each user’s behavior, making it harder for attackers to gain access, AI systems can also be manipulated through adversarial attacks, are susceptible to algorithmic bias, and can raise ethical and privacy concerns. Other developments, including research into integrating AI with quantum computing that could have implications for cybersecurity authentication, should be monitored, according to the paper.
  • AI policy considerations: Organizations adopting a generative AI policy can ask themselves a set of key questions to ensure they are covering their bases, according to ISACA’s Considerations for Implementing a Generative Artificial Intelligence Policy—including “Who is impacted by the policy scope?”, “What does good behavior look like, and what are the acceptable terms of use?” and “How will your organization ensure legal and compliance requirements are met?”
