In this Security Edge keynote, Gabby Fredkin, Head of Analytics and Insights at ADAPT, reveals why AI governance and security maturity remain critically low across enterprises.
He says organisations struggle to define ownership and governance of AI risk, despite AI’s dual potential as both a productivity driver and a security threat.
Data governance is now the top investment for CISOs and data leaders, and second for CIOs, yet 62% of data leaders operate with only basic or minimal controls.
Just 3% automate governance decisions, highlighting scalability issues as generative AI gains autonomy.
The CISO is well placed to lead improvements, but without strategic involvement, security leaders risk being sidelined as businesses pursue AI independently.
Frameworks such as NIST’s AI Risk Management Framework and guidance from CSIRO’s Data61 exist, but compliance complexity, especially under Australia’s Privacy Act and APRA’s CPS 230 and CPS 234, continues to slow progress.
ADAPT’s April survey finds most organisations rate themselves below five out of ten in using AI for cyber security and defending against AI-driven attacks, despite its established role in fraud detection and threat analysis.
Interest in agentic AI focuses on automating repetitive tasks (44%), followed by patching, policy enforcement and real-time compliance monitoring.
Key concerns include uncontrolled system access, reduced traceability and skill atrophy from overreliance.
AI currently amplifies existing risks, increasing attack volume and sophistication, particularly in phishing and deepfakes, rather than introducing new ones.
Security teams see promise in generative AI freeing analysts for advanced threat hunting, but unchecked automation raises compliance and visibility risks.
Only 1% of organisations feel fully prepared for enterprise-wide AI adoption, and just 24% express confidence in their readiness.
While 45% of digital leaders expect ROI within a year, 32% face no budget increase, prompting shifts from project-based to product-based delivery.
Administrative and document automation are the most common AI uses, with cyber security second among digital and data leaders.
Despite growing exposure, 23% of organisations offer no formal AI training, and 38% rely on static SharePoint guidelines, leaving users vulnerable to data leaks and prompt manipulation.
AI promises productivity and innovation, but risks remain, including poor ROI, unclear ownership, immature governance and rising compliance pressure.
Distributed accountability, measurable outcomes and user education are key to balancing innovation with control.
Key takeaways:
- AI governance is lagging behind ambition: Although data governance is the top priority for CISOs and data leaders, 62% of organisations still operate with basic or minimal controls. Only 3% have automated decision-making for governance, leaving AI initiatives exposed to poor data quality and unclear accountability.
- Security teams face both opportunity and risk: While 44% of leaders see automation of repetitive tasks as the most promising AI use case, fears of uncontrolled system access and reduced visibility dominate. Most organisations rate themselves below 5/10 in using or defending against AI-driven attacks, showing a maturity gap despite long-term exposure to AI in cyber security tools.
- AI readiness across the business remains low: Just 1% of organisations feel fully prepared to safely harness AI, and 23% offer no formal AI training for employees. This increases the risk of data leakage and shadow AI. Yet, 45% of digital leaders expect ROI within a year.