Technology and security leaders are warning that artificial intelligence is reshaping data privacy risk across Australia and New Zealand, as organisations prepare for Data Privacy Day and confront a surge in AI-driven tools, agents and attacks.
Executives from Qualys, CyberArk and SailPoint say rapid AI adoption is outpacing governance, expanding attack surfaces and exposing gaps in how companies manage access to sensitive information. They highlight that many of the most serious incidents now stem from identity and access weaknesses rather than novel technical exploits.
They also argue that data privacy can no longer be treated as a narrow compliance or storage problem. They frame it instead as an access, identity and accountability challenge in an environment where humans and machines share decision-making.
Shadow AI
Sam Salehi, Managing Director ANZ at Qualys, said the daily use of consumer and third-party AI tools by employees is creating exposure that security teams struggle to see or control.
“On Data Privacy Day, enterprises should confront a modern reality: we’re handing over more data than we realise – not in a single breach moment, but through thousands of fast, everyday decisions. LLMs have made ‘copy, paste, prompt’ the new workflow. Teams drop documents, code, incident notes, customer details and internal strategy into tools that feel helpful – even when they sit outside approved environments,” Salehi said.
He said shadow IT has evolved into shadow AI, creating an unobserved risk surface that security teams cannot properly see, govern or audit. At the same time, attackers are using AI to scale what already works: phishing and deepfakes are becoming more convincing, and the line between real and fake is blurring at speed, making privacy and security inseparable.
“The response shouldn’t be a blanket ban. Enterprises need to treat AI like any material risk surface: know what’s being used, control what’s being shared, and enforce guardrails based on business context – with approved pathways, strong access controls, clear handling rules and continuous monitoring. The fundamentals still apply. The attack surface is just more conversational now. And here’s the catch: the next privacy incident won’t always look like a breach. Sometimes it’ll look like productivity,” said Salehi.
Security practitioners in the region report that so-called shadow AI is emerging within marketing, software development and operations teams. These groups often adopt AI assistants and generative tools outside formal procurement or security review processes, which makes data flows hard to trace and audit.
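As a concrete illustration of the “approved pathways” Salehi describes, the sketch below screens outbound prompts for obviously sensitive content before they reach an external LLM. It is a minimal sketch under stated assumptions: the patterns and redaction format are invented for illustration, and a production deployment would rely on a DLP engine and organisation-specific classifiers rather than hand-written regexes.

```python
import re

# Illustrative patterns only -- real deployments would draw on a DLP
# engine and organisation-specific classifiers, not two regexes.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
    "api_key": re.compile(r"\b(?:sk|pk|ghp)_[A-Za-z0-9]{20,}\b"),
}

def screen_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive spans and report which policies fired."""
    findings = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, findings

clean, hits = screen_prompt(
    "Customer card 4111 1111 1111 1111 declined, see incident notes.")
print(hits)   # ['credit_card']
print(clean)  # Customer card [REDACTED:credit_card] declined, ...
```

The value of routing prompts through a choke point like this lies less in the patterns themselves than in the audit trail: every redaction event shows the security team which kinds of data staff are attempting to share, and with which tools.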
AI accountability
CyberArk’s Area Vice President ANZ, Thomas Fikentscher, said the discussion around AI and privacy now extends beyond data collection into questions of liability and control when systems act autonomously.
“As AI systems move from analysis to autonomous decision-making, Data Privacy Day is no longer just about how data is collected or stored – it’s about accountability. Organisations are deploying AI into high-impact environments faster than governance frameworks can keep up, raising hard questions around liability, data quality and oversight when AI-driven systems produce unintended consequences,” Fikentscher said.
He also said that while the scope of what AI may achieve is fascinating, a responsibility gap is widening as AI decisions touch people, outcomes and trust. For companies, the priority must be to secure AI at the point of greatest privacy risk: the AI agent itself. These agents operate with a speed, scale and level of access that frequently exceed those of human users, creating a new class of highly privileged identity.
“Treating AI agents as trusted software rather than privileged identities is a very risky endeavour. In a hybrid world of human and machine collaboration, agentic AI security becomes a core privacy control – requiring least-privilege access, continuous monitoring and clear human accountability. With regulation still evolving, organisations must take the lead to protect privacy in the AI era,” said Fikentscher.
Security firms describe AI agents as machine identities that often sit deep inside business processes. These agents can initiate transactions, retrieve or alter records and interact with other systems without constant human oversight. That pattern raises concerns around excessive access, opaque decision-making and unclear lines of responsibility when something goes wrong.
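Fikentscher’s point about treating agents as privileged identities can be made concrete with a small sketch: instead of holding a standing API key, an agent requests a short-lived, narrowly scoped credential, and every issuance is logged against a named human owner. The broker, scope strings and TTL below are hypothetical assumptions for illustration, not CyberArk’s API.

```python
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class AgentGrant:
    """A short-lived, least-privilege credential issued to one agent."""
    agent_id: str
    scope: str        # e.g. "erp:invoices:read" -- never a blanket admin role
    owner: str        # the human accountable for this agent's actions
    expires_at: float
    token: str = field(default_factory=lambda: uuid.uuid4().hex)

    def is_valid(self, requested_scope: str) -> bool:
        return requested_scope == self.scope and time.time() < self.expires_at

class AgentBroker:
    """Hypothetical broker that issues and audits agent credentials."""
    def __init__(self, ttl_seconds: int = 300):
        self.ttl = ttl_seconds
        self.audit_log: list[dict] = []

    def issue(self, agent_id: str, scope: str, owner: str) -> AgentGrant:
        grant = AgentGrant(agent_id, scope, owner, time.time() + self.ttl)
        # Every grant is attributable to a named human owner.
        self.audit_log.append({"agent": agent_id, "scope": scope,
                               "owner": owner, "at": time.time()})
        return grant

broker = AgentBroker()
grant = broker.issue("invoice-agent-7", "erp:invoices:read", owner="j.smith")
assert grant.is_valid("erp:invoices:read")       # scoped access works
assert not grant.is_valid("erp:invoices:write")  # anything broader is denied
```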
Privilege sprawl
CyberArk Senior Manager, PAM Transformation ANZ, Olly Stimpson, said recent privacy incidents in Australia and New Zealand continue to exploit gaps in basic access controls.
“In an increasingly chaotic world, the idea of ‘taking control’ is no longer just an aspiration but a business imperative, in particular when it comes to handling data. While we struck an optimistic tone on Data Privacy Day last year, 2025 regrettably saw several high-profile incidents across Australia and New Zealand in which significant volumes of personal data were compromised,” Stimpson said.
He also said the incidents primarily exploited old weaknesses, such as social engineering, stolen credentials and lax identity and access controls, rather than fresh tactics, revealing a chronic lack of control in corporate technology systems. The result has been numerous data breaches and eroded trust.
He said the heart of the problem is a mismatch between how organisations now operate and outdated access governance: privileged access now extends beyond administrators to cloud services, third parties, automated workloads and machine-driven processes. Without visibility and control over this sprawl, attackers can simply log in and move laterally through trusted systems, a danger that grows more acute as businesses accelerate their adoption of AI.
“The rush to implement AI – and reap the rewards it promises – is increasingly colliding with a lack of control that is already evident across many environments. AI initiatives don’t replace existing access models – they sit on top of them, inheriting the same privilege gaps and blind spots. Without strong, modern privileged access management in place, AI becomes a force multiplier for risk, increasing the speed, scale and potential impact of identity-driven attacks,” said Stimpson.
He added that in this environment, privacy, security and access cannot be treated as independent issues. Strengthening PAM foundations, he noted, is not just about reducing today’s exposure; it is about ensuring that businesses can adopt AI and automation without exacerbating the serious risks they are already struggling to manage.
Security teams in the region are revisiting privileged access management (PAM) projects as AI rollouts extend into finance, operations and customer service functions. They report that legacy models often assume a limited number of privileged users, which no longer reflects environments that include partners, cloud platforms and automated workloads.
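The sprawl Stimpson describes can be pictured as an inventory problem: privileged entitlements held not only by administrators but by workloads, third parties and services, many never re-reviewed. The toy check below flags standing admin rights on non-human identities and stale access reviews; the identity kinds, entitlement names and 90-day threshold are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Identity:
    name: str
    kind: str                   # "human", "third_party" or "workload"
    entitlements: set[str]
    last_reviewed_days: int

# Illustrative inventory -- in practice this would come from IdP,
# cloud IAM and vault exports, not a hard-coded list.
INVENTORY = [
    Identity("ops-admin", "human", {"prod:admin"}, 30),
    Identity("ci-runner", "workload", {"prod:deploy", "prod:admin"}, 400),
    Identity("vendor-etl", "third_party", {"db:read", "db:write"}, 200),
]

def flag_sprawl(identities, max_review_age=90):
    """Flag non-human identities with admin rights or stale reviews."""
    for ident in identities:
        over_privileged = (ident.kind != "human"
                           and "prod:admin" in ident.entitlements)
        stale = ident.last_reviewed_days > max_review_age
        if over_privileged or stale:
            yield ident.name, {"over_privileged": over_privileged,
                               "stale": stale}

for name, reasons in flag_sprawl(INVENTORY):
    print(name, reasons)
# ci-runner {'over_privileged': True, 'stale': True}
# vendor-etl {'over_privileged': False, 'stale': True}
```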
Access over storage
Gary Savarino, Identity Strategist for APAC at SailPoint, said organisations now face a structural challenge in tracking who, and what, can interact with sensitive data. He noted that organisations have entered a new era of complexity, and that on Data Privacy Day those across Australia and New Zealand confront a straightforward but unsettling question: “Do we really understand who, or what, has access to our most sensitive data?”
“AI agents now act autonomously, machine identities are multiplying, and sensitive data is constantly moving between systems, people and services. The security perimeters organisations once relied on, including networks, departments and firewalls, no longer hold. Attackers understand this shift. Increasingly, they are not exploiting new technical vulnerabilities, but walking straight through the front door using compromised, over-privileged or poorly governed identities,” Savarino said.
Savarino said SailPoint’s data points to a widening governance gap: 82% of businesses use AI agents, yet fewer than half have sufficient controls in place. These agents routinely gain access to sensitive data, often beyond their intended scope, testing the limits of standard identity measures. Static, manual access models cannot keep pace with dynamic, machine-driven environments, leaving organisations that meet compliance requirements but lack true access control. As a result, he said, data privacy in 2026 will mean more than safeguarding data at rest; it will require ongoing, accountable access governance.
“It is about understanding access in real time, reducing unnecessary privilege and adapting controls as risk changes. Without that shift, the rapid adoption of AI will accelerate exposure, not innovation. It is time to move from static identity management to adaptive identity. That means treating identity security as the control layer for data privacy by unifying identity, data and security, continuously validating access, and delivering context-aware protection as risk evolves,” said Savarino.
“Adaptive Identity helps organisations lead with confidence, innovation, and trust by reducing standing privilege and providing visibility into relationships between human and non-human identities. It is the way forward that will define the next era of enterprise security: security that moves as fast as the enterprise it protects,” he added.
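What “continuously validating access” might look like in miniature is sketched below: each request is scored against live context signals rather than a static role check, and the answer can be allow, deny or escalation to a human approver. The signals, weights and thresholds are invented assumptions, not SailPoint’s implementation.

```python
def access_decision(request: dict) -> str:
    """Score one access request against live risk signals.

    Signals and weights are illustrative; a real adaptive-identity
    engine would derive them from telemetry, not hard-code them.
    """
    if request["scope"] not in request["granted_scopes"]:
        return "deny"                    # never exceed provisioned scope

    risk = 0.0
    if request["identity_type"] == "ai_agent":
        risk += 0.2                      # machine identities start less trusted
    if request["data_sensitivity"] == "high":
        risk += 0.3
    if request["anomalous_hour"]:
        risk += 0.3

    if risk >= 0.6:
        return "deny"
    if risk >= 0.4:
        return "step_up"                 # route to a human approver
    return "allow"

print(access_decision({
    "identity_type": "ai_agent",
    "scope": "customers:read",
    "granted_scopes": {"customers:read"},
    "data_sensitivity": "high",
    "anomalous_hour": False,
}))  # step_up -- a sensitive read by a machine identity triggers review
```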
Vendors and security leaders in Australia and New Zealand now describe identity security and access controls as central elements of data privacy strategies. They expect regulatory attention on AI and privacy to increase, but say organisations cannot wait for new rules before addressing gaps in how human and machine identities interact with data.