Perforce has released its 2025 State of Data Compliance and Security Report, highlighting ongoing challenges and confusion surrounding the handling of sensitive data in AI and software development environments.
Key findings
The report, based on responses from 280 organisations worldwide, found that 60% have experienced data breaches or theft within software development, testing, AI, and analytics environments. This is an 11% increase on the previous year, continuing an upward trend in security incidents involving sensitive data.
Survey data indicates that 84% of organisations continue to permit compliance exceptions in non-production environments despite the associated risks, and 65% cited data-driven decision-making as the primary justification for storing sensitive data in these systems. Sensitive data remains pervasive: 95% of organisations use it in software testing, 90% in AI applications, and 78% in software development more broadly.
The survey also revealed that 32% of respondents have faced audit issues and 22% have incurred regulatory non-compliance findings or fines, underlining the compliance stakes for organisations using real data outside production systems.
Contradictions in AI and data privacy
One notable finding was the conflicting attitude toward AI data use. While 91% of organisations believe sensitive data should be allowed in AI model training and 82% believe such use is safe, 78% are highly concerned about theft or breach of model training data, and 68% worry about privacy and compliance audit failures related to AI. Together, these figures reveal a complex and at times contradictory stance toward risk and best practice.
“The rush to adopt AI presents a dual challenge for organizations: Teams are feeling both immense pressure to innovate with AI and fear about data privacy in AI,” said Steve Karam, Principal Product Manager, Perforce. “To navigate this complexity, organizations must adopt AI responsibly and securely, without slowing down innovation. You should never train your AI models with personally identifiable information (PII), especially when there are secure ways to rapidly deliver realistic but synthetic data into AI pipelines.”
The comments highlight the confusion in many organisations over how to balance rapid AI development against compliance with data protection regulations.
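
Karam's suggestion of synthetic data in place of PII is straightforward to picture. The sketch below uses the open-source Faker library purely for illustration (the report does not prescribe specific tooling) to fabricate realistic customer records for a training pipeline:

    # Minimal illustrative sketch: generate realistic but entirely fabricated
    # records so no real PII ever enters a model training set.
    # Assumes the open-source Faker library (pip install faker); this is an
    # illustrative choice, not a tool named in the Perforce report.
    from faker import Faker

    fake = Faker()
    Faker.seed(42)  # deterministic output, useful for reproducible test data

    def synthetic_customer() -> dict:
        """Return one synthetic customer record containing no real PII."""
        return {
            "name": fake.name(),
            "email": fake.email(),
            "address": fake.address().replace("\n", ", "),
            "date_of_birth": fake.date_of_birth(minimum_age=18).isoformat(),
        }

    # Build a small, entirely fabricated training corpus.
    training_rows = [synthetic_customer() for _ in range(1000)]
    print(training_rows[0])

Because every field is fabricated, a breach of such a training set exposes no real individual, which speaks directly to the concern reported by 78% of respondents.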
Data privacy investments
The findings point to growing investment in privacy technologies: 86% of organisations plan to invest in AI data privacy solutions within the next one to two years, nearly half (49%) already use synthetic data in AI development, and 95% apply static data masking to reduce risk and exposure.
“These findings underscore the critical need for organizations to address the growing data security and compliance risks in non-production environments,” said Ross Millenacker, Senior Product Manager, Perforce. “There’s a perception that protecting sensitive data through measures like masking is cumbersome and manual. Too many organizations see the cure of masking data and implementing those steps as worse than the disease of allowing exceptions. But this leads to a significant vulnerability. It’s time to close these gaps and truly protect sensitive data.”
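
For context, static data masking permanently replaces sensitive values in a copy of the data before it leaves production. A minimal sketch of the idea, assuming a simple keyed-hash approach (commercial tools such as Delphix use richer, format-preserving techniques):

    # Minimal illustrative sketch of static data masking via a keyed hash.
    # This is an assumption for illustration, not Perforce's implementation;
    # production tools typically preserve formats (e.g. masked SSNs still
    # look like SSNs) and manage keys properly.
    import hashlib
    import hmac

    MASKING_KEY = b"rotate-me-and-keep-out-of-source-control"  # hypothetical secret

    def mask(value: str) -> str:
        """Deterministically replace a sensitive value with an opaque token.

        The same input always maps to the same token, so joins across masked
        tables still line up, but the original value cannot be recovered
        without the key.
        """
        digest = hmac.new(MASKING_KEY, value.encode("utf-8"), hashlib.sha256)
        return digest.hexdigest()[:12]

    row = {"customer_id": 1042, "email": "jane.doe@example.com", "ssn": "078-05-1120"}
    masked_row = {**row, "email": mask(row["email"]), "ssn": mask(row["ssn"])}
    print(masked_row)  # sensitive fields replaced with opaque tokens

Masking the copy once, before it is handed to development or test teams, is what distinguishes the static approach from dynamic masking applied at query time.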
Continued exposure and mitigation measures
Sensitive data persists in non-production environments, and compliance measures are still widely perceived as laborious or disruptive. The report's authors suggest that these factors drive ongoing exposure, despite the availability of technologies that let organisations protect sensitive data and meet privacy regulations while still advancing AI capabilities.
Perforce has responded to these trends by adding AI-powered synthetic data generation to its Delphix DevOps Data Platform, combining it with existing masking and data delivery capabilities to help organisations meet privacy and compliance requirements while supporting robust AI and machine learning model development.
The report's findings reflect a period of significant change for organisations adopting AI and related technologies, marked by heightened risk awareness but ongoing uncertainty about how best to secure sensitive information. The industry is expected to keep investing in technologies and frameworks that address these evolving needs.