TransUnion today added to its Device Risk fraud-combating service the ability to create digital fingerprints, without relying on cookies, that identify risky devices and other hidden anomalies in real time.
Clint Lowry, vice president of global fraud solutions at TransUnion, said these capabilities extend a service that uses machine learning models to identify potential instances of fraud, adding the ability to flag devices that, for example, suddenly seek access to an e-commerce site from a new IP address.
Devices with risky attributes, suspicious histories or questionable associations are often used to commit fraud. The TransUnion service now recognizes and tracks devices across multiple sessions and platforms without relying on cookies, which can expose organizations to data privacy compliance risks, said Lowry.
TransUnion claims this approach will boost fraud detection rates by up to 50% over legacy static device recognition technologies. It analyzes thousands of device attributes and behavioral signals in real time to generate a unique device fingerprint that can be used to block or limit suspicious activity. The overall goal is to add another layer of defense, enabled by predictive machine learning algorithms, to combat fraud without disrupting the experience of legitimate customers, he added.
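To make the idea of cookieless device fingerprinting concrete, the minimal sketch below shows one common approach: hashing a canonicalized set of device attributes into a stable identifier. This is an illustration only, not TransUnion's implementation, and the attribute names are hypothetical; a production system would weigh thousands of signals and tolerate partial changes rather than relying on an exact-match hash.

```python
# Illustrative sketch only -- not TransUnion's implementation. Demonstrates the
# general idea of deriving a cookieless device fingerprint by hashing a set of
# (hypothetical) device attributes collected during a session.
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    """Return a stable hash over normalized device attributes."""
    # Sort keys so the same attributes always yield the same fingerprint,
    # regardless of the order in which they were collected.
    canonical = json.dumps(attributes, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical attribute set for a single session.
session_attributes = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "screen_resolution": "1920x1080",
    "timezone_offset_minutes": -300,
    "installed_fonts_hash": "a1b2c3",
    "webgl_renderer": "ANGLE (NVIDIA)",
}

print(device_fingerprint(session_attributes))
```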
While there may never be a way to thwart all fraudulent online activity, there is clearly an opportunity to reduce it. A recent TransUnion report estimates fraudulent transactions cost businesses a total of $534 billion a year, or roughly 7.7% of annual revenues. The report also found a 21% year-over-year increase in digital account takeover incidents in the first half of 2025, while 8.3% of all digital account creation attempts in that period were suspected of fraud, a 26% year-over-year increase.
The TransUnion service is based on multiple machine learning models that continuously adapt to evolving fraud patterns by incorporating feedback from confirmed fraud cases. It can detect and flag virtual environments, remote access tools and automated bot activity while making it harder for cybercriminals to bypass detection.
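For readers unfamiliar with how a fraud model "incorporates feedback from confirmed fraud cases," the sketch below illustrates one generic pattern: incremental learning, where labeled feedback updates an existing classifier rather than triggering a full retrain. This is a simplified assumption for illustration, not TransUnion's models, and the feature names are hypothetical.

```python
# Illustrative sketch only -- not TransUnion's models. Shows one way a fraud
# classifier could adapt to confirmed-fraud feedback via incremental learning.
import numpy as np
from sklearn.linear_model import SGDClassifier

# Hypothetical numeric features per session, e.g.
# [new_ip, headless_browser, remote_access_tool_detected, requests_per_minute].
model = SGDClassifier(loss="log_loss", random_state=0)

# Initial training batch: feature rows and labels (1 = confirmed fraud).
X_initial = np.array([[1, 0, 0, 5.0], [0, 1, 1, 40.0], [0, 0, 0, 2.0]])
y_initial = np.array([0, 1, 0])
model.partial_fit(X_initial, y_initial, classes=np.array([0, 1]))

# Later, confirmed fraud cases flow back as labeled feedback and the model
# is updated incrementally rather than retrained from scratch.
X_feedback = np.array([[1, 1, 0, 25.0]])
y_feedback = np.array([1])
model.partial_fit(X_feedback, y_feedback)

# Score a new session; a high probability could trigger a block or step-up check.
print(model.predict_proba(np.array([[1, 1, 1, 30.0]]))[0, 1])
```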
Most businesses assume that fraud is a cost of doing business, but any reduction of fraudulent activity should drop to the bottom line. Organizations, however, need to carefully evaluate the total cost of fraud prevention, including the tools, platforms and specialists required, to strike the right balance, noted Lowry.
The one thing that is certain is that in the age of artificial intelligence (AI), the amount of fraud being perpetrated is only going to increase. The perpetrators of these schemes have plenty of resources, so it's only a matter of time before they use various generative AI tools to create more complex schemes that will be difficult to detect. In effect, organizations will need to rely more on other forms of AI to combat schemes that rely on tools such as ChatGPT to, for example, craft realistic documents that are then uploaded via compromised accounts. The challenge, as always, is to detect as much of that illicit activity as possible to reduce the share of those costs that is passed on to legitimate customers.





