Source: site

Could your artificial intelligence-powered voice agents break the law in talking to consumers?
Experts behind agentic AI technology in the mortgage space suggest it’s already happening. The digital voice assistants offered by leading lenders, which today help hundreds of thousands of customers weigh home buying options and make monthly mortgage payments, could also run afoul of federal protections against spam calls and other lending laws.
Marr Labs cofounder and CEO Dave Grannan said he’s heard many originators describe trials with vendors who were unable to show their technology was compliant with federal regulations. The AI-based voice software provider, which works with Rocket Mortgage, emphasizes its own strong adherence to compliance in helping originators connect with borrowers.
“We’ve heard lots of horror stories,” he said. “AI quoting outrageously low interest rates to prospective clients, of completely failing to disclose they’re AI, or that they’re recording the call.”
While there’s been no prominent example of a lender’s agentic AI wreaking havoc, or getting into legal or regulatory trouble, Grannan said the vendor anecdotes are bad for the industry.
“When these kinds of things happen, it spooks the industry,” he said. “People at large think this is not ready, or dangerous, or not going to work well.”
Rocket Mortgage and other lenders that tout agentic AI tools with catchy nicknames didn’t respond to requests for comment for this story. While those firms have each faced the industry’s ubiquitous Telephone Consumer Protection Act complaints for allegedly pestering customers, none of those lawsuits have blamed the companies’ ever-expanding AI agents for unlawful contact.
However, it isn’t hard for such AI agents to run afoul of the nuances of the law, which carries hefty penalties for each violation, experts explained.
How agentic AI could violate TCPA
The Federal Communications Commission doesn’t require consumers to provide extra consent to be contacted by AI, which is treated like an autodialer, explained Rishi Choudhary, cofounder and CEO of Kastle. His company, created in 2024, also puts a heavy emphasis on compliance and today serves some of the industry’s biggest servicers.
Besides requiring a customer’s consent and barring calls to people on the National Do Not Call Registry, the law includes other requirements, such as different rules for calls to landlines versus cell phones.
“Most platforms don’t have that,” said Choudhary, referring to AI companies. “They’re not checking for landlines when they’re making those dials. A lot of those guys have been noncompliant under TCPA.”
Companies could be fined $500 per violation of the law, or up to $1,500 per violation if a judge determines the errors were willful or knowing.
The voice agents also need to keep track of which consumers tell them not to call again, he added. Agentic AI experts also advised companies to shore up the databases the technology uses to operate.
Mark McKinney, vice president of Market Intelligence and Innovation at Gryphon.ai, suggested companies tap a compliance agent in the loop. His company offers a service that helps confirm the consumer being called has given express written consent to be called, with records more current than an internal opt-out list.
“Before the AI agent actually makes a call, we can do that last mile check to make sure the call doesn’t take place illegally,” he said.
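None of the vendors shared implementation details, but the pre-dial “last mile” check McKinney describes generally comes down to a handful of lookups before the agent is allowed to place the call. The Python sketch below is purely illustrative: the record fields, consent categories and consent-freshness window are assumptions for the example, not any vendor’s actual product or API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical sketch of a pre-dial compliance gate for an AI voice agent.
# Field names, consent categories and thresholds are illustrative assumptions.

@dataclass
class ProspectRecord:
    phone: str
    line_type: str                  # "wireless" or "landline"
    express_written_consent: bool   # prior express written consent on file
    consent_timestamp: datetime     # when that consent was captured
    on_internal_optout: bool        # consumer previously said "don't call again"
    on_national_dnc: bool           # National Do Not Call Registry status


def may_place_ai_call(record: ProspectRecord, now: datetime) -> tuple[bool, str]:
    """Return (allowed, reason) before the AI agent is handed the dial."""
    if record.on_internal_optout:
        return False, "consumer previously opted out"
    if record.on_national_dnc and not record.express_written_consent:
        return False, "number is on the National Do Not Call Registry"
    # Automated/artificial-voice calls to cell phones generally need prior consent
    if record.line_type == "wireless" and not record.express_written_consent:
        return False, "no express written consent for a wireless number"
    # Treat stale consent as suspect; the one-year window here is only an example
    if now - record.consent_timestamp > timedelta(days=365):
        return False, "consent on file is older than the example freshness window"
    return True, "ok"
```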
Other possible agentic AI violations
In touting their compliance efforts, Kastle and Marr Labs say they audit their agentic AI’s phone calls. Choudhary said some players making calls on behalf of direct-to-consumer lenders don’t offer any sort of visibility into their bots’ workflows or what’s happening on a per-call level.
“That is dangerous, because when the auditor comes you do not have anything to show,” he said.
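Neither company describes its tooling at this level publicly, but the per-call visibility Choudhary alludes to can be as simple as an append-only audit record written for every dial, something an examiner can review after the fact. The sketch below is a hypothetical schema, not Kastle’s or Marr Labs’ actual format.

```python
import json
from datetime import datetime, timezone

# Hypothetical per-call audit record; the fields are illustrative, not any
# vendor's actual schema.

def write_call_audit(call_id: str, phone: str, disclosures: list[str],
                     transcript_uri: str, outcome: str, path: str) -> None:
    record = {
        "call_id": call_id,
        "phone": phone,
        "logged_at": datetime.now(timezone.utc).isoformat(),
        "disclosures_played": disclosures,   # e.g. ["ai_identity", "recording"]
        "transcript_uri": transcript_uri,    # pointer to the stored transcript
        "outcome": outcome,                  # e.g. "completed", "opt_out_requested"
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")   # append-only JSONL audit log
```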
AI is also capable of hallucinating, Grannan cautioned, recalling the example of an AI agent offering a consumer an ultra-low mortgage rate. Lenders have to ensure their humanlike tools disclose to consumers in certain states that they’re artificial intelligence, that the call is being recorded, or that the respondent has a right to opt out.
“Our challenge in technology is to make sure we never have these kinds of errors in our system,” he said.
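The state-by-state disclosure question lends itself to the same kind of guardrail: a table of required up-front statements keyed by the consumer’s state, read before the agent says anything else. The sketch below is illustrative only; the state mappings and scripts are placeholders, not legal guidance or any vendor’s rule set.

```python
# Hypothetical sketch: pick which disclosures an AI voice agent must state up
# front based on the consumer's state. The mapping is a placeholder that a
# real deployment would populate from counsel-reviewed rules.

STATE_DISCLOSURES = {
    "CA": ["ai_identity", "call_recording", "opt_out_right"],
    "UT": ["ai_identity"],
    # remaining states omitted in this example
}

DISCLOSURE_SCRIPTS = {
    "ai_identity": "This call is handled by an automated AI assistant.",
    "call_recording": "This call may be recorded.",
    "opt_out_right": "You can ask us to stop calling at any time.",
}

def opening_disclosures(state: str) -> list[str]:
    """Return the scripted lines the agent should read before anything else."""
    return [DISCLOSURE_SCRIPTS[key] for key in STATE_DISCLOSURES.get(state, [])]
```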




