Securus Used Prison Calls to Train AI Surveillance Models, Report Says
The news
A US prison telecommunications company has for years been recording inmate phone and video calls and using those conversations to build proprietary artificial intelligence models aimed at monitoring alleged criminal activity, according to reporting from MIT Technology Review. The company, Securus Technologies, is backed by private equity and operates across a range of detention facilities in the United States.
MIT Technology Review reports that Securus has been actively developing these AI products since 2023, though the recordings used to train the systems date back much further. The company's tools are now being used to surveil people in custody, including those in jails, long-term prisons, and federal immigration detention centers.
Why it matters
For people in Bay Area jails and prisons who rely on Securus or similar systems to communicate with family, friends, and lawyers, the reporting highlights how those calls can become raw material for AI surveillance without what advocates consider meaningful consent.
The Bay Area has a large incarcerated and immigrant population spread across local jails, state prisons, and federal facilities that use contracted phone providers. The report raises questions about how conversations involving people in Northern California, including those awaiting trial or in immigration proceedings, may be captured and analyzed to train AI.
The story also underscores the commercial stakes behind prison communications. Inmate phone calls are part of a billion-dollar national market dominated by a small number of firms, one of which is Securus. The use of those calls to train AI models expands that profit model beyond per-minute fees into data extraction from incarcerated people and their communities.
Key details
According to MIT Technology Review, Securus has built AI models meant to detect what the company describes as criminal activity in real time by listening to and analyzing recorded calls. Securus president Kevin Elder told the publication that one of the company’s large language models was trained specifically on seven years of calls from inmates in Texas state prisons, and that this model is used within Texas.
That training approach suggests Securus is tailoring some AI systems to local or state-level conditions, relying on historical call data from particular jurisdictions to refine how the models flag conversations. The report states that the company holds large archives of recorded calls from a range of facilities, including local jails, long-term prisons, and Immigration and Customs Enforcement detention centers. The precise facilities and geographic breakdown of that dataset were not specified.
Elder described the company’s goal to MIT Technology Review as using a large language model on what he called a “treasure trove” of data to detect and understand when crimes are being contemplated, in order to intervene earlier in what he framed as a criminal cycle.
People on both sides of a Securus-managed call hear a notice that the conversation is being recorded. However, advocacy groups argue that this is not meaningful consent. Bianca Tylek, executive director of Worth Rises, an organization that advocates for people in prison and their families, is quoted as calling this “coercive consent.” She notes that for many incarcerated people, there is effectively no alternative if they want to communicate with their families.
Concerns about Securus’s voice surveillance practices predate the current wave of AI tools. John Dukes, who was incarcerated at Sing Sing prison in New York, told The Intercept in a 2019 interview that Securus had tested voice recognition software on him years earlier. He described the experience as another part of himself that he had to give away within the prison system.
According to the new reporting, Securus’s AI capabilities now extend beyond earlier voice recognition tools. The systems can reportedly recognize and process the voices of people held both pre-trial and post-trial, as well as the voices of their family members, friends, and lawyers.
MIT Technology Review reports that the company is marketing these models as tools for prison officials who want to closely monitor specific inmates suspected of planning or organizing crimes over the phone, or to conduct more random audits of calls across the wider incarcerated population.
The report also places Securus within the broader Inmate Calling Services industry, which it describes as a formal market for prison phone systems. Citing the Prison Journalism Project, it notes that the US market for these services is about $1.2 billion annually and is largely dominated by two companies, including Securus.
What people are saying
Advocates quoted in the reporting argue that people in custody have little genuine choice about whether to participate in this system, because phone access is often the only way for them to maintain relationships with their families and communities.
Tylek of Worth Rises characterizes the consent to recorded calls as coercive, pointing to the lack of alternative communication channels that do not come with extensive surveillance.
Formerly incarcerated people have also raised concerns about the loss of privacy and autonomy. Dukes told The Intercept in 2019 that Securus’s earlier use of voice recognition technology felt like being forced to surrender yet another personal attribute to the prison system.
Critics cited in the reporting point to a convergence of two powerful forces: a large incarcerated population and an expanding data economy. With nearly 2 million people imprisoned in the United States, according to the figures referenced, every phone call can become both a billable event and a data source for corporate AI tools.
The story does not include detailed responses from Securus beyond Elder’s description of the models and their intended use to detect possible criminal planning. It also does not provide comment from corrections departments or ICE facilities that contract with Securus.
What happens next
Based on the reporting, Securus is already using its AI models in at least some facilities and is continuing to develop tools that can search, analyze, and flag inmate communications. The company’s Texas-focused model, trained on seven years of state prison calls, is one concrete example of this deployment.
For local governments and corrections agencies in the Bay Area and across California that rely on private companies like Securus to provide phone and video services, the findings raise questions about how those contracts handle data collection, retention, and AI training. The reporting does not specify which California or Bay Area facilities contribute data to Securus’s models or currently use its AI products.
Future developments will likely depend on how regulators, courts, and contracting agencies respond to concerns about consent, surveillance, and the use of inmate communications for commercial AI development. The current reporting does not detail any specific regulatory actions or lawsuits tied to Securus’s AI models.
For now, the picture that emerges from MIT Technology Review and earlier reporting is that inmate phone and video calls in the United States serve a dual role. They remain a costly lifeline for incarcerated people and their families, and they also function as a large, ongoing dataset used by a private company to train and deploy AI systems that monitor those same conversations.