California uses algorithms to predict whether incarcerated people will commit crimes again. It has used predictive technology to deny 600,000 people unemployment benefits. Nonetheless, state administrators have concluded that not a single agency uses high-risk forms of automated decisionmaking technology.
That’s according to a report the California Department of Technology provided to CalMatters after surveying nearly 200 state entities. The agencies are required by legislation signed into law in 2023 to report annually if they use high-risk automated systems that can make decisions about people’s lives. “High-risk” means any system that can assist or replace human decisionmakers when it comes to encounters with the criminal justice system or whether people get access to housing, education, employment, credit and health care.
The California Department of Technology doesn’t know which algorithms state agencies use today and only reported what agencies told them, state Chief Technology Officer Jonathan Porat told CalMatters. When asked if the employment or corrections department algorithms qualify, Porat said it’s up to agencies to interpret the law.
“I only know what they report back up to us, because even if they have the contract… we don’t know how or if they’re using it, so we rely on those departments to accurately report that information up,” he said.
The agencies, which were required to submit answers by the end of August 2024, were asked to report any high-risk automated systems used within the past year. Those that found such systems were required to report the kinds of personal data the systems use to make decisions about people, and the steps taken to reduce the likelihood of discrimination, bias, or unfair outcomes.
Some automated systems used by state agencies raise questions about how risk is being defined. The California Department of Corrections and Rehabilitation, for example, assigns recidivism scores to the vast majority of inmates to determine their needs when they enter and leave prison. One algorithm it uses, COMPAS, has a documented history of racial bias, but the corrections department told the Department of Technology it uses no high-risk automation.
The California Employment Development Department also reported no use of high-risk automated systems. Between the Christmas and New Year’s holidays in 2020, the department paused unemployment benefits for 1.1 million people after using AI tools from Thomson Reuters to assign fraud scores to unemployment applicants. Some 600,000 of those claims were later confirmed as legitimate, according to a state analysis.
The employment department refuses to say if that algorithm is in use today, providing a written statement that its fraud detection processes are confidential “to ensure we don’t provide criminals with information that could aid criminal activity.”
‘They’re talking out of both sides of their mouth here’
The report also appears to be out of sync with a trio of analyses carried out in the past year by California Legislature staff, which indicated the state would have to spend hundreds of millions of dollars or more each year to monitor the government’s use of high-risk algorithms.
Last year, Assemblymember Rebecca Bauer-Kahan proposed a bill that would have required state agencies to conduct risk assessments of algorithms that can make a “consequential decision” about people’s lives—much like the sorts of algorithms in the new Department of Technology report.
Three separate legislative analyses of her proposal by appropriations committee staff concluded it would be an expensive endeavor, costing hundreds of millions of dollars a year, with ongoing costs potentially reaching billions of dollars.
If there are no high-risk automated systems in California government, how can it cost millions or billions of dollars to assess them?
That’s what one source familiar with the analyses wondered. The person, who requested anonymity out of concern for potential professional consequences, said they see little daylight between the definition of a high-risk automated system in the Department of Technology report and a consequential decision in the Bauer-Kahan bill. They think somebody’s lying.
“There’s no way those two things can be true,” they said. “They’re talking out of both sides of their mouth here.”
Authors of the legislative analyses did not respond to multiple requests for comment. And Porat of the technology department was also at a loss. “I can’t fully explain that,” he told CalMatters. “It’s possible that a department and agency in partnership with some group or even within the state may be considering something for the future that did not meet the definition that was laid out in the requirements last year.”
The legislation that required the high-risk automation reports specifically mentions systems that produce scores. Given the pervasiveness of tools that assign risk scores, the result of the Department of Technology inventory is surprising, said Deirdre Mulligan, director of the UC Berkeley Center for Law & Technology, who helped develop AI policy for the Biden administration.
Mulligan said it’s essential that the government put rules in place to ensure automation doesn’t deprive people of their rights. She agrees that analyses predicting potentially billions of dollars in testing costs may signal that state agencies plan to use high-risk automation in the future, which makes now an opportune time to ensure such protections are in place.
Samantha Gordon, chief program officer of the advocacy group TechEquity, which has called for more transparency around how California uses AI, said state agencies need to expand their definition of high-risk systems if it excludes algorithms like the one the employment department used in 2020, which can deny people unemployment benefits and imperil their ability to keep a roof over their head, feed their family, or pay for child care.
“I think if you asked an everyday Californian if losing their unemployment benefits at Christmas time when they have no job caused a real risk to their livelihood, I bet they’d say yes,” she said.
High-risk generative AI in state’s future
The high-risk automated decisionmaking report comes at a time when state agencies are rolling out a slew of potentially risky AI applications. In recent weeks, Gov. Gavin Newsom announced that state agencies are adopting AI tools to do things like speak with Californians about wildfires, manage traffic safety, quicken the rebuilding process after wildfires in Los Angeles, and inform state employees who help businesses file their taxes.
Legislators want to track these sorts of systems in part because of the potential that they could make mistakes. A 2023 state report about risks and opportunities for government adoption of generative AI cautions that it can produce convincing but inaccurate results, deliver different answers to the same prompt, and suffer from model collapse, when predictions stray from accurate results. Generative AI also carries the risk of automation bias, when people become overly trusting of and reliant on automated decisionmaking, the report said.
In late 2023, Newsom ordered the technology department to compile a different report, an inventory of high-risk uses of generative AI by executive branch state agencies. CalMatters requested a copy of that document, but the Department of Technology declined to share it; Chief Information Security Officer Vitaliy Panych called doing so an unnecessary security risk.
Which AI deserves a high-risk label is an ongoing debate and a central question in the legal regimes emerging in democratic nations around the world. Tech that earns the label is often subject to more testing before deployment and to ongoing monitoring. In the European Union, for example, the AI Act deems high risk any models used in critical infrastructure operations as well as those that decide access to education, employment, and public benefits. Similarly, the AI Bill of Rights compiled during the Biden administration defined as high risk AI that can make decisions about a person’s employment, health care, and housing.
The California Legislature is set to consider dozens of bills to regulate AI in the coming months, provided Congress doesn’t impose a decade-long moratorium on state AI regulation. A report ordered by the governor on how to balance innovation and guardrails for AI is due out this summer.
The law that requires the high-risk automated systems report has some notable exceptions, including the entire state judiciary and licensing bodies like the California Bar Association, which triggered controversy last month after it used AI to write questions for its high-stakes exam. The law also does not require compliance from local governments, which often use AI in criminal justice and policing, or from school districts, where teachers are using AI to grade papers. The health care marketplace Covered California, which The Markup revealed is sharing the personal information of Californians with LinkedIn, also uses generative AI, but it is not required to report to the Department of Technology.