AI Use by Cops, Child Services In NYC Is a Mess: Report

"NYC does not have an effective AI governance framework," a new report concluded.
NYPD traffic cameras. Image: picture alliance / Contributor via Getty Images

AI systems are being used by New York City agencies including the police and child services, and the policies guiding their use are a mess, a new report has found.

New York State Comptroller Tom DiNapoli released a report last week calling out NYC agencies for their lack of ethical and legal guardrails when it comes to the use of machine learning programs, which include algorithmic modeling, facial recognition, and other software used to monitor members of the public. According to the report, “NYC does not have an effective AI governance framework. While agencies are required to report certain types of AI use on an annual basis, there are no rules or guidance on the actual use of AI.”

Among the agencies under scrutiny were the NYPD, the Department of Buildings and the Department of Education (both of which say they do not use any predictive algorithms), and the Administration for Children’s Services (ACS), which deploys an in-house risk modeling system called the Severe Harm Predictive Model “to prioritize child abuse and neglect cases for quality assurance reviews.”

While the use of predictive models and facial recognition tools is now widely associated with policing and the possible harms in that realm, opposition is also growing to the pervasive use of risk modeling in the child welfare system. About 11 states have agencies that use risk modeling in their child welfare systems, and about half the states in the country have considered using it, according to an ACLU report.

In late January, the Associated Press reported that the Department of Justice has been investigating a tool used by a Pennsylvania child services agency after officials raised concerns that the tool was discriminating against families with disabilities. The DOJ’s inquiries came after the AP reported on a Carnegie Mellon study showing that an algorithm used in Allegheny County, PA had flagged one-third of Black children for review by the child services agency, compared to one-fifth of white children.

Yet NYC has pushed forward with its use of predictive modeling in child welfare and has made efforts to head off criticism that its models are biased. ACS has presented its use of algorithms as a way to negate bias, claiming that they can reduce racial discrimination in the system.

The comptroller’s report found that while the agency says it routinely does bias testing, it was unable to produce logs of that testing or say how frequently the testing occurs or when it revises the algorithm. And while the agency says it has guidelines on how it uses predictive models, it “did not provide us with evidence that these guidelines are required to be followed in the same way that formal policies would be.” (ACS told the comptroller that it was in the process of making its guidelines formal.) Nor is there any statutory requirement at the city level that ACS follow those guidelines. In a written response included in the report, ACS said all of the comptroller’s concerns “are being or have been addressed by ACS.”

ACS says that it has an advisory group reviewing their predictive models and other algorithmic tools, including people “impacted by the child welfare system such as data scientists, legal advocates, individuals involved in the NYC child welfare system, and contract providers.” But when the comptroller asked if members of the public could review the algorithm or log complaints about the use of the algorithm, the agency said that “there would be no basis for a complaint” because the algorithm does not make the final decision, it merely flags cases to prioritize for ACS employees to review. 

ACS says that the Severe Harm Predictive Model, which officials refer to as SHM, is actually less biased than the discretion of caseworkers. “ACS officials further compared the SHM's results to that of experienced caseworkers and determined the SHM was better at identifying risk, producing fewer false-positive results, and was more equitable across race/ethnicity,” according to the report.

Predictive models, including SHM, are trained on historical data that advocates say is already skewed by the biases in the child welfare system. Advocates argue that these models are part of the problem, as they predict who is more likely to be flagged for removal by a child welfare agent, which most often ends up being families who are poor and Black. ACS claims it has reduced racial disparities by “eliminating certain types of racial and ethnic data,” according to the comptroller’s report.

But not everyone is convinced. “You can never create guardrails sufficient to counteract the bias inherent in predictive analytics algorithms in child welfare. That’s because the very thing they measure—likelihood of future involvement with the family police—is, itself, so much a function of bias,” Richard Wexler, Executive Director of the National Coalition for Child Protection Reform, told Motherboard.

“And even if you could create guardrails, there is nothing in the track record of family policing to suggest that family policing agencies would use them,” Wexler said, using the term that advocates use to refer to child welfare and children’s services agencies. “Predictive analytics is family policing’s nuclear weapon—and over and over, the family police have shown they can’t control their nukes.”

NYPD fared even worse than ACS in the comptroller’s report. The NYPD does not maintain an inventory of its AI tools, the report said, and there are indications the force doesn’t have a great handle on what systems it does use. While the NYPD told the comptroller that it primarily used facial recognition tools from DataWorks Plus and only used Clearview AI for a trial period of 90 days in 2019, the report found evidence of officers requesting and receiving access to Clearview AI for six months after the trial ended. 

NYPD also said it only uses AI tools that are approved by the National Institute of Standards and Technology (NIST), but it “did not review the results of NIST’s evaluation of the facial recognition technology used by NYPD, nor did it establish what level of accuracy would be acceptable,” the report states. NYPD told the comptroller that human officers review potential matches, which they said reduces bias.

There have been numerous cases of racial bias involving facial recognition tools used by police, however. In 2019, a New Jersey man, who is Black, was falsely arrested after being flagged as a potential match for a robbery suspect by NYPD tools.

While NYPD does not have any general policy for the use of AI tools, it does create “specifically tailored policies and procedures for the use of a technology based on the capabilities, specifications and proposed use of the tool; not simply because the tool may incorporate AI,” it told the comptroller.    

Allegheny County’s system assigned risk scores to children named in the complaints it received, and even assigned scores to babies indicating how likely it was that they would be placed in foster care in the first three years of life. NYC’s child welfare agency, on the other hand, says its tool is used to identify the “likelihood of substantiated allegations of physical or sex abuse within the next 18 months.” ACS says that the tool is only used to prioritize cases for further review and doesn’t assign a score that can be considered when a determination is made in a family’s case.

Anjana Samat, a senior attorney with the ACLU who co-authored the organization’s report on predictive modeling in child welfare, says it’s hard to know how biased ACS’s algorithm is without more details from the agency.

“It's hard to say in the absence of full-on transparency of how the model was developed, what variables were used, what weights were assigned to different variables,” Samat said.

Samat said there are many open questions as to how ACS is using the tool, including where in the process the tool is deployed and what data the city is using to determine it is less biased than a caseworker. She also cautioned that the agency’s description of the tool’s purpose as predicting “substantiated allegations” is more flawed than it seems. 

“There's implicitly an assumption that these substantiations have been largely accurate,” Samat said, pointing out that caseworkers have historically had broad leeway to define abuse. And while the tool does not officially factor into a caseworker’s determination, Samat said it could create pressure on an ACS employee to substantiate a case.

One reason the agency may be relying more on algorithms to flag cases is that the use of mandated reporters, including school workers, to flag cases has come under fire recently, including at an online rally attended by 85 advocates and parents last week.

As with the criminal justice system, politicians face pressure to widen the net of surveillance in the child welfare system when an act of violence occurs. The deaths of two young children in 2017 led to outrage when the public learned the children were known to ACS but had not been removed from their home. In response, then-Mayor Bill de Blasio appointed a new commissioner of the agency who, among other measures, moved forward with algorithmic modeling that the previous commissioner had bristled at.

There has long been a push for more algorithmic transparency in NYC, including a 2017 bill that would have made all of the city’s algorithms open source but that never received a vote. A 2019 executive order under the previous mayor required agencies to educate the public about the use of algorithmic tools and to create a complaint resolution process for members of the public impacted by those tools. But Mayor Adams removed those requirements when he took office in January 2022, relegating all reporting on AI to a new agency called the Office of Technology and Innovation.