Lisa Monaco, deputy attorney general at the Justice Department (DoJ), announced last week the launch of Justice AI – a new initiative that aims to understand and prepare for how artificial intelligence (AI) will affect the criminal justice system.

During a speech at the University of Oxford on Feb. 14, Monaco said DoJ will begin convening civil society, academia, and industry to inform a report to President Biden at the end of the year on the use of AI within the department.  

“Over the next six months, we will convene individuals from across civil society, academia, science, and industry to draw on varied perspectives,” Monaco said. “And to understand and prepare for how AI will affect the department’s mission and how to ensure we accelerate AI’s potential for good while guarding against its risks.”  

“These discussions will include foreign counterparts grappling with many of the same questions. And it will inform a report to President Biden at the end of the year on the use of AI in the criminal justice system,” she said. 

Monaco emphasized that she and Attorney General Merrick Garland are “laser focused” on AI and what it means for DoJ’s core mission.  

During her speech, the department’s number two highlighted how DoJ has already deployed AI, including:  

  • To classify and trace the source of opioids and other drugs; 
  • To help triage and understand the more than one million tips submitted to the FBI by the public every year; and 
  • To synthesize huge volumes of evidence collected in some of the most significant cases, including the Jan. 6 cases. 

The department has come under fire from Congress for its use of AI – specifically facial recognition technology.

Most recently, a group of senators led by Sen. Raphael Warnock, D-Ga., and Senate Majority Whip Dick Durbin, D-Ill., called on the DoJ to conduct better oversight of the use of facial recognition technologies across its component agencies. 

In a Jan. 18 letter, the group of 17 Democrats and one independent demanded the DoJ explain how the agency’s policies and practices ensure that law enforcement’s efforts to leverage facial recognition technology comply with civil rights protections. 

“We are deeply concerned that facial recognition technology may reinforce racial bias in our criminal justice system and contribute to arrests based on faulty evidence. Errors in facial recognition technology can upend the lives of American citizens,” the senators wrote. “Should evidence demonstrate that errors systematically discriminate against communities of color, then funding these technologies could facilitate violations of federal civil rights laws.” 

Monaco noted in her speech that the DoJ is undertaking a “major effort” across Federal agencies to create guidance to govern the department’s use of AI. 

“These new rules will ensure the department, along with the whole U.S. government, applies effective guardrails for AI uses that impact rights and safety,” she said. “So, hypothetically, if the department wanted to use new AI systems to – say – assist in identifying a criminal suspect or support a sentencing decision – we must first rigorously stress test that AI application and assess its fairness, accuracy, and safety.” 

The deputy attorney general emphasized that the DoJ is “not starting from a blank page.”  

“We’re applying existing and enduring legal tools to their fullest extent – and looking to build on them where new ones may be needed,” she said.  

“As it did with cyber, the law governing AI will develop. But our existing laws offer a firm foundation,” she said. “Discrimination using AI is still discrimination. Price fixing using AI is still price fixing. Identity theft using AI is still identity theft.”  

Finally, Monaco announced that DoJ is bringing together its law enforcement and civil rights teams, and other experts, to form an Emerging Technology Board – which will be spearheaded by DoJ’s chief AI officer – that will advise on the responsible and ethical uses of AI by the department and its components. 

“We have a responsibility to lead when it comes to our own use and governance of AI in the Justice Department,” she said. “We have to look and listen beyond our own walls.”  

Cate Burgan
Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.