The American Civil Liberties Union has described the adoption of artificial intelligence at any cost as a “recipe for tyranny.”
Ben Wizner, director of the ACLU Speech, Privacy, and Technology Project, wrote in a blog post that the government needs to consider the rights of citizens as artificial intelligence becomes more ingrained in society.
“Liberty is threatened when the architecture of surveillance that we’ve already constructed is trained, or trains itself, to track us comprehensively and to draw conclusions based on our public behavior patterns,” Wizner said. “Equality is threatened when automated decision-making mirrors the unequal world that we already live in, replicating biased outcomes under a cloak of technological impartiality. And basic fairness, what lawyers call ‘due process,’ is threatened when enormously consequential decisions affecting our lives—whether we’ll be released from prison, or approved for a home loan, or offered a job—are generated by proprietary systems that don’t allow us to scrutinize their methodologies and meaningfully push back against unjust outcomes.”
Wizner said that the potential effects of artificial intelligence implicate the Fourth Amendment to the Constitution, which bars the government from conducting a search or seizure without a warrant supported by probable cause of wrongdoing, and the Fifth Amendment, which prohibits the government from forcing people to be witnesses against themselves and from taking their freedom or their property without fair process. Artificial intelligence could create opportunities for governments to infringe on these rights, according to Wizner.
Wizner recommended that when systems are trained to identify people who look “suspicious,” the parameters of what makes a person “suspicious” be meticulously defined. He also said the government should focus on asking the right questions when tracking a potentially dangerous situation.
“The question becomes ‘how alarmed should we be?’ rather than ‘should we be alarmed at all?’ ” Wizner said. “And once we’re trapped in this framework, the only remaining question will be how accurate and effective our surveillance machinery is—not whether we should be constructing and deploying it in the first place.”
Kirke Everson, managing director of the Federal advisory practice at KPMG, said on Government Matters on Aug. 29 that government agencies need artificial intelligence to manage all of the data coming through their systems. Agencies’ capabilities vary, ranging from software that automates processes, to tools that watch workers in order to suggest automation opportunities, to high-level machine learning.
“There’s already use cases out there where they’re using venues to understand threat patterns and, you know, some insider threat issues are being looked at through a cognitive perspective,” Everson said.
Wizner noted that industry leaders argue that setting policy for a new technology too early will stifle innovation.
“When we place ‘innovation’ within—or atop—a normative hierarchy, we end up with a world that reflects private interests rather than public values,” Wizner said.