The White House’s Office of Science and Technology Policy (OSTP) issued a new blueprint for an “AI Bill of Rights” to guide organizations in the development and deployment of artificial intelligence, in an effort to protect the rights of Americans in the age of AI.
As part of the blueprint, the Biden administration lays out a set of five core principles and protections for all Americans when it comes to AI. The blueprint addresses AI in education under three of those principles:
- Algorithmic Discrimination Protections: You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.
- Data Privacy: You should be protected from abusive data practices via built-in protections, and you should have agency over how data about you is used.
- Human Alternatives, Consideration, and Fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.
As part of the Algorithmic Discrimination Protections principle, the blueprint urges schools to ensure that race and other demographic categories are not being improperly used by algorithms in a way that unfairly disadvantages students.
Specifically, the blueprint cites a predictive model that was marketed as being able to predict whether students are likely to drop out of school. However, the model was found to use race directly as a predictor and was shown to have large disparities by race: under the model, Black students were as many as four times as likely as their otherwise similar white peers to be deemed at high risk of dropping out. The Biden administration said these risk scores are used by advisors to guide students towards or away from certain majors.
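The blueprint does not publish the underlying data, but the kind of audit behind a finding like “four times as likely” can be illustrated with a short sketch. The records, group labels, and threshold below are hypothetical, used only to show how per-group flag rates and a disparity ratio might be computed.

```python
# Hypothetical sketch of a disparity audit on a dropout-risk model's output.
# The student records, group labels, and 0.7 threshold are illustrative,
# not figures from the blueprint or the cited model.
from collections import defaultdict

def high_risk_rates(records, threshold=0.7):
    """Return the share of students in each group flagged as high risk."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, score in records:
        total[group] += 1
        if score >= threshold:
            flagged[group] += 1
    return {g: flagged[g] / total[g] for g in total}

# Illustrative data only.
records = [
    ("group_a", 0.82), ("group_a", 0.75), ("group_a", 0.40), ("group_a", 0.90),
    ("group_b", 0.30), ("group_b", 0.72), ("group_b", 0.20), ("group_b", 0.10),
]
rates = high_risk_rates(records)
# A ratio well above 1.0 between two otherwise similar groups is the kind of
# disparity the blueprint describes.
print(rates, rates["group_a"] / rates["group_b"])
```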
Further, the blueprint cites concerns from the National Disabled Law Students Association, which believes that individuals with disabilities are more likely to be flagged as potentially suspicious by remote proctoring AI systems because of their disability-specific access needs such as needing longer breaks or using screen readers or dictation software.
Concerning data privacy, the blueprint urges schools not to rely on AI-based continuous surveillance systems. The blueprint said that these systems “have the potential to limit student freedom to express a range of emotions at school and may inappropriately flag students with disabilities who need accommodations or use screen readers or dictation software as cheating.”
When it comes to what should be expected of automated systems, the Biden administration said that schools need to limit access to sensitive data and derived data.
“Sensitive data and derived data should not be sold, shared, or made public as part of data brokerage or other agreements,” the blueprint says. “Access to such data should be limited based on necessity and based on a principle of local control, such that those individuals closest to the data subject have more access while those who are less proximate do not (e.g., a teacher has access to their students’ daily progress data while a superintendent does not).”
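One way to read the “local control” principle quoted above is as a proximity-tiered access policy. The sketch below is a minimal, hypothetical illustration of that idea; the role names, record types, and the ALLOWED_ROLES mapping are assumptions for illustration, not anything specified in the blueprint.

```python
# Hypothetical sketch of proximity-based access to student data:
# roles closest to the student see individual-level records, while
# broader roles are limited to other record types or denied by default.
ALLOWED_ROLES = {
    "daily_progress": {"teacher", "counselor"},            # closest to the student
    "aggregate_report": {"principal", "superintendent"},   # school- or district-level rollups
}

def can_access(role: str, record_type: str) -> bool:
    """Return True only if the role is proximate enough to view this record type."""
    return role in ALLOWED_ROLES.get(record_type, set())

# Mirrors the blueprint's example: a teacher can see their students' daily
# progress data, while a superintendent cannot.
assert can_access("teacher", "daily_progress")
assert not can_access("superintendent", "daily_progress")
```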
In addition to limiting access, the Biden administration also says automated systems that handle sensitive data should be closely tailored to their purpose and provide “meaningful access for oversight, include training for any people interacting with the system, and incorporate human consideration for adverse or high-risk decisions.”