To “maintain the integrity of its merit review process,” the National Science Foundation (NSF) has released new guidelines on the use of generative AI for both proposers and reviewers.

According to the Dec. 14 memo, NSF aims to safeguard the integrity of proposal development and evaluation in the merit review process by encouraging proposers to indicate in the project description how and to what extent generative AI technology was used to develop their proposal.

The guidance also notes that NSF reviewers are prohibited from uploading any content from proposals, review information and related records to non-approved generative AI tools. 

“Generative artificial intelligence (GAI) systems have great potential to support the U.S. National Science Foundation’s mission to promote the progress of science. They could facilitate creativity and aid in the development of new scientific insights and streamline agency processes by enhancing productivity through the automation of routine tasks,” the memo reads.  

“While NSF will continue to support advances in this new technology, the agency must also consider the potential risks posed by it,” the memo continues. “The agency cannot protect non-public information disclosed to third-party GAI from being recorded and shared.”  

The new guidelines explain that proposers are responsible for the accuracy and authenticity of proposals submitted for merit review, including content developed with the assistance of generative AI tools.

Generative AI tools may create risks such as fabrication, falsification, or plagiarism, and proposers and awardees are responsible for ensuring the integrity of their proposals and the reporting of research results, NSF said. The agency noted that it may publish further guidelines for proposers leveraging generative AI as needed.

A key observation for reviewers at the agency, NSF said, is that sharing proposal information with generative AI technology via the open internet violates the confidentiality and integrity principles of NSF’s merit review process. Losing control of uploaded information can pose significant risks to researchers and to the ownership of their ideas.

In addition, the source and accuracy of the information derived from this technology are not always clear, which can lead to research integrity concerns, including questions about the authenticity of authorship, NSF said.

“If information from the merit review process is disclosed without authorization to entities external to the agency, through generative AI or otherwise, NSF loses the ability to protect it from further release,” the memo reads. “This type of disclosure of information, especially if it is proprietary or privileged, creates potential legal liability for and erodes trust in the agency. The agency maintains public trust by ensuring that it safeguards scientific ideas, non-public data and personal information that stem from proposals, review information and related records in the merit review process.” 

The obligation to maintain the confidentiality of merit review-related information extends to the use of generative AI tools, the agency said.

The memo reiterates that reviewers are barred from uploading any content from proposals, review information and related records to non-approved generative AI tools, and states that NSF will consider such action a violation of the agency’s confidentiality pledge and other applicable laws, regulations and policies.

The agency clarified that NSF reviewers may share publicly available information with current-generation generative AI tools.

Cate Burgan is a MeriTalk Senior Technology Reporter covering the intersection of government and technology.