The Department of Homeland Security (DHS) Chief Information Officer (CIO) and first-ever Chief AI Officer (CAIO) Eric Hysen is encouraging “DHS personnel to responsibly use commercial products to harness the benefits of Gen AI and ensure we continuously adapt to the future of work,” according to a memo dated Oct. 24.

The new guidance aims to facilitate appropriate use of commercially available generative AI tools in the department, and follows policies rolled out in September on the use of facial recognition technologies.

DHS also recently issued a privacy impact assessment on commercial generative AI tools conditionally approved for use within the department, including ChatGPT, Bing Chat, Claude 2, and DALL-E 2.

DHS said it developed the new policies in response to the White House’s AI executive order and the corresponding Office of Management and Budget draft memo on how Federal agencies should implement the order.

Hysen’s memo offers initial guidance to facilitate appropriate use during this early stage of generative AI’s development, including developing and maintaining a list of conditionally approved commercial Gen AI tools for use on open-source information.

The memo also states the intention to update DHS IT and cybersecurity policies and standards to include new requirements for use of approved commercial Gen AI tools by DHS personnel in their work.

Until those requirements are in place, Hysen offered interim guidance, such as protecting department data and personally identifiable information when using the tools.

The memo also calls on DHS personnel to obtain approval from their supervisors prior to using the tools and to complete training on the responsible use of AI, along with the annual Protecting Personally Identifiable Information and Cybersecurity Awareness trainings.

“Immediate appropriate applications of commercial Gen AI tools to DHS business could include generating first drafts of documents that a human would subsequently review, conducting and synthesizing research on open-source information, and developing briefing materials or preparing for meetings and events,” the CAIO wrote. “I have personally found these tools valuable in these use cases already, and encourage employees to learn, identify, and share other valuable uses with each other.”

DHS Secretary Alejandro Mayorkas named Hysen — who’s been CIO of the department since 2021 — as DHS’s first CAIO in September. And back in April, Mayorkas launched a DHS Artificial Intelligence Task Force that is co-chaired by Hysen and is responsible for producing the policies.

DHS Unveils Privacy Impact Assessment

The department also recently publicly released its Privacy Impact Assessment for the Use of Conditionally Approved Commercial Generative Artificial Intelligence Tools, dated Nov. 19, which documents the potential privacy risks DHS’s use of commercial Gen AI technologies presents.

DHS has established the Gen AI Tool Conditional Approval Process, a four-step process through which the department will analyze each tool individually to determine its appropriateness for departmental use, based on factors such as potential use cases; privacy, civil rights, civil liberties, and legal issues; security; and terms of service.

The privacy impact assessment of the use of commercial generative AI tools at DHS includes more than a dozen privacy risks and mitigation tactics outlined by the department’s Chief Technology Officer Dave Larrimore and signed by CIO Hysen and Chief Privacy Officer Mason Clutter.

One such risk is that DHS may not appropriately handle commercial Gen AI tool output data that could include personally identifiable information.

DHS said it has “partially mitigated” this risk by requiring all DHS personnel to be trained on how to appropriately handle output and adhere to the DHS Gen AI Rules of Behavior, in addition to Federal law, the general DHS Rules of Behavior, and DHS privacy policy, when using, accessing, or collecting any data.

Another privacy concern laid out in the assessment is that the department may not have sufficient training for each conditionally approved commercial Gen AI tool.

The department said this risk is mitigated. “All DHS users are required to complete the DHS Generative AI Annual Training prior to use of any conditionally approved commercial Gen AI tool. This training will be continuously reviewed and updated as necessary as the Department’s understanding of this technology and the tools themselves evolve. DHS may also establish tool-specific training should it determine such training is necessary,” the document reads.

“As commercial Gen AI tools continue to evolve, the DHS Office of the Chief Information Officer will continue to work with the Privacy Office, Office of the General Counsel, Office for Civil Rights and Civil Liberties, Science and Technology Directorate, and other oversight and subject matter expert offices to observe and learn how the conditionally approved commercial Gen AI tools are used in the Department, their intended and unintended outputs, including potentially biased, discriminatory, or privacy sensitive outputs, and how they continue to develop commercially,” the document reads.

“During this initial, conditional use stage, the Privacy Office will assess whether additional privacy policy, compliance requirements, and/or guidance is needed to address and appropriately safeguard any privacy implications posed by commercial Gen AI technologies. The Privacy Office will work with the DHS Office of the Chief Information Officer to develop a mechanism by which conventional privacy compliance and oversight procedures may adapt to this new era of large-scale and ever changing and learning technology,” the department concluded. “This approach will ensure that privacy is a key component of new technologies and tools introduced to and approved for Department use.”
