Building AI Guardrails for Our Parliament
About the U.S. House of Representatives | Written in March 2024 | Author: Jessica Smith
This event template was created by the U.S. House of Representatives in March 2024 and is being shared as a resource to aid other parliamentary staff.
It is intended to facilitate a discussion about appropriate AI guardrails to put in place for a Parliament to effectively use artificial intelligence (as opposed to human intelligence) in its daily operations. This event template allows for a wide-ranging, two-hour discussion about how much authority the Parliament might delegate to AI technologies in the pursuit of operational efficiencies. It is intended to be customized and adapted as appropriate by other parliamentary staff who are exploring this question.
Suggested Event Format
The event is invite-only and in-person.
The event is private to encourage frank discussion.
The event is transcribed for later staff reference.
The first hour of the event brings together Elected Officials and External AI Experts.
The second hour of the event brings together External AI Experts and Institutional Participants.
At the conclusion of the event there are light refreshments.
Suggested Run of Show
First Hour: Elected Officials & External AI Experts
Elected Officials ask prepared or off-the-cuff questions. Some questions will naturally lead to a “show of hands” response from roundtable participants. Others will invite a “hop-in” style response from participants.
Second Hour: External AI Experts & Institutional Participants
Elected Officials ask prepared or off-the-cuff questions with the goal of encouraging open discussion between several External AI Experts and Institutional Participants on one topic. Most of these questions will touch on complex functions within the Parliament and will require a coordinated approach across several functional units to address effectively.
After Event: Light refreshments & informal mingling.
Suggested Room Setup
Anticipate between 100 and 200 people.
Set up tables in a large square or circle so participants can talk openly with one another.
Audience seating fills the remaining space. The ideal audience is senior staff with AI-related policy expertise.
Sample Questions for Elected Officials
Why do we need AI guardrails at all? Why can’t we just rely on our existing Rules or IT policies to govern the use of AI, as we do for any other IT system?
People in my District need to reach me and my staff. They need help with real life problems. How might I build guardrails so my staff can use AI tools to keep up, but my constituents know there’s a real person over here paying attention to them?
The public expects transparency as we move towards a digital-first government that uses AI systems. How should we build guardrails around disclosure, so that the public can follow our AI uses and so that we can encourage public discourse?
What can we do to ensure our AI tools produce outputs that are politically neutral? Bill summaries are a great example: they must accurately and comprehensively synthesize large bills. In a future where AI tools may be helping us quickly understand politically sensitive material, what guardrails do we need to make sure that we can trust the final results?
Are there ways you have seen government effectively use AI to save time? Are there ways you think Parliament could help harness the potential of AI to achieve this?
We use a lot of different software to run our offices right now. We don’t build that all ourselves, of course. Most is purchased from a vendor. A lot of those vendors have been rolling out new AI features right into their software. I want to ask about AI acquisitions.
How should we be thinking about our AI acquisitions pipeline? What sort of critical questions should we be asking of our software companies before we renew our annual licenses?
There might be times we need to stop using certain software quickly. How should we build an escape hatch, if you will, into our software contracts so that we have a clean exit if we need it?
How might we think about agile AI governance? With this technology changing so quickly, and acknowledging that our understanding of the technology will likely change, does it make sense for us to plan for our governance to evolve? And at what pace?
I know that AI systems can enhance our cybersecurity posture if they are used as a tool to detect and stop external threats. Are there any guardrails that we should be aware of in this space?
We need to rely on a variety of tech companies to accomplish any AI-related projects or make real progress. How do we craft AI guardrails in a way that will increase trust with our constituents and the public, even if trust in the tech sector falls?
We know many citizens may be concerned about impacts or harms against them resulting from the Parliament’s own use of AI. What guardrails can we put in place to help improve the public’s trust in our Parliament’s use of AI?
Sample Questions for Facilitated Discussions
IT Modernization in a Rapidly Changing Tech Landscape
This topic is for [Name, Name, and Name] to discuss.
AI tools have the potential to level the playing field between a lean legislative branch and the vast executive branch it oversees. AI-boosted workflows could dramatically improve the internal operations of the Parliament and strengthen the support Elected Officials are able to provide to their constituents.
However, a porous IT environment poses major challenges: sensitive data can leak out to unapproved third-party AI websites with unclear data protections and security risks.
How might we plan reasonable guardrails to ensure a secure IT environment, while allowing for the use of some high-impact AI systems?
Keeping Our Legislative Experts in the Loop
This topic is for [Name, Name, and Name] to discuss.
AI tools could help staff immediately draft proposed bills and help compare multiple versions of a bill in a way that could quickly flag meaningful differences. These tools could instantly identify areas of law that might be unintentionally affected by a proposed bill. They could more easily format proposed bills and could help us manage an increased volume of bill submissions.
However, this immediacy, and the appearance of well-polished drafts, may rob our human legal experts of the chance to catch critical mistakes. This may create false assumptions of accuracy and reliability that result in worse legislation.
How might we plan reasonable guardrails to ensure our legislative experts are in the loop while allowing for the benefits of AI-boosted workflows?
Moving From Practicing to AI Best Practices
This topic is for [Name, Name, and Name] to discuss.
Many Member Offices are eager for AI Best Practices that they can share with their staff. Before we get to AI Best Practices... we just need some basic practice. [Insert description of small ways staff are experimenting.]
However, those efforts were just the first step. There are many different roles in the Parliamentary ecosystem, including roles that deal with security, physical infrastructure, administration and acquisitions, and audits. Staff have access to a wide variety of software, and there are significant differences in digital literacy.
How might we plan reasonable guardrails for staff that allow for small-scale innovation and practice, and that help the Parliament move towards holistic AI Best Practices?