Artificial Intelligence (AI) is rapidly becoming the cornerstone of technology innovation, as evidenced by our nation’s spending and investment decisions, and the public sector is no exception. Stanford’s “Artificial Intelligence Index Report 2023,” which tracks and collates AI data in pursuit of becoming the most credible and authoritative source for AI data and insights, reports that Federal spending on AI contracts totaled $3.3 billion in 2022 – a 2.5-fold increase since 2017i. With global private investment tracking at $91.9B last year, the U.S. accounted for over 50% of the world’s total spend and more than three times that of the second-highest investor: China.
With AI’s undeniable prevalence has come an increasing urgency to evaluate its multifaceted impact and the regulatory challenges around operations, ethics, fairness, and accountability. As innovations continue to flourish and federal agencies further embrace these unprecedented capabilities, safeguarding against potential risks requires interdisciplinary collaboration on a global scale.
Considerations for Holistic Rulemaking
As they address these regulatory imperatives, lawmakers must balance responsible oversight and protection of citizens against the promotion of innovation and our nation’s competitiveness. The considerations for ensuring a sustainable future for the responsible use of AI span the following major areas:
1. Ethics and Morality: Because of the speed and efficiency with which it can process large volumes of data, AI holds immense promise for expediting decision-making. However, as AI models grow in scale and complexity, preventing discriminatory outcomes becomes more challenging. Ethical guardrails must be implemented to detect and mitigate biases, ensuring fairness and equity in decision-making (or decision-influencing) processes as we build our trust in and reliance on these systems. Beyond designing AI systems and refining their training models, the growth of AI use and its generative capabilities also introduces what may be considered ethically adjacent challenges, such as intellectual property (IP), ownership, and creative licensingii.
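To make the notion of an ethical guardrail concrete, consider the minimal sketch below. It computes a demographic parity gap, one common and deliberately simple bias signal; the decisions, group labels, and review threshold are all illustrative assumptions, not a standard prescribed by any of the policies discussed here.

```python
# Minimal sketch: demographic parity difference between two groups'
# favorable-outcome rates. All data and thresholds are illustrative.

def demographic_parity_difference(outcomes, groups, group_a, group_b):
    """Difference in favorable-outcome (1) rates between two groups."""
    def rate(g):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(members) / max(1, len(members))
    return rate(group_a) - rate(group_b)

# Hypothetical model decisions (1 = favorable) and aligned group labels.
decisions = [1, 1, 1, 0, 0, 0, 1, 0]
labels = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, labels, "A", "B")
THRESHOLD = 0.10  # illustrative; real thresholds are policy decisions
if abs(gap) > THRESHOLD:
    print(f"Parity gap of {gap:.2f} exceeds threshold; route for human review.")
```

In practice, agencies would weigh multiple fairness metrics, since measures like demographic parity can conflict with one another; the point is that a guardrail must be something a system can actually measure and act on.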
2. Transparency and Accountability: AI systems should be designed with traceability in mind and should be able to explain their decisions and actions, especially in contexts where those decisions significantly impact individuals or society. This requires developers and operators to approach designs with openness in mind, disclosing algorithms, data sources, and other information pertinent to audits that detect bias and errors. Additionally, while there is consensus that this level of transparency is crucial so users can understand how an AI arrives at a particular outcome and so that accountability can be properly assigned, lawmakers must also consider protecting the proprietary design of the underlying technology and investment interests. This is further complicated by certain use cases where transparency must be managed alongside security, such as military applications.
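What traceability can look like at the code level is sketched below, under heavy assumptions: a deliberately simple linear scorer (chosen because its per-feature contributions are directly explainable) whose every decision is written out as an audit record. The feature names, weights, and log format are hypothetical.

```python
# Hypothetical traceability sketch: a linear scorer that logs an audit
# record explaining each decision. All names and weights are invented.
import json
from datetime import datetime, timezone

WEIGHTS = {"income": 0.40, "tenure": 0.35, "delinquencies": -0.60}

def score_and_log(applicant):
    # For a linear model, per-feature contributions ARE the explanation.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = sum(contributions.values())
    audit_record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": applicant,
        "contributions": contributions,
        "score": round(score, 4),
    }
    print(json.dumps(audit_record))  # in practice: an append-only audit store
    return score

score_and_log({"income": 1.2, "tenure": 0.8, "delinquencies": 0.5})
```

Real deployed models are rarely this legible, which is precisely why the transparency debate is hard: the more capable the model, the more deliberate its operators must be about capturing records like this one.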
3. Privacy and Security: An AI’s utility is only as valuable as the data involved. While the heuristics vary, “a lot”iii is the commonly agreed-upon starting point. With these vast data sets comes the challenge of handling sensitive citizen information, safeguarding against AI-related vulnerabilities and threats, and protecting against adversarial attacks. When considering AI in contexts such as critical infrastructure, our current understanding of AI vulnerability management and mitigation becomes woefully inadequateiv. As our discoveries bolster the nation’s competitive edge, the unknown implications and opportunities for adversarial exploitation grow in step. Oversight that ensures parallel investment in the research and development of AI protective measures is a necessity for this field’s continual expansion.
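One small illustration of what AI vulnerability management can mean in practice: probing whether tiny input perturbations flip a model’s decision, which is the basic intuition behind adversarial attacks. The toy model, epsilon, and inputs below are illustrative assumptions, not a vetted security test.

```python
# Hypothetical robustness probe: nudge each input feature slightly and
# check whether the model's decision flips. Everything here is a toy.

def toy_classifier(features):
    # Stand-in for a deployed model: a fixed linear decision boundary.
    w, b = [0.7, -0.4, 0.2], -0.1
    return 1 if sum(wi * xi for wi, xi in zip(w, features)) + b > 0 else 0

def is_locally_robust(features, epsilon=0.05):
    """False if any single-feature nudge of size epsilon flips the label."""
    baseline = toy_classifier(features)
    for i in range(len(features)):
        for delta in (-epsilon, epsilon):
            perturbed = list(features)
            perturbed[i] += delta
            if toy_classifier(perturbed) != baseline:
                return False
    return True

print(is_locally_robust([0.3, 0.2, 0.4]))
```

Genuine adversarial testing searches far larger perturbation spaces against far larger models, which is why the paragraph above argues for dedicated R&D investment rather than ad hoc checks.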
4. Roles and Responsibilities: Human oversight and direction are required – but the question becomes “whose?” As a field that does not exist without interdisciplinary collaboration, AI has stakeholders almost as layered as its artificial neural networks. Defining inclusive but adaptive governance requires thoughtful analysis of the diverse stakeholder groups involved. On top of that, identifying the right level, point, and amount of accountability for potential outcomes can turn into a game of corporate hot potato in which the victims are the very people we should be serving – who is held accountable for copyright violations or the spread of misinformationv? Clear lines of responsibility drawn upfront can blur when liability awaits at the end.
Recognizing the importance of setting a reliable foundation, the European Union has already led the charge: in May 2023 it advanced draft rules under the Artificial Intelligence Act that “would ensure a human-centric and ethical development of Artificial Intelligence (AI) in Europe.” In the same month, OpenAI’s leadership testified before Congress in America.
5. Standardization Framework: Standardization sets a fundamental foundation for seamless collaboration and data sharing – across federal agencies and, given the global nature of AI, with international partners. Establishing consistency reduces friction for system-to-system (and inter-agency) interoperability and for harmonizing international cooperation. This also means that as rule-makers determine the common ground, they must respect national and international sovereignty and reconcile disparities across industries, departments, agencies, and public opinion.
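As a small, hypothetical illustration of why shared standards reduce interoperability friction, the sketch below defines a machine-readable AI use-case record that two agencies could exchange without translation. Every field name is an invented assumption, not an adopted federal schema.

```python
# Hypothetical sketch of a standardized AI use-case record for
# inter-agency exchange. All field names are illustrative assumptions.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIUseCaseRecord:
    agency: str
    system_name: str
    purpose: str
    data_sources: list
    risk_tier: str          # e.g., an agency-assigned impact level
    human_oversight: bool   # whether a human reviews outputs

record = AIUseCaseRecord(
    agency="Example Agency",
    system_name="Benefits Triage Assistant",
    purpose="Prioritize incoming benefits claims for caseworker review",
    data_sources=["claims_intake", "historical_outcomes"],
    risk_tier="moderate",
    human_oversight=True,
)

# A common serialization lets heterogeneous systems read the same record.
print(json.dumps(asdict(record), indent=2))
```

The value is not in any particular field but in the agreement itself: once agencies (and international partners) converge on one shape for this data, cataloging, audits, and cross-border cooperation all get cheaper.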
6. Workforce Impact: Legislation is enacted for the benefit of the people, which entails preparing the workforce for potential disruptions and readying it for AI-related jobs and skills. From the era of industrialization through the internet boom, our relationship with technology has been constantly redefined. Lawmakers hold a critical responsibility to ensure sustainable jobs and a vibrant, thriving economy. Whether this means economic diversification initiatives to redefine jobs and human-machine collaboration with automation in mind or creating new digital dexterity infrastructure, managing workplace disruption is a critical component of AI success.
Code of Federal Regulations (CFR) for Code of AIs?
As summarized in Government’s Real Relationship with Artificial Intelligencevi, several major actions are already underway at the Presidential and Congressional levels. While the February 2023 Executive Order on Further Advancing Racial Equity and Support for Underserved Communities Through the Federal Government (EO 14091vii) does not center on AI, it explicitly emphasizes that the technology is subject to equity standards as early as its design phase and should uphold objectives to root out bias. While this is not the first incorporationviii of AI-specific stipulations in policy, it is demonstrative of AI’s growing role at the forefront of discussions about our technological future.
Did you know? The Government Accountability Office (GAO) already published an Accountability Framework for AI back in June 2021.
Less than three months later, the Biden-Harris Administration announcedix new actions to promote responsible American innovation in a way that continues to protect citizens’ rights and safety. In these announcements, several key federal documents were re-introduced, including the White House’s October 2022 Blueprint for an AI Bill of Rightsx, the National AI Research Resource (NAIRR) Task Force’s January 2023 roadmap for Strengthening and Democratizing the U.S. AI Innovation Ecosystemxi, and the AI Risk Management Framework (RMF)xii published by the National Institute of Standards and Technology (NIST) in January 2023.
Despite the remarkable progress and recently renewed surge of attention, much of “U.S. national strategy on AI is defined through legislation and Executive Ordersxiii.” While no comprehensive federal AI law yet exists, enacted measures such as the National Artificial Intelligence Initiative (NAII) of 2021, the AI in Government Act of 2020, Executive Order 13960 – Promoting the Use of Trustworthy AI in the Federal Government (2020), and Executive Order 13859 – Maintaining American Leadership in AI (2019) serve to codify fragmented but essential components, such as sustaining consistent support for AI R&D, establishing the GSA AI Center of Excellence, and setting principles for AI use case cataloging.
Undeployed Code
A singularly coherent federal strategy may not be too far away. Several draft regulations are underway, including the following, which would carry enterprise-wide implications across the federal government:
- The Office of Management and Budget (OMB) has reportedly drafted more robust guidancexiv that sets direction for federal agencies’ use and management of AI. It outlines ten (10) requirements, including naming a Chief AI Officer, and seven (7) risk management areas for agencies to consider as the federal government works to remove barriers to AI use, expand adoption, and implement guardrails. The guidance is expected to be released for public comment in the Federal Register in October.
- The AI Training Expansion Act of 2023xv, introduced in July 2023, would amend the Artificial Intelligence Training for the Acquisition Workforce Act to require upskilling federal workers and to set aside funding for this professional development, preparing the federal workforce for upcoming disruptions.
- The National AI Commission Actxvi is a bipartisan bill introduced in June 2023 that would establish a governance entity whose goals include reviewing “the United States’ current approach to AI regulations” and providing recommendations on necessary improvements (including governmental structures) to mitigate AI risks and harms while maintaining America’s role as a global innovation leader.
- The Digital Platform Commission Actxvii is a bill from last year that was revised and re-introduced in May 2023 by Colorado Senator Michael Bennet, calling for a “comprehensive regulation of digital platforms to protect consumers, promote competition, and defend the public interest.” Senator Bennet also introduced several other draft policies, including the Oversee Emerging Technology Actxviii (May 2023) and the Assuring Safe, Secure, Ethical, and Stable Systems for AI (ASSESS AI) Actxix (April 2023), which would initiate Senior Official designation and oversight across the Federal Government and a total review of existing AI policies, respectively.
The power and versatility of AI across a spectrum of applications – from streamlining administrative processes to bolstering national security – offers efficiency improvements and cost reductions at an unprecedented rate. America’s ability to fully harness this power and maximize its potential is critical to its global competitiveness and closely tied to its ability to navigate and finalize proportionate guidance.
Compiling the Code
In the era of AI-driven federal technology, regulation is not a hindrance but a necessity. As the nation continues to lead in AI technology development and investment, it must also devote focus to a shared commitment to responsible AI regulation. Defining regulations is only step one: after the lengthy process of soliciting input, engaging in dialogue with industry experts, evaluating the impact of AI on federal operations, and garnering public support, lawmakers are still up against consensus building and rule implementation. Even following the successful rollout of new regulations, agencies must measure their impact and effectiveness to make informed adjustments and ensure our technology empowers, rather than threatens, the public interest.
Balancing innovation with responsible and future-proofed AI practices is paramount, and our federal government’s commitment to responsible AI regulation, in tandem with international cooperation and research-backed measurement, will shape the future of AI in federal technology and the world.