On October 30, 2023, the Biden-Harris Administration issued an Executive Order (“EO”) directing various Executive Agencies to study and draft regulations addressing concerns about Artificial Intelligence (“AI”). Apprehension about the rapid proliferation of AI stems from its increasing use and influence throughout the information technology environment. Government and industry concerns have grown over its potential effects on personal privacy, national and individual security, and the accuracy of content in the public and private sectors. In contrast, the potential benefits of AI include dramatic improvements in the quality and speed of online services, information delivery, and online commerce. In particular, generative AI, the branch of AI that can create synthetic content, raises alarms over the risks of confusion, discrimination, reputational harm, and the dissemination of false narratives, yet it has its virtues. Examples of generative AI range from the creation of films and games for entertainment, to artificial models on websites that generate interest in new products, to maliciously constructed videos and images depicting celebrities and government officials in baneful, yet entirely contrived, ways. In a worst-case scenario, misuse of generative AI could contribute to social mistrust and societal upheaval. Without a doubt, AI can be a tool for good or for great harm.
With this EO, the Administration is jump-starting an effort to balance the risks and benefits of this growing technology. The EO attempts to tackle AI’s impacts on national security, data privacy, consumers, employees, and American intellectual property (“IP”). It aims to establish safeguards for citizen data in employment and commerce, and with that in mind, the Department of Commerce is designated as the lead agency for coordinating interdepartmental collaboration on AI policies, guidelines, and regulations. Commerce’s current role in developing and implementing federal privacy policies and practices makes it the logical choice for that assignment. Nevertheless, other departments and agencies are tasked with addressing AI in ways that fit their respective missions. In particular, the EO directs the Director of National Intelligence and the Secretary of Homeland Security to develop testing standards for critical infrastructure. The EO also directs the Secretary of Commerce to protect American IP through the Department’s traditional role in economic and commercial development.
The new EO modestly builds on the Administration’s previous Blueprint for an AI Bill of Rights (“BoR”), published in October 2022. While the BoR and the EO outline the priorities of the Administration, it is unclear exactly what the scope of any regulations will be, whether they can be effectively implemented, and whether they can withstand legal challenges. The EO laudably cites the importance of the effort to control AI in order to protect Americans’ civil rights while enhancing security. Concepts such as crime forecasting using AI analytics come to mind as areas where special attention will be needed to strike a balance between those interests.
There is a great deal of overlap between the risks associated with uncontrolled use of AI and the concerns that prompted development of personal data privacy laws in recent years. The national legislative framework for privacy consists of federal laws enacted in the 1980s, 1990s, and 2000s that, in large part, were designed to address privacy within limited commercial sectors such as the banking industry (e.g., the Gramm-Leach-Bliley Act) and the health care field (HIPAA). The federal government, however, has so far been unable to enact comprehensive privacy legislation that goes beyond the narrow confines of specific industries or business sectors. The result is a patchwork of varying and often conflicting laws at the state level that, for all its good intentions, has made compliance by the business community challenging. For example, consumer and employee data privacy in the United States is currently regulated by state laws, such as the California Consumer Privacy Act (“CCPA”) and the California Privacy Rights Act (“CPRA”), and other states’ variations.
The lack of comprehensive privacy legislation at the national level has been problematic, for example, in implementing (and maintaining) viable cross-border data transfer rules with the European Union, whose privacy laws and infrastructure tend to favor countries with reciprocal comprehensive privacy standards. Because of the accelerated pace of AI growth and its dramatic effect on the national economy, the importance of federal oversight for this unique technology cannot be overstated. Ideally, the federal government’s leadership in developing practical and reasonable standards and regulations for AI will fare better than it has with respect to privacy, and the EO may well be the Administration’s opening salvo to Congress to bring legislation forward, rather than merely holding hearings.