AI Regulation: The Next Frontier
As the federal government continues to wrestle with the complex issue of regulating Artificial Intelligence (AI) in the wake of President Biden’s Executive Order, states have already proposed or enacted AI regulation, and even more will attempt to tackle the issue in 2024. Two recent developments from California and New Hampshire highlight the different approaches states are taking in the absence of federal preemption. Meanwhile, the European Union is proceeding with efforts to flesh out its own regulatory framework for AI. Even a preliminary review of these frameworks reveals inconsistencies and operational challenges. What does this mean for businesses and consumers?
CPPA Releases Draft Automated Decisionmaking Technology Regulations
The California Privacy Protection Agency (CPPA) has released draft Automated Decisionmaking Technology (ADMT) regulations. The CPPA Board discussed the proposal at its December 8 meeting, but formal rulemaking is not expected to begin until next year.
As provided for in the California Consumer Privacy Act (CCPA), the draft ADMT regulations implement a consumer’s right to information about a business’s use of ADMT as well as the right to opt-out of a business’s use of the technology to process consumer data. The CPPA proposes to define ADMT as “any system, software, or process—including one derived from machine-learning, statistics, or other data-processing or artificial intelligence—that processes personal information and uses computation as whole or part of a system to make or execute a decision or facilitate human decisionmaking.” The CPPA also makes clear that the definition includes profiling.
The draft regulations require a business using ADMT to provide a pre-use notice informing consumers of the business’s use of ADMT (which could take the form of a link to an unabridged risk assessment regarding the use of ADMT) and of the consumer’s rights to access further information and to opt out. The pre-use notice must be readily available, offered in the primary manner in which the business interacts with the consumer, and include, in plain language, a description of the following:

- the purpose of the business’s ADMT use;
- the consumer’s right to opt out and how to exercise it;
- the consumer’s access rights and how to exercise them; and
- a simple, easy-to-use method by which the consumer can obtain additional information about the business’s ADMT use.

The draft also outlines additional mandatory disclosures on the logic used, the intended output, and the role of any human involvement, and it requires a business to notify the consumer of any adverse decision producing a legal or similarly significant effect that results from the business’s use of ADMT.
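For readers building compliance tooling, the notice requirements can be read as a checklist of mandatory elements. Below is a minimal Python sketch of one way to represent that checklist; the field names are our own hypothetical labels, not the CPPA’s, and the draft prescribes no particular technical format.

```python
# Hypothetical structured record of the draft regulations' pre-use notice
# contents. Field names are illustrative, not taken from the CPPA draft.
from dataclasses import dataclass

@dataclass
class ADMTPreUseNotice:
    purpose: str               # plain-language purpose of the ADMT use
    opt_out_instructions: str  # the right to opt out and how to exercise it
    access_instructions: str   # access rights and how to exercise them
    more_info_method: str      # simple, easy-to-use way to get additional detail
    risk_assessment_url: str | None = None  # optional link to the unabridged risk assessment

    def is_complete(self) -> bool:
        """True if every mandatory plain-language element is present."""
        return all([self.purpose, self.opt_out_instructions,
                    self.access_instructions, self.more_info_method])

notice = ADMTPreUseNotice(
    purpose="We use automated scoring to prioritize job applications.",
    opt_out_instructions="Email privacy@example.com or use the in-app toggle.",
    access_instructions="Request the logic, output, and human role via the same channels.",
    more_info_method="See the 'About our ADMT' page linked from every screen.",
)
assert notice.is_complete()
```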
Consumers must have a right to opt out when ADMT is used to make a decision that produces legal or similarly significant effects concerning a consumer, to profile a consumer in his or her capacity as an employee, independent contractor, job applicant, or student, or to profile a consumer in a publicly accessible place. The Board will also consider whether profiling for behavioral advertising, profiling a consumer known to be under 16, and processing personal information to train ADMT should require an opt-out; the conundrum is that most age-screening tools, whether for age estimation or age verification, themselves rely on AI technologies.
Given the breadth of the CCPA itself, these proposed ADMT regulations will affect many businesses around the world, including those that use ADMT to facilitate marketing, those that rely on large language model AI technologies to offer connected and other products, and those with California employees. While the level of detail required in the notices is likely to generate significant comment, how to deliver such detailed notices on the growing number of AI-enabled products that have no screen raises an entirely separate complication.
New Hampshire’s AI Code of Ethics
Earlier this year, New Hampshire released a Code of Ethics for the Use and Development of Generative Artificial Intelligence (GAI) and Automated Decision Systems (ADS). The Code broadly alludes to the benefits and use cases of GAI and ADS, but it also acknowledges the risks these technologies can pose.
The Code first lays out the fundamental rights that, for New Hampshire, serve as the foundation of the ethical use and development of GAI and ADS technology. It requires that AI systems be developed with human dignity in mind and acknowledges that humans must retain individual freedom. It also highlights the importance of equal access to AI’s benefits and a commitment to a Code that is human-centric at its core. From a governmental perspective, the Code requires respect for democratic processes and calls for GAI and ADS that preserve the individual rights of residents and visitors. Finally, the Code describes the importance of unbiased AI outputs to ensure moral worth and dignity.
The Code then lays out four ethical principles: respect for human autonomy, prevention of harm, fairness, and explicability. The ethical principles are meant to align with the fundamental rights to optimize the use of AI systems in the state.
Lastly, the Code describes technical and social requirements designed to support the implementation of GAI and ADS technology. These include human agency and oversight over final decisions supported by AI systems, as well as the technical robustness and safety of those systems. The Code calls for privacy, data governance, and transparency in AI use and processes; emphasizes social considerations such as non-discrimination, diversity, and societal wellbeing; and requires accountability measures for every use case of AI in the state.
New Hampshire will review the Code annually and, when significant changes occur relating to best practices of AI use and development, determine if updates to the Code are required.
European Union: Automated Decisionmaking Case Law and AI Legislative Progress
The political agreement reached on the EU AI Act was widely reported in the media, but it is not the end of the saga for the European Union’s general legislation on artificial intelligence: the agreement still has to be translated into legal wording and technical annexes. Those details will be significant in determining the scope of the obligations with which businesses must comply.
So far, the main outcome of the AI Act for most businesses will be a practical one: each business will need an AI governance framework to help it identify which uses of AI tools are permitted, which are subject to stricter compliance obligations, and which are prohibited (a simple triage sketch follows the examples below).
For instance:
- the use of AI systems for recruitment will likely be subject to heightened compliance obligations, such as human oversight, transparency, accuracy, and risk mitigation;
- chatbots would generally be subject to more limited transparency obligations (e.g., an obligation to inform users that they are interacting with an AI system); and
- spam filters would likely be freely usable.
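For illustration, the triage such a governance framework must perform can be sketched as a simple inventory lookup. The tiers and example mappings below merely paraphrase the examples above; the names are hypothetical, and actual classification under the final text of the Act will require legal analysis.

```python
# Toy triage helper illustrating the classification an AI governance framework
# must perform. Tiers and mappings paraphrase the examples in the text above.
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "heightened obligations: human oversight, transparency, accuracy, risk mitigation"
    LIMITED = "limited obligations: disclose that users are interacting with an AI system"
    MINIMAL = "freely usable"

# Hypothetical inventory mapping internal use cases to tiers.
USE_CASE_TIERS = {
    "cv_screening": RiskTier.HIGH_RISK,   # recruitment systems
    "support_chatbot": RiskTier.LIMITED,  # chatbots
    "spam_filter": RiskTier.MINIMAL,      # spam filters
}

def triage(use_case: str) -> RiskTier:
    # Unknown uses default to HIGH_RISK pending review, a conservative choice.
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH_RISK)

print(triage("support_chatbot").value)
```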
The AI Act is just one element, however. The General Data Protection Regulation (GDPR), for instance, already regulates automated decisionmaking that involves the processing of personal data, authorizing it in only three scenarios: where the automated decisionmaking is (i) based on the explicit consent of the data subject (i.e., the individual); (ii) necessary for entering into, or the performance of, a contract between the data subject and a controller; or (iii) authorized by EU or member state law, subject to specific conditions. In each case, the individual must have the possibility of human intervention.
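As a reading aid only, the three scenarios plus the human-intervention safeguard can be expressed as a simple gate. This hypothetical sketch is not legal logic; whether a given basis actually applies is a fact-specific legal question.

```python
# Sketch of the GDPR's three-scenario gate for automated decisionmaking.
# Names are illustrative; real compliance analysis cannot be reduced to booleans.
from dataclasses import dataclass

@dataclass
class AutomatedDecisionContext:
    explicit_consent: bool              # (i) explicit consent of the data subject
    contract_necessity: bool            # (ii) necessary for a contract with the controller
    authorized_by_law: bool             # (iii) authorized by EU or member state law
    human_intervention_available: bool  # safeguard required in each case

def is_permitted(ctx: AutomatedDecisionContext) -> bool:
    has_basis = ctx.explicit_consent or ctx.contract_necessity or ctx.authorized_by_law
    return has_basis and ctx.human_intervention_available
```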
On December 7, 2023, the Court of Justice of the European Union (CJEU) rendered a significant judgment on this topic (the SCHUFA case). There, a credit reference agency prepared a credit score at the request of a financial institution, and the facts showed that the score played a decisive role in the institution’s decision whether to grant a loan to an individual. For this reason, the CJEU considered that the scoring itself constituted automated decisionmaking. In a separate decision on December 19, 2023, the Belgian Data Protection Authority became the first to apply the CJEU’s reasoning, finding that a car-sharing platform provider had carried out automated decisionmaking by automatically suspending a user’s account following an automated, routine verification of accounts.
The lesson? If you rely on scores or other indicators from third parties, consider documenting either how your employees combine those factors to come to their own decision, or your justification for what is potentially automated decisionmaking.
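One hypothetical way to keep that documentation is a structured decision record capturing the third-party indicators, the other factors the human reviewer weighed, and the rationale for the final call. The fields below are illustrative assumptions, not a prescribed format.

```python
# Illustrative decision record showing that a human, not the third-party score
# alone, made the final decision. Fields are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    subject_id: str
    third_party_indicators: dict  # e.g., {"credit_score": 640}
    other_factors: dict           # factors the reviewer weighed independently
    reviewer: str                 # the human who made the final call
    rationale: str                # how the factors were combined
    decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

record = DecisionRecord(
    subject_id="applicant-123",
    third_party_indicators={"credit_score": 640},
    other_factors={"income_verified": True, "tenure_years": 7},
    reviewer="loan_officer_7",
    rationale="Score below threshold, but verified income and tenure outweigh it.",
)
```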
Conclusion
It is clear that the potential impact of AI on consumers and employees is prompting discussions around the world on appropriate regulatory responses. While there are common themes and concerns, governments will likely differ in their approaches, creating ongoing compliance challenges while policies are debated. The objective remains for stakeholders across civil society to work together to identify and manage the risks of AI while promoting innovation, so that the benefits and efficiencies it may bring can also be realized.