Having an AI policy that outlines acceptable use, and documenting assessments that establish that AI systems are used in a manner consistent with the policy and that the benefits outweigh potential harms, can go a long way in managing legal and reputational risk.
The use of Artificial Intelligence (AI) tools has increased dramatically in the past six months, fueled in no small part by the launch of ChatGPT in November 2022. Although sophisticated algorithms are part of many commonly used technology solutions, company management and legal departments have often found themselves unprepared to assess and manage the risks associated with business use of generative AI and other AI technology. Developing and implementing a policy and governance program for AI use requires understanding the specific use cases for AI, the inputs and outputs and how the processing works, how the AI impacts the company, people and other entities, what laws apply, and what and when notice and consent are required or prudent.
Understand the Terms of Art
The increase in the use and application of AI has generated a lot of jargon. At its core, AI is automated processing of data, based on training data and processing prompts, that can generate outputs for specified objectives, such as predictions, recommendations or decisions. Understanding the key terms, and making sure that your team has a common understanding of them, is important to assessing AI risks and policy. A lexicon is included as an appendix. And, as the Federal Trade Commission (FTC) warns: “AI is defined in many ways and often in broad terms … it may depend on who is defining it and for whom … what matters more is output and impact.”
Educate Stakeholders On AI Risks and Regulation
One common misconception about AI is that it is not regulated. In fact, existing privacy, intellectual property and employment laws, just to name a few, already regulate AI: its use, its inputs, how those data inputs are or were collected, whether an AI system user has a legal right to use the inputs, decisions made by AI and its other outputs, the effects of those decisions and outputs, and so on.
The FTC has signaled that greater scrutiny of the use of AI is coming. A recent FTC advance notice of proposed rulemaking requests comment from the public on whether the FTC should ‘‘forbid or limit the development, design, and use of automated decision-making systems that generate or otherwise facilitate outcomes that [are “unfair” or “deceptive”].’’ Given the FTC’s broad and fluid interpretation of what constitutes “unfair” outcomes, a business seeking to implement AI needs to carefully consider the various ways that it could impact individuals and ensure that it could defend its use. The FTC has recently blogged that “If you develop or offer a synthetic media or generative AI product, consider at the design stage and thereafter the reasonably foreseeable — and often obvious — ways it could be misused for fraud or cause other harm.” The FTC is also concerned about false or exaggerated claims about the use of AI and about the capability of AI-enabled products and services. Other federal agencies are following the FTC’s lead, and on April 25, 2023, the FTC issued a joint statement with the Consumer Financial Protection Bureau (CFPB), Department of Justice (DOJ) and Equal Employment Opportunity Commission (EEOC) explaining that each agency would use its respective enforcement authority to regulate use of AI to protect consumers from discrimination, bias and other harms. Several states have introduced AI-specific legislation in the last several years, and new state consumer privacy laws regulate some types of AI. The hype around generative AI has accelerated legislative activity. Regulators outside of the U.S. are concerned with AI, too. Canada is considering comprehensive AI legislation: the Artificial Intelligence and Data Act, which proposes to regulate how AI is developed and used. The European Union is considering new legal frameworks, including the EU AI Act and a new Directive on AI liability.
The European Union’s supervisory authorities are not waiting for specific AI legislation and are already looking at AI through the lens of data protection law, launching investigations into the use of personal data to train AI and, in some territories, taking action (including a temporary ban in Italy) against providers of AI services. China issued for public comment its draft Administrative Measures for Generative Artificial Intelligence Services on April 11, 2023 (the consultation closed on May 10, 2023), which proposed that a security assessment must be filed for generative AI services provided to the public. South Korea is in the process of passing into law its Act on Promotion of AI Industry and Framework for Establishing Trustworthy AI, which will identify what is classified as high-risk AI, for which more stringent requirements will be imposed.
Privacy laws and regulations, and the regulators who enforce them, need to be considered whenever personal data is involved. The now-lifted ban of ChatGPT by the Italian data protection authority and investigations by a handful of others have made clear that privacy and data protection should be top of mind for organizations implementing AI systems if personal data is implicated. Compliance challenges are amplified where processing involves more than one jurisdiction. Privacy laws are territory specific, and many of them impose cross-border transfer restrictions or requirements. In the Asia Pacific region, several jurisdictions have data localization rules that will make AI-related processing especially tricky. Further, if an AI system will process personal data, both the lawful basis for the use of that data and how data subject rights (such as access, objection to processing and deletion/erasure) can be honored are key considerations. More broadly, securing sensitive data used in connection with the AI system; identifying, responding to and remediating actual or suspected information security compromises; and updating information security practices to address the AI system need to be part of the development and deployment of any AI system.

AI systems deployed in the employment context are particularly risky. In Europe, even if candidates give their express consent to the use of AI, the employees whose characteristics are used for matching will probably be deemed not to be in a position to provide freely given consent. In addition, works councils, trade unions or other employee representative committees may have co-determination rights with regard to the implementation of AI, as it may change processes in companies significantly or enable performance or behavior monitoring.
In Germany, for instance, where works council rights are historically strong, most AI applications will require the prior signing of an agreement with the works council, and violations could lead to criminal fines. California’s omnibus privacy law fully applies to California HR data as of Jan. 1, 2023, and this year the California privacy agency will issue regulations on automated decision-making and profiling that will likely have a sweeping effect on the use of AI in HR use cases. New York City’s law regulating use of AI in employment decisions (Local Law 144) is in effect and will be enforced by the city starting on July 5. It also provides a private right of action.
The overlap between AI and intellectual property (IP) protection and enforcement is significant. Companies need to consider these issues when seeking IP protection (e.g., patents and copyrights) and also when assessing the risk of IP infringement, such as through the use of third-party data, images, content, and other materials as inputs to a generative AI system and content generated by those systems. To add complexity for global organizations, many key IP issues and concepts differ across jurisdictions.
Develop an AI Governance Policy and Framework
An AI policy, and a framework for applying the policy to AI development and use, is crucial to ensuring legal compliance, ethical processing and risk minimization.
Determine where you are positioned. Is your company an AI user, an AI provider, or both? This informs the potential risks and impacts, and how to address them.
Define what AI means in your organization and your use cases. Without a clear and common definition and an understanding of how your company is using AI, it will be impossible to build an AI framework.
Leverage existing processes and procedures to address AI risks and impact — privacy and data governance, third-party risk/vendor assessments, etc. Involve necessary stakeholders (e.g., IT, IS, Legal, HR and Marketing) in the process of developing and operating the company’s policy and framework for development and implementation.
Finally, don’t reinvent the wheel. Borrow and incorporate responsible AI principles from existing frameworks, such as those of the Organization for Economic Cooperation and Development (OECD) and the National Institute of Standards and Technology (NIST). The key components of a responsible AI framework include: ethical purpose; accountability; transparency; fairness and non-discrimination; respect for privacy, confidentiality and proprietary rights; compliance with applicable laws; and ensuring that AI is safe, reliable and secure. There is a growing body of AI governance frameworks, including from the World Economic Forum and Singapore, which have published a Model AI Governance Framework and a self-assessment checklist for organizations that deploy AI.
Conduct Risk and Impact Assessments
Internal development of AI and use of third-party AI tools should undergo an initial risk assessment and ongoing impact assessments to identify risks of harm, the appropriateness of inputs, the credibility, non-bias and non-infringement of outputs, and the effectiveness of mitigation efforts. Numerous new and proposed laws and industry frameworks call for risk and impact assessments. In addition, claims you make about AI need to be assessed like any other marketing claim.
AI depends on training with data sets to develop and improve the processing that powers it. First, biased, stale and faulty inputs will result in output errors and other harms. Next, unauthorized use of personal data and third-party intellectual property can result in claims related to the use of the training data itself, as well as claims arising out of the derivatives created from its processing. Finally, unless otherwise agreed with third-party AI providers, such as in a license for a private instance of the AI tool, company confidential and proprietary data submitted to the tool may be used for non-company purposes, threatening trade secret and intellectual property protections. Private instances can also be configured with user controls that restrict certain uses.
The outputs of an AI system are essentially derivative works of the inputs, and if the inputs lacked sufficient consent to their use, the outputs can infringe third-party personal and proprietary rights. Also, there may be issues regarding the ownership of the outputs. Does the AI provider contractually take or share ownership? In the U.S., works not created by human authorship are not entitled to copyright protection. Thus, if an AI system generates content that would be protectible had a human author created it, the company will likely lack the exclusive rights that come with copyright, which may or may not be important, depending on context. Finally, outputs may lack credibility and accuracy (e.g., AI “hallucinations,” which could be libelous or otherwise harmful due to inaccuracy) and, absent proper controls, can be objectionable in a variety of ways (e.g., biased, profane, or relating to illegal or undesirable activities).
In addition, ensure that claims you make about your use of AI and your AI-enabled products are accurate, not misleading, and substantiated.
Treat AI Governance as a Business and Compliance Imperative
ChatGPT has catalyzed the discussion around, and adoption of, AI. AI governance will help avoid a “legal as a roadblock” mentality and will assist your organization in complying with existing laws and preparing for forthcoming AI-specific regulation. If your organization is an AI provider, laws will likely soon require not only your organization’s own AI governance and compliance, but also that you enable and assist with your customers’ compliance. If your organization is an AI user, legal limitations, obligations and risks, as well as reputational risks, are likely if prudent decisions are not made to ensure that the benefits outweigh the risk of harms.
Part of governance will be contract management. However, contracting related to use of AI technology is particularly thorny because of the newness of most AI technology and the rapidly evolving legal landscape. Parties on both sides must carefully consider privacy, confidentiality, data protection, data ownership and use rights, as well as the more traditional terms related to warranties, indemnities, limitations and exclusions. Their respective interests will not always align. For example, an AI technology provider may offer its AI technology “as is,” reflecting the position that risk with technology innovations is a cost of doing business. The AI technology user’s position, however, is likely that the AI technology provider must stand behind its technology, including by providing risk and impact assessments verifying that use of the technology will not harm any individual affected by its use. The AI tech provider may assert that data ingested by, and processed through, the technology is needed to improve the technology, whereas the technology user may want to ensure that the personal and confidential information it submits to and through the AI technology remains private and confidential. Many AI providers offer the ability to license a private instance for a fee, which allows for greater protection for the licensee, including custom controls and confidentiality of inputs and outputs, and this option should be explored during the AI procurement process.
The term “artificial intelligence” or “AI” has evolved as a catch-all term for a continuum of technology by which algorithms use inputs to produce outputs. On one end of the continuum is task-specific automated processing that can handle large amounts of data to complete a task infinitely faster than a human could complete the same task. On the other end is so-called artificial general intelligence (AGI), which is a man-made intelligence that is indistinguishable from the human mind.
Most experts agree that AGI is still out of reach — and perhaps not achievable at all — but between the task-specific algorithms and AGI are increasingly powerful AI systems trained to draw inferences from massive data sets in order to achieve particular outcomes. This acceleration in algorithmic sophistication — made possible by the decreased cost and increased power of cloud computing — may explain why experts have not yet settled on a consensus definition for AI.
*This article appeared in Cybersecurity Law & Strategy, an ALM publication for privacy and security professionals, Chief Information Security Officers, Chief Information Officers, Chief Technology Officers, Corporate Counsel, Internet and Tech Practitioners, and In-House Counsel. Visit the website to learn more. Reprinted with permission.