Artificial intelligence (AI) is being used in fintech at an increasing rate and is poised to have a major impact on the industry. AI applies and refines, or "trains," a series of algorithms on a large data set in order to identify patterns and make predictions for new data. The US Department of the Treasury, in its recent report, states that "PricewaterhouseCoopers estimated that by 2030, AI technologies could increase North American gross domestic product (GDP) by US$3.7 trillion and global GDP by US$15.7 trillion. Within the financial services sector, large banks report that AI could help cut costs and boost returns."
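To make this train-then-predict pattern concrete, the following is a minimal, hypothetical sketch in Python using the open-source scikit-learn library. The feature names, data, and the choice of a simple logistic-regression credit model are invented for illustration only; they are not drawn from the Treasury report or from any particular AI service.

```python
# Illustrative sketch of the train-then-predict pattern described above.
# Hypothetical example: a credit-evaluation model fit to historical loan data.
# Requires the open-source scikit-learn library (pip install scikit-learn).
from sklearn.linear_model import LogisticRegression

# Historical training data (invented for illustration). Each row is:
# [annual_income_usd, debt_to_income_ratio, years_of_credit_history]
X_train = [
    [55_000, 0.35, 4],
    [82_000, 0.20, 9],
    [39_000, 0.55, 2],
    [120_000, 0.15, 15],
    [47_000, 0.48, 3],
    [95_000, 0.25, 11],
]
# Known outcomes for each historical borrower: 1 = repaid, 0 = defaulted
y_train = [1, 1, 0, 1, 0, 1]

# "Training" refines the model's parameters to fit patterns in the data set.
model = LogisticRegression()
model.fit(X_train, y_train)

# The trained model then makes predictions for new, previously unseen data.
new_applicant = [[60_000, 0.30, 5]]
print(model.predict(new_applicant))        # predicted class (repay / default)
print(model.predict_proba(new_applicant))  # associated probability estimates
```

The regulatory concerns discussed below, such as bias and explainability, arise from exactly this structure: the model's predictions are only as sound as the historical data and training process behind them.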
Commentators and regulators are increasingly grappling with how AI affects fintech. But practically, how should financial institutions contract for AI services given the evolving regulatory view of AI? As with many other forms of fintech services, financial institutions contracting for AI services should follow the guidance of the Federal Financial Institutions Examination Council's (FFIEC) Outsourcing Technology Services handbook (Handbook), even though the Handbook does not mention AI specifically. As Governor Lael Brainard of the Federal Reserve discussed in her November 13, 2018 speech on AI, financial institutions' use of AI tools such as "chatbots, anti-money-laundering/know your customer compliance products, or new credit evaluation tools" will likely be classified as services to the financial institutions.
So how should financial institutions apply the Handbook to specific AI services? This article first addresses certain regulatory concerns about AI currently expressed by financial regulators – specifically, AI bias, explainability and accountability – in order to establish general guideposts for contracting for AI services. It then addresses certain contractual areas noted in the Handbook that should be accounted for when engaging an AI service provider – specifically, service provider selection, services scope and ongoing monitoring.