Corresponding author: Alette Tammenga (alettetammenga@hotmail.com). Academic editor: Chris D. Knoops
© 2020 Alette Tammenga.
This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY-NC-ND 4.0), which permits copying and distribution of the article for non-commercial purposes, provided that the article is not altered or modified and the original author and source are credited.
Citation:
Tammenga A (2020) The application of Artificial Intelligence in banks in the context of the three lines of defence model. Maandblad Voor Accountancy en Bedrijfseconomie 94(5/6): 219-230. https://doi.org/10.5117/mab.94.47158
The use of Artificial Intelligence (AI) and Machine Learning (ML) techniques within banks is rising, especially for risk management purposes. The question arises whether the commonly used three lines of defence model is still fit for purpose given these new techniques, or if changes to the model are necessary. If AI and ML models are developed with involvement of second line functions, or for pure risk management purposes, independent oversight should be performed by a separate function. Other prerequisites to apply AI and ML in a controlled way are sound governance, a risk framework, an oversight function and policies and processes surrounding the use of AI and ML.
Artificial intelligence, banks, machine learning, risk management, three lines of defence, governance
The use of Artificial Intelligence and Machine Learning in the banking industry is increasing. What do these techniques entail? What are their main applications and what are the risks concerned? Is the three lines of defence model still fit for purpose when using these techniques? These are the topics that will be addressed in this article.
Technology and data are playing an increasingly important role in the banking industry. While Artificial Intelligence (AI) was initially mostly used in client servicing domains of the bank, more and more applications for risk management purposes can be observed.
A common model to use within banks is the three lines of defence (3LoD) model. This model consists of a first line in the business, being responsible for managing risks, a second line risk management function in an oversight role and a third line function: internal audit. Given the expanding use of AI and machine learning (ML) within banks, the question arises whether this 3LoD model is still fit for purpose given these new developments, or if changes to the model are necessary.
This article aims to answer the question: “How can the application of Artificial Intelligence and Machine learning techniques within banks be placed in the context of the Three lines of defence model?”
This article will first address the basic concepts of AI and ML and the 3LoD model. It will then give an overview of the applications observed throughout banks and the risks and challenges of using AI and ML. After that, AI and ML are placed in the context of the 3LoD model, addressing the prerequisites to apply AI and ML in a controlled way. The article finishes with a regulatory view, the emergence of potential new market wide risks, conclusions and recommendations.
As a start, it is important to clarify the concepts of Artificial Intelligence (AI) and Machine Learning (ML), which are often used interchangeably. Several definitions can be found. AI is mostly viewed as intelligence demonstrated by machines, with intelligence defined by reference to what we regard as intelligence in humans (Turing 1952 cf Shieber 2004 in
AI uses instances of Machine Learning as components of the larger system. These ML instances need to be organized within a structure defined by domain knowledge, and they need to be fed data that helps them complete their allotted prediction tasks (
As
In unsupervised learning, a dataset is analysed without a dependent variable to estimate or predict. Rather, the data is analysed to show patterns and structures in a dataset (
So the main difference between supervised and unsupervised ML is the tagging of historical data with business outcomes in supervised learning, where this is not done in unsupervised learning. ‘Reinforcement learning’ falls in between supervised and unsupervised learning. In this case, the algorithm is fed an unlabelled set of data, chooses an action for each data point, and receives feedback (perhaps from a human) that helps the algorithm learn. For instance, reinforcement learning can be used in robotics, game theory, and self-driving cars (
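The distinction can be made concrete with a toy sketch in plain Python (illustrative only; neither the algorithms nor the data come from the sources cited): a supervised nearest-centroid classifier learns from history tagged with business outcomes, while an unsupervised one-dimensional k-means finds structure in the same numbers without any tags.

```python
# Toy illustration of supervised vs. unsupervised learning.
# Not a production model: data and algorithm choices are invented.

def nearest_centroid_fit(points, labels):
    """Supervised: learn one centroid per label from tagged history."""
    centroids = {}
    for lbl in set(labels):
        members = [p for p, l in zip(points, labels) if l == lbl]
        centroids[lbl] = sum(members) / len(members)
    return centroids

def nearest_centroid_predict(centroids, x):
    """Classify a new case by its closest learned centroid."""
    return min(centroids, key=lambda lbl: abs(x - centroids[lbl]))

def kmeans_1d(points, k=2, iters=20):
    """Unsupervised: find k clusters with no outcome labels at all."""
    centers = sorted(points)[:k]  # naive initialization
    for _ in range(iters):
        groups = {c: [] for c in centers}
        for p in points:
            nearest = min(centers, key=lambda c: abs(p - c))
            groups[nearest].append(p)
        centers = sorted(sum(g) / len(g) for g in groups.values() if g)
    return centers

# Supervised: historical data tagged with business outcomes.
history = [1.0, 1.2, 0.8, 5.0, 5.5, 4.8]
outcomes = ["good", "good", "good", "bad", "bad", "bad"]
model = nearest_centroid_fit(history, outcomes)
print(nearest_centroid_predict(model, 1.1))  # -> good

# Unsupervised: same numbers, no tags; only patterns emerge.
print(kmeans_1d(history))  # two cluster centres
```

The same data yields a different kind of answer in each mode: a predicted business outcome in the supervised case, and unlabeled groupings in the unsupervised case.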
In discussions about AI, the concept of deep learning or neural networks is also mentioned often. In deep learning, multiple layers of algorithms are stacked to mimic neurons in the layered learning process of the human brain. Each of the algorithms is equipped to lift a certain feature from the data. This so-called representation or abstraction is then fed to the following algorithm, which again lifts out another aspect of the data. The stacking of representation-learning algorithms allows deep-learning approaches to be fed with all kinds of data, including low-quality, unstructured data; the ability of the algorithms to create relevant abstractions of the data allows the system as a whole to perform a relevant analysis. Crucially, these layers of features are not designed by human engineers, but learned from the data using a general-purpose learning procedure. They are also called ‘hidden layers’ (
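A minimal sketch of the stacking idea (illustrative only): each layer transforms the previous layer's representation into a new abstraction. The weights below are fixed by hand for brevity, whereas in deep learning they would be learned from the data by a general-purpose procedure.

```python
# Forward pass through two stacked ("hidden") layers.
# Weights are hand-picked here purely to keep the sketch short;
# in practice they are learned, not designed by human engineers.
import math

def layer(inputs, weights, biases):
    """One dense layer: weighted sums passed through a sigmoid."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

x = [0.5, -1.0]                                       # raw input features
h1 = layer(x, [[1.0, -1.0], [0.5, 0.5]], [0.0, 0.1])  # first abstraction
h2 = layer(h1, [[2.0, -2.0]], [-0.5])                 # second abstraction
print(h2)  # final representation fed to a downstream decision
```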
The concepts of predictive versus prescriptive AI are also relevant. Predictive AI is about understanding and predicting the future: it uses statistical models and forecasting techniques to understand what could happen. Prescriptive AI uses optimization and simulation algorithms to advise on possible outcomes and to recommend what action to take.
Other concepts within AI are speech recognition and Natural Language Processing (NLP). This is the ability to understand and generate human speech the way humans do, for instance by extracting meaning from text or generating text that is readable, stylistically natural and grammatically correct (
One could wonder in which way AI and ML differ from more traditional statistical modelling techniques. Statistical modelling gives insight into correlations and derives patterns in the data using mathematics; it is a formalization of relationships between variables in the form of mathematical equations. The main difference compared to AI/ML is that the ML model trains itself using algorithms: it can learn from data without relying on rule-based programming (
In the 3LoD model (
Regardless of how the 3LoD model is implemented, senior management and governing bodies should clearly communicate the expectation that information should be shared and activities coordinated among each of the groups responsible for managing the organization’s risks and controls (
The 3LoD model has also received criticism. The core concern according to
To get a better insight in the risks associated with using AI and ML, this section addresses some use cases of AI and ML within banks throughout all of the 3LoD functions. These are depicted in figure 2 as well.
AI and ML techniques are frequently used in servicing clients. Applications such as chatbots for customer support or robo-advice (digital platforms that provide automated, algorithm-driven financial planning services with little to no human supervision) have increased in the past years. A Big 4 audit firm has developed a voice analytics platform that uses deep learning and various ML algorithms to monitor and analyse voice interactions, and identify high-risk interactions through Natural Language Processing. The interactions are then mapped to potential negative outcomes such as complaints or conduct issues and the platform then provides details as to why they have occurred (
In the field of market risk, the use cases of ML from a risk management perspective appear to be limited and are mainly observed in first line functions. Here, the focus is on e.g. market volatility or market risk from a portfolio or investment risk management perspective. Also, ML is increasingly being applied within financial institutions for the surveillance of conduct breaches by traders working for the institution. Examples of such breaches include rogue trading, benchmark rigging, and insider trading – trading violations that can lead to significant financial and reputational costs for financial institutions (
Modelling credit risk has been standard practice for several years already. In banks, such models are developed within a modelling department that is often part of a risk management function, with the involvement of business users. The model is used by the business in the first line. The general approach to credit risk assessment has been to apply a classification technique to past customer data, including delinquent customers, to analyse and evaluate the relation between the characteristics of a customer and their potential failure. This could be used to determine classifiers that can be applied in the categorization of new applicants or existing customers as good or bad (Leo et al. 2019). Enhancing the existing models with ML applications increases the quality of the models and therefore the accuracy of predictions of, for example, default. The aim is to better identify the early signs of credit deterioration at a client, or the signs of an eventual default, based on time series data of defaults. When the accuracy of creditworthiness prediction increases, the loan portfolio could grow and become more profitable. ML techniques can be used effectively for regression-based forecasting as well. In particular, forecasting models for Probability of Default (PD), Loss Given Default (LGD) and Credit Conversion Factor (CCF) can forecast the quantum of risk with greater accuracy and precision (
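As a rough sketch of the classification approach described above, the toy model below fits a logistic probability-of-default score by gradient descent. The features (debt-to-income ratio, missed payments) and all data values are invented for illustration and do not come from the article's sources; a bank's actual PD model would be far richer.

```python
# Toy logistic PD model trained on past customers tagged
# good (0) / bad (1). Illustrative data, not real credit history.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_pd_model(X, y, lr=0.5, epochs=2000):
    """Fit weights by plain stochastic gradient descent on log-loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Invented history: [debt-to-income, missed payments]; 1 = defaulted.
X = [[0.2, 0], [0.3, 0], [0.4, 1], [0.8, 2], [0.9, 3], [0.7, 2]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_pd_model(X, y)

def pd_estimate(applicant):
    """Estimated probability of default for a new applicant."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, applicant)) + b)

print(pd_estimate([0.25, 0]))  # low-risk profile: low PD
print(pd_estimate([0.85, 3]))  # high-risk profile: high PD
```

The learned score separates the good and bad tags in the training history, which is exactly the "classifier applied to new applicants" idea from the text.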
In the field of credit risk, ML is used not only for predicting payment problems or default but also in the credit approval process in the first line. ML could help analyse and interpret a pattern associated with approvals and develop an algorithm to predict it more consistently (
Within the Operational risk domain, a field where ML is frequently used is Transaction monitoring as part of anti-money laundering. This is performed in the first line, with the second line Compliance function involved. ML techniques are able to detect patterns surrounding suspicious transactions based on historical data. Clustering algorithms identify customers with similar behavioural patterns and can help to find groups of people working together to commit money laundering. Also, fraud detection can be improved by using ML techniques. Models are estimated based on samples of fraudulent and legitimate transactions in supervised detection methods while in unsupervised detection methods outliers or unusual transactions are identified as potential cases of fraud. Both seek to predict the probability of fraud in a given transaction (Leo et al. 2019).
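The unsupervised detection idea can be pictured with a deliberately simplified sketch: flag transactions whose amounts deviate strongly from a customer's usual pattern. The data, the single-feature setup and the deviation threshold are all assumptions for illustration; a real transaction monitoring system uses far richer features and models.

```python
# Toy unsupervised outlier detection on transaction amounts.
# Illustrative only: real AML monitoring uses many more features.
import statistics

def flag_outliers(amounts, threshold=2.5):
    """Return indices of transactions more than `threshold` standard
    deviations from the mean: candidates for manual review."""
    mean = statistics.fmean(amounts)
    sd = statistics.pstdev(amounts)
    if sd == 0:
        return []
    return [i for i, a in enumerate(amounts) if abs(a - mean) / sd > threshold]

# Invented amounts: one transaction clearly breaks the pattern.
txns = [120, 95, 110, 130, 105, 98, 9_500, 115]
print(flag_outliers(txns))  # index of the unusual transaction
```

No transaction here is labelled fraudulent in advance; the unusual one is surfaced purely from the pattern in the data, which is the distinction the text draws between unsupervised and supervised detection methods.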
Optimization of a bank’s regulatory capital with ML is another use case. AI and ML tools build on the foundations of computing capabilities, big data, and mathematical concepts of optimization to increase the efficiency, accuracy, and speed of capital optimization (
In the field of liquidity risk, only limited use cases have been observed (Leo et al. 2019). One of the largest asset managers recently shelved a promising AI liquidity risk model because it was unable to explain the model’s output to senior management (
Application of AI and ML for Model risk management purposes is expected to increase. A few use cases have been observed for model validation, where unsupervised learning algorithms help model validators in the ongoing monitoring of internal and regulatory stress-testing models, as they can help determine whether those models are performing within acceptable tolerances or drifting from their original purpose (
Similarly, AI and ML techniques can also be applied to stress testing. The increased use of stress testing following the financial crisis has posed challenges for banks as they work to analyse large amounts of data for regulatory stress tests. In one use case, AI and ML tools were used for modelling capital markets business for bank stress testing, aiming to limit the number of variables used in scenario analysis for ‘Loss Given Default’ and ‘Probability of Default’ models. By using unsupervised learning methods to review large amounts of data, the tools can document any bias associated with selection of variables, thereby leading to better models with greater transparency (
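One way to picture the variable-limiting step, as a rough sketch rather than the actual tool from the use case: drop candidate macro drivers that are nearly collinear with a variable already kept, logging each decision so the selection stays transparent. The series names and values below are invented.

```python
# Toy variable selection for a stress-testing model: keep a driver
# only if it is not nearly collinear with one already kept, and
# document every drop. Illustrative data, invented series names.
import math
import statistics

def correlation(xs, ys):
    """Pearson correlation between two equally long series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def select_variables(series, cutoff=0.95):
    kept, log = [], []
    for name, values in series.items():
        duplicate = next(
            (k for k in kept if abs(correlation(series[k], values)) > cutoff),
            None,
        )
        if duplicate:
            log.append(f"dropped {name}: correlated with {duplicate}")
        else:
            kept.append(name)
    return kept, log

macro = {
    "gdp_growth":   [2.0, 1.5, -0.5, -2.0, 0.5],
    "gdp_lagged":   [2.1, 1.4, -0.6, -1.9, 0.4],  # near-duplicate series
    "unemployment": [4.0, 4.5, 6.0, 8.0, 7.0],
}
kept, log = select_variables(macro)
print(kept)  # variables retained for the scenario analysis
print(log)   # documented reasons for each drop
```

Recording why each variable was dropped is the point: the selection itself becomes auditable, in line with the transparency benefit the use case describes.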
According to Leo et al. (2019), many other areas of non-financial risk management, such as country risk management, compliance risk management (aside from money laundering-related uses) and conduct risk, have not been explored adequately.
No applications of AI and ML have been observed in the third line yet.
Obviously, a number of benefits arise from the use of AI and ML. The techniques may enhance machine-based processing of various operations in financial institutions, thus increasing revenues and reducing costs (
It is expected that the time needed for data analysis and risk management will decrease, making risk management more efficient and less costly. AI and ML can be used for risk management through earlier and more accurate estimation of risks. For example, to the extent that AI and ML enable decision-making based on past correlations among prices of various assets, financial institutions could better manage these risks. Despite being critiqued for operating like black boxes, ML techniques’ ability to analyse large volumes of data without being constrained by assumptions of distribution, and to deliver much value in exploratory analysis, classification and predictive analytics, is significant (Leo et al. 2019). Also, meeting regulatory requirements could become more efficient by automating repetitive reporting tasks and through the increased ability to organize, retrieve and cluster non-conventional data such as documents (
As depicted in figure 3, there are quite a few risks that need to be addressed when using AI and ML techniques.
As
As ML bases much of the modelling upon learning from available data, it could be prone to the same problems and biases that affect traditional statistical methods. As machine-learning methods are compared to traditional statistical techniques, it would be beneficial to evaluate and understand how problems inherent to traditional statistical research methods fare when treated by ML techniques (Leo et al. 2019). An AI/ML model could fail if it is not properly trained for all eventualities or in case of poor training data (
The lack of information about the performance of these models in a variety of financial cycles has been noted by authorities as well. AI and ML based tools might miss new types of risks and events because they could potentially ‘overtrain’ on past events. The recent deployment of AI and ML strategies means that they remain untested at addressing risk under shifting financial conditions (
DNB (
According to DNB (
Then there is the issue of consumer protection. All processing of personal data has to be authorized by the consumer and be subject to privacy and security standards (
A risk that is also present here is losing consumer confidence and reputational risk arising from AI and ML decisions that might negatively affect customers. Efforts to improve the interpretability of AI and ML may be important conditions not only for risk management, but also for greater trust from the general public as well as regulators and supervisors in critical financial services (
There are also ethical issues when using AI and ML. AI could adopt societal biases. “Even if all data is tightly secured and AI is kept limited to its intended use, there is no guarantee that the intended use is harm free to consumers. Predictive algorithms often assume there is a hidden truth to learn, which could be the consumer’s gender, income, location, sexual orientation, political preference or willingness to pay. However, sometimes the to-be-learned ‘truth’ evolves and is subject to external influence. In that sense, the algorithm may intend to discover the truth but end up defining the truth. This could be harmful, as algorithm developers may use the algorithms to serve their own interest, and their interests – say earning profits, seeking political power, or leading cultural change – could conflict with the interest of consumers” (
According to
There is also the issue of transparency. As mentioned above, deep learning techniques might pose a risk in themselves, as their ‘black box’ character hinders effective risk oversight. These techniques are often quite opaque, leading to difficulties in terms of transparency, explainability and auditability towards management of the bank as well as its auditors. This can also cause regulatory compliance issues around demonstrating model validity to auditors and regulators (
More complex AI algorithms lead to an inability of humans to visualize and understand the patterns. AI algorithms update themselves over time and are by their nature unable to communicate their reasoning (
Also, ‘black box’ techniques could create complications in tail risk events. According to the Financial Stability Board (2017), “‘Black boxes’ in decision-making could create complicated issues, especially during tail events. In particular, it may be difficult for human users at financial institutions – and for regulators – to grasp how decisions, such as those for trading and investment, have been formulated. Moreover, the communication mechanism used by such tools may be incomprehensible to humans, thus posing monitoring challenges for the human operators of such solutions. If in doubt, users of such AI and ML tools may simultaneously pull their ‘kill switches’, that is, manually turn off systems. After such incidents, users may only turn systems on again if other users do so in a coordinated fashion across the market. This could thus add to existing risks of system-wide stress and the need for appropriate circuit-breakers. In addition, if AI and ML based decisions cause losses to financial intermediaries across the financial system, there may be a lack of clarity around responsibility” (
Specialized and skilled staff is required to implement new techniques such as AI and ML. It might be challenging to attract sufficient personnel possessing these specific skills. At Board of directors’ level, sufficient knowledge should be present, enabling the Board to assess the risks of AI. Second line personnel should be trained to understand AI specific challenges and risks. Personnel working with AI applications should be made aware of the strengths and limitations (
When there is some or full automation of the process from data gathering to decision making, human oversight is essential. This becomes more necessary as the level of automation rises, or when ML techniques become more prescriptive.
When taking all of the risks mentioned above into account, it seems apparent that the use of AI and ML techniques also brings about extra challenges in the context of the common ambition of integrated risk management within banks. Use cases being dispersed throughout different parts of the bank could hinder integrated risk management and an integrated approach towards these risks.
As the use cases mentioned above show, AI and ML can be used within each of the 3LoD, or throughout multiple lines. The techniques appear to be used most within the first line, or in use cases where the first and second line are both involved.
If used purely in the first line, the 3LoD model can be applied as designed. In this case, it is important to safeguard that sufficient knowledge of the techniques and its use is also present in second and third line functions, to ensure compliance, to identify and manage risks, to challenge the first line on replicability of decisions and validity of the model and to perform audits effectively. As mentioned above, the scarcity of resources with the required skills and knowledge can be an issue (
For a number of applications, such as credit risk modelling and approval, transaction monitoring or fraud detection, both the first and the second line are involved. Here it gets more difficult to apply the 3LoD model. Depending on the nature of the involvement of the second line function, e.g. whether they are developing AI/ML tools themselves, there should be an independent function involved that provides independent validation and challenge. So applying the 3LoD model without any adjustments does not seem wise in this case. When zooming in on the second line risk management function, this function “facilitates and monitors the implementation of effective risk management practices by operational management and assists risk owners in defining the target risk exposure and reporting adequate risk-related information throughout the organization” (
A potentially better way of ensuring a controlled deployment of AI and ML techniques, which is at the same time in line with the principles of the 3LoD model, is to assign specific roles (
This could be performed by an independent function or, if the size of the bank does not allow for this, by data scientists who are not associated with the specific model or project at hand.
Together with the business owners, a group of data owners and data scientists comprise the first line of defence. The validators comprise the second line of defence, together with the governance personnel. The third line function could be performed by independent internal auditors, provided that they have the expertise needed. This set up is necessary to safeguard an effective challenge throughout the model lifecycle by multiple parties, separate from the model developers. In assigning these specific roles, the principles of the 3LoD model are safeguarded.
Some other points are relevant when thinking about AI and ML in the context of the 3LoD model and controlled application. All ML projects should start by clearly documenting initial objectives and underlying assumptions, which should also include major desired and undesired outcomes. This should be circulated and challenged by all stakeholders. Data scientists, for example, might be best positioned to describe key desired outcomes, while legal personnel might describe specific undesired outcomes that could give rise to legal liability. “Such outcomes, including clear boundaries for appropriate use cases, should be made obvious from the outset of any ML project. Additionally, expected consumers of the model — from individuals to systems that employ its recommendations – should be clearly specified as well” (
The materiality of the model that is deployed should be taken into account in all three lines (
How ‘black box’ an AI technique is, is often the result of choices made by the model’s developers. Predictive accuracy and explainability are frequently subject to a trade-off: higher levels of accuracy may be achieved, but at the cost of decreased levels of explainability. This trade-off should be documented from the start, and challenged by other functions. “Any decrease in explainability should always be the result of a conscious decision, rather than the result of a reflexive desire to maximize accuracy. All such decisions, including the design, theory, and logic underlying the models, should be documented as well” (
When viewing the significant amount of risks in using AI and ML as described above, and the challenges when it comes to applying the 3LoD model, a sound governance surrounding the use of AI and ML is essential. The risks concerned need to be properly identified, assessed, controlled and monitored. This also means clearly defining the roles and responsibilities for the functions involved, be it in the first, second or third line of defence. “Any uncertainty in the governance structure in the use of AI and ML might increase the risks to financial institutions” (
According to the Financial Stability Board (2017), because AI and ML applications are relatively new, there are no known dedicated international standards in this area yet. Apart from papers on this topic published by regulatory authorities in Germany, France, Luxembourg, The Netherlands and Singapore, no European or international standards were published. Although calls to regulate AI and ML are heard more often, the current regulatory framework is not designed with the use of such tools in mind. Some regulatory practices may need to be revised for the benefits of AI and ML techniques to be fully harnessed. “In this regard, combining AI and ML with human judgment and other available analytical tools and methods may be more effective, particularly to facilitate causal analysis” (
DNB recently published a set of general principles for the use of AI in the financial sector (
“The Basel Committee on Banking Supervision (BCBS) notes that a sound development process should be consistent with the firm’s internal policies and procedures and deliver a product that not only meets the goals of the users, but is also consistent with the risk appetite and behavioural expectations of the firm. In order to support new model choices, firms should be able to demonstrate developmental evidence of theoretical construction; behavioural characteristics and key assumptions; types and use of input data; numerical analysis routines and specified mathematical calculations; and code writing language and protocols (to replicate the model). Finally, it notes that firms should establish checks and balances at each stage of the development process” (
Many of the use cases described in this article could result in improvements in risk management, compliance, and systemic risk monitoring, while potentially reducing regulatory burdens. AI and ML can continue to be useful tools for financial institutions through so-called “RegTech”, which aims to facilitate regulatory compliance more efficiently and effectively than existing capabilities. The same goes for supervisors via “SupTech”, the use of AI and ML by public sector regulators and supervisors. The objective of “SupTech” is to enhance the efficiency and effectiveness of supervision and surveillance (
From a market wide perspective, there are also potential new and/or systemic risks to take into account when using AI and ML techniques. If a similar type of AI and ML is used without appropriately ‘training’ it or introducing feedback, reliance on such systems may introduce new risks. For example, if AI and ML models are used in stress testing without sufficiently long and diverse time series or sufficient feedback from actual stress events, there is a risk that users may not spot institution-specific and systemic risks in time. These risks may be pronounced especially if AI and ML are used without a full understanding of the underlying methods and limitations. “Tools that mitigate tail risks could be especially beneficial for the overall system” (
A more hypothetical issue is that models used by different banks might converge on similar optima for trading, causing systemic risk as well (
“AI and ML may affect the type and degree of concentration in financial markets in certain circumstances. For instance, the emergence of a relatively small number of advanced third-party providers in AI and ML could increase concentration of some functions in the financial system” (
“The lack of interpretability or ‘auditability’ of AI and ML methods has the potential to contribute to macro-level risk if not appropriately audited. Many of the models that result from the use of AI or ML techniques are difficult or impossible to interpret”. Auditing of models may require skills and expertise that may not be present sufficiently at the moment. “The lack of interpretability may be overlooked in various situations, including, for example, if the model’s performance exceeds that of more interpretable models. Yet the lack of interpretability will make it even more difficult to determine potential effects beyond the firms’ balance sheet, for example during a systemic shock. Notably, many AI and ML developed models are being ‘trained’ in a period of low volatility. As such, the models may not suggest optimal actions in a significant economic downturn or in a financial crisis, or the models may not suggest appropriate management of long-term risks” (
Artificial Intelligence (AI) refers to machines that are capable of performing tasks that, if performed by a human, would be said to require intelligence. AI uses instances of Machine Learning (ML) as components of a larger system. ML is able to detect meaningful patterns in data. The main difference between AI/ML techniques and more traditional statistical modelling techniques is that the AI/ML model trains itself using algorithms: it can learn from data without relying on rule-based programming or instructions from a human programmer.
Among the most used AI and ML applications within banks are credit risk modelling and approval, transaction monitoring regarding Know Your Customer and Anti-Money Laundering, and fraud detection, which are usually jointly developed by first and second line functions. Frequently observed use cases in the first line are client servicing solutions, market risk monitoring and portfolio management. The techniques have so far been used to a lesser extent for pure second line risk management purposes, while no use cases have been observed for third line functions. It is expected that applications in the risk management and internal audit domain will increase in the years to come.
There are obvious benefits to using AI and ML techniques: they may enhance machine-based processing of various operations in financial institutions, thus increasing revenues and reducing costs. It is expected that the time needed for data analysis and risk management will decrease, for example through earlier and more accurate estimation of risk, making risk management more efficient and less costly. The ability of ML techniques to analyse large volumes of data without being constrained by assumptions of distribution is significant. Also, meeting regulatory requirements could become more efficient by automating repetitive reporting tasks and through the increased ability to organize, retrieve and cluster non-conventional data such as documents.
There are also numerous risks and challenges to address. Modelling issues and data issues can occur when insufficient suitable data is available, or when hackers maliciously manipulate big data. Also, the model outcomes have not been tested through a financial cycle yet. There are risks regarding consumer protection and privacy as well as reputational risks stemming from ethical issues. Sufficient specialized and skilled staff is needed within banks and there are numerous risks regarding transparency and auditability.
This article aimed to answer the question: “How can the application of Artificial Intelligence and Machine learning techniques within banks be placed in the context of the Three lines of defence model?”
When AI and ML are placed in the context of the 3LoD model, there are several prerequisites for applying AI and ML in a controlled way. If the second line risk management function is involved in the operational development of the model, independent oversight, challenge, validation and assurance should be safeguarded by a separate function performing the second line role. In addition, the internal audit function must be involved. The proper functioning of the 3LoD model could also be ensured by assigning specific roles within each AI/ML project that safeguard the controlled deployment of AI and ML techniques. Data owners and data scientists comprise the first line of defence, together with the business owner. The second line role could then be comprised of validators and other governance personnel who review and approve the work from a technical and a compliance perspective, respectively. Other prerequisites are sound governance surrounding the use of AI and ML, clearly defined roles and responsibilities, a dedicated oversight function, a sound model risk management framework, a sound framework for managing all of the associated risks, and policies and processes for the use of AI and ML, ensuring that the deployment of these techniques fits the strategy and risk appetite of the bank.
Collective adoption of AI and ML tools may introduce new systemic risks. If, for example, a critical segment of financial institutions relies on the same data sources and algorithmic strategies, a shock could, under certain market conditions, affect this entire segment and thus spread its impact throughout multiple financial institutions. Without sufficiently long and diverse time series or feedback from actual stress events, it is possible that tail risks are not spotted in time. The current regulatory framework does not sufficiently address the field of AI and ML and therefore needs to be revised and updated. This is perceived as necessary to address all new risks at hand, as well as the challenges presented regarding the application of the three lines of defence model. In this effort, regulators might leverage existing regulation for, for example, credit risk modelling. Risk managers should follow the developments in this field closely, to be able to assess the (new) risks within individual institutions and for the financial system as a whole. Also, sufficiently skilled resources should be available within the internal and external audit community, so as to ensure the proper auditing of the techniques deployed by banks.
Taking into account the risks, the application of AI and ML could be expanded in the areas of market risk, liquidity risk, model risk management, stress testing and the third line. Also, the use of AI and ML to manage tail risk could be further investigated. Another area to monitor and possibly investigate further is the role of BigTech companies and their duality in being suppliers of AI and ML technology as well as competitors of banks. Given the expanding use of AI and ML techniques, new issues and risks will undoubtedly emerge and may warrant further research. It is key that existing governance is strengthened and adjusted in response to these new issues and risks.
A.Z. Tammenga MSc. is working as a consultant at Transcendent Group Netherlands and is also a student in the Postgraduate program “Risk management for Financial Institutions” at the Vrije Universiteit Amsterdam.