1 Introduction

1.1 Access to Finance

Artificial intelligence (AI) is seen as a threat to employment, as it is expected to take away manual jobs.Footnote 1 This threat also applies to financial services.Footnote 2 A Nutmeg review in 2017 revealed that a large proportion of the traditional business of financial advisers is threatened by automated services.Footnote 3 Despite this, the uncertainty that AI brings to incumbent operators, consumers, and financial regulators has not stopped investment in AI FinTech. To strengthen its legitimacy in the financial services sector and to gain public acceptance, AI must provide some social benefits. Therefore, access to finance,Footnote 4 also referred to as financial inclusion, should be set as the policy priority in the regulation of the design, development, and deployment of AI. According to the definition given by the Financial Inclusion Report 2018/19, ‘financial inclusion means that individuals, regardless of their background or income, have access to useful and affordable financial products and services. This includes products and services, as well as transactions and payment systems, and the use of financial technology.’Footnote 5 The provision of access to finance should aim to enable those currently excluded by the existing systems to participate in the financial markets. To benefit from economic growth, these individuals should have access to capital pools for personal or business finance, access to real-time information with the aid of data analytics, and access to a range of investment providers. In this article, the author will focus on three aspects of how AI can enhance financial inclusion: (1) increasing participation in peer-to-peer (p2p) platforms by providing security for them; (2) closing the advisory gap in investment services; and (3) allowing more financial outlets to operate in the financial services sector by reducing their operational costs, such as compliance costs.

The paper will discuss the regulatory objectives and methods of regulation in these three areas by looking at close parallels. The first is the use of AI on trading platforms for capital optimisation, such as increasing the efficiency, accuracy, and speed of capital optimisation through the computing capabilities, big data, and mathematical concepts underpinning AI and machine learning (ML).Footnote 6 The second is the use of robo-advisers to provide investment services, such as identifying wider sources of available funds for FinTech lending to small and medium-sized enterprises and clients through AI and ML’s advanced credit scoring.Footnote 7 And the third is the use of AI in RegTech services to streamline compliance costs,Footnote 8 such as the costs involved in Know Your Customer (‘KYC’) processes. In this way, the author will examine how AI-facilitated access to finance can align with current regulatory objectives and methods of regulation. Whether the provision of this access will conflict with other values, such as privacy, data protection, and ethics, will also be examined.

2 Applications of AI in Finance

2.1 Algorithmic Trading on Regulated Platforms

The UK authorities regard financial technology as an efficient tool to tackle financial exclusion and also as a way to encourage firms to develop innovative processes and thus increase consumer access to financial services.Footnote 9 P2p platforms have been regarded as providing a more economical way of bringing businesses and investors together. Compared with bank savings, p2p platforms offer higher interest rates.Footnote 10 In addition, investments through p2p platforms offer higher liquidity than traditional property investments.Footnote 11 Most importantly, p2p platforms normally split capital into several parts for multiple borrowers, thus lowering the risk of incurring major losses. Algorithmic trading can be used on p2p platforms to increase access to finance, particularly in capital allocation. Bank lending policies tightened after the financial crisis, and p2p lending has since become a major player in global financial markets. For instance, LendingClub Inc. developed its own platform, relying on sophisticated algorithms to pair borrowers and investors and also to evaluate the attributes of both sides.Footnote 12 Beyond p2p lending platforms, it is envisaged that algorithms may also be used for secondary securities trading on blockchain-based trading platforms, such as those hosting Initial Coin Offerings (ICOs). For instance, in early 2019 the London Stock Exchange (LSE) invested 20 million US dollars in Nivaura, a blockchain start-up that specialises in fully-automated tokenised bonds recorded on a blockchain.Footnote 13 In April 2019, LSE and Nivaura issued £3 million of shares on LSE’s test network.Footnote 14 While traders in the financial markets are using algorithms to make gains, the author argues that similar tools and opportunities should also be given to investors and consumers. This is a way to provide access to financial markets. Therefore, algorithms should be made available through market competition. Real-time data should also be made available to consumers and investors, rather than being an expensive commodity available only to those who can afford it.
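
To make the pairing mechanism concrete, the following minimal sketch splits each loan into small slices and allocates them across eligible investors, mirroring the diversification described above. The borrower and investor attributes, eligibility rules, and slice size are hypothetical assumptions for exposition, not LendingClub’s actual model.

```python
# Illustrative sketch of algorithmic borrower-investor pairing on a p2p
# platform. All attributes and rules are assumptions for exposition.
from dataclasses import dataclass

@dataclass
class Borrower:
    name: str
    credit_score: int      # e.g. 300-850
    amount_sought: float
    offered_rate: float    # annual interest rate offered to investors

@dataclass
class Investor:
    name: str
    min_credit_score: int  # risk appetite expressed as a credit floor
    min_rate: float        # minimum acceptable return
    capital: float

def match(borrowers, investors, slice_size=500.0):
    """Split each loan into small slices and spread them across all
    eligible investors, diversifying each investor's exposure."""
    allocations = []
    for b in borrowers:
        eligible = [i for i in investors
                    if b.credit_score >= i.min_credit_score
                    and b.offered_rate >= i.min_rate]
        remaining = b.amount_sought
        while remaining > 0 and eligible:
            for inv in list(eligible):
                take = min(slice_size, remaining, inv.capital)
                if take <= 0:
                    eligible.remove(inv)   # investor's capital exhausted
                    continue
                inv.capital -= take
                remaining -= take
                allocations.append((b.name, inv.name, take))
                if remaining <= 0:
                    break
    return allocations

borrowers = [Borrower("B1", 720, 2000.0, 0.07)]
investors = [Investor("I1", 650, 0.05, 1500.0),
             Investor("I2", 700, 0.06, 1500.0)]
print(match(borrowers, investors))  # B1's loan is split across I1 and I2
```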

However, users need to feel confident that the platforms on which algorithms are used are not likely to cause a market crash or to manipulate the market. Market crashes due to human behaviour have occurred in modern capital markets,Footnote 15 such as the bursting of the ‘Dotcom Bubble’ in 2000 and the 1998 ‘Asian Crash’. AI does not reduce the risk of a market crash and may even increase the chances of it happening.Footnote 16 This can be seen in the market crash in 2008, for which High Frequency Trading (HFT) was considered the cause, or at least a contributing factor.Footnote 17 Hence, the regulation of HFT can provide a blueprint for regulating algorithmic trading on a p2p platform.Footnote 18 HFT is a type of algorithmic trading that has been used for more than a decade on trading platforms.Footnote 19 It emerged because market liberalisation and market competition substantially reduced the trading revenues of trading platforms,Footnote 20 and HFT allows trading venues to increase their trading-fee revenues. It is not yet clear whether HFT will be used on other trading platforms, such as p2p platforms,Footnote 21 to provide the same benefit. The business model on p2p trading platforms may not be the same as that on securities trading platforms, and technical obstacles may need to be overcome; for instance, the speed of a blockchain-based p2p platform may not be sufficient to support HFT.Footnote 22 However, the lessons learned regarding the risks in algorithmic trading provide a good regulatory framework that can be applied to provide security for users on the platform.Footnote 23 HFT, computerised trading controlled by algorithms, is a subset of the broader (and older) phenomenon of algorithmic trading.Footnote 24 In essence, algorithmic trading is simply the use of specialised software to implement predetermined decision-making rules for the evaluation of market conditions and other data in order to make trading decisions without human involvement.Footnote 25 Hence, in algorithmic trading, the traders’ computers interface directly with trading platforms, placing orders without immediate human intervention. The computers observe, at very high frequency, market data and possibly other information. Based on a built-in algorithm, trading instructions are sent to the platform, often within milliseconds. A variety of algorithms are used for identifying arbitrage opportunities, for seeking the optimal execution of large orders at minimum cost, and for implementing longer-term trading strategies.Footnote 26
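
As a deliberately simplified illustration of such predetermined decision-making rules, the following sketch implements a moving-average crossover signal. It is a toy rule under assumed window lengths; production systems react to order-book data at microsecond timescales.

```python
# Toy algorithmic trading rule: a moving-average crossover. Window
# lengths and prices are illustrative assumptions.
from collections import deque

def crossover_signal(prices, short=3, long=5):
    """Emit 'buy' when the short-term average crosses above the
    long-term average, 'sell' on the reverse cross, else 'hold'."""
    short_w, long_w = deque(maxlen=short), deque(maxlen=long)
    prev_diff, signals = None, []
    for p in prices:
        short_w.append(p)
        long_w.append(p)
        if len(long_w) < long:          # not enough history yet
            signals.append("hold")
            continue
        diff = sum(short_w) / short - sum(long_w) / long
        if prev_diff is not None and prev_diff <= 0 < diff:
            signals.append("buy")
        elif prev_diff is not None and prev_diff >= 0 > diff:
            signals.append("sell")
        else:
            signals.append("hold")
        prev_diff = diff
    return signals

print(crossover_signal([10, 10, 10, 11, 12, 13, 12, 11, 10, 9]))
```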

2.2 Market Safety Regulation

The main focus of the regulation of algorithmic trading is market safety, i.e. addressing the systemic risks that such trading can cause. These include flash crashes, reduced liquidity, and herding behaviour.Footnote 27

2.2.1 Flash Crash

Market crashes predate the involvement of AI in the capital markets, and the regulations put in place to avoid their recurrence focus primarily on human conduct. When machines are involved, systemic risk becomes the main concern, so market safety regulation needs to be introduced to provide security. The EU and UK regulators have introduced measures to mitigate the risk of a flash crash caused by AI. Algorithmic trading can result in a ‘flash crash’, for example, when the withdrawal of stock orders rapidly amplifies price declines.Footnote 28 After the flash crash in 2010,Footnote 29 the UK’s Financial Conduct Authority (FCA), alongside the Prudential Regulation Authority, started closely monitoring HFT. HFT firms were considered to have contributed to market instability and to the overall lack of investors’ trust in the market.Footnote 30 In the same effort, the EU regulates HFT activities under the ‘Markets in Financial Instruments Directive II’Footnote 31 and the ‘Markets in Financial Instruments Regulation’,Footnote 32 known together as ‘MiFID II and MiFIR’.Footnote 33 MiFID II has three main strands. First, it provides for a new operational regime governing algorithmic trading by investment firms.Footnote 34 Second, it extends its scope to encompass all firms engaging in algorithmic trading, in particular, specialist firms that undertake HFT. Third, it imposes operational requirements on trading venues, such as exchanges and trading platforms.Footnote 35 For instance, circuit breakers, also called shock absorbers, are required for trading venues such as exchanges to temporarily halt trading when market prices, as indicated by a benchmark index, fall by a certain percentage during a specific period.Footnote 36
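
The logic of such a venue-side circuit breaker can be sketched as follows; the 7% threshold, five-minute window, and fifteen-minute halt are illustrative assumptions, not values prescribed by MiFID II or any venue.

```python
# Hedged sketch of a venue-side circuit breaker: halt trading when a
# benchmark index falls by more than a threshold within a rolling
# window. All parameters are illustrative assumptions.
import time

class CircuitBreaker:
    def __init__(self, drop_threshold=0.07, window_seconds=300,
                 halt_seconds=900):
        self.drop_threshold = drop_threshold  # e.g. a 7% fall triggers a halt
        self.window_seconds = window_seconds  # look-back window
        self.halt_seconds = halt_seconds      # length of the trading halt
        self.ticks = []                       # (timestamp, index_level)
        self.halted_until = 0.0

    def on_tick(self, level, now=None):
        """Record an index level; return True while trading is halted."""
        now = time.time() if now is None else now
        self.ticks.append((now, level))
        # Keep only the ticks inside the rolling window.
        self.ticks = [(t, l) for t, l in self.ticks
                      if now - t <= self.window_seconds]
        window_high = max(l for _, l in self.ticks)
        if level <= window_high * (1 - self.drop_threshold):
            self.halted_until = now + self.halt_seconds
        return now < self.halted_until

cb = CircuitBreaker()
print(cb.on_tick(1000.0, now=0))  # False: no fall yet
print(cb.on_tick(920.0, now=60))  # True: an 8% fall within five minutes
```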

Under the first strand, investment firms that engage in algorithmic trading are subject to a ‘targeted operational regime’.Footnote 37 They are required to have in place effective systems and risk controls to ensure that trading systems are resilient and maintain appropriate thresholds and limits to prevent incorrect or erroneous orders which may create or contribute to a disorderly market.Footnote 38 In the UK, these algorithmic trading requirements were implemented through Chapter 7A of the Market Conduct Sourcebook.Footnote 39 The second strand requires firms to have ‘effective business continuity arrangements’ to handle any trading system failure and to ensure that systems are fully tested and continuously monitored.Footnote 40 Third, HFT firms must have an emergency ‘kill functionality’, which allows them to cancel all unexecuted orders with immediate effect.Footnote 41 In addition to this organisational requirement, all firms must notify the FCA and the venue’s competent authority if they engage in any HFT on any EU trading venue.Footnote 42 Lastly, firms must carry out an annual self-assessment and issue a validation report covering such elements as governance, the control framework, and overall compliance with the other MiFID II requirements.Footnote 43 Firms are required to identify the ownership of algorithms, establish testing processes, and identify relevant environmental factors, such as the counterparties that use algorithmic trading.Footnote 44 They are also required to identify risks and provide risk mitigation measures. The Prudential Regulation Authority has also published a consultation paper to accompany these current regulations, which focuses on the proposed expectations of a firm’s policies for the ‘governance and risk management of algorithmic trading’.Footnote 45 Some countries go further than the EU regulations and have adopted measures such as a requirement to hold a licence to operate HFT. In Germany, under section 32 of the German Banking Act, HFT firms need to hold a licence issued by the German Federal Financial Supervisory Authority.Footnote 46
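
The ‘kill functionality’ is conceptually a single control that cancels every unexecuted order at once, as the following sketch suggests; the order model and the audit-trail detail are assumptions for exposition.

```python
# Illustrative sketch of a MiFID II-style 'kill functionality': cancel
# all unexecuted orders with immediate effect. The order model is a
# hypothetical simplification.

class OrderManager:
    def __init__(self):
        self.open_orders = {}  # order_id -> order details

    def place(self, order_id, symbol, qty, price):
        self.open_orders[order_id] = {"symbol": symbol,
                                      "qty": qty, "price": price}

    def on_fill(self, order_id):
        self.open_orders.pop(order_id, None)  # executed orders leave the book

    def kill_switch(self):
        """Cancel every unexecuted order immediately, returning the
        cancelled ids for the audit trail regulators would expect."""
        cancelled = list(self.open_orders)
        self.open_orders.clear()
        return cancelled

om = OrderManager()
om.place(1, "ABC", 100, 9.99)
om.place(2, "XYZ", 50, 20.10)
om.on_fill(1)                # order 1 has executed
print(om.kill_switch())      # [2]: only the unexecuted order is cancelled
```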

2.2.2 Liquidity Risk and Procyclical Behaviour

To address liquidity risk, HFT firms are required to register as market makers.Footnote 47 A market maker is a market participant that buys and sells large amounts of a particular asset in order to facilitate liquidity and ensure the smooth running of financial markets.Footnote 48 Market makers are obliged to quote bid and offer prices continually, and to guarantee the full sale or absorption of the security at a certain price.Footnote 49 When the market becomes stressed,Footnote 50 an HFT firm has an additional duty to ensure liquidity. Market makers have to maintain quotes to purchase and sell stocks at levels corresponding to different price and size thresholds. Market makers have to comply with four different elements: to offer the quote at the best price for a specified period of time; to offer the quote at prices within a specified range of the National Best Bid or Offer; to quote with a minimum size and at various prices; and to make markets in a minimum number of stocks.Footnote 51 Market makers bear the highest trading risk in complying with these four elements, and are therefore required to maintain the necessary amount of capital.Footnote 52 Additionally, if market prices rise or fall by more than a certain amount, exchanges have the authority to limit the use of specific trading tactics.Footnote 53
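
A hedged sketch of how two of these quoting obligations might be checked (two-sided quotes of a minimum size, priced near the National Best Bid and Offer) follows; the 2% band and 100-share minimum are illustrative assumptions rather than regulatory values.

```python
# Illustrative check of market-maker quoting obligations: a two-sided
# quote of minimum size, priced within a band of the NBBO. Band width
# and minimum size are assumptions for exposition.

def quote_compliant(bid, ask, bid_size, ask_size,
                    nbb, nbo, band=0.02, min_size=100):
    """Return True if the quote is two-sided, of at least min_size on
    each side, and priced within `band` of the NBBO."""
    if bid is None or ask is None:                  # must quote both sides
        return False
    if bid_size < min_size or ask_size < min_size:  # minimum quoted size
        return False
    return bid >= nbb * (1 - band) and ask <= nbo * (1 + band)

# A quote of 99.9/100.3 against an NBBO of 100.0/100.2 sits in the band.
print(quote_compliant(99.9, 100.3, 200, 200, nbb=100.0, nbo=100.2))  # True
# A bid 2.5% below the national best bid fails the proximity test.
print(quote_compliant(97.5, 100.3, 200, 200, nbb=100.0, nbo=100.2))  # False
```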

There is also a risk of procyclical behaviour when market participants begin to use similar AI and ML programmes.Footnote 54 The consequent correlated risks may entail financial stability risks.Footnote 55 If an ML-based trader outperforms others, this could in the future result in many more traders adopting similar ML strategies, even if this reduces the profitability of such strategies.Footnote 56 While there is no evidence to date of this having occurred, it could become relevant as such trading strategies are increasingly adopted. As with any herding behaviour in the market, this has the potential to amplify financial shocks. The main risk is the creation of procyclical behaviour that is harmful to financial stability.Footnote 57 If regulators develop a preference for a robo-adviser design that is understood by firms, it could result in a convergence of models that would increase the probability of a systemic crisis.Footnote 58

2.2.3 The Same Approach to Peer-to-Peer Trading Platforms

Market safety and soundness regulations are aimed at providing financial stability and security to users. In this section, the author has used the example of HFT to identify some of the risks that the use of AI poses to market stability. The main approaches to regulating these activities and risks are internal systems and controls, self-assessment, and reporting by both firms and trading venues.Footnote 59 There are also specific requirements to deal with market crashes, liquidity risk, and correlated losses. In addition, traders using AI will need to fulfil their duties as market makers. To avoid similar losses, brokers and investment advisers using similar AI technology will need to be subject to the same systems and controls as those used for HFT. In applying regulations to the use of AI on p2p trading platforms, market safety and market integrity, alongside access to finance, should continue to be the regulatory objectives. The risks, such as systemic risk, liquidity risk, and correlated risk, are the same as those on capital market trading platforms. The difference lies in the methods of regulation. On p2p platforms, organisational regulation is unlikely to be implemented for individual participants. The regulatory emphasis will be more on requiring the trading platforms to vet individuals who use algorithms to make transactions,Footnote 60 to maintain the capacity to absorb liquidity risk, and to conduct surveillance to control market manipulation. In fulfilling the objective of providing access to finance, the cost of maintaining the system should not be transferred to the users. If the cost of addressing these risks is too high, fewer platforms will be willing to operate. Hence, there should be ways to implement systems of risk and control (organisational requirements) more economically, such as by using certification systems to approve the use of algorithms on the platforms (an alternative to submitting source code) and using RegTech solutions to monitor market manipulation.Footnote 61 In addition, the regulators can regulate the financial instruments (through the product intervention powerFootnote 62) that may be used on the trading platforms.

3 Investor Protection

AI should give investors more freedom of choice as well as security. It can enable access to finance by providing more economical investment advice to consumers who are excluded from investment opportunities through a lack of information. This presents an opportunity to use AI to provide services to consumers for both execution and investment advice through, for example, robo-advisers.Footnote 63 When AI is consumer facing, such as in the use of robo-advisers or the use of AI by intermediaries for stock and fund selection, the focus of regulation is on both the ex ante protection of investors (including reviewing the algorithmic models, Customer Due Diligence (CDD), and algorithm explicabilityFootnote 64) and their ex post protection (compensation and liability for AIFootnote 65), especially for retail investors. Hence, the main issues are consumers’ understanding of the nature of AI (through algorithm explicabilityFootnote 66), the risk of using AI, and the liability associated with AI. Investors should also be protected in a fiduciary context: the providers of AI services have an obligation to act in the best interests of their customers and to avoid conflicts of interest.Footnote 67

3.1 Using Robo-Advisers to Close the Investment Advice Gap

There is no consensus on the definition of the financial advice gap.Footnote 68 According to the definition given by the Financial Advice Market Review (FAMR), the financial advice gap refers to ‘situations in which consumers are unable to get advice and guidance on a need they have at a price they are willing to pay’.Footnote 69 Cost is not the only factor causing this gap, however, and the financial advice gap has also been defined as ‘the difference between the number of people who currently seek advice, and those who would seek advice if a cheaper and less intensive process existed’.Footnote 70 There is nonetheless a common view that a gap exists for (potential) customers who have lower incomes or a lower level of assets and either cannot afford the advisory fee or find advice hard to access.Footnote 71 The conclusion of the FAMR is that the underlying reason for the financial advice gap is that there are not enough financial advisers, since too many advisers are serving wealthy clients.Footnote 72 According to a survey conducted by OpenMoney and YouGov, an increasing number of clients are falling into the financial advice gap.Footnote 73 OpenMoney’s survey indicates that more than 400,000 people believe they cannot afford financial advice, and over five million people are not aware that there is free financial advice that could benefit them.Footnote 74 Six million Britons would be willing to pay for financial advice if the fee were lower.Footnote 75

The role of robo-advisers such as Nutmeg and Wealthify is to fill this financial advice gap.Footnote 76 Robo-advisers are a type of financial adviser that provides financial advice or investment management online, with moderate to minimal human intervention. ‘Robo-advice’ is an umbrella term that refers to a broad spectrum of online automated tools and algorithms used to determine financial or investment decisions for an individual’s portfolio. The process is based on financial analysis algorithms derived from mathematical rules, and advances in economic modelling and AI are the cornerstone of this technology. These algorithms are executed by software; thus, taken to the extreme, no human intervention is required.Footnote 77 Robo-advisers aim to reduce the cost of financial advisory services, increase consumer protection by reducing conflicts of interest,Footnote 78 provide more rational investment choices,Footnote 79 and enable greater access to real-time information.Footnote 80 Hence, robo-advisers can enhance access to finance by reducing its cost, leading to more affordable financial services.Footnote 81 This effect would be highly beneficial for average savers, who could access services that were previously inaccessible due to high commission fees.Footnote 82
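
The rule-based core of such a tool can be pictured with the following minimal sketch, which maps simple suitability inputs to a risk score and then to a model portfolio. The scoring weights and allocations are assumptions for exposition only, not any provider’s methodology.

```python
# Minimal sketch of a rule-based robo-adviser core: questionnaire
# answers -> risk score -> model portfolio. All weights and allocations
# are illustrative assumptions.

def risk_score(age, horizon_years, loss_tolerance):
    """Combine simple suitability inputs into a 0-100 risk score.
    loss_tolerance: 1 (sells on any loss) .. 5 (comfortable with swings)."""
    score = 100 - age                # younger clients can bear more risk
    score += min(horizon_years, 30)  # longer horizons add risk capacity
    score += (loss_tolerance - 3) * 10
    return max(0, min(100, score))

def model_portfolio(score):
    """Map the risk score to an equity/bond/cash split."""
    if score < 35:
        return {"equities": 0.20, "bonds": 0.60, "cash": 0.20}
    if score < 70:
        return {"equities": 0.50, "bonds": 0.40, "cash": 0.10}
    return {"equities": 0.80, "bonds": 0.15, "cash": 0.05}

s = risk_score(age=30, horizon_years=25, loss_tolerance=4)
print(s, model_portfolio(s))  # a high score maps to the growth portfolio
```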

3.2 Risks of a Conflict of Interest, Unsuitable Products, and Design Errors

The use of robo-advisers may put an end to two related problems in the financial markets. Financial advisers tend to sell high-commission products because the advisers draw their income from such sales.Footnote 83 Consequently, there is a conflict of interest between the investor and the adviser, who is also the broker.Footnote 84 However, it is possible for algorithms to be designed to avoid such a conflict.Footnote 85 There is also a risk that robo-advisers may promote products that are not suitable for the particular investor. In addition, there is a risk of algorithm design errors that can cause investor losses.Footnote 86 There is clearly potential tension between the broker's interests and the client's interests.Footnote 87 Robo-advisers may be used only to provide advice or recommendations, without performing executions, although users in these situations are likely to be guided to place orders through a specific product provider. For instance, Scalable Capital, an online robo-advice company, launched over-the-phone and face-to-face consultations for clients to select their investment portfolios, based on their level of risk tolerance, from a range of suggested advice.Footnote 88 Although financial institutions are responsible for damages caused to investors by the use of or reliance on robo-advisers, there are some situations in which financial institutions are not responsible: first, when a financial institution becomes insolvent and there is insufficient money to compensate investors; and second, when the advice given was regarded merely as ‘guidance’ and the investor had the autonomy to decide whether or not to accept it.Footnote 89
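
How an algorithm might be designed to avoid the commission conflict can be sketched as follows: products are ranked purely on client fit, and the commission payable to the adviser is deliberately excluded from the score. The product data and the scoring rule are hypothetical.

```python
# Sketch of a commission-blind product ranking. Product data and the
# fit score are hypothetical assumptions for exposition.

products = [
    # (name, annual_fee, risk_level 1-5, commission_to_adviser)
    ("Tracker A", 0.10, 3, 0.00),
    ("Active B",  0.95, 3, 0.50),
    ("Bond C",    0.40, 2, 0.10),
]

def fit_score(product, client_risk):
    name, fee, risk, _commission = product  # commission is ignored by design
    # Lower fees and a closer risk match score higher.
    return -fee - abs(risk - client_risk)

client_risk = 3
ranked = sorted(products, key=lambda p: fit_score(p, client_risk),
                reverse=True)
print([name for name, *_ in ranked])
# ['Tracker A', 'Active B', 'Bond C']: the high-commission fund gains
# no ranking advantage from the commission it pays.
```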

3.3 Legal Framework for Protecting Investors

Access to finance requires a legal and regulatory framework to protect investors who use robo-advisers for personal or business finance. There are various legal tools that govern the relationship between financial institutions and investors, as well as between developers of AI and investors.

3.3.1 Contract

In the UK, investors are protected by common law, statutory law, and the regulators’ rules. Under common law, investors are protected through contract law, the duty of care under tort law, and the fiduciary duty provisions under the law of equity. In contract law, investors are protected by the terms of the contract as well as by various principles including mistake, misrepresentation, duress, undue influence, and legality.Footnote 90 Under the Unfair Contract Terms Act 1977, they are also protected against unfair terms.Footnote 91 Therefore, the use of AI devices by advisers should not violate these principles. Furthermore, if the investors are consumers, they will also be protected by consumer protection laws. However, contract law is unlikely to be an effective tool for protecting investors who do not have the capacity to appreciate the nature of AI, its performance, and whether AI is a suitable device for them to use in making investments.Footnote 92 Increasingly, a contract will serve merely as evidence of the existence of an agreement governing the conduct between financial advisers and their clients. The terms of a contract between investors and their advisers are unlikely to be negotiated before the investors use AI. Investors are unlikely to be sophisticated enough to ask financial advisers to disclose relevant information before they agree to accept advice derived from the algorithms. Clients may not even be able to negotiate a specific term requiring their advisers to explain how particular products were selected for them.

3.3.2 Tort

In tort, advisers should owe a duty of care to investors,Footnote 93 which entails ensuring that, in deploying AI to provide advice or even execution services, the advisers’ behaviour does not fall short of a standard of care. The difficulty lies in identifying an appropriate standard of care for the use of AI. There is currently no such standard set by either the financial services regulators or the industry.Footnote 94 Even where the regulators have set out rules with which investment advisers must comply, these rules do not necessarily form the standard of care that a court will apply in determining whether there has been a breach.Footnote 95 Relevant questions in setting the standard include: Which AI devices are suitable for the client? What kind of data should be fed into an AI device to produce investment advice? What kind of warning should be given to investors? What kind of assistance should be given to investors? What kind of explanation should be given to investors about the outcome of the algorithms?

3.3.3 Fiduciary Duty

The core of the fiduciary duty is the duty of loyalty together with the duty of good faith.Footnote 96 Under the duty of loyalty, investment advisers should not put their interests in conflict with those of their clients.Footnote 97 This duty should not be removable by contract,Footnote 98 although fiduciary duties may generally be altered or restricted by agreement between the parties.Footnote 99 In practical terms, advisers need to use their best endeavours to find a suitable investment product for their clientsFootnote 100 rather than offering products that will reward them with a commission. Compared with the duty of care, the fiduciary duty applies in more limited circumstances. The fiduciary duty prohibits financial advisers from acting in a disloyal or improper way, i.e. it is a negative obligation.Footnote 101 The duty of care, by contrast, requires financial advisers to act competently and not recklessly, i.e. it is a positive obligation.Footnote 102 When used to execute clients’ requests, AI should be used to fulfil the advisers’ duty of best execution.Footnote 103 And while price discrimination is a common practice in market transactions, the fiduciary duty is insufficient to remove such a risk.

3.3.4 Consumer Protection Is Key

Access to finance will require stronger consumer protection to increase consumers’ willingness to use robo-advisers. Relying on an ex post regime based on common law is ineffective in protecting investors. The FCA has statedFootnote 104 that if robo-advisers filter products and propose them based upon specific factors relating to a customer’s life and/or situation, this amounts to a personal recommendation. Therefore, the entire body of consumer protection law would apply. Consumers are usually deemed to be vulnerable, as they do not understand the mechanisms at work in financial markets and cannot fully understand the advice provided.Footnote 105 As an experimentation space, a ‘sandbox’ would provide a more suitable environment for the emerging technologies.Footnote 106 Firms could then test a new business model built on FinTech without fearing the demanding scrutiny of the FCA.Footnote 107 However, a sandbox alone would not be sufficient to protect consumers: in the US, the Securities and Exchange Commission (SEC) has charged two robo-advisers with making false disclosures.Footnote 108

3.3.5 Product Liability

The institutions, i.e. investment firms, are responsible for damages caused to, or profits gained from, investors. As the institutions deploy the AI devices, the current law specifies that they are responsible for damages caused to investors,Footnote 109 even though the AI devices have been developed by third parties, i.e. tech companies. In line with the approaches taken in the area of algorithmic trading, institutions using algorithmic trading are responsible for the effects caused and for the initial and continued testing of the algorithms.Footnote 110 However, the liability that third parties should assume towards investors is not simply a matter for debate: the financial institutions could become insolvent, and there might be inadequate funding, whether provided through insurance or a government compensation scheme, to compensate investors. There is then the question of how investors may hold the third-party developers responsible for losses.Footnote 111 In the realm of product liability, third parties are strictly liable for consumer losses.Footnote 112 However, AI is not a product and may not be used or operated by the consumers themselves. More often than not, AI devices are a set of algorithms (software) used by financial institutions to provide advice or execution for clients. Should the computer scientists and tech developers be responsible for mistakes in the design of the algorithms? The general answer lies in the product liability of software.Footnote 113 The more specific question is whether AI developers should be responsible for any damage caused by the design of AI. The same question arises for automated vehiclesFootnote 114 and defence software.Footnote 115 The EU Directive of 1985Footnote 116 imposes strict liability on all parties in the supply chain for defective products, whether or not the defect arose from negligence. The UK lawFootnote 117 that implements this directive is not clear on whether software is a product: applying it to software might create a floodgate risk, since, given the widespread use of software, a legal finding that software providers are at fault could lead to unlimited liability.

3.4 Regulating Robo-Advisers to Achieve Access to Finance

To enhance financial inclusion through the use of robo-advisers, the AI solutions used must not be inferior to those used by wealth management providers to offer services to wealthier clients.Footnote 118 The major challenge for retail investors is to make informed decisions based on market information, such as the fundamentals of companies, industry specificities, market competition, and macro-economic conditions. Average investors are not normally equipped with the professional expertise or the time to collect and analyse these data. By contrast, wealthier investors, with the help of professional financial advisers, can make better data-driven investment decisions. In the UK, after a free introductory session, typical independent financial advisers’ fees are £450 for advice on an £11,000 investment, and £1,500 for investment strategy advice on a £50,000 inheritance.Footnote 119 AI should be used to level the playing field.Footnote 120 The World Bank’s research indicates that assets under the management of robo-advisers in the US will more than triple, from 426 billion US dollars in 2018 to 1,486 billion in 2023.Footnote 121 For retail investors, robo-advisers reduce minimum investment requirements, sometimes to no minimum at all, as in the case of Betterment, and charge lower fees: normally 0.25% of managed assets for robo-advisers, against 0.75–1.5% of managed assets for human advisers. This is because robo-advisers save on fixed costs, such as the salaries of financial advisers, and reduce behavioural biases, such as a limited capacity to manage and invest in various categories of assets.Footnote 122 The Bank of America requires US$25,000 to create an account with private financial advisers, compared with US$5,000 to create an account with robo-advisers.Footnote 123 In addition, robo-advisers can easily be accessed at any time and from anywhere.Footnote 124
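
The fee difference compounds materially over time. As a worked example, assuming a £10,000 investment and a 5% gross annual return over 20 years (figures chosen for illustration, not drawn from the sources above):

```python
# Worked example: the compounding effect of a 0.25% robo-adviser fee
# versus a 1% human-adviser fee. Principal, gross return, and horizon
# are illustrative assumptions.

def terminal_value(principal, gross_return, fee, years):
    # Fees are deducted from the annual return before compounding.
    return principal * (1 + gross_return - fee) ** years

p, r, years = 10_000, 0.05, 20
robo = terminal_value(p, r, 0.0025, years)
human = terminal_value(p, r, 0.01, years)
print(f"robo-adviser:  £{robo:,.0f}")   # ~ £25,297
print(f"human adviser: £{human:,.0f}")  # ~ £21,911
```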

In terms of redress, consumers are less likely to bargain for their terms and are less likely to bring lawsuits to recover compensation, due to the excessive costs involved.Footnote 125 Therefore, there must be more ex ante control in the design, development, and deployment of the robo-advisers used, and a more robust complaint or compensation scheme for consumers should be implemented.Footnote 126

4 AI as RegTech

4.1 RegTech and SupTech in Financial Markets

Since AI can streamline KYC/CDD processes to reduce the compliance costs of financial intermediaries,Footnote 127 more investment firms could be set up to provide services to investors. This would increase market competition, allow more financial innovation, and thereby provide better access to finance for consumers and investors. AI devices can be used to detect conduct that violates market integrity, such as market manipulation (including price fixing and disinformation), insider dealing, and money laundering. These devices can be used by financial institutions, regulators, policy-makers, or even private market watchdogs to detect and prevent such misconduct. When devices are used for this purpose, they are dubbed ‘regulatory technology’ (RegTech).Footnote 128 RegTech also includes SupTech, which is mainly used for market supervision.Footnote 129 When RegTech is used to detect market misconduct, it involves elements of market surveillance that include the collection of individual data.Footnote 130 In this situation, the value of protecting individual rights and dignity may conflict with market integrity and the public interest. For instance, an anti-money laundering regime requires financial intermediaries to screen personal data, which may contravene the spirit of the General Data Protection Regulation (GDPR).Footnote 131 The main objective of RegTech is to protect market integrity. AI has been used as a SupTech service by exchangesFootnote 132 for supervision, and as a RegTech service by financial institutionsFootnote 133 for compliance purposes. In addition to the appropriate design modes for investor protection mentioned previously, another emerging concern is the need for data governance that ensures privacy and data protection.Footnote 134 To ensure market integrity, data on individuals can be collected in relation to law enforcement activities, such as insider dealing, market manipulation, and money laundering.Footnote 135 Personal data, transaction/order book data, communications, and suspicious transaction and order reports (STORs) are collected for market oversight.Footnote 136

The collection of personal data for RegTech may violate data protection rules and privacy rights.Footnote 137 While consent is required for the controlling and processing of data, data collected for market integrity purposes can be processed and transferred without the consent of the individual.Footnote 138 That is to say, individuals may not have the right under the GDPR and the Data Protection Act 2018 to prevent the unauthorised sharing of their personal information.Footnote 139 The right to privacy can be violated when personal data are collected for the development and deployment of RegTech.

4.2 AI and Anti-Money Laundering (AML)

Some regulators are using AI for the detection of fraud and for anti-money laundering and countering the financing of terrorism (AML/CFT).Footnote 140 It is possible, for example, for ML algorithms to detect suspicious transactions and, through supervised learning, assign them a risk score.Footnote 141 In addition, ML can be applied to screen known criminals, individuals, and institutions on the global ‘black-list’ and to forecast the likelihood of money laundering.Footnote 142 Furthermore, unsupervised ML can summarise the characteristics of variables and tag them according to criteria established by the unsupervised learning itself.Footnote 143 That is to say, through unsupervised ML, financial institutions and regulatory authorities can identify the behavioural characteristics of financial crimes, including money laundering. The Australian Securities and Investments Commission (ASIC) has been exploring the quality of results and the potential use of Natural Language Processing (NLP) technology to identify and extract entities of interest from evidentiary documents.Footnote 144 ASIC is using NLP and other technology to visualise and explore the extracted entities and their relationships. To fight criminal activities carried out through the banking system (such as money laundering), detailed information on bank transfers is collected and correlated with information from newspaper articles.Footnote 145 Similarly, the Monetary Authority of Singapore (MAS) is exploring the use of AI and ML in the analysis of suspicious transactions to identify those that warrant further attention,Footnote 146 allowing supervisors to focus their resources on higher-risk transactions. Investigating suspicious transactions is time-consuming, and because regulated entities use defensive filings to protect themselves,Footnote 147 the rate of false positives is high.Footnote 148 ML is being used to identify complex patterns and highlight suspicious transactions that are potentially more serious and warrant closer investigation.Footnote 149 Coupled with methods for analysing granular transaction data, client profiles, and a variety of unstructured data, ML is being explored to uncover non-linear relationships among different attributes and entities and to detect potentially complicated patterns of money laundering and terrorism financing that are not directly observable through the suspicious transaction filings of individual entities.Footnote 150
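
A hedged sketch of the supervised-learning approach described above follows: a classifier is trained on labelled transactions and emits a risk score (the probability of the ‘suspicious’ class) for new ones. The features, synthetic data, and labelling rule are assumptions for exposition, far simpler than anything used in production AML systems.

```python
# Sketch of supervised ML for AML risk scoring on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Features per transaction: [amount, transfers_last_24h, is_cross_border]
n = 1000
X = np.column_stack([
    rng.lognormal(mean=6, sigma=1.5, size=n),  # transaction amount
    rng.poisson(2, size=n),                    # recent transfer count
    rng.integers(0, 2, size=n),                # cross-border flag
])
# Synthetic labels: large, frequent, cross-border transfers are flagged.
y = ((X[:, 0] > 2000) & (X[:, 1] > 3) & (X[:, 2] == 1)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score two new transactions: the probability of the 'suspicious' class.
new = np.array([[15_000, 6, 1],   # large, frequent, cross-border
                [120, 1, 0]])     # small domestic payment
print(clf.predict_proba(new)[:, 1])  # higher score = route to an analyst
```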

4.3 KYC of Fund and Asset Management

Funds and other investment vehicles, such as alternative investment funds (AIFs), attract a large number of organisations and individuals as investors. When individual investors or corporations make investments, they may be requested to provide their personal information—including names, addresses, dates of birth, contact information, related anti-money laundering information, documents certifying income, payment details for dividend and redemption proceeds, and tax residence information. These data are collected for different purposes, such as identification or to guarantee an obligation.Footnote 151 Thus, personal information is controlled, processed, and stored not only by investment fund companies, management companies, and transfer agencies but also by the directors of these companies and other third-party agencies working for them.

To guarantee the security of fund transactions, MiFID II requires fund companies to reinforce six aspects of data collection.Footnote 152 For example, to prevent money laundering, customers may be asked to provide certification of income. Furthermore, the Money Laundering, Terrorist Financing and Transfer of Funds Regulations 2017 require firms to maintain the safe custody of assets belonging to their clients.Footnote 153 Under these regulations, firms should establish and keep records for at least 5 years, in as much detail as possible.Footnote 154 This includes personal information regarding relationships, order handling, reports, assets, and so forth.Footnote 155 However, the GDPR gives data subjects the right to have their personal data erased without undue delay:Footnote 156 the data subject may demand that the data controller erase personal data concerning him or her without undue delay, and the controller has an obligation to do so.Footnote 157 This principle conflicts with the MiFID II requirement that a firm retain all records kept in relation to its MiFID business for a period of at least 5 years.Footnote 158 The purpose for which the information is collected is therefore paramount. Is it being collected for law enforcement purposes, for guaranteeing obligations, for developing new products, or for research purposes such as developing RegTech?

4.4 Streamlining Compliance Processes

The KYC process is often costly, laborious, and highly duplicative across services and institutions.Footnote 159 According to Thomson Reuters, some major financial institutions spend 500 million US dollars on KYC and CDD annually; 10% of the world’s top financial institutions spend at least 100 million dollars annually, and the global average is 48 million dollars.Footnote 160 ML is increasingly used in the remote KYC processes of financial services firms to perform identity and background pre-checks. For example, AI applied to the KYC process could detect, in real time, any attempt to use fake documents, and could complete facial, document, and other verifications in real time in a single cycle. Hence, AI helps financial institutions to perform AML background checks in real time and so avoid unwanted regulatory scrutiny and monetary fines.Footnote 161 ML is predominantly used in two ways: (1) evaluating whether the images in identifying documents match one another, and (2) calculating the risk scores by which firms determine which individuals or applications should receive additional scrutiny. ML-based risk scores are also used in ongoing periodic checks based on public and other data sources, such as police registers of offenders and social media services.Footnote 162 The use of these sources may enable risk and trust to be assessed quickly and cheaply.Footnote 163 Firms can use risk scores on the probability of customers raising ‘red flags’ in KYC checks to help decide whether to proceed with the time and expense of a full background check. Nonetheless, concerns about the tools’ accuracy have kept some financial services firms from adopting them. Research is needed to discover how far regulators will accept this kind of approach and what their concerns are.
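
The second use, risk-score-based routing, can be sketched as follows: pre-check signals are combined into a score that decides whether an application is approved automatically or escalated to a full (and costly) manual background check. The weights and the threshold are illustrative assumptions.

```python
# Sketch of risk-score-based KYC routing. Weights and thresholds are
# assumptions for exposition only.

def kyc_risk_score(doc_match: float, face_match: float,
                   adverse_media_hits: int, on_watchlist: bool) -> float:
    """Combine pre-check signals into a 0-1 risk score. doc_match and
    face_match are similarity scores from document and facial
    verification models (1.0 = perfect match)."""
    score = (1.0 - doc_match) * 0.4        # poor document match adds risk
    score += (1.0 - face_match) * 0.3      # poor facial match adds risk
    score += min(adverse_media_hits, 5) * 0.04
    if on_watchlist:
        score = 1.0                        # watchlist hits always escalate
    return min(score, 1.0)

def route(score: float, threshold: float = 0.3) -> str:
    return "full background check" if score >= threshold else "auto-approve"

s = kyc_risk_score(doc_match=0.98, face_match=0.95,
                   adverse_media_hits=0, on_watchlist=False)
print(s, route(s))  # low score: proceeds without the costly full check
```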

4.5 Public Interest and Individual Rights

For public enforcement agencies, such as the National Crime Agency, the Serious Fraud Office, the FCA, and policing agencies, current data protection law aimed at protecting sensitive individual data does not prevent the collection of that information for the purposes of law enforcement, such as ensuring the security of citizens.Footnote 164 Furthermore, the current law does not prevent public agencies or financial institutions from collecting individual data in the public domain, which can help them construct an individual profile for the purposes of KYC, as well as for the detection of suspicious transactions.Footnote 165 However, the ethical foundations of individual profiling for market surveillance have not yet been robustly established.Footnote 166 The identification of the parameters within which agencies, whether public or private, may carry out individual profiling will need to be built on principles of human rights and human dignityFootnote 167 to ensure not only individual but also societal safety.Footnote 168 Such profiling information could fall into the wrong hands and be used maliciously.Footnote 169

Individual consent is inadequate in protecting the individual for the following reasons: first, a person, as a general principle, cannot consent to be harmed;Footnote 170 second, the individual may not appreciate the risk;Footnote 171 and third, the individual may not know to what he or she is consenting.Footnote 172 Hence, there is also a need to redefine individual consent: the method of consent, the purpose of consent, and the possible revision and withdrawal of consent.Footnote 173

Even for KYC processes conducted for the purpose of protecting the individual, such as assessing a client’s risk appetite in accordance with the client suitability rules,Footnote 174 consent to individual profiling is also problematic. Problems can arise with data quality and accuracy, which in turn affect the quality and accuracy of the profiling.Footnote 175 The data could be collected via social media and other smart devices. These integrated datasets containing information about an individual, possibly with extended information, can easily be seized by third parties through a legal request, e.g. a request from a foreign government.Footnote 176 Since clients did not consent to the sharing of their datasets with third-party government agencies, transferring these data or giving government agencies access to them may prejudice the individuals’ rights in the legal process.Footnote 177 For instance, the original data collector, even if it fully complied with its statutory obligations initially, will still breach its legal obligations if it shares data with a public authority for further processing, unless it explained the further sharing in detail and obtained consent at the time of collection.Footnote 178

4.6 Policy Recommendation

In this article, the author has looked at the regulatory objectives and regulatory methods associated with the management systems and processes where AI has been deployed in securities trading and investment services. The author used HFT as an example to examine how AI is regulated on a trading platform that is not consumer facing. The primary regulatory objective is to deal with systemic risk: flash crashes, liquidity risks, and procyclical behaviour. The secondary objective is investor protection (fairness) against market manipulation. The main regulatory approach is the requirement that operators (HFT specialist firms, securities firms, proprietary traders, and trading venues) have internal risk management systems and processes. In this way, these operators are also required to consider market safety and market integrity. Whether HFT firms should disclose their algorithms to the regulators remains a contentious issue, and the UK regulators do not require such disclosure. The regulatory objective of market safety remains the appropriate basis for regulating AI on trading platforms. However, HFT firms, securities firms, and trading venues are all subject to a high degree of regulatory control, and these methods of regulation may not be appropriate for newcomers developing AI FinTech services on p2p trading platforms. New p2p trading platforms, whether built on a distributed ledger technology (DLT) network or on Amazon-like ones, will need more consumer protection, including protection against price discrimination and protection of privacy rights. On a p2p trading platform where consumers trade securities, the same regulatory objectives of market safety and market integrity should apply. Platform providers who use algorithms to execute client transactions, such as distributing their portfolios, need to ensure the protection of clients. To ensure that there is no market manipulation, including price manipulation and price discrimination, the platform providers will also need to bear the burden of identifying those who use algorithms to trade or allocate securities. Unlike on regulated trading platforms, the users of p2p platforms are likely to be individuals who rely on algorithms developed for interactions on the platforms, and it is unlikely that these individuals will have the capacity to implement risk management systems and controls. Therefore, to increase financial inclusion, the onus will be on the trading platform to monitor operations and to set the parameters within which these algorithms operate. There should also be an effective mechanism to exclude from the platforms anybody found to be using algorithms to cause systemic problems or to manipulate the market.

Section 3 discussed the protection of investors from asset managers and investment advisers who use AI. Since asset managers and investment advisers (even online ones) are more consumer facing, more protection should be given to investors, particularly retail and consumer investors, to increase their willingness to use robo-advisers to manage their assets. It is unlikely that investors will be able to negotiate contractual terms that protect them against potential harms such as misleading information and price discrimination. The standard of care under tort law in financial services is too unstable a concept for investors to rely on to claim redress. Furthermore, it is not clear whether a fiduciary duty can be assumed by robo-advisers or by advisers using algorithms.Footnote 179 Fiduciary duties can also be altered and restricted by the parties. A fiduciary duty concerns what a fiduciary (investment adviser) cannot do (have a conflict of interest) rather than what it should do (act in the best interests of the client).

Therefore, using common law principles to provide protection to consumer investors would be inadequate, particularly if AI aims to provide access to finance and to close the advice gap whereby less wealthy investors do not have access to the same advice services as the wealthier. Wealthier investors are not only able to rely on common law principles to control the level of their protection but are also more likely to make complaints and bring lawsuits. It is more likely that statutory protection based on policy will provide consumers with protection that balances the need to close the advice gap against that of financial innovation. For instance, the ‘best interest’ principle and the suitability principle should continue to apply to the use of AI for providing advice on execution and investment. Detriments to consumers’ welfare, such as price discrimination whereby wealthier clients’ investments are sold at a higher price, should also be factored in.

In terms of user protection, the more problematic situation is one in which an AI investment advice tool is provided online, freely downloadable, and can be used on the provider’s platform or on other platforms. These tools may not have been developed in-house by the investment firms, and the advice may amount to free guidance. In this situation, it is difficult to argue that there is a fiduciary relationship between a firm using robo-advisers and consumers/investors. There is also a question of whether a contract is formed at all if the robo-adviser provides free investment guidance. Furthermore, it is likely that the software will continue to be treated as a ‘non-product’, so that product liability rules do not apply. These factors could leave investors who rely on robo-advisers without adequate protection.

Rather than using traditional investment services, consumers may use online robo-advisers to purchase financial products that might otherwise be unavailable to them because they lack access to the information that wealth management advisers provide. With open data and the development of more sophisticated algorithms, users may select online advisers that provide more economical advice and investment portfolio management. Consumers should be protected against poorly developed algorithms that do not act in their best interest, that are prejudiced against them, or that cause damage through errors.Footnote 180

In Sect. 4, the focus was on using AI in RegTech or SupTech solutions for preventing, detecting, and controlling financial crime, such as money laundering. The cost of compliance keeps many online financial firms from giving financial advice, especially if they process transactions on behalf of clients or provide p2p investment platforms. RegTech will streamline the KYC/CDD processes and hence reduce compliance costs; this, in turn, will allow more firms to enter the market to offer services. The regulatory objective of market integrity directly conflicts with individuals’ privacy rights and with data protection laws. It is submitted that data governance will need to be established to protect individual rights as well as societal safety.

5 Conclusion

AI will bring both benefits and risks to the financial services sector. To ensure continuity, market safety, investor protection, and market integrity should continue to guide the regulation of AI. In addition, access to finance should be a regulatory objective, so that AI is used not only to benefit financial intermediaries but also to provide a larger social benefit to those previously excluded from financial opportunities. Access to finance will help the use and regulation of AI to gain wider public acceptance. To this end, AI can be used to help optimise capital on p2p platforms, to give consumers cheaper access to more information through robo-advisers, and to streamline KYC/CDD processes through RegTech services, hence reducing compliance costs. More detailed rules need to be developed to certify good algorithms and good platforms, to strengthen both the ex ante and the ex post protection of individuals who use robo-advisers, and to address how individual rights, such as privacy and data rights, can be protected while allowing more efficient KYC processes.