Deploying AI in the payments sector

With the global volume of online payments set to increase by almost 11% a year between 2015 and 2020 (and by as much as 30.9% a year in emerging Asian markets), according to a recent report from Capgemini and BNP Paribas, the benefits of tackling operational pain points, reducing manual processes and driving further efficiencies are clear.
This has led many commentators to suggest that artificial intelligence, particularly machine learning (and its first cousin, deep learning), has the potential to be a huge game changer for the payments sector – writes Tim Wright, Partner at Pillsbury Winthrop Shaw Pittman LLP.
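By way of illustration, those headline rates compound quickly. Below is a back-of-the-envelope sketch in Python; only the growth rates come from the report, while the base index of 100 is an arbitrary assumption used purely to show the arithmetic.

```python
# Back-of-the-envelope compounding of the growth rates quoted above.
# The base index of 100 is an arbitrary starting value, not a data point.

def compound(base: float, annual_rate: float, years: int) -> float:
    """Grow base at annual_rate (0.11 means 11% a year) for years years."""
    return base * (1 + annual_rate) ** years

index_2015 = 100.0
global_2020 = compound(index_2015, 0.11, 5)   # ~168.5, i.e. roughly +69%
asia_2020 = compound(index_2015, 0.309, 5)    # ~384.3, i.e. nearly 4x

print(f"Global payments index, 2020: {global_2020:.1f}")
print(f"Emerging-Asia payments index, 2020: {asia_2020:.1f}")
```

In other words, the report's figures imply global volumes roughly 1.7 times their 2015 level by 2020, and nearly four times in emerging Asia.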
Growth Drivers
In Europe, the revised Payment Services Directive (PSD2) will be a key driver of AI adoption, as open banking gives regulated fintech firms access to online banking systems, which in turn will lead to an increasing number of new and innovative payment products.
In South East Asia, fuelled by the rapid increase in smartphone ownership and improved internet access, digital payments are growing rapidly, especially in China, India and the Philippines. And in the US, where the adoption of mobile payments, Near Field Communication (tap and pay) and other payment technologies has taken longer to get off the ground than in, say, the UK, total e-commerce spending is expected to reach $1 trillion by about 2023.
Unlocking AI’s Potential
To date, most AI implementation in payments has been limited to discrete areas, such as customer on-boarding (identification and anti-money laundering), and fraud prevention and detection.
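To make the fraud-detection use case concrete, the sketch below scores transactions with a simple classifier. The feature set, the synthetic data, the labelling rule and the choice of scikit-learn model are all illustrative assumptions; they do not describe any particular vendor's system.

```python
# Minimal sketch of ML-based transaction fraud scoring.
# Feature names, the synthetic data and the labelling rule are
# illustrative assumptions, not any real scheme's logic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic transactions: [amount, seconds_since_last_txn, new_device_score]
X = rng.random((5000, 3)) * np.array([500.0, 86400.0, 1.0])
# Toy labels standing in for historical chargeback outcomes.
y = ((X[:, 0] > 400) & (X[:, 2] > 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a new transaction and apply a (policy-chosen) review threshold.
txn = np.array([[450.0, 120.0, 0.9]])    # high value, likely new device
fraud_prob = model.predict_proba(txn)[0, 1]
print(f"Fraud probability: {fraud_prob:.2f}")
if fraud_prob > 0.8:
    print("Hold transaction for manual review")
```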
Looking further ahead, however, AI is expected to play an increasingly important role. Correctly implemented, AI has the potential to enable the sector to undergo a dramatic transformation with the promise of massive operational and strategic efficiencies through lower operational costs and automation of time-consuming procedures across processes such as payment reconciliation, validation and authorisation. Benefits include reduced time-to-market, better risk decision making and shortened approval processes, improved security and increased customer retention.
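To give a flavour of what automating one of those processes can look like, here is a minimal sketch of rule-based payment reconciliation: incoming payments are matched to open invoices by reference, falling back to a unique amount match, with everything else routed to manual handling. The record layouts, matching rules and tolerance are illustrative assumptions; production systems layer ML on top of far richer matching logic.

```python
# Minimal sketch of automated payment reconciliation: match incoming
# payments to open invoices by reference, falling back to amount.
# Record layouts and matching rules are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Invoice:
    ref: str
    amount: float

@dataclass
class Payment:
    remitter_ref: str
    amount: float

def reconcile(payments: list[Payment], invoices: list[Invoice]):
    open_invoices = {inv.ref: inv for inv in invoices}
    matched, unmatched = [], []
    for pay in payments:
        # Rule 1: exact reference match.
        inv = open_invoices.get(pay.remitter_ref)
        # Rule 2: unique amount match (within a 1-cent tolerance).
        if inv is None:
            candidates = [i for i in open_invoices.values()
                          if abs(i.amount - pay.amount) < 0.01]
            inv = candidates[0] if len(candidates) == 1 else None
        if inv is not None:
            matched.append((pay, inv))
            del open_invoices[inv.ref]
        else:
            unmatched.append(pay)   # route to manual handling
    return matched, unmatched

matched, unmatched = reconcile(
    [Payment("INV-001", 250.00), Payment("??", 99.95)],
    [Invoice("INV-001", 250.00), Invoice("INV-002", 99.95)],
)
print(len(matched), "matched;", len(unmatched), "for manual review")
```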
Understanding the Issues
While its potential is great, the transition to a digital, AI-first world brings with it a number of legal (and other) issues and risks that need to be carefully understood and mitigated to ensure that AI is implemented safely. As Professor Stephen Hawking wisely reflected: “[t]he rise of AI could be the worst or the best thing that has happened for humanity.” Less prophetically, in the context of sourcing, procuring, developing and deploying AI from or with vendors and other third parties, some important risks and issues for businesses to consider include:
- Align Procurement and Governance Processes – built for sourcing more traditional IT services and products, current procurement and governance processes can seem unwieldy in the “digital” environment and may need to be realigned. Conversely, training and education are needed to ensure that governance structures and approval processes are followed, and that internal policies and guidelines, such as information security, are considered. This matters especially where the spend is smaller than in a typical IT procurement: approval thresholds may need to reflect factors other than spend, so that risks such as over-sharing data are still caught and mitigated.
- Use the Right Contract – similarly, education is needed to explain why the “two-pager” proffered by a vendor may not be adequate; why Non-Disclosure Agreements and Proof of Concept Agreements are often needed early in the process; and why contracting processes need to be carefully designed so that key topics (confidentiality, ownership of intellectual property, use of outputs, sharing of data, and apportionment of risk and liability) are all considered. Contracts need to reflect the underlying transaction: for example, different issues will arise in an agile development arrangement compared with the implementation and support of pre-existing AI, or a fintech collaboration with one or more banks.
- Set the Boundaries, Define the Responsibilities – contracts for the development or licensing of AI tools, applications and systems should clearly allocate responsibility among the various suppliers, operators and users of AI and machine learning systems: for example, a vendor’s financial product may be based on data input devices or algorithms developed by another party entirely. If something goes awry, which party is responsible, and for what? To what extent can a buyer rely on a vendor’s expert systems, and in what scenarios?
- Don’t Ignore Privacy – the GDPR ushered in a new EU privacy regime, which has a number of implications for AI adoption by businesses in the payments arena. For example, Recital 71 refers to a right to “an explanation of the decision reached after [algorithmic] assessment,” and Article 22 states that a data subject should not be subject to a decision with legal or similarly significant consequences based solely on automated processing (see the sketch after this list). Data Protection Officers should be brought in early to ensure that privacy impact assessments are completed and that other tenets of the GDPR, such as privacy by design, are complied with.
- Contemplate Change – although many of the AI technologies being brought to the marketplace today are not new, current deployments (many of which are dependent on access to huge amounts of data) are proceeding at a scale and with a speed not seen previously, hence regulators are, to an extent, playing catch up. This means that in the next few years, we are likely to see an increased amount of regulation being introduced, some of which may impact the cost and/or manner in which AI services can be delivered. Where applicable, contracts should set out how such a change in regulation should be handled and any related implementation costs borne.
- Set Standards for Compliance – as the sector grows, it is inevitable that, along with regulation, industry-specific, global standards (and other codes of conduct, such as the ethical code of conduct proposed by the House of Lords Select Committee on AI) will be developed. Contracts should determine the extent to which AI tools and services must meet such codes and standards.
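Returning to the GDPR point above, the sketch below shows one way an Article 22-style safeguard might be wired into a payments decision flow: decisions with legal or similarly significant effects are never returned on the model’s output alone but are escalated to a human reviewer. The function names and the test for “significant effect” are illustrative assumptions, not a statement of what the Regulation requires in any given case.

```python
# Minimal sketch of an Article 22-style safeguard: automated decisions
# with legal or similarly significant effects get human review rather
# than being returned solely on the model's output. Function names and
# the "significant effect" test are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    approved: bool
    reasons: list[str]        # reason codes support explainability
    decided_by: str           # "model" or "human"

def significant_effect(application: dict) -> bool:
    # Illustrative proxy only: treat refusals above a threshold as
    # having a "similarly significant" effect on the data subject.
    return application["amount"] > 1000

def queue_for_human_review(application: dict, reasons: list[str]) -> Decision:
    # Stand-in for a real review workflow (case management, audit trail).
    print("Escalating to human reviewer:", application, reasons)
    return Decision(approved=False, reasons=reasons + ["pending_review"],
                    decided_by="human")

def decide(application: dict, model_score: float) -> Decision:
    approved = model_score < 0.5
    reasons = [f"model_score={model_score:.2f}"]
    if not approved and significant_effect(application):
        # Do not rely solely on automated processing: escalate.
        return queue_for_human_review(application, reasons)
    return Decision(approved, reasons, decided_by="model")

print(decide({"amount": 5000}, model_score=0.9))
```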
Other issues will also need to be given weight, especially in a heavily regulated environment like payments. These include transparency of decision making (aka black box syndrome), consumer protection, and anti-discrimination laws, where problems of prejudice caused by data bias, or unwittingly introduced by an algorithm’s developers, may arise.
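One simple way teams probe for such data bias is to compare decision rates across groups. In the minimal sketch below, the demographic-parity metric and the 0.8 (“four-fifths”) threshold are illustrative assumptions borrowed from US employment practice, not a legal test under any of the regimes discussed above.

```python
# Minimal sketch of a demographic-parity check: compare approval rates
# across a protected attribute. The 0.8 ("four-fifths") threshold is a
# rule of thumb used here purely as an illustrative assumption.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 55 + [("B", False)] * 45)
rates = approval_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate-impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential disparate impact: review features and training data")
```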