AI & invoice finance: Should lenders be worried?

This article was originally published in Business Money magazine.

AI is everywhere: on our phones, on the news, in our cars. Last month’s Business Money was no exception, with several pages on the topic.

One article caught my eye: research from Comply Advantage found that two-thirds of UK banking and FS professionals believe AI “poses a growing cybersecurity threat” to the sector.

As technologists, we’re often asked for our views on what AI means for invoice finance. Below, I’ve set out the arguments for and against the risk AI poses to IF lenders and explained how we protect the businesses we work with.

Against the AI threat

For some, the nature of invoices as an asset class protects lenders from AI disruption.   

If a lender suspects their client has falsified invoice or debtor information, they can verify that the debtor exists via a third party and contact the debtor at any time to verify an invoice.

To use AI to fool an IF lender, a bad actor would need to falsify invoices, debtors, contact details for debtors and data sources used for verification.

For the AI threat

An unlikely scenario, but arguably not impossible.

Using AI, a motivated client could create thousands of fake debtors with working contact details. The lender could contact the ‘debtor’ to verify an invoice and be fooled by a human or AI actor posing as a company secretary. If the lender has failed to check the debtor independently, it might fund invoices against it.

AI makes it simple and cheap to set up sophisticated spoofing operations of this type. In February, a Hong Kong FS worker transferred $25m to a scammer posing as their company’s CFO on a video call. Although the worker was suspicious, the presence of their colleagues on the call convinced them to make the transaction. The participants in the call were AI-generated deepfakes; the funds remain missing.

As AI becomes cheaper and more accessible, scams like these, and established frauds such as teeming and lading, will become easier to perpetrate. AI will also increase the speed at which frauds can be committed, making them harder for lenders to spot and untangle.

These arguments relate to conventional frauds that IF lenders are already exposed to. But AI can go further and create new types of fraud by analysing and exploiting the weak points in FS providers’ operations. This is why IBM and other technology providers employ AI to find and close potential vulnerabilities before scammers can exploit them with new, harmful techniques.

A balanced view

Although a technologist by trade, I’m not wholly convinced by AI’s acolytes. IF lenders remain somewhat protected from the worst scenarios above because of invoice finance’s reliance on third party debtor verification. At least, for now.

While time appears to be on IF lenders’ side, the pace of change in AI is remarkable, and the technology will find weaknesses in lenders’ armour in future. The danger of increasingly sophisticated frauds must be taken seriously by IF providers.

In developing our new r3 RiskOps risk management system for invoice financiers, we’ve applied the following principles to help lenders defend themselves against AI-based lending risks:

1. Protect against client and operational risk

Lenders can verify debtor risk against third-party records, so to an extent fraudsters must exploit lenders’ operational weaknesses instead.

We make lenders’ risk management processes more rigorous and transparent. For example, r3 RiskOps enables credit risk, operations, and client teams to monitor and act on risk events in a single system. This means credit risk teams can be sure their recommendations are acted on and that other risks are not missed by their colleagues.

2. Make data accessible and useful

Data doesn’t lie, and enabling lenders to find trends in their portfolio and monitor common indicators of risk events, such as cash in bank, can help them to spot even sophisticated frauds. For this reason, r3 RiskOps enables lenders to track a wide range of lending events, as opposed to just invoice or verification data.

As new types of risk emerge, lenders will need access to new forms of risk information and tooling. Our systems are designed to connect to compliance systems and ingest new data sources, including ones that don’t exist yet.

3. Make risk management more efficient

Humans are the last line of defence against lending risk, so they need to be freed from time-consuming admin to focus their time on skilled risk management activity. This is increasingly important as the intensity and volume of potential frauds grow.

r3 RiskOps is therefore designed to be intuitive and efficient – just like using the apps on your phone.

4. Apply AI to defend against AI

AI isn’t only a potential threat to IF lenders; it’s an incredible risk management and lending opportunity.

 

We’ve designed r3 RiskOps and our other new systems to take advantage of the massive amounts of data each of our lenders owns, enabling them to learn from patterns in their client data to spot risks earlier and identify opportunities to deliver new client services. Risk and reward.
