As regulators worldwide have cracked down on banks that move money for suspicious clients, financial institutions have turned to artificial-intelligence software that spots anomalies to meet their due-diligence needs. Now, however, more and more banks are finding that they need human beings, and lots of them, to interpret the massive amounts of disembodied information the computers generate.
“Lenders are facing a labor crunch as they automate the fight against ill-gotten funds,” read the headline on a recent Bloomberg article. “Technology coughs up so much data that banks must hire legions of workers to sort through it all and separate the scoundrels from the scrupulous,” the article continued.
“ING Groep NV, which in 2018 paid a €775 million ($860 million) penalty to settle money laundering cases, in the second quarter of this year created 500 full-time positions to monitor suspicious transactions—a 20% rise,” Bloomberg said. “France’s biggest lender, BNP Paribas SA, has increased its compliance and anti-financial-crime head count by 40%, to 4,200, over the past three years.”
The problem is that the artificial-intelligence and machine-learning software tasked with vetting depositors is flooding banks with “false positives,” raising needless alarms about lenders’ prospective clients. “Computers can spot suspicious activity, but they’re typically not smart enough to unravel precisely what’s going on with a client or a transaction,” especially one involving offshore shell companies or cryptocurrencies, the article said.
Lenders almost everywhere are trying to thin out their broader workforces through attrition or layoffs, so the pressure to hire large compliance staffs is particularly unwelcome. “Bulking up is tough,” Bloomberg said. “But banks have little choice.”