Artificial intelligence: as fraudsters get better, so must your defenses

in Technology, 29.05.2019

This article was written together with Dimitrios Kampas.

In today’s digital era, enterprises across industries are replacing paper-based records with digital records of their clients’ transactions and the footprints of their day-to-day operations.

Nevertheless, fraud persists: methods of deception evolve with the times and continue to yield financial or personal gains for fraudsters. The rising number of daily transactions multiplies the opportunities for deceit, making prevention systems harder and costlier to put in place. In this context, how quickly and efficiently an organization tackles fraud-related risks helps determine its success against competitors, and companies that can turn their vast amounts of data toward mitigating those risks hold a further advantage.

So how do you properly anticipate fraud and mitigate its effects?

Artificial intelligence for fraud detection and prevention

Artificial intelligence is a powerful option when it comes to searching for anomalies or suspicious transactions. However, of 750 fraudsters found and investigated between March 2013 and August 2015,[1] only 3% were detected using proactive, fraud-focused cognitive analytics, whereas 44% were found by traditional whistle-blowers and other tip-offs.

This very low percentage for AI is concerning, given that the world of fraud is only getting more complex and costly. Indeed, a 2018 global survey of over 41,000 certified fraud examiners revealed that fraud accounted for $7 billion in losses. Small businesses lost almost twice as much as larger ones.

As long as trust issues in AI are carefully managed, data analytics can be a highly useful addition to any company’s anti-fraud program: it can help limit potential financial and reputational damage from fraud and misconduct, while sending a message to would-be fraudsters that the risk of getting caught is high. Our team here at KPMG is deeply experienced in this area (we design and build fraud detection systems) and would like to outline five critical considerations for enterprises interested in detecting fraud using artificial intelligence:[2]

  • Quality of data: The data has to be accurate, up-to-date, consistent, and complete, while the sources of data need to be known and understood. The fraud tools deployed should fit the task at hand and be modeled on the processes that are relevant, such as the types of transactions or the involvement of particular functions.
  • Separating fraudulent from “normal”: Given the vast amount of data generated today, it is tempting to assume that machine learning can easily detect fraud, since most anomaly detection methods work by identifying unusual patterns in an otherwise homogeneous population. The success of these techniques, however, depends on knowing what “normal” looks like, especially when fraud is rare. Fraud cases typically make up only a tiny fraction of a mixed dataset of “normal” and fraudulent instances, and this class imbalance makes it hard for standard machine learning methods to learn the minority class (i.e., fraud). Our technical personnel at KPMG tackle this problem by improving the representativeness of the data (e.g., with synthetic data generators), or by applying active learning to a carefully chosen subset of the data instead of the more customary passive learning.
  • Minimizing the cost: A common issue with fraud detection systems is the large number of false alerts on “normal” cases (false positives), which can create significant manual workload and erode confidence in the system. Too few red flags, however, are equally detrimental, if not more so, because genuine cases of fraud escape detection. A successful anti-fraud analytics process therefore has to walk a fine line between generating too many and too few red flags. The trick is to have cost estimation models in place, together with the business understanding needed to tune the fraud model so that it minimizes potential loss by balancing false positives against false negatives.
  • Long-term effectiveness: Fraud patterns evolve, so no system trained only on historical data can guarantee it will recognize schemes it has never seen before. The way to address this is to design a semi-autonomous system that maintains its effectiveness over the long term by incorporating relevant feedback in a self-improving fashion.
  • Ethical AI: Adopting an ethical fraud detection system is not merely a legal matter. A company can be fully compliant with the law and yet, by adopting a heavy-handed approach to fraud detection, undermine the trust of its employees and of the other parties within the detection scope. The critical element for overcoming this issue is transparency about the system’s intended purpose: transparent, explainable AI tools enhance trust, facilitate transformation, and comply with regulatory requirements in Luxembourg and beyond.
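To make the class-imbalance point above concrete, here is a minimal, hypothetical Python sketch of the synthetic-data idea: augmenting a rare fraud class by interpolating between existing fraud samples (a SMOTE-style approach). The function name and data layout are illustrative assumptions, not KPMG tooling; production systems would use a vetted library and far more careful validation.

```python
import random

def oversample_minority(majority, minority, target_ratio=1.0, seed=42):
    """Augment the minority (fraud) class by interpolating between
    randomly chosen pairs of minority samples, a SMOTE-style sketch.
    Each sample is a plain list of numeric features.  Returns the
    original minority samples plus synthetic ones, sized so the
    minority reaches `target_ratio` times the majority size."""
    rng = random.Random(seed)
    synthetic = list(minority)
    needed = int(len(majority) * target_ratio) - len(minority)
    for _ in range(max(needed, 0)):
        a, b = rng.choice(minority), rng.choice(minority)
        t = rng.random()  # interpolation factor in [0, 1]
        synthetic.append([x + t * (y - x) for x, y in zip(a, b)])
    return synthetic
```

Because each synthetic point lies on a line segment between two real fraud cases, it stays inside the region of feature space where fraud has actually been observed, which is what makes the minority class easier for a downstream model to learn.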
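The cost-balancing consideration can likewise be sketched in a few lines. Assuming (hypothetically) that a fraud model outputs a score per transaction, and that the business has estimated a cost per false positive (manual review effort) and a much larger cost per false negative (missed fraud), the alert threshold can be chosen to minimize total expected cost rather than raw error count:

```python
def best_threshold(scores_labels, cost_fp=1.0, cost_fn=20.0):
    """Pick the score threshold that minimizes total expected cost.
    `scores_labels` is a list of (fraud_score, is_fraud) pairs;
    transactions scoring at or above the threshold are flagged.
    The default costs are illustrative: a missed fraud (false
    negative) is assumed 20x costlier than a false alert."""
    candidates = sorted({s for s, _ in scores_labels})
    best_t, best_cost = 0.0, float("inf")
    for t in candidates:
        fp = sum(1 for s, y in scores_labels if s >= t and not y)
        fn = sum(1 for s, y in scores_labels if s < t and y)
        cost = fp * cost_fp + fn * cost_fn
        if cost < best_cost:
            best_t, best_cost = t, cost
    return best_t, best_cost
```

Raising `cost_fn` relative to `cost_fp` pushes the chosen threshold down, generating more red flags; lowering it does the opposite. This is the "fine line" in quantitative form.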

Conclusion

Cognitive fraud-detection systems can add value to enterprises’ day-to-day efforts to cope with fraud and the costs that come with it. Machines can ceaselessly and consistently monitor for fraud or suspicious transactions, managing risks and reducing manual effort. Nevertheless, fraud systems should be treated as assistants to humans and should not be left unattended to make decisions that carry high risk for a business.

Finally, as technology seeps into our everyday lives more and more, it also alters how fraud is perpetrated, and, ultimately, detected. Societies, companies, and stakeholders are experiencing a trend toward greater transparency and trust. Building an overall culture of trust amongst business stakeholders, technology providers, and regulators will be the key going forward. Business stakeholders and regulators must be confident that data analytics and artificial intelligence algorithms work as intended and must trust each other to use them properly.

[1] Based on a global survey of KPMG professionals.
[2] See KPMG Global Data & Analytics: Using Analytics Successfully to Detect Fraud, 2016.

