Machine learning has emerged as the best tool to fight fraud at scale, and merchants with the right instincts are increasingly turning to it for solutions. However, too many merchants are looking to machine learning as a panacea for fraud, and some vendors are irresponsibly fueling that belief, advocating a total replacement of seasoned fraud experts in favor of the machine.
The truth is, when machine learning is naively and dogmatically applied, it will not only fall short of its potential but is also likely to perform much worse than traditional fraud prevention techniques.
The below article, written by co-founder Michael Liberty, originally appeared in The Next Web on February 17, 2016.
Using the past to predict the future
Machine learning, in simple terms, is the practice of using algorithms to learn a “model” from past data and applying that model to make predictions about future events. It implicitly assumes that the patterns of the past will be repeated in the future.
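The idea of learning a model from past data can be sketched very simply. The following toy example (all feature names and orders are invented for illustration, and real fraud models are far more sophisticated) learns per-feature fraud rates from labeled historical orders, then scores new orders against them:

```python
# A minimal sketch of "learning from the past": estimate the fraud rate
# associated with each observed feature value in historical orders, then
# score new orders by averaging those learned rates.

from collections import defaultdict

def train(orders):
    """Learn per-feature-value fraud rates from labeled past orders."""
    counts = defaultdict(lambda: [0, 0])  # value -> [fraud count, total count]
    for features, is_fraud in orders:
        for value in features:
            counts[value][0] += int(is_fraud)
            counts[value][1] += 1
    return {v: fraud / total for v, (fraud, total) in counts.items()}

def predict(model, features, default=0.0):
    """Score a new order by averaging the learned rates of its features."""
    rates = [model.get(v, default) for v in features]
    return sum(rates) / len(rates)

# Past orders: (feature values, was it fraud?)
history = [
    (["expedited", "mismatch_address"], True),
    (["expedited", "match_address"], False),
    (["standard", "match_address"], False),
    (["expedited", "mismatch_address"], True),
]

model = train(history)
risky = predict(model, ["expedited", "mismatch_address"])
safe = predict(model, ["standard", "match_address"])
print(risky > safe)  # prints True: the familiar fraud pattern scores higher
```

The model ranks orders resembling past fraud above orders resembling past legitimate purchases, which is exactly the “past predicts the future” assumption the rest of this article examines.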
A common application is purchase recommendations, like those seen on Amazon, which are provided by models that have learned to predict what customers might purchase based on what similar buyers have purchased. The effectiveness of machine learning in this and other contexts, such as web search, provides strong evidence that it can be a valuable tool in the data-rich field of fraud prevention.
However, unlike the customers that browse Amazon’s purchase recommendations, fraudsters actively avoid being predictable. They are constantly trying to evade detection efforts that use their previous behavior as a point of reference.
In this scenario, think of a machine-learned model as an intricate battle plan developed from millions of past battles. This plan helps you protect against tactics the enemy has already employed, but what if they introduce a new tactic?
An ever-changing field of battle
The danger of the “past predicts the future” assumption should be obvious. Nevertheless, some fraud prevention companies pitch machine learning as a silver-bullet solution that requires little fraud expertise or human intervention.
Let’s make the exploitable weaknesses of machine learning a bit more concrete. In one well-known experiment, a model trained on massive image databases using cutting-edge algorithms declared, with a high degree of confidence, that an image which to a human eye is meaningless noise was a King Penguin. With a similarly high degree of confidence, it labeled another equally meaningless image an armadillo.
Researchers tricked the models in question by starting with a random image and making small alterations, observing how the model responded and keeping each change that nudged it toward the wrong answer. They repeated this process until the model classified the result with 99 percent confidence.
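That probing loop can be illustrated with a toy sketch. This is not the researchers’ actual method: the “classifier” below is a deliberately simple stand-in, and the point is only the shape of the attack, which is to keep any small random change that raises the model’s confidence:

```python
# A toy illustration of probing a model: start from random noise and keep
# any small random perturbation that increases the model's confidence,
# until confidence passes 99 percent. The "model" is a simple stand-in.

import random

def model_confidence(image):
    """A toy 'penguin' classifier: confidence grows as the input
    approaches an arbitrary internal pattern the model has learned."""
    pattern = [0.2, 0.8, 0.5, 0.9, 0.1]
    error = sum((p - x) ** 2 for p, x in zip(pattern, image))
    return 1.0 / (1.0 + error)

random.seed(0)
image = [random.random() for _ in range(5)]  # start from random noise

while model_confidence(image) < 0.99:
    i = random.randrange(len(image))
    candidate = image[:]
    candidate[i] += random.uniform(-0.05, 0.05)
    if model_confidence(candidate) > model_confidence(image):
        image = candidate  # keep only changes that fool the model more

print(model_confidence(image) >= 0.99)  # prints True
```

Notice that the attacker never needs to understand the model internally; observing its responses is enough, which is the same position a fraudster is in when probing a merchant’s defenses.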
Today’s fraudsters are equally sophisticated. They will endlessly refine their techniques, and a data scientist carrying the “perfect” model into battle will quickly find themselves in the unenviable position of having brought a knife to a gunfight.
Without the story and context, the model suffers
The primary goal of machine learning is to develop models whose predictive performance in a test or live environment matches what they achieved during training. A model that does this well is “robust.” Simply put, the best way to ensure a model’s robustness is for a domain expert to examine the coherence of the model’s “story,” represented by the variables it’s built upon.
When fraudsters shift away from past patterns that they believe the model has identified as fraudulent, a good model won’t instantly fall apart. Preventing decay in the model’s predictions depends on making sure that the implicit story behind the model and its variables is stable, i.e. that it meshes with a low-level understanding of what the fraudsters are doing.
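The robustness idea above suggests a simple operational check: compare the model’s performance on the data it learned from with its performance on fresh orders. A widening gap signals decay. A minimal sketch, with an invented toy model and invented data:

```python
# A minimal sketch of detecting model decay: compare accuracy on training
# data with accuracy on fresh, live orders. A large gap suggests the
# model's "story" no longer matches what fraudsters are doing.

def accuracy(model, orders):
    """Fraction of orders the model labels correctly."""
    correct = sum(model(features) == label for features, label in orders)
    return correct / len(orders)

def model(features):
    # A toy model that memorized one past fraud pattern.
    return "expedited" in features and "reship" in features

training = [(["expedited", "reship"], True), (["standard"], False)]
live = [(["expedited", "gift_card"], True), (["standard"], False)]  # fraud shifted

train_acc = accuracy(model, training)
live_acc = accuracy(model, live)
print(train_acc, live_acc)  # prints 1.0 0.5: the gap flags a non-robust model
```

In practice this comparison runs continuously, and it is the domain expert who interprets the gap and decides which new variables the model needs.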
If a model is trained to find correlations that are real but fundamentally shallow, it is useless in the face of an opponent who can quickly adapt.
Consider, for example, a shallow declaration often made in fraud prevention circles: Delaware is the highest-fraud state in America. Yes, that Delaware. A naive modeler, without exploring and understanding the underlying cause of the claim, would take this insight and create a “fraud rate by state” input to their model, assuming the model can take it from there and flag orders from Delaware as risky.
That model may perform pretty well for a while, but once a fraudster switches states, the model becomes useless.
After all, the underlying story, identified through expert analysis, is that Delaware is a hotspot for reshipping. Had a domain expert contributed to the model training, he or she would have understood the context and recommended better, more stable variables indicative of the reshipping issue, instead of flagging all orders from Delaware as fraudulent and frustrating customers.
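The difference between the naive variable and the expert’s variable can be sketched with two hypothetical scoring rules (all field names, weights, and orders below are invented for illustration). One keys on the state itself; the other keys on signals of reshipping, the underlying story:

```python
# A hypothetical sketch of a brittle feature versus a stable one. The
# naive rule learned "Delaware means fraud"; the expert rule encodes the
# actual story: reshipping, visible as a freight-forwarder address and a
# billing/shipping mismatch. Field names and weights are invented.

def naive_score(order):
    # Brittle correlation: Delaware orders were often fraudulent.
    return 1.0 if order["ship_state"] == "DE" else 0.0

def expert_score(order):
    # Stable story: signals of reshipping, wherever it happens.
    score = 0.0
    if order["freight_forwarder"]:
        score += 0.6
    if order["bill_state"] != order["ship_state"]:
        score += 0.4
    return score

# The same reshipping scheme, before and after the fraudster moves states.
old_scheme = {"ship_state": "DE", "bill_state": "CA", "freight_forwarder": True}
new_scheme = {"ship_state": "OR", "bill_state": "CA", "freight_forwarder": True}

print(naive_score(old_scheme), naive_score(new_scheme))    # prints 1.0 0.0
print(expert_score(old_scheme), expert_score(new_scheme))  # prints 1.0 1.0
```

The naive score collapses the moment the fraudster relocates, while the expert’s variables keep firing because they track the behavior rather than its current location.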
Humans are slow, machines are dumb
Today, the best way to fight fraud is a marriage of complementary skills between man and machine. Humans are smart, but slow; machines are incredibly fast, but simple.
Solving the problem of fraud requires that we draw on all available resources. Domain experts bring the “story” of the fraud, their intuition, and an understanding of context. They provide teaching and guidance on what data to feed the model initially, as well as what new data to incorporate as fraud evolves. Once the general pattern is identified, a machine learning model can make the detection more precise, accurate, and consistent. The model can deliver exact probability estimates and confidence levels. Unlike humans, models are not emotional or error-prone. And, of course, models are infinitely more scalable than a human.
George Patton once said, “Wars may be fought with weapons, but they are won by men.” Machine learning has emerged as an indispensable weapon against fraud, but it will never supplant the need for human ingenuity in the fight against fraudsters.