by Mike Cassidy
Online fraud fighter Noam Naveh had a succinct message for the group of e-commerce and payment security professionals who gathered for Signifyd’s first meetup to share and discuss fraud-prevention techniques:
Debating whether humans or machines are better suited to e-commerce fraud protection is counterproductive when it comes to foiling fraudsters who seek to rip off retailers.
Instead, Naveh told the crowd at Signifyd’s Silicon Valley headquarters, the answer lies in using the best of machine learning teamed with human intelligence to stay ahead of fraudsters and their evolving schemes.
“The question is, then,” Naveh said, “what’s the best way to combine the strengths of humans and machines to tackle something as challenging, as complicated, as fraud?”
It is the sort of question that leads to an evening of fascinating conversation and some tangible tips at a Signifyd Payment Fraud Meetup, hosted for those in the trenches of fraud prevention. Signifyd’s second meetup, scheduled for Sept. 14, will continue the e-commerce fraud-prevention conversation as Google’s Swami Vaithianathasamy talks about the evolution of machine learning in preventing e-commerce fraud.
Naveh, a PayPal veteran and fraud-detection consultant, was an obvious choice to get the discussion started. He presented a comprehensive look at the state of the art in fraud prevention. In fact, he has been preaching the human-plus-machine gospel for years. (See related Q&A with Naveh.)
Humans are good at detecting new problems and new methods deployed by fraudsters attempting to stay one step ahead of online security, Naveh told the 50 or so anti-fraud professionals who stopped by for sandwiches, cold beer and networking. They can solve problems that they’ve never seen solved before. And if they can’t solve a problem, they can tell you what they need in order to be able to solve it, he explained.
Machines, for their part, can study transactions at tremendous scale — a scale at which it would be impossible for humans to work. They are not influenced by biases when working to determine whether an order is legitimate or fraudulent. Machines never sleep. Machines never forget.
E-commerce Fraud Constantly Evolves
Unlike using machine learning for everyday tasks, such as matching consumers with products online or recommending a list of movies based on past selections, using machines to spot fraudulent online orders comes with an extra degree of difficulty — namely the fraudsters seeking to buy goods and services with the stolen accounts and credit card numbers of others.
“Fraudsters are humans. They are very ingenious humans,” Naveh told the crowd at Signifyd. “So they adapt to whatever it is you do. Not just adapt. They try to beat you. And so now there is sort of an arms race between whatever it is that technology is doing and the models are doing — and whatever it is the fraudsters are doing in response to it.”
And make no mistake: It’s an expensive arms race. In the first quarter of 2017, Signifyd and PYMNTS joined to produce the “Q1 2017 Global Fraud Index,” which found nearly $50 billion in potential fraud in the eight retail verticals the firms studied in the United States alone.
Naveh Offered Three Keys to E-commerce Fraud Prevention
That said, Naveh said there are smart ways for merchants to protect themselves from e-commerce fraud. In particular, he suggested keeping three things in mind.
1) Hire a small team of highly analytical fraud spotters: Many e-commerce concerns rely on front-line agents who examine a rapid-fire stream of transactions that have been flagged by automated models. They must determine whether the orders actually are fraudulent or whether there is a reasonable explanation for why the automated model escalated the order.
The agents must act fast, both because of the sheer volume of transactions and to ensure an efficient experience for customers. Approving an order that actually was fraudulent ultimately leads to a chargeback, meaning the retailer must repay the cost of the product and other fees.
When in doubt, the front-line agents deny the transaction — and every time a reviewer incorrectly decides that a legitimate order is fraudulent, the retailer loses a sale.
Instead of rapid response, Naveh said, why not hire a small team of data-embracing analysts who have the time to study the problem in a larger context? Those fraud-prevention analysts have the time and expertise, he said, to research why the automated model kicked out a given transaction and to consider why the model couldn’t make a solid determination on its own.
They look for patterns in the escalated orders. Is there some common theme or problem?
Best of all, the analysts can feed their solid conclusions into the machine-learning model to give it new information that reflects a changing world. Maybe a new variable needs to be added. Maybe a new type of data is needed. Maybe the model is misinterpreting the data it is already considering.
In the end, the improved process means retailers can significantly reduce both the number of declined orders and the number of chargebacks caused by fraudulent orders.
2) Seat the analysts near those who develop the models and the infrastructure needed to run them: Analysts and developers need to speak the same language. They don’t need to become deep experts in each other’s work, but they need a working understanding of it. The analysts need an understanding of models, variables and the vulnerability of models, for instance.
The model-makers and infrastructure engineers need access to the analysts to soak up their expertise and their suggestions on how models must change to address changes in the world.
3) Validate the changes to your model incessantly: Make sure the human intuition and intelligence added to the automated system improve the model’s performance and that you can prove that statistically. With all due respect to humans, Naveh said, they can miss the mark. They sometimes rely on hunches or make mistakes or misinterpret events. Performance data will tell you whether they were right or wrong when they made a change. Pay attention to the data.
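In practice, validating a change can be as simple as comparing an outcome rate, such as chargebacks per approved order, before and after the change and checking that the improvement isn’t noise. A minimal sketch of that idea using a two-proportion z-test — the counts below are hypothetical, and this is one generic way to do the check, not any specific vendor’s method:

```python
import math

def two_proportion_z(bad_before, n_before, bad_after, n_after):
    """One-sided two-proportion z-test: did the bad-outcome rate
    (e.g., chargebacks per approved order) drop after the change?"""
    p_before = bad_before / n_before
    p_after = bad_after / n_after
    pooled = (bad_before + bad_after) / (n_before + n_after)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_before + 1 / n_after))
    z = (p_before - p_after) / se
    # One-sided p-value for "the rate after is lower than before"
    p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return z, p_value

# Hypothetical: 120 chargebacks in 50,000 orders before the change,
# 85 chargebacks in 50,000 orders after it.
z, p = two_proportion_z(120, 50_000, 85, 50_000)
significant = p < 0.05  # only keep the change if the data backs it up
```

A hunch that survives this kind of check gets promoted into the model; one that doesn’t gets rolled back — which is exactly the “pay attention to the data” discipline Naveh describes.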
The three fraud-fighting steps above, Naveh added, need to be deployed under the principle of UGF. UGF, Naveh explained, stands for understand, generalize and formalize. If you look back at the work your analysts are doing, the funny-sounding abbreviation makes perfect sense.
Understand is what the analysts do when they pick apart the reasons a transaction was escalated to them. It’s what they do as they wonder why the machine wasn’t able to make a determination of the order’s legitimacy on its own.
The analysts generalize when they look for patterns in the transactions that are referred for review. Is there a phenomenon common to the escalated transactions? Is the data bad? Is the machine misreading the data? Is more, or different, data needed?
And finally, they formalize when they use their conclusions to improve the performance of the models they use. The analysts, sometimes with the help of the developers, need to take their conclusions and translate them into something that can be programmed and applied to current models.
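The understand-generalize-formalize loop can be sketched in a few lines of code. Everything here — the order fields, the thresholds, the feature the loop produces — is hypothetical, meant only to illustrate the shape of the process, not Signifyd’s or Naveh’s actual pipeline:

```python
# Hypothetical escalated orders with two illustrative attributes:
# the age of the buyer's email account and whether shipping and
# billing addresses match.
escalated = [
    {"id": 1, "email_age_days": 0, "ship_billing_match": False},
    {"id": 2, "email_age_days": 1, "ship_billing_match": False},
    {"id": 3, "email_age_days": 400, "ship_billing_match": True},
]

def understand(order):
    """Analyst picks apart one escalated order and records findings."""
    return {
        "new_email": order["email_age_days"] < 7,
        "address_mismatch": not order["ship_billing_match"],
    }

def generalize(findings):
    """Look for a pattern common to the escalated orders: what share
    combine a brand-new email with an address mismatch?"""
    hits = [f for f in findings if f["new_email"] and f["address_mismatch"]]
    return len(hits) / len(findings)

def formalize(pattern_share, threshold=0.5):
    """If the pattern is common enough, translate it into a new
    feature function the model can consume; otherwise do nothing."""
    if pattern_share >= threshold:
        return lambda order: int(
            order["email_age_days"] < 7 and not order["ship_billing_match"]
        )
    return None

findings = [understand(o) for o in escalated]
share = generalize(findings)    # 2 of the 3 orders show the pattern
new_feature = formalize(share)  # feature function to add to the model
```

The payoff of formalize is that an analyst’s one-off insight becomes a programmable signal — the `new_feature` function here — that the model applies to every future order automatically.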
Finally, Naveh said that enterprises that are struggling with fraud detection don’t have to go it alone. He’s made a business of counseling those looking for answers. And, he said, there are vendors, such as Signifyd, who have built the tools and programs to prevent fraud and to insulate digital retailers from losses due to fraud.
“I don’t fix my own car,” is the way Naveh put it. “I go to the experts.”
Photo by iStock.