
Human + machine is the answer for ecommerce fraud protection




There was a moment toward the end of NYPAY’s panel discussion on ecommerce fraud last week that crystallized one of the fundamental questions surrounding the move to automation, a move that is picking up momentum by the day.

As the hour-and-a-half session was winding down, a member of the audience at Deloitte headquarters in New York took a microphone and said: “I have a question about how AI, machine learning, is going to impact ecommerce on the negative side. Is there potential for AI to actually cause a threat and encourage more fraud?”

Panelist Prakas Santhana, Deloitte Consulting’s managing director of payments integrity and cyber risk services, had a quick and direct answer: “Actually, yes.”

Far from being a buzz kill for the crowd of risk and payment professionals and the panel of fraud experts, the question lit the room up. And it ultimately prompted a debate of the digital-age-old question: Which is better at preventing fraud? Human? Or machine?

First, the story that got the room buzzing. Santhana took the risk and payment professionals through a new scenario that Deloitte has been tracking. Sophisticated fraud rings are turning learning machines into allies.

Fraudsters are prompting false positives

“What they’re doing is something very analogous to denial of service attacks,” he explained. The fraudsters have come up with a way to generate legitimate orders that look very much like fraudulent orders, which, when placed in large numbers, can lead to false positives — or the improper cancellation of legitimate orders.

“So, over a short period you’ll see a burst of legitimate transactions come through, but they’ll all look like fraudulent orders to the machine-learning algorithm. Meanwhile, the call center has investigated, and of course, they’re all legitimate transactions. So, what’s the next thing that happens?”

The next thing that happens, Santhana said, is customer support calls the fraud manager and tells him or her that the fraud models are not working. They need to be recalibrated so good orders are no longer denied.

“So, they pull the model down for a short period of time and that’s when the attack happens,” Santhana explained.
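To make the pattern concrete, here is a minimal, purely hypothetical sketch of the dynamic Santhana described. It is not Deloitte’s analysis or any vendor’s actual model; the toy risk_score rule, the decoy_order generator and the threshold are invented for illustration. The point is simply how a burst of legitimate orders engineered to look risky can inflate the false-positive rate and create pressure to pull the model offline.

```python
import random

def risk_score(order):
    # Toy rule: flag orders that combine a high amount, a brand-new account
    # and a shipping/billing mismatch -- signals a fraudster can deliberately
    # reproduce with otherwise legitimate purchases.
    score = 0.0
    if order["amount"] > 500:
        score += 0.4
    if order["account_age_days"] < 2:
        score += 0.3
    if order["ship_bill_mismatch"]:
        score += 0.3
    return score

def normal_order():
    # Typical legitimate traffic: modest amounts, established accounts.
    return {"amount": random.uniform(20, 300),
            "account_age_days": random.randint(30, 2000),
            "ship_bill_mismatch": random.random() < 0.05,
            "legitimate": True}

def decoy_order():
    # A legitimate purchase deliberately shaped to trip every risk signal.
    return {"amount": random.uniform(600, 900),
            "account_age_days": 0,
            "ship_bill_mismatch": True,
            "legitimate": True}

def false_positive_rate(orders, threshold=0.6):
    # Share of legitimate orders the toy model would decline.
    legit = [o for o in orders if o["legitimate"]]
    declined = [o for o in legit if risk_score(o) >= threshold]
    return len(declined) / len(legit)

random.seed(7)
baseline = [normal_order() for _ in range(1000)]
burst = baseline + [decoy_order() for _ in range(200)]

print(f"baseline false-positive rate: {false_positive_rate(baseline):.1%}")
print(f"during decoy burst:           {false_positive_rate(burst):.1%}")
# The spike in declined-but-legitimate orders is what triggers the call to
# "recalibrate" -- and the window while the model is offline is the real attack.
```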

The story brought audible “wows” and muttering from the crowd contemplating the evil genius of it all.

Panelist Stefan Nandzik, Signifyd’s vice president of marketing, picked up the thread, explaining that, to the questioner’s point, the very fact that fraudsters understand machine learning is being used to detect fraud is exploitable in itself.

“Machine learning, by definition, learns automatically,” said Nandzik of Signifyd, which provides guaranteed fraud protection, a model that promises to reimburse merchants for any approved orders that turn out to be fraudulent. He cited the example of the Microsoft chatbot Tay, which Twitter users trained to become a racist, hateful, machine-learning conversationalist.

“It wasn’t hacked,” Nandzik said. “It was just retrained. The same kind of practice can happen in fraud prevention as well. All of that is exploitable by the fraud side because they now understand really well how these decisions are made and how you could nudge the machine to think differently about it.”

Humans are a key element in fraud protection

Which is where the humans come in. Nandzik seconded a comment earlier in the evening that focused on the role of a trained fraud analyst, someone who has the experience and intuition to anticipate a change in the way fraudsters are attacking retailers and others.

Colin Sims, the CFO of Forter, another fraud protection company, said he wanted to make it clear that it isn’t just machine learning models that can be exploited. Humans can be fooled, he said, as can fraud protection based on static rules-based systems.

“Certainly, anything is exploitable by someone that is sophisticated enough,” he said.

He got no argument from Santhana of Deloitte or Nandzik. Rather, their point was that machine learning is not a silver bullet either.

In fact, the answer is to combine the best of humans — their intuition, instinct, intelligence and domain expertise — with the best of machines — their ability to operate at incredible speed and scale, all the while learning from every interaction. In other words, when a chatbot starts mouthing off, there should be a human there who can anticipate that mischief might happen and understand how to mitigate the outside factors that are training the machine in ways contrary to its purpose.

Or in fraud prevention, it means having humans who are able to step in when a machine is baffled by a scenario it has not encountered before. The human is there to help the machine get the most out of itself as it constantly improves. The machine is there to amplify the knowledge, the skill and intelligence of the human expert.

In the end, it’s a beautiful relationship. All brought to light by one question at the end of a long night.


Contact Mike Cassidy at [email protected]; follow him on Twitter at @mikecassidy.

Mike Cassidy

Mike is the head of storytelling at Signifyd. A former journalist and a retail geek, he covers ecommerce and the way technology is transforming digital commerce. Contact him at [email protected].