
Letting new users in




Growing your community is probably one of the hardest things to do. It doesn’t matter if you’re new or have been around for a while – attracting, retaining and engaging good users who create quality interactions is a must if you want your community to thrive. The last thing you want in this situation is an imposed limit on your ability to accept anyone who wants in. Obviously, this appetite for user acquisition creates an opening for abusive users, specifically spammers and spam-bots. You don’t want to find yourself battling hordes of bad actors either, and if you’ve been hit in the past you’re probably already looking for a good preventive solution. So what have we found that publishers do to control new users?

  1. Approve each new user: yes, it’s as painful as it sounds. Some small publishers, and some that run communities around sensitive topics, choose to let users in only after they are referred by a known participant or pass a manual review. One publisher walked us through his elaborate process of challenging newcomers with questions meant to test whether they know something about the topic of his community. The lighter version of this is, of course, CAPTCHA and other bot-deterring techniques. Unfortunately, these prove to be strong user acquisition killers as well.
  2. Approve each new action: some communities will let you sign up but then limit you from certain actions until you gain some reputation (or karma, or PeopleRank). We think this is a great idea, with some downsides we covered in an earlier post about reputation systems. When properly designed, a reputation system or a “gamified” onboarding process can be a great way to introduce good users to your community – but if it’s designed mainly to keep abusive users from causing harm, your good users may suffer. (For a sense of what such a gate looks like, see the sketch after this list.)
  3. Community policing: some publishers wait for their users to moderate bad users out by flagging and complaining. While this is useful against obviously abusive users and cheaters, and is a rather low-cost method, community policing has its downsides: it misses highly targeted attacks on specific users and attacks that happen behind the scenes; it can quickly become a blame game (we’ve seen cases where sock puppets were used to support one side or the other); and it lets offensive behaviors grow into a problem before they get flagged. If someone is harassing your users, chip dumping or using a fake identity, you want to know about it in advance – before the damage is done.
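
To make the second approach a bit more concrete, here is a minimal sketch of an action-level reputation gate. The action names, thresholds and reputation field are purely illustrative assumptions, not any particular publisher’s real configuration.

```python
from dataclasses import dataclass

# Minimum reputation a user must earn before each action is allowed.
# These numbers are illustrative assumptions, not recommended values.
ACTION_THRESHOLDS = {
    "post_comment": 0,            # anyone can comment
    "post_link": 10,              # links are a common spam vector
    "send_private_message": 50,
    "flag_content": 100,          # flagging power is earned, not given
}

@dataclass
class User:
    name: str
    reputation: int = 0

def can_perform(user: User, action: str) -> bool:
    """Return True if the user has earned enough reputation for the action."""
    required = ACTION_THRESHOLDS.get(action)
    if required is None:
        return False  # unknown actions are denied by default
    return user.reputation >= required

newcomer = User("new_user", reputation=5)
print(can_perform(newcomer, "post_comment"))  # True
print(can_perform(newcomer, "post_link"))     # False until reputation reaches 10
```

The upside of gating actions rather than accounts is that signup stays frictionless; the downside, as noted above, is that legitimate newcomers bump into the same thresholds that are meant to slow down abusers.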

No single method is perfect, and most websites, particularly the more sophisticated community and gaming sites, employ a combination of them. We at Signifyd believe that being able to simply launch a service and accept users without worrying about their identity or behavior is an asset, and that is what we’re building. Our focus on detecting multiple accounts owned by a single user, fake identities and abusive behavior augments onboarding mechanisms and lets them focus on letting good users in, instead of keeping bad users out.
