The war on content abuse is getting harder to win. In fact, content abuse has become so sophisticated that within four years, people might see fake information online more often than real information. It’s not just fake product reviews or social media comments, either. Even fake videos are starting to permeate the internet.

Any online marketplace or community powered by user-generated content (UGC) is vulnerable to content abuse. UGC includes comments, blog posts, marketplace listings, videos, and more; Twitter, YouTube, Pinterest, Indeed, and Airbnb are all built around it. If content abuse drives users away from these sites, the whole business falls apart. The stakes are high: consumers spend a little over five hours a day interacting with UGC, and a quarter of the search results for the world’s twenty largest brands link to user-submitted content.

What is Content Abuse?

Any time fraudsters create or share fake or malicious UGC, that’s content abuse. But it isn’t always so cut-and-dried. Content abuse takes myriad forms: spam, fake listings, catfishing, fake reviews, and toxicity. And fraudsters aren’t the only culprits. Even merchants have started paying for fake reviews on sites like Amazon.

Content abuse is dangerous partially because of how easily it spreads. Once content abuse infects an online marketplace or community, it breeds more content abuse. Honest users flee as their trust in the community erodes, and fraudsters who notice the site’s vulnerability flock there to commit yet more abuse. It’s a vicious cycle.

A Moving Target

To make matters worse, fraudsters are getting smarter. They now rely on a diverse toolbox of methods: creating fake accounts, taking over honest users’ accounts, and launching scripted bot attacks. Content abuse is harder to spot, harder to stop, and easier to overlook than ever. Here’s how fraudsters are responding to fraud fighters’ countermeasures.

When you ban words and images that fraudsters are using to spread toxicity…

…they turn to coded language to hide their intent. Many content moderation teams ban words and phrases associated with hate speech and fraudulent content. But savvy fraudsters get around these bans by using slang, codes, and emojis to refer to sexually explicit content, ethnic groups, and even specific people. Some fraudsters don’t bother making up code words at all; they simply switch to another language so the moderators, who probably don’t speak Hungarian, can’t keep up.
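To see why exact-match ban lists are so brittle, here’s a minimal sketch of one. The `BANNED_TERMS` list and `naive_filter` function are hypothetical stand-ins for a real moderation word list, not any particular vendor’s implementation:

```python
import re

# Hypothetical ban list -- a stand-in for a real moderation word list.
BANNED_TERMS = {"scam", "hate"}

def naive_filter(text: str) -> bool:
    """Return True if the text contains a banned term verbatim."""
    words = re.findall(r"[a-z]+", text.lower())
    return any(word in BANNED_TERMS for word in words)

# The exact-match filter catches the literal term...
assert naive_filter("this is a scam")
# ...but a single character swap slips straight through.
assert not naive_filter("this is a sc4m")
```

A one-character substitution, an emoji, or a word in another language defeats the filter entirely, which is exactly the workaround fraudsters exploit.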

When you keep an eye on new users to make sure they’re not fraudsters…

…they lie in wait to commit content abuse. Fraudsters used to create accounts explicitly to commit content abuse, so sites began closely monitoring new users’ activity. While many fraudsters still create fresh accounts for their schemes, smarter ones have found a workaround: they create an account, post innocuous content to lull moderators into a false sense of security, and then strike.
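The weakness of account-age heuristics is easy to see in miniature. The 30-day window and `is_flagged` function below are illustrative assumptions, not a real platform’s policy:

```python
from datetime import datetime, timedelta

# Hypothetical heuristic: give extra scrutiny to accounts
# younger than 30 days.
NEW_ACCOUNT_WINDOW = timedelta(days=30)

def is_flagged(account_created: datetime, now: datetime) -> bool:
    """Return True if the account is young enough to warrant extra review."""
    return now - account_created < NEW_ACCOUNT_WINDOW

now = datetime(2019, 6, 1)

# A freshly minted fraud account gets extra scrutiny...
assert is_flagged(datetime(2019, 5, 20), now)
# ...but a "sleeper" account that posted harmless content
# for months is treated as trusted.
assert not is_flagged(datetime(2019, 1, 1), now)
```

Any trust signal based purely on account age can be waited out, which is why sleeper accounts work.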

When you ban abusive content on the platform…

…they go off-platform and entice others to follow. Most sites that allow UGC forbid fraudulent content on the site itself; for obvious reasons, they can’t police content abuse off the site. So scammers lure people into clicking innocent-looking URLs that lead to scams, attacks, toxicity, and other fraudulent content elsewhere. Fraudsters may also start an above-board transaction with a user on the site before luring them off-platform to pay for the service.
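A domain blocklist illustrates why those innocent-looking URLs are hard to stop. The domains below are hypothetical examples, and a real system would be far more elaborate, but the core gap is the same:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of domains known to host scams.
BLOCKED_DOMAINS = {"known-scam.example"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host appears on the blocklist."""
    return urlparse(url).hostname in BLOCKED_DOMAINS

# A known scam domain is caught...
assert is_blocked("https://known-scam.example/deal")
# ...but a shortener or brand-new domain sails past the blocklist,
# even if it redirects to the very same scam page.
assert not is_blocked("https://short.example/x7q")
```

Because fraudsters can register fresh domains or hide behind URL shorteners faster than blocklists update, off-platform luring remains a moving target.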

The game of chess between fraudsters and fraud fighters is growing more complex. This is only a preview of the methods fraudsters are using to respond to fraud fighters. Ready to learn more — and to see what you can do to fight back? Download our free ebook 5 Trends Redefining Fraud!


Roxanna "Evan" Ramzipoor

Roxanna "Evan" Ramzipoor is a content marketing manager at Sift Science. Her debut novel The Ventriloquists will be released in 2019.