Whether it’s photos, GIFs, memes, or screenshots … people are drawn to what’s visual. In fact, images are one of the most popular types of user-generated content online. But when the images are fraudulent or abusive? They can erode user trust and degrade experiences across marketplaces, dating sites, forums, social networks, and communities.
That’s why we’re happy to announce that teams can now view images in the Sift Science Console, along with all the other signals we use to pinpoint fraudsters and bad content. While many customers have already been sending us user-submitted images, until now this data was working behind the scenes to help our machine learning models catch fraud. This update – which puts images front and center in the console – makes content moderation faster and easier than ever!
They say a picture is worth a thousand words…
And we agree. In your dashboard, you can now view all the images that you send us as part of content creation events. Just click each image to expand it and see it in full resolution in a new tab. Multiple images show up in a carousel that you can easily browse through. Any accompanying captions will also appear.
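For context, images reach the console through the same content creation events you may already be sending. The sketch below shows roughly what such a payload can look like; it follows the general shape of Sift’s Events API (a `$create_content` event with an `$images` list), but treat the exact field names, the `build_create_content_event` helper, and all values as illustrative assumptions and confirm them against the current API reference before use.

```python
import json

def build_create_content_event(api_key, user_id, content_id, images):
    """Sketch of a $create_content event payload that attaches
    user-submitted images and their captions.

    Field names are assumptions modeled on Sift's Events API docs,
    not a verified request body.
    """
    return {
        "$type": "$create_content",
        "$api_key": api_key,
        "$user_id": user_id,
        "$content_id": content_id,
        "$post": {
            # Each image carries a link plus an optional caption, which is
            # what would surface alongside the image in the console.
            "$images": [
                {"$link": img["link"], "$description": img.get("caption", "")}
                for img in images
            ],
        },
    }

payload = build_create_content_event(
    api_key="YOUR_API_KEY",          # placeholder credential
    user_id="user_123",              # hypothetical IDs for illustration
    content_id="post_456",
    images=[{"link": "https://example.com/photo.jpg", "caption": "Front view"}],
)
print(json.dumps(payload, indent=2))
```

Sending the caption with each image means it can appear next to the expanded image during review, so moderators see the same context the end user provided.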
This new feature allows you to view even more context about fraudulent content, so you can make faster decisions. You can also review user risk and content risk – now including images – all in one place, and take action on users and content directly.
“I have a lot more autonomy to identify problems,” says Marissa Gordon, Director of Customer Success at Universe (a Ticketmaster company), who is already using the new feature. “When I do need to review specific behavior, I can quickly see the full details of the user and their content, clear reasons why the model gave them a high or low score, and visual links to other users that the original bad user is connected to.”