Like Farming and Clickbait
Facebook uses algorithms to select what content goes in your newsfeed. These algorithms take a variety of signals into account, such as your relationship to the poster, your interests, and even your location. One of the most important factors, however, is the number of likes or shares a post has received. If a post has been liked or shared by a large number of people, it is more likely to show up in other people's feeds. The assumption is that because lots of people are liking or sharing the post, it must contain popular or desirable content. Usually this is a good thing: it can help you quickly get to content that your friends and family have already vetted and that you are likely to enjoy and want to see. Unfortunately, scammers have developed a variety of mechanisms to take advantage of this functionality.
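The ranking idea can be sketched in a few lines of code. This is a toy illustration, not Facebook's actual algorithm: the signal names and weights here are invented for the example, but they show why an artificially inflated like count can push a stranger's post ahead of a close friend's.

```python
# Toy feed-ranking sketch (invented weights, NOT Facebook's real algorithm).
# Engagement signals (likes, shares) are combined with personal signals
# (closeness to the poster, interest match) into a single score.

def rank_score(likes, shares, friend_closeness, interest_match):
    """Return a feed-ranking score; higher scores surface first."""
    engagement = 1.0 * likes + 2.0 * shares          # shares weighted higher
    personal = 5.0 * friend_closeness + 3.0 * interest_match
    return engagement + personal

# A like-farmed post with thousands of farmed likes outranks an ordinary
# post from a close friend purely on engagement volume.
farmed = rank_score(likes=5000, shares=800, friend_closeness=0.0, interest_match=0.1)
normal = rank_score(likes=12, shares=1, friend_closeness=1.0, interest_match=0.9)
print(farmed > normal)  # True
```

Any weighting that lets raw engagement dominate the personal signals behaves this way, which is exactly what like farming exploits.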
Scammers take advantage of their victims by getting them to view content related to their scams or to download viruses or other malware. Just as with spam e-mail, only a very small percentage of recipients need to click a malicious link or run a malicious attachment for the scammer to make money. As a result, scammers have an incentive to get as many people as possible to view their content or click their links. On a social media platform, that means getting their posts to show up in as many newsfeeds as possible. Because a post's popularity can drive how frequently it shows up in people's feeds, scammers look for ways to artificially inflate the popularity of their posts. One such mechanism is so-called "like farming."
Like farming begins when a scammer posts an article or story that is seemingly innocuous and designed to get people to like or share it. Often, these posts appeal to the emotions or political views of the readers.
"This poor little girl with cancer lost her hair to chemotherapy: 'like' this post to let her know she's still beautiful!" or "This new government policy is outrageous: 'like' this post if you're outraged, too!" Another approach is to try to convince readers that they can win a valuable prize, such as the latest smartphone or even plain old cash, by liking the story. Any story offering to enter you in a contest or give you something for simply liking or sharing is highly suspicious and unlikely to be legitimate. Stories promising that "If I get X number of likes, then something amazing will happen for me" or "I was challenged to get X number of likes" are also highly likely to be like-farming schemes.
Once the scammer has convinced enough people to like or share the story, the scammer changes out the content. The post is edited such that it no longer contains the emotional story, puzzle, or contest but instead shows marketing material for the scam or other undesirable content. In some cases, the scammer will sell the rights to edit the post to other scammers on a black market. A post with a high popularity rating that can be edited at will is a valuable commodity to those looking to do you harm. Either way, you and your friends are now seeing questionable and even dangerous content thanks to the farmed likes.
Other, less malicious forms of abuse take advantage of algorithmic post selection as well. Many organizations write headlines that are incomplete or tantalizing in order to encourage users to click on them, like "You won't believe what happens next." These "clickbait" headlines force users to actually click through to the story in order to learn a key detail or find the answer to a question. When the story is clicked, the social media platform counts that click as a vote toward the story's popularity; the user has unwittingly bumped up the post's popularity rating and made it more likely that the content will be seen by more members of the network. This type of abuse isn't necessarily perpetrated by actual scammers; often it is simply organizations looking to improve their online popularity ratings and the effectiveness of their advertising and marketing material. However, scammers use these techniques as well, and falling for clickbait can lead you to malware and other scam sites.
How can one avoid falling victim to like farming and clickbait? It starts with being better informed. Look at the source of a post: is it a reputable news organization or an unknown site? User behavior is also an important factor. Avoid liking or sharing suspicious posts and stories; if something sounds too good to be true, it probably is. Most social network platforms, including Facebook, provide tools that allow users to review their activity log and see what they have liked or shared. Users can take advantage of these tools to look back at their history and identify content that has changed or that they now recognize as like-farming or clickbait material. Reporting or un-liking suspicious content helps mitigate this type of abuse. In mid-2016, Facebook announced a new algorithm to reduce the amount of clickbait in users' newsfeeds. Humans scored thousands of headlines on their likelihood of being clickbait, and these scores were then used to train the algorithm. Using it, Facebook can now automatically classify headlines by their likelihood of being clickbait and filter (or penalize) those with high scores.
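Facebook's trained classifier is far more sophisticated, but the filtering idea can be sketched with a toy scorer. Everything here (the phrase list, the scoring rule, and the threshold) is an invented assumption for illustration; the real system learns its signals from the human-labeled headlines described above rather than from a hand-written list.

```python
# Toy clickbait filter (invented phrase list and threshold, for illustration
# only). A real classifier would be trained on human-scored headlines.

CLICKBAIT_PHRASES = [
    "you won't believe", "what happens next", "this one trick",
    "will shock you", "find out", "the reason why",
]

def clickbait_score(headline):
    """Fraction of known clickbait phrases appearing in the headline."""
    text = headline.lower()
    hits = sum(phrase in text for phrase in CLICKBAIT_PHRASES)
    return hits / len(CLICKBAIT_PHRASES)

def is_clickbait(headline, threshold=0.15):
    """Flag headlines whose score exceeds the (invented) threshold."""
    return clickbait_score(headline) >= threshold

print(is_clickbait("You won't believe what happens next"))  # True
print(is_clickbait("City council approves new budget"))     # False
```

Headlines flagged this way could then be ranked lower in the feed, which is the "filter (or penalize)" step the platform applies.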
1. Have you personally encountered like farming? Have you reviewed your activity log? Have you ever liked a post that has turned bad?
2. You have now seen some techniques for identifying and avoiding scams. How can the social media–using general public be better educated to avoid online scams such as like farming and clickbait?
3. Who is responsible for this type of malicious activity? Is it simply the fault of the scammers abusing the system? Do users and platform providers have a responsibility to reduce the risk of abuse? If so, how might this be accomplished?