By now, it is safe to say that most internet users know that bots are wreaking havoc online in a bunch of different ways. What they may not realize, though, is what those bots are actually trying to accomplish. A recent survey we conducted revealed that 49% of respondents thought bots came from companies looking to promote their products on e-commerce sites. That's not exactly the truth, though. In reality, companies usually do not know that they are being targeted by bots.
People do not know that there are millions of bots out there clicking on billions of ads, nor do they know these bots are remote-controlled by networks of cybercriminals around the world (see: our takedown of the giant international botnet, 3ve). Most people do not realize that there's a chance a bot is living on their computer, using their CPU power to commit fraud.
This goes far beyond a random person hacking into your accounts to buy expensive TVs. Bot operators are making thousands, even millions, of dollars from ad fraud. While there are a variety of ways to commit ad fraud, there are two main ways to monetize it: device-driven fraud and content-driven fraud.
Device-driven fraud is the use of computers, servers, phones, and other devices by cybercriminals to counterfeit real ad impressions. In this case, bots impersonate real internet users, either by feigning interest in advertisements being served on real websites or by impersonating a device allegedly owned by a human. The technology cybercriminals use to impersonate a real human is complex and, at times, ingenious. These types of bots are classified as Sophisticated Invalid Traffic (SIVT) because of how closely they mimic human behavior. The way cybercriminals make money from them, however, is straightforward: by sending more traffic to publisher sites or devices and getting paid for those "eyeballs."
The practice of sending traffic to publisher sites is traditionally referred to as “user acquisition” or “buying for tonnage.” When publishers decide they need to drive more eyeballs to match an advertiser’s campaign needs, those publishers will reach out to a third-party traffic provider to generate those clicks. The third-party traffic providers at times go to fourth- and fifth-party aggregators to juice up their traffic. It’s in these fourth- and fifth-party aggregators that botnets can hide. Legitimate user acquisition companies may unknowingly end up mixing fraudulent bot traffic with real humans because bots can disguise themselves as humans so well.
When it comes to devices, one holds the highest CPM and therefore the biggest opportunity for botnets: Connected TV (CTV). A bot poses as a CTV device and puts out a call for advertisers. The bot sells its "impressions," but the ads are never seen by humans. This technique is called device impersonation, and it can be used with any internet-connected device. Again, it is very hard for advertisers to spot the difference between a genuine human-owned device and a bot because these bots are so sophisticated.
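To make the mechanics concrete, here is a minimal sketch of what a spoofed CTV ad request might look like. The field names loosely follow the OpenRTB convention (where device type 3 denotes Connected TV); the function name, domain, and every value are invented for illustration, not taken from any real fraud operation.

```python
# Illustrative sketch of device impersonation: a bot fabricates the
# device fields of an ad request so it looks like a living-room smart TV.
# All names and values here are invented for illustration.

def spoofed_ctv_request(page_url):
    """Build a fake ad request that claims to come from a CTV device."""
    return {
        "site": {"page": page_url},
        "device": {
            # A plausible smart-TV user-agent string the bot copies verbatim.
            "ua": "Mozilla/5.0 (SMART-TV; Linux; Tizen 5.5) AppleWebKit/537.36",
            "devicetype": 3,       # 3 = Connected TV in the OpenRTB enumeration
            "ip": "203.0.113.42",  # rotated proxy IP (RFC 5737 example range)
            "make": "Samsung",
            "model": "QN55Q60A",
        },
        "imp": [{"video": {"w": 1920, "h": 1080}}],  # full-screen video slot
    }

req = spoofed_ctv_request("https://example-publisher.com/live")
# Nothing in the request itself reveals that no television exists: the
# exchange sees a plausible device type, user agent, IP, and video slot.
print(req["device"]["devicetype"])
```

The point of the sketch is that every field an ad exchange inspects is self-reported by the sender, which is why a well-crafted fake is so hard to distinguish from a real device.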
A bit more complex is content-driven fraud: the practice of creating fake sites and apps with fraudulent inventory and selling it to advertisers who believe their ads are showing up on real premium sites.
The most common and basic form of content fraud is the sale of useless inventory on "ghost sites" or "cashout sites": websites that feature nothing but ad space and no real content. The sites are visited almost exclusively by bots; fraud operators monetize them by sending bot traffic to those sites and collecting the associated ad revenue.
An example of a cashout site that uses fake content to appear like a “real” website.
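The economics of a cashout site come down to simple CPM arithmetic. The numbers below are invented for illustration, not drawn from any measured operation:

```python
# Back-of-the-envelope economics of a ghost site, with invented numbers:
# a botnet sends fake visits to a site that exists only to serve ads.
bot_visits_per_day = 500_000
ads_per_page = 8      # ghost pages are often wall-to-wall ad slots
cpm = 2.00            # dollars per 1,000 impressions (low-quality inventory)

impressions = bot_visits_per_day * ads_per_page   # 4,000,000 impressions
daily_revenue = impressions / 1000 * cpm          # 8,000 dollars
print(f"${daily_revenue:,.0f} per day")           # → $8,000 per day
```

Even at bargain-bin CPMs, traffic that costs the fraudster almost nothing to generate turns into meaningful revenue at scale.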
To get a much higher cost-per-click, some talented cybercriminals counterfeit, or "spoof," premium and reputable websites, tricking advertisers into paying premium ad rates for fake inventory. It is hard for advertisers to spot the difference between a genuine ad request and a spoofed one because the two are nearly identical. This is one of many "creative" ways to leech off the reputations of bigger publishers to commit content fraud.

Another example is ad injection: forcing an ad onto a publisher's site without the publisher's knowledge or consent. Injected ads often replace the ads on the page that were actually paid for, or appear on pages that were never meant to carry ads in the first place. This type of ad fraud is unique in that it steals impression opportunities directly from publishers and damages their reputation with advertisers.
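A short sketch shows why spoofed requests are so hard to catch: the declared domain in an ad request is just a self-reported string. The function, field names, and domains below are invented for illustration.

```python
# Sketch of domain spoofing: the page domain a buyer sees in an ad
# request is an unverified, self-reported field. All names and values
# here are invented for illustration.

def make_ad_request(declared_domain, slot_id):
    """Build a minimal ad request; the buyer only sees these fields."""
    return {
        "page_domain": declared_domain,  # self-reported, unverified
        "slot": slot_id,
        "format": {"w": 300, "h": 250},
    }

# A genuine request from the premium publisher:
legit = make_ad_request("premium-news-site.com", "homepage_top")
# A fraudster on a ghost site sends the exact same fields, claiming
# the premium domain to command premium rates:
spoofed = make_ad_request("premium-news-site.com", "homepage_top")

# From the buyer's side the two are indistinguishable:
print(legit == spoofed)  # → True
```

Countermeasures like the IAB's ads.txt standard exist precisely because this field cannot be trusted on its own: they let buyers check a declared domain against a publisher's published list of authorized sellers.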
Lately, cybercriminals have been hot on creating fake apps. These apps serve little to no purpose for the user, whatever the app description may claim: each is just another bot-created space filled with ads. Some of these apps continuously run in the background, even when not in use, so they can use your device to commit other types of fraud, like device impersonation. They can also cause ads to pop up on a device while the app isn't even open. Our Threat Intelligence team has uncovered over 100 fraudulent apps through its investigations into the Poseidon code package and the Tushu SDK.
Follow the Money
There's a lot of room for "creativity" in these two ad fraud monetization strategies. To actually make money from these schemes, though, fraudsters can only choose between selling fake traffic to publishers or fake inventory to advertisers. Ad fraud can be dizzyingly complex, but from a security perspective, what matters most is watching the criminals' source of profits: follow the money. The fact that cybercriminals have limited options for generating revenue is a significant advantage for security specialists. Wherever we can make these fraudulent acts more difficult, and more expensive, we disincentivize fraudsters. The stakes are growing, and we are watching.