Contrary to what you might hear on Twitter, bots aren't always bad. They're simply software scripts that run on computers, and the internet as we know it today wouldn't exist without the tasks that "good bots" perform around the clock. For example, search engines and anti-virus companies deploy fleets of good bots to crawl, analyze, and catalog data from web servers.
Bad bots, by contrast, are conduits for cybercrime. They can steal login credentials and hack accounts, spread disinformation, or steal from e-commerce transactions. Cybercriminals often connect these bots into larger organized botnets to maximize scale and damage.
While it's simple to identify and track the activity of good bots because they generally identify themselves explicitly (often via a distinctive User-Agent string), bad bots mask their activity by pretending to be human. This is why it's so critical that businesses become adept at identifying fraudulent traffic.
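One concrete way this self-identification plays out: Google documents that a genuine Googlebot IP reverse-resolves to a googlebot.com or google.com hostname, which then resolves back to the same IP. The sketch below, a minimal illustration rather than a production check, verifies a claimed crawler that way. The lookup functions are injectable (a design choice so the logic can be tested without network access); the fallback defaults use Python's standard `socket` calls.

```python
import socket

def is_verified_googlebot(ip, reverse_lookup=None, forward_lookup=None):
    """Verify a claimed Googlebot using reverse-then-forward DNS.

    A real Googlebot IP reverse-resolves to a *.googlebot.com or
    *.google.com hostname that resolves back to the same IP. Bad bots
    spoofing the Googlebot User-Agent fail this round trip.
    """
    # Default to real DNS lookups; tests can inject fakes instead.
    reverse_lookup = reverse_lookup or (lambda addr: socket.gethostbyaddr(addr)[0])
    forward_lookup = forward_lookup or socket.gethostbyname

    try:
        host = reverse_lookup(ip)
    except OSError:
        return False  # no reverse DNS record at all
    if not host.endswith((".googlebot.com", ".google.com")):
        return False  # hostname isn't in Google's crawler domains
    try:
        # Forward-confirm: the hostname must map back to the same IP.
        return forward_lookup(host) == ip
    except OSError:
        return False
```

For example, an IP whose reverse record is `crawl-66-249-66-1.googlebot.com` and resolves back to itself passes, while a scraper faking the Googlebot User-Agent from `fake-bot.example.net` does not.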
A new, more humanoid adversary
The first generation of bad bots operated according to pretty rudimentary rules. They were generally run from data centers or a handful of IP addresses and behaved predictably: for example, they'd visit one website thousands of times from the same IP address, spending exactly the same amount of time on each page. In other words, they weren't very good at looking human. Think of the computer scripts that controlled enemies in old video games: a boss might be difficult to beat at first, but as soon as you figured out its simple attack patterns, it was easy to exploit them and advance to the next level. As a result, these basic bots were simple to detect just by looking for patterns that couldn't possibly be human.
But in the last several years, cybercriminals have built more sophisticated programs that can mimic human activity. That's an impressive feat of programming, but a concerning one from a cybersecurity perspective. Now, instead of being confined to data centers, bots can be planted on residential IP addresses, where they have far more human behavior to observe and imitate.
To see this trend in action, one of our recent Bot Baseline reports demonstrated how bots can blend in with human activity from both a geographic and viewability perspective. In the first chart, you can see that bot concentration closely mimics real human concentration across the United States. The second chart demonstrates how sophisticated bots are able to effectively mimic viewability behaviors and blend in with humans.
These compromised computers are often combined to form huge residential botnets, which scammers can send en masse to any website. In fact, at White Ops we see that approximately 75% of bot activity comes from real people's computers. Because these bots look and act like human users, most bot activity is indistinguishable from human activity to the naked eye, and even to most bot detection software.
Fighting today’s more sophisticated bots
The traditional approach to bot defense involves looking for the kind of jerky, robotic activity that characterized older bots. Unfortunately, detection methodologies built on that approach have been rendered more or less obsolete by these more evolved bots. To effectively identify and block fraudulent traffic, defenders need an approach that goes deeper: directly investigating every single interaction on every single device. Bot defense must evolve alongside changing cybercriminal tactics, so that defenses hold even when attackers have stepped up their game.