Privacy vs. Security: Is your bot mitigation solution effective in the wake of web privacy trends?

Bad Bots disguise themselves as humans to circumvent detection

Bot mitigation vendors focus on stopping bots with the highest degree of accuracy. After all, it only takes a small number of malicious bots to break through your defenses and wreak havoc on your online activities. One of the challenges of stopping bad bots is minimizing false positives (where a human is incorrectly categorized as a bot).

The more aggressively a bot mitigation solution's rules are set, the more susceptible it becomes to false positives, since it must decide whether to allow requests with indeterminate risk scores. As a result, real users are inadvertently blocked from websites and/or served CAPTCHAs to validate that they are indeed human. This inevitably creates a poor user experience and reduces online conversions.

Much of the continued innovation in modern bot mitigation solutions has been a reaction to increasing adversary sophistication. The fact that bad bots increasingly look and act like humans in an attempt to evade detection makes it harder to rely on rules, behaviors and risk scores to make decisions, which makes false positives more pronounced.

Humans now dress up for privacy

A more recent trend is exacerbating false positives, and without proper innovation, it renders bot mitigation solutions that depend on legacy rules and risk scoring inadequate. It is the result of accelerating moves by humans to gain more privacy on the Internet. Ironically, the push for more privacy on the web may actually compromise security by making it even more difficult to distinguish between humans and bots.

To understand why, it is essential to know how the majority of bot detection techniques work. They rely heavily on device fingerprinting to analyze device attributes and behavior. Device fingerprinting is performed on the client side and collects information such as the IP address, user agent header, advanced device attributes (e.g. hardware imperfections), and cookie identifiers. Over the years, the information gathered from the device fingerprint has become a major input for the detection engines that decide whether a request is robotic or human.
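To make the idea concrete, here is a minimal sketch of how a device fingerprint might be assembled from client-side attributes. The attribute names and the hash function are illustrative assumptions, not any vendor's actual scheme:

```javascript
// Minimal sketch of client-side device fingerprinting (illustrative only).
// The attribute set and hash are assumptions, not any vendor's real scheme.
function hashString(s) {
  // Tiny non-cryptographic hash (FNV-1a), just to derive a compact ID.
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16);
}

function deviceFingerprint(attrs) {
  // attrs: { ip, userAgent, screen, timezone, cookieId } collected client-side.
  const canonical = [attrs.ip, attrs.userAgent, attrs.screen,
                     attrs.timezone, attrs.cookieId].join("|");
  return hashString(canonical);
}

// Identical attributes yield the same fingerprint; change one attribute
// (e.g., rotate the IP through a proxy) and the fingerprint changes.
const a = deviceFingerprint({ ip: "203.0.113.7", userAgent: "Mozilla/5.0",
                              screen: "1920x1080", timezone: "UTC-5", cookieId: "abc" });
const b = deviceFingerprint({ ip: "198.51.100.9", userAgent: "Mozilla/5.0",
                              screen: "1920x1080", timezone: "UTC-5", cookieId: "abc" });
console.log(a === b); // false: a single changed attribute alters the fingerprint
```

The fragility is visible even in this toy version: the "identity" is only as stable as the attributes that feed it, which is exactly what the privacy trends below undermine.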

Device fingerprints, in theory, are supposed to work like real fingerprints: each user can be uniquely identified by theirs. Fingerprinting technology has evolved toward this goal – aka high-definition device fingerprints – by collecting the growing abundance of client-side information. But what happens when the device fingerprint can no longer serve as a reliable unique identifier – or, even worse, starts to look like the ones presented by bad bots?

Disappearing device fingerprint

Previously, we have published articles on how bot operators evade detection based on device fingerprints. They collect fingerprints and use them in combination with anti-detection browsers to trick systems into thinking a request is legitimate. This was one of the first factors that led Kasada, years ago, to move away from device fingerprinting as a way to distinguish between humans and bots.

On top of that, several recent web privacy trends make the “evidence” obtained through device fingerprinting methods even more suspect.

Trend #1 – Use of residential proxy networks

Of course, residential proxy networks are exploited by bot operators to hide their fraudulent activities behind seemingly innocuous IP addresses. But there is also a growing trend of legitimate users moving beyond traditional data center proxies and hiding behind residential proxy networks such as BrightData. Residential proxy networks have become increasingly cheap – and, in some cases, free – and they provide a seemingly endless combination of IP addresses and user agents to mask one's activity.

While some of these users hide behind residential proxies for suspicious reasons, such as circumventing content restrictions (e.g. geo-blocking), many use them to genuinely protect their online privacy and guard their personal data against theft. Fingerprinting techniques used to detect people behind proxy networks have become ineffective in light of modern residential proxy networks that hide users' identities.

Bottom line: you can’t rely on IP addresses and user agents to distinguish between humans and malicious bots, because they look the same when hidden behind residential proxies.
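To see why rotating residential IPs defeat IP-based controls, here is a sketch of a naive per-IP rate limiter (an assumed, simplified defense, not any vendor's implementation) confronted with the same bot traffic routed two ways:

```javascript
// Sketch of a naive per-IP rate limiter (assumed logic, purely illustrative)
// and why a rotating residential proxy pool defeats it.
function countBlocked(requests, limitPerIp) {
  const seen = new Map();
  let blocked = 0;
  for (const req of requests) {
    const n = (seen.get(req.ip) || 0) + 1;
    seen.set(req.ip, n);
    if (n > limitPerIp) blocked++; // over the threshold for this IP
  }
  return blocked;
}

// 1,000 bot requests from a single data center IP: easily rate-limited.
const dataCenter = Array.from({ length: 1000 }, () => ({ ip: "192.0.2.10" }));

// The same 1,000 requests rotated across a residential pool: each IP now
// looks like an ordinary home user making a handful of requests.
const residential = Array.from({ length: 1000 },
                               (_, i) => ({ ip: `198.51.100.${i % 250}` }));

console.log(countBlocked(dataCenter, 5));  // 995 blocked
console.log(countBlocked(residential, 5)); // 0 blocked
```

The attack volume is identical in both cases; only the distribution of source addresses changes, and the IP-based signal disappears entirely.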

Trend #2 – Use of private browsing modes and privacy browsers

The recent increase in the availability and adoption of private browsing modes and new privacy browsers also makes it difficult to rely on device fingerprinting.

Private browsing modes, such as Chrome’s Incognito mode and Edge’s InPrivate browsing, reduce the amount of information stored about you. These modes take moderate measures to protect your privacy. For example, when you use private browsing, your browser no longer retains your browsing history, accepted cookies, or completed forms once your session ends. It is estimated that more than 46% of Americans have used a private browsing mode in the browser of their choice.

Additionally, privacy browsers, such as Brave, Tor, Yandex, Opera, and customized Firefox, take web privacy to the next level. They add extra layers of privacy, such as blocking or randomizing device fingerprinting, providing tracking protection (combined with privacy search engines like DuckDuckGo to avoid tracking your search history), and removing cookies, rendering ad trackers ineffective.

These privacy browsers command roughly 10% of total market share today, and they are gaining popularity. That is enough market share to present major challenges for bot detection solutions based on device fingerprinting.

Conclusion: You cannot rely on advanced device identifiers or first-party cookies due to the growing percentage of users using privacy modes and browsers.

Trend #3 – Elimination of third-party cookie tracking

There will always be a substantial percentage of internet users who do not use privacy modes or browsers; Google and Microsoft simply have too much browser market share. But even for these users, device fingerprinting will become increasingly difficult. One example is Google’s widely publicized effort to eliminate third-party cookie tracking. Although the deadline was recently postponed to 2023, this change will inevitably make it more difficult to identify suspicious behavior.

Third-party cookies collected during the device fingerprinting process are often used as a telltale sign of bot-driven automation. For example, if a particular session with an identified set of third-party cookies attempted 100 logins, that’s an indicator you’ll want to force it to revalidate and establish a new session.
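The heuristic described above can be sketched as follows. The event shape, cookie-ID field, and threshold are illustrative assumptions, not any vendor's actual logic:

```javascript
// Sketch (assumed heuristic): flag sessions -- identified here by a
// third-party cookie ID -- whose login attempts exceed a threshold,
// so they can be forced to revalidate and start a new session.
function sessionsToRevalidate(events, maxAttempts) {
  const attempts = new Map();
  for (const e of events) {
    if (e.type !== "login") continue;
    attempts.set(e.cookieId, (attempts.get(e.cookieId) || 0) + 1);
  }
  return [...attempts.entries()]
    .filter(([, n]) => n > maxAttempts)
    .map(([cookieId]) => cookieId);
}

const events = [
  // 100 login attempts tied to one cookie ID: classic credential stuffing.
  ...Array.from({ length: 100 }, () => ({ type: "login", cookieId: "bot-7f3a" })),
  // A single attempt from an ordinary user.
  { type: "login", cookieId: "human-22" },
];
console.log(sessionsToRevalidate(events, 10)); // [ 'bot-7f3a' ]
```

Note what happens once third-party cookies are gone: the `cookieId` key simply no longer exists, and this entire class of session-level signal degrades with it.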

Conclusion: You will soon no longer be able to use third-party cookie identifiers in the browser to help you identify bot-driven automation.

Go beyond the device fingerprint

Kasada moved away from device fingerprinting years ago, recognizing the growing limitations of obtaining accurate device fingerprints from humans and bots. The team came up with a new method that doesn’t need to search for a unique identifier that can be associated with a human, but instead looks for the hard, indisputable proof of automation that surfaces every time a bot interacts with websites, mobile apps, and APIs.

We call this our client interrogation process. Attributes are collected invisibly from the client, looking for indicators of automation, including the use of headless browsers and automation frameworks such as Puppeteer and Playwright. So, instead of asking whether a request can be uniquely identified as human, Kasada asks whether each request arrives in the context of a legitimate browser.
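As a rough illustration of what "indicators of automation" can mean, here is a sketch of checks against properties that headless browsers and automation frameworks commonly expose. These are well-known generic signals (e.g. the `navigator.webdriver` flag), not Kasada's actual interrogation logic, and the `hasCdpArtifacts` field is a hypothetical stand-in:

```javascript
// Sketch of generic automation indicators (not any vendor's real checks).
// `client` is a plain object standing in for browser-exposed properties.
function automationIndicators(client) {
  const indicators = [];
  if (client.webdriver) indicators.push("navigator.webdriver is set");
  if (/HeadlessChrome/.test(client.userAgent || ""))
    indicators.push("headless user agent string");
  if ((client.pluginCount || 0) === 0 && !/Mobile/.test(client.userAgent || ""))
    indicators.push("desktop browser reporting no plugins");
  if (client.hasCdpArtifacts) // hypothetical flag for DevTools-protocol traces
    indicators.push("DevTools-protocol artifacts present");
  return indicators;
}

// A stock headless client trips several indicators at once:
console.log(automationIndicators({
  webdriver: true,
  userAgent: "Mozilla/5.0 ... HeadlessChrome/115.0",
  pluginCount: 0,
}).length >= 2); // true
```

The point of the approach is that these artifacts are evidence of automation itself, rather than a probabilistic identity score, so no unique identifier for the user is needed.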

With this approach, there is no need to collect device fingerprints, and there is no ambiguity in assigning rules and risk scores to decide bot or human. Decisions are made on the first request – never letting bad requests, including those from new, never-before-seen bots, into your infrastructure – and never requiring a CAPTCHA for validation.

If you are already using a bot mitigation solution, ask them about their reliance on outdated fingerprinting methods and what steps they are taking to deal with the inevitable increase in false positives and undetected bots resulting from the move towards a more private web.

Want to see Kasada in action? Request a demo and see the most accurate bot detection in the industry and the lowest false positive rate. You can also run a quick test to see whether your website can detect modern bots, including those that leverage the Puppeteer Stealth and Playwright open source automation frameworks.

*** This is a syndicated blog from Kasada’s Security Bloggers Network written by Neil Cohen. Read the original post at: https://www.kasada.io/privacy-vs-security/
