Artificial intelligence (AI) and machine learning are valuable tools in the fight against ad fraud. They power rule sets and algorithms that help us identify suspicious behavior. But they may also be our undoing. AI is making us lazy.
Automated fraud detection tools lure advertisers, publishers and especially solution providers into a false sense of security. That last group is the most worrisome. Sure, it would be nice if advertisers didn’t rely so blindly on a single fraud detection tool, and if they tested different solutions against one another and reviewed the data themselves. But you can’t blame them for thinking, “That anti-fraud solution I’m paying for has me covered. I am ticking this off my list.”
What worries me is the number of fraud detection tools that are also relying blindly on automation. They are not cross-referencing machine findings. They are not diving into the data to look beyond what their algorithms have uncovered. I know this because I have talked to providers about their approach. There is a difference between someone who can oversee AI and even interpret its findings, and someone who can challenge AI by digging into the raw data themselves. You need to know what to look for. Technology vendors aren’t putting enough onus on hiring people with these skill sets, nor are they cultivating them from within. They are also not building this step into their processes.
AI will get it wrong; it will fail to catch fraudulent traffic. Criminals are masters at outsmarting the system. They know what types of tools organizations are using and how they work, so they develop tactics that will go undetected. A machine doesn’t know what it doesn’t know. You can program it to detect X, Y and Z, but you can’t teach it to “highlight anything else that may be suspicious.” You have to select specific criteria.
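To make that concrete, here is a minimal sketch of the problem with fixed criteria. The rule names, thresholds and event fields are all hypothetical, not any real vendor's detection logic: a rule set only flags what it was explicitly told to flag, so a fraudster who stays under every threshold passes clean.

```python
# Hypothetical rule-based detector. Rule names, thresholds and the
# synthetic events below are illustrative, not a real vendor's logic.

RULES = {
    "too_many_clicks": lambda e: e["clicks_per_minute"] > 60,
    "known_bad_agent": lambda e: e["user_agent"] in {"BadBot/1.0"},
    "datacenter_ip":   lambda e: e["ip"].startswith("203.0.113."),
}

def flag(event):
    """Return the names of every rule this event trips (empty list = passes)."""
    return [name for name, rule in RULES.items() if rule(event)]

# A crude bot trips the rules it was programmed against...
crude = {"clicks_per_minute": 200, "user_agent": "BadBot/1.0", "ip": "198.51.100.7"}

# ...but a bot tuned to stay under every known threshold sails through,
# because no rule says "flag anything else that looks suspicious."
tuned = {"clicks_per_minute": 3, "user_agent": "Mozilla/5.0", "ip": "198.51.100.8"}
```

The point of the sketch: `flag(crude)` trips two rules, while `flag(tuned)` returns nothing at all. The machine has no opinion on the second event; only a human looking at the raw data would notice it.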
Some fraudsters perpetrate ad fraud at a rate so low that machine learning won’t even catch it. With ML, the machine is comparing expected results to what it observes, and a small deviation probably isn’t enough for it to flag anything. You need people who understand the inner workings of the technology, but also how the internet works, how criminals think, and how to scour data for the small patterns machine learning might have missed. Then you can get ahead of a problem before it grows large enough for AI to notice.
Human eyes are the most valuable resource when it comes to understanding ad fraud. I think some of my colleagues are scared to say that because they believe it threatens the value of their technology.
I suggest advertisers ask their ad fraud detection partners how often humans interact with their data. I bet many will say, “We don’t need humans! Our technology is so good and so accurate that we don’t need that step. Automation, remember?”
Well, that is bull.
There is no automated ad fraud detection tool on the market that is sophisticated enough to be accurate, and stay accurate in perpetuity, without human intervention.
I also suggest advertisers ask ad fraud detection partners how often they run through rule updates — the manual process in which you review your programming rules and confirm they are still valid, while also rolling out new rules to address new threats. From what I have seen and heard, many vendors do this too infrequently.
In my office, we joke about the WALL-E effect: AI makes us lazy. You need processes for scouring data monthly, weekly and daily. I once met a CTO at a well-regarded ad fraud detection firm who told me they do this once a year. ONCE A YEAR! Criminals roll out new tactics every single day.
The fight against fraud is likely a never-ending battle. Machine learning has not advanced to the point that AI can detect every new threat and update itself. Humans need to define and push through patches. The longer ad tech vendors go without looking under the hood, the longer it’s going to take them to discover new forms of fraud.
Technology matters. We would lose the battle outright without it. Humans aren’t capable of processing the sheer volume of data our algorithms can. But machines can miss things, especially since criminals are developing tactics with that very goal in mind. It is convenient for an ad tech vendor to say, or even believe, “The machine has got it covered,” but you can’t be 100 percent effective without human support and intervention along the way.
Joe Rodichok is chief technology officer at eZanga. Read The Drum's coverage of Programmatic Punch, which took place last week in New York City.