What I Learnt From Working On Fraud
I originally wrote this article in May 2022, but have republished it here on Bear. My thinking has not changed much on this topic, and the difficulty of managing fraud/spam is still underrated by many people.
I recently changed companies, moving from a manufacturing B2B company to an internet payments company. I was assigned to work on reducing fraud committed by users, a completely novel topic for me. I learnt a lot in a short space of time and thought others would find it useful if I shared my basic mental model for how I think about fraud.
This may also be useful to people starting work in other related fields of spam and trust and safety, which have some similar dynamics to fraud.
Fraud is Adversarial
The first thing to understand when dealing with fraud for the first time is that it is an adversarial problem, meaning fraudsters adapt to your responses and try to find ways around them. This can seem like an obvious statement, but it took me a while to fully internalise.
Part of the challenge for me was that I came from an engineering/scientific background, which operates under a completely different set of rules. In the engineering world, if you have an issue you can implement a change to fix it; you don't expect the problem to consciously find a way around your solution. In the fraud world, any change you make will be studied for weaknesses and will likely be overcome if you don't adapt.
This means fraud is a constant cat-and-mouse game. You close the gap in your systems that fraudsters are currently exploiting, and they will eventually find another one. Sometimes closing a gap is significant enough to reduce fraud for a long time, but there will never be a "final" action you can take. If you don't continue to adapt, fraudsters will eventually learn the gaps in your defences and exploit them.
Tight Feedback Loops are Critical for Fraudsters
Feedback loops are how people learn: you take an action and wait for feedback to see whether the action was right or wrong. The best feedback loop for a learner is one that is reliable and quick. Reliable means taking an action gives you the same outcome repeatedly, while quick means you get the feedback in a short space of time.
Shooting is an example of an activity with a tight feedback loop: you see the result of firing the gun instantaneously, and shooting gives you the same result each time (if you perform the action in the same way). An example of an unreliable, long feedback loop activity would be posting a blog online in the hope of improving your writing.
It is unreliable because good writing may or may not get noticed for reasons unrelated to its quality (maybe it happened to get shared on Twitter by an important account), and it takes days or weeks before you get any feedback from readers (if you get any at all).
Since fraudsters need to learn about your system as quickly as possible so they can make money, they want tight feedback loops. They want to know which actions will get them kicked off your product and which they can get away with. There are two main ways you can make their feedback loops less tight: reduce the speed of iteration or raise the cost of iteration (both are sketched in code after this list).
- To reduce the speed of iteration you can add delays to account creation, or to how quickly an action on your platform is processed.
- To raise the cost of iteration there needs to be a "cost" to learning on the platform. A cost can come in different varieties; it could be as simple as requiring a small payment to get started on your product.
A more advanced version is to require fraudsters to use up a valuable commodity to gain access to your platform, one that can't be reused if they are later kicked off, e.g. stolen but still-working personal ID numbers.
Even a small cost becomes a barrier to rapid learning: fraudsters can't scale up experiments quickly, and have to make hard decisions about which gap to try to exploit with their limited resources.
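As a rough illustration of these two levers, here is a minimal sketch in Python. It is not from any real system; the cooldown window, activation fee, and function names are all made up for the example:

```python
import time

# Illustrative thresholds -- real values would be tuned per product.
SIGNUP_COOLDOWN_SECONDS = 60 * 60  # at most one new account per IP per hour
ACTIVATION_FEE_CENTS = 100         # small, non-refundable cost to start

last_signup_by_ip = {}

def can_create_account(ip, now=None):
    """Slow the speed of iteration: enforce a cooldown between signups."""
    now = time.time() if now is None else now
    last = last_signup_by_ip.get(ip)
    if last is not None and now - last < SIGNUP_COOLDOWN_SECONDS:
        return False
    last_signup_by_ip[ip] = now
    return True

def can_start_transacting(paid_cents):
    """Raise the cost of iteration: require a small up-front payment."""
    return paid_cents >= ACTIVATION_FEE_CENTS
```

The specific mechanism matters less than the effect: every failed experiment now costs the fraudster either time or money.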
A more advanced approach is to make fraudsters' feedback loops less reliable.
- Could you only sometimes kick off fraudsters for certain actions to make them less sure of why they were rejected?
- Could you put a delay into a rejection so the fraudsters are uncertain about why they got rejected?
- Could you “shadow-ban” a user so it takes them a while to even notice if they have been cut off from using the service?
I have not yet seen this idea being consciously used, but it is something I imagine some teams consider.
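To make the idea concrete, here is a sketch of what such a policy could look like; the risk threshold, probabilities, and action names are all invented for illustration, not taken from any real system:

```python
import random

def enforcement_action(risk_score):
    """Act on a high-risk user without giving them crisp feedback.

    Thresholds and probabilities here are made-up illustrations.
    """
    if risk_score < 0.8:
        return "allow"
    # Only sometimes reject outright, so fraudsters can't easily infer
    # which of their actions triggered the rejection.
    if random.random() < 0.3:
        return "reject_after_delay"  # reject hours later, not instantly
    # Otherwise shadow-ban: the account appears to work, but its actions
    # are silently dropped or never settle.
    return "shadow_ban"
```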
It’s not easy to tell fraudsters and good actors apart
What you may have noticed in the last section on making feedback loops less tight is that these measures are unpleasant for good users as well. You are adding friction to your product and making it harder for everyone to learn about it. In a perfect world all of these costs would fall only on fraudulent users, leaving a frictionless experience for everyone else. The problem is that it is not easy to tell good users from fraudulent ones.
This statement is a little tautological: if fraudulent users were easy to catch you wouldn't let them onto your product in the first place, so the ones who do get onto your platform are by definition hard to detect. But I think the point needs making, because when you look at a fraudulent user in detail it can seem obvious that they are fraudulent.
For example, Twitter claims it is hard to detect spam accounts, yet when a spam account appears on my timeline it is easy for me to spot.
The issue is that how easy a fraudster is to detect depends on the detection processes you already have in place. All of the obvious fraudsters have already been rejected by your existing processes, and the ones remaining are the ones those processes cannot see. This can sometimes be embarrassing, as fraudsters who are obvious to a human are not obvious to your automated processes.
This is a result of fraudsters learning the gaps in your system, and is not easy to solve without adding more humans into your processes.
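A toy way to see this survivorship effect, with entirely hypothetical rules and fields: anyone caught by your existing rules never reaches the population you later inspect, so the survivors look non-obvious by construction:

```python
# Hypothetical rules -- each one catches an "obvious" fraud pattern.
RULES = [
    lambda u: u["accounts_from_ip"] > 5,   # mass signups from one IP
    lambda u: u["card_declines"] > 3,      # card-testing behaviour
    lambda u: u["account_age_days"] < 1 and u["payment_volume"] > 10_000,
]

def is_caught(user):
    return any(rule(user) for rule in RULES)

population = [
    {"accounts_from_ip": 9, "card_declines": 0, "account_age_days": 2, "payment_volume": 50},
    {"accounts_from_ip": 1, "card_declines": 5, "account_age_days": 30, "payment_volume": 200},
    # Evades every rule, so it is "non-obvious" by construction:
    {"accounts_from_ip": 1, "card_declines": 0, "account_age_days": 40, "payment_volume": 900},
]

# The fraud you actually observe on your platform is only the survivors.
survivors = [u for u in population if not is_caught(u)]
```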
Unfortunately, adding humans into the process is not a cure-all. Part of the problem is cost and speed (humans are usually more expensive and slower than automated processes), but there is also the larger problem that as you scale a human team up, it becomes less "human-like" in its actions. People can no longer easily apply nuance, because you need the team to respond in a repeatable manner to a wide variety of user actions.
This forces you to write long, detailed instructions for how to respond to different situations, which makes the humans behave less like humans and more like an automatic, unthinking process.
Companies want predictability
Finally, one key thing to understand about how companies think about fraud is that they want predictability. Companies can accept different levels of fraud in different markets, as long as there aren't wild fluctuations within those markets. This desire for predictability is something I don't fully understand, but there are several potential explanations.
- One reason is that stable fraud rates allow the organisation to easily set budgets for its fraud teams, while swings in fraud rates lead to periods of under- and over-resourcing.
- Spikes in fraud can also be dangerous because companies often set their prices with a background level of fraud in mind. A sudden spike can wipe out historical profits in an instant (see the worked example after this list). You can theoretically price in the spikes, but that is a much harder problem to estimate, and harder to get right for smaller products and markets.
- Finally, it could just be the natural human preference for stability. A level of fraud that is expected causes less angst than a sudden, unexpected change. I have seen this in practice: a fraud spike in one country was lower than the baseline level of fraud in another country, but still required a significant response.
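Here is the worked example referred to above, using made-up numbers rather than anything from a real business, showing how a thin margin priced around a baseline fraud rate flips negative during a spike:

```python
# All numbers invented for illustration.
revenue_per_txn = 100.00   # amount charged per transaction
gross_margin = 0.02        # 2% margin before fraud losses
baseline_fraud = 0.005     # 0.5% of volume lost to fraud, priced in
spike_fraud = 0.03         # a sudden 3% fraud rate

profit_at_baseline = revenue_per_txn * (gross_margin - baseline_fraud)
profit_during_spike = revenue_per_txn * (gross_margin - spike_fraud)

print(profit_at_baseline)    # 1.50  -> profitable at baseline
print(profit_during_spike)   # -1.00 -> every transaction now loses money
```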
The reason you can sometimes see wild swings in fraud is that fraudsters have found a new hole in your system. They know they have a limited time before you close the gap and the opportunity to make money disappears. News of the hole spreads from one fraudster to another, and before you know it all the fraudulent activity has moved towards this new hole, precisely the place where your processes are weakest.
Eventually you close the gap and fraud rates return to baseline, until the next exploit is found. This creates a sawtooth pattern: low fraud rates most of the time, with sudden jumps (the teeth) as new gaps are found and then closed.
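As a toy illustration of that sawtooth (all rates and probabilities invented), a tiny simulation where exploits occasionally appear and mitigation pulls the rate back toward baseline:

```python
import random

baseline, rate, series = 0.005, 0.005, []
for week in range(104):
    if random.random() < 0.05:  # occasionally a new exploit is found
        rate = 0.03             # fraud jumps towards the new hole
    rate = baseline + (rate - baseline) * 0.7  # mitigation decays it back
    series.append(rate)
# Plotting `series` shows long flat stretches punctuated by sharp teeth.
```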
Summary
- Fraudsters are constantly learning and adapting to your systems, and so you need to continually adapt and learn.
- Anything you can do to slow down or reduce the reliability of fraudsters' feedback loops will help reduce fraud.
- The fraudsters you are worried about are hard to spot — if they weren’t you wouldn’t be worried about them.
- Predictability of fraud is important, but also hard to maintain.