Chapter 4. Fraud Prevention Evaluation and Investment

Money makes the world go round…

John Kander and Fred Ebb1

The bread and butter of fraud prevention is in research identification (through fraud analysis, modeling, reverse engineering, etc.) and mitigation—and balancing that process with avoiding friction for good customers as much as possible. But a whole framework of systems, tools, and departmental investment is needed to support those efforts, and that’s what this chapter focuses on.

The more care you take to ensure that your framework is the right fit for your team’s needs and your company’s structure, the more effectively it will support your fraud prevention work.

Moreover, it’s important to position the fraud prevention department appropriately within the wider organization. Your team will work best if it has close collaborative relationships with various other departments, and you’re more likely to get the resources you need if management understands your work and what you do for the company. Even though this positioning work is only tangentially related to the crux of the job, it’s far more important than many teams realize; investing in these relationships and educational efforts matters as much as investing in analysis and research.

Types of Fraud Prevention Solutions

Fraud prevention solutions are not a one-size-fits-all kind of discussion. There’s little point in arguing about which is the “best” fraud prevention solution or tool. All the options have different advantages and disadvantages. The question is: What’s best for your situation and goals?

This section looks at the main categories of solutions and tools you can use as the technical basis for your system. Bear in mind, though, that this technical basis must be guided by the experience and expertise of fraud prevention experts and by the research and insights of fraud analysts. Fraud prevention is not a “buy off the shelf, then set and forget” kind of profession.

Rules Engines

Rules engines are the traditional standby of fraud prevention. The principle of how rules engines work is simple: your transactions or online activity flow through a system that can pick out certain characteristics. You can create a rule to say that any transaction above $200 should always go to manual review, that logins from a specific geographical area should always be reviewed, or even that all activity from one particular country should be rejected (something that would result in an unfortunate number of false positives!).

You can set rules leveraging a huge range of factors, including type of item, price of item, time zone, geographical location, address details, phone information, email details, time of day, device information, browser information, and so on. There is also a wide range of consequences: you can automatically approve or reject, send to manual review, automatically require two-factor authentication (2FA), and so forth.
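
To make the idea concrete, here is a minimal sketch of a rules engine: an ordered list of predicate/action pairs evaluated in turn. The rule names, thresholds, and the first-match-wins policy are purely illustrative, not drawn from any particular product.

```python
# Minimal rules-engine sketch: each rule pairs a predicate with an action.
# First matching rule wins; the default is to approve.

RULES = [
    # (name, predicate, action)
    ("high_value", lambda t: t["amount"] > 200, "manual_review"),
    ("risky_geo", lambda t: t["country"] in {"XX"}, "manual_review"),
    ("geo_mismatch", lambda t: t["ip_country"] != t["country"], "require_2fa"),
]

def evaluate(transaction):
    """Return (action, rule_name) for the first rule that fires."""
    for name, predicate, action in RULES:
        if predicate(transaction):
            return action, name
    return "approve", None

action, rule = evaluate({"amount": 350, "country": "US", "ip_country": "US"})
# The $200 threshold fires, so this order goes to manual review
```

Real engines add rule priorities, scoring instead of hard verdicts, and audit trails, but the core loop looks much like this.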

The downside is that rules tend to be a rather blanket approach to fraud: even if you’ve experienced a lot of fraudulent transactions from Nigeria lately, do you really want to block all transactions coming from there? You should, of course, combine different rules for a more nuanced approach, though it will still have a rather broad brushstroke effect.

Rules are also entirely dependent on the efforts of you and your team. They won’t update to reflect changes in customer behavior or fraud tactics unless you’re updating them. Existing rules will remain even if they’re no longer relevant, unless you remove them—which can result in legacy rules causing confusion down the line.

On the other hand, rules are easy to work with, take effect quickly, and give your team a sense of control. When things are changing fast, it’s valuable to have rules to work with so that your team can react quickly and decisively to swiftly moving circumstances. That’s something more than one team saw during the COVID-19 pandemic.

Rules engines can be built in-house, which enables you to make them tightly tailored to your needs and means you are entirely in control of how your system is built and the data in it. They can also be sourced from a vendor, meaning they can be spun up quickly and should be kept up to date with the latest technological developments for you without you having to continually invest in the system. Many teams combine aspects of both the in-house and vendor-sourced options.

Machine Learning

We won’t say much about how machine learning systems work here, because that’s something we cover more fully in Chapter 5, which deals with fraud modeling. Instead, we’ll touch on the topic here to place machine learning within the context of fraud fighting systems.

Machine learning systems have been in vogue in fraud prevention since around 2015, and the concept is simple: machines can be trained to recognize transactions or activity as fraudulent or legitimate based on past examples, and they can then accurately predict whether a new example will turn out to be fraud or legit.
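
As a toy illustration of that train-then-predict loop, consider a nearest-centroid classifier over two hypothetical numeric features. Production systems use far richer features and far stronger models (gradient-boosted trees, deep networks, and so on); this only shows the shape of the workflow: fit on labeled history, then score new examples.

```python
# Toy train-then-predict sketch. Features and labels are invented for
# illustration: (order amount, account age in days).

def fit_centroids(examples):
    """examples: list of (feature_vector, label), label 'fraud' or 'legit'."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    # Average each label's feature vectors into a single centroid
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(centroids, features):
    """Label a new example by its closest centroid (squared distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(centroids[label], features))
    return min(centroids, key=dist)

history = [([900.0, 1.0], "fraud"), ([850.0, 2.0], "fraud"),
           ([40.0, 700.0], "legit"), ([60.0, 365.0], "legit")]
label = predict(fit_centroids(history), [880.0, 3.0])  # classified as "fraud"
```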

One main advantage is that machine learning systems can adapt quickly to big new fraud tricks. Unlike manual reviewers, they see all the data, not just a small slice, and they don’t have to wait to confer with colleagues and then laboriously work out the best rule to add. They also notice patterns that humans are likely to overlook, and can be very nuanced in how they evaluate each instance compared to the broad brushstroke approach of a rules engine.

The downsides include that these systems tend to have a black box element; it can be hard to know why they’re making certain decisions or which factors they’re considering. This can be uncomfortable for teams who like to know what’s going on, and is a risk when it comes to avoiding bias. It can also make it difficult to correct the machine when it makes mistakes, and it can take time for a model to adapt to changes that occur out of the blue and don’t match the variables it was trained to deal with (e.g., as we saw during the COVID-19 pandemic). Training a new model likewise takes time.

Moreover, some of the challenges machine learning faces when it comes to fraud prevention (which we look at in detail in Chapter 5) mean that in order to offset them, domain expertise is essential—but can be difficult to employ successfully with a pure machine learning model, particularly if the fraud team wants to be able to do this independently.

Hybrid Systems

Hybrid models combine machine learning with rules in some way. This might be starting out with a rules engine approach and adding machine learning for specific purposes such as pattern recognition (a machine can often notice patterns a human might miss). Or it could mean using a machine learning system as a base and being able to add rules to cope when things change quickly or to reflect new research from your team.
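
One way to sketch that combination is a decision flow in which analyst-authored rules can override the model in either direction, with the model score driving the default decision. The thresholds and the example rule below are illustrative assumptions, not a recommendation.

```python
# Hybrid decision sketch: hard rules first, model score as the fallback.

def hybrid_decision(transaction, model_score, rules):
    # Rules let the team react instantly to a new attack pattern
    for rule in rules:
        verdict = rule(transaction)
        if verdict is not None:
            return verdict
    # Otherwise fall back to the model's risk score (0.0 to 1.0)
    if model_score > 0.9:
        return "decline"
    if model_score > 0.6:
        return "manual_review"
    return "approve"

# Hypothetical rule added mid-incident: review any gift card order over
# $500 regardless of what the model thinks.
def gift_card_rule(t):
    if t.get("category") == "gift_card" and t.get("amount", 0) > 500:
        return "manual_review"
    return None

decision = hybrid_decision({"category": "gift_card", "amount": 800},
                           model_score=0.1, rules=[gift_card_rule])
# The rule overrides the low model score, sending the order to review
```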

The hybrid model has emerged as the most popular approach among fraud departments in recent years because of the potential to combine the advantages of both rules and machine learning in one system. Different companies and different vendors mean very different things when they talk about hybrid—reflecting different balances between rules and machine learning—so it’s important to clarify this in discussions whenever it’s relevant for you.

Data Enrichment Tools

When your site sees a transaction or account activity, you have a certain set of data to work with. You’ll receive some information straight from the customer: name, email, perhaps phone number or address, and credit card number or other means of payment. You’ll also likely have some information you collect, such as IP address, device information, browser information, and so on.

Data enrichment tools let you enter any of those data points into their system, and receive whatever additional information they have from third-party sources or simply from having seen this information before. Many of these tools can be integrated directly into your own system, making enriching the data easy and, in some cases, automated.
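
The integration itself is usually a simple fetch-and-merge. The sketch below stubs out the vendor call—the endpoint, response fields, and field names are all hypothetical, since every real tool has its own API and schema—and shows the merging step, including namespacing the vendor’s fields so they can’t collide with your own.

```python
# Sketch of folding an enrichment vendor's response into your own record.
# The vendor response below is invented for illustration.

def fetch_enrichment(email):
    # Stand-in for a real API call, e.g. an HTTPS GET to the vendor's
    # email-enrichment endpoint with your API key.
    return {"first_seen_days_ago": 3,
            "linked_social_profiles": 0,
            "disposable_domain": True}

def enrich(transaction):
    enriched = dict(transaction)
    vendor_data = fetch_enrichment(transaction["email"])
    # Prefix the vendor's fields so they stay distinct from your own data
    enriched.update({f"enrich_{k}": v for k, v in vendor_data.items()})
    return enriched

record = enrich({"email": "new.user@example.com", "amount": 120})
# record now carries enrich_first_seen_days_ago, enrich_disposable_domain, etc.
```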

There are a huge number of data enrichment tools out there. Some focus on specific data points—email, device, IP, address, behavior—and those are the only ones your team can send them for enrichment, while others take a more holistic approach. Similarly, some provide only certain sorts of data in return: regardless of whether you send an email, phone, or device, you’ll receive further information on an associated social profile, or a credit score, or whatever their specialty is. Others provide a range of data types in response.

These can be extremely valuable in supplementing your own information, particularly with a new customer or with a customer adding or using new information. However, not every tool will be right for you.

You need to consider the return on investment (ROI): how much does any particular tool add to your accuracy? Many will allow you a trial period to determine this, or will accept a short-term contract initially so that you can test it out. Different companies have different needs. The fact that behavioral data was absolutely essential to fighting fraud when you were working at a bank doesn’t mean it’ll be as valuable once you’re working at an apparel retailer. You need to decide whether what you’re getting is worth the price you’re paying.

Similarly, some tools are stronger in certain regions than others, and you need to explore this before you sign. One particular tool might be great for North America but have virtually no coverage in Japan or Brazil. Depending on your audience, that may or may not matter.

There’s also the question of freshness. Since much of this kind of data comes from third parties, notably data brokers, ensuring that it’s fresh can be difficult. People move around, get new phone numbers, change companies, and update credit cards. Talking to others in the fraud prevention industry can be essential here: the community has knowledge to share and is usually very willing to do so. We encourage you to leverage this as a resource.

Consortium Model

Fraud fighters are unusually willing to collaborate with one another, including across companies and industries. This is in some ways a function of the nature of the job. Other departments, such as Marketing, Sales, Finance, Product, Logistics, and so on, are competing with other companies and their equivalent departments in those companies. Often, their success can spell annoyance or frustration for the equivalent departments in other organizations. In fraud prevention, this is not the case. Fraud fighters have a shared enemy: fraudsters. They’re competing, not against one another but against the common enemy.

As part of that ongoing battle, fraud fighters pool knowledge, sharing information about current trends, new fraudster techniques, and data points known to be associated with fraud. Much of this collaborative effort happens at conferences and industry events, and in regularly scheduled merchant calls. Direct data sharing sometimes happens very informally—via a group email with spreadsheets attached, for example—and sometimes indirectly, as when merchants in an industry prefer to all use the same fraud prevention vendor so that they can benefit indirectly from one another’s experiences.

Using a consortium model is a way to make the data sharing more formalized and direct. Various fraud prevention teams upload their fraud records to a centralized location, which all the teams can access and integrate as part of their own systems. You could think of it as a shared decline list, but on a large scale.
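
A simple way to picture the mechanics is a shared decline list keyed on hashed data points, so raw emails never leave the contributing company in the clear. The hashing scheme and in-memory storage below are illustrative; real consortiums define their own formats, governance, and matching rules.

```python
# Shared-decline-list sketch: members contribute fingerprints of
# confirmed-fraud data points and check incoming traffic against the pool.

import hashlib

def fingerprint(value):
    # Normalize first, then hash; every member must use the same scheme
    # or fingerprints from different companies will never match.
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()

shared_decline_list = set()

def report_fraud(email):
    """A member uploads a confirmed-fraud data point."""
    shared_decline_list.add(fingerprint(email))

def seen_in_consortium(email):
    """Any member checks an incoming data point against the pool."""
    return fingerprint(email) in shared_decline_list

report_fraud("Fraudster@Example.com")
seen_in_consortium("fraudster@example.com")   # True: normalization matches
```

Note that an unsalted hash of a low-entropy identifier like an email address is linkable and brute-forceable, so this is a matching convenience rather than real privacy protection—part of what motivates the privacy-enhancing techniques discussed later in this chapter.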

Some consortiums are run by third parties or a group of merchants purely in order to establish and maintain the consortium. Others evolve as a by-product of a fraud prevention solution; as a solution becomes used by many merchants, marketplaces, or banks, the solution sees and stores data and analysis from many different sources. Each company using the solution effectively benefits from that accidental consortium. Many fraud prevention services can testify that the network effect brought them to greatness, thanks to being able to see a single attacker moving between banks, retailers, advertisers, exchanges, and so on. DoubleVerify in adtech fraud prevention; Forter and Riskified in ecommerce protection; and Cyota (acquired by RSA and now spun off as Outseer), IBM Trusteer, and Biocatch in banking protection are just a few examples.

Cyota is a good example of the network effect in action. Uri Rivner, cofounder of AML innovation startup Regutize (and a cofounder of BioCatch), helped Cyota (in his role as vice president of international marketing) make the ideas behind risk-based authentication and the eFraud Network not just a reality but an accepted industry practice. Uri noted:

Cyota paved the way, and companies like BioCatch brought new fields of science into play right when the industry most needed them. BioCatch started by collecting behavioral biometric data such as mouse motion, typing patterns, and the way one holds and operates a mobile device. Initially the focus was on building behavioral profiles and flagging anomalies which might indicate someone else was operating inside the user’s online account, but after going live with the tech we discovered something amazing. Looking at the way cybercriminals were operating gave us a huge amount of data no one had ever seen before—and by working with large banks we had just enough examples of fraudulent behaviors to really understand fraudster interactions and behavioral patterns. This allowed us to model the typical behavior of fraudsters and the tools they operate, such as remote access, which, coupled with what we knew about the user behavior history, was 20 times more accurate in detecting fraud than just looking at the user profiles.

Uri gave the example of identifying remote access attacks: “When you control someone else’s device remotely over the internet, your hand–eye coordination gets awkward and delayed due to latency; given sufficiently sensitive behavioral analytics, it can be immediately detected.” Uri added that sharing criminal behavioral patterns across the industry was a huge boost to detection.

In the world of ecommerce, consortium models are equally helpful, since fraudsters like to reuse email accounts, phone numbers, and so forth across sites. This practice is aimed at maximizing their ROI (because setting up accounts takes time), so a consortium model can be an effective way for companies to protect their systems against data points that have already been burned on other sites.

In a way, buying into a consortium is like having a special kind of data enrichment particularly targeted to what you’re looking for and want to guard against.

Using consortium data

Consortium data can be powerful, especially when companies within a single industry all use the same consortium, as fraudsters often specialize in particular industries. However, there are some caveats that come with this model.

First, decline list data has a lag built in: you don’t discover that an email address is problematic until a chargeback comes in, which may be days, weeks, or months after the address was used. As fraud fighters say, “A decline list is a great way to catch last month’s fraudsters.” It’s potentially valuable, since one site’s fraudsters from last month may be yours today, but it’s potentially too late to be useful. You need to be aware of this, and not treat the consortium as a silver bullet.

Second, the consortium model can encourage teams to think about email addresses, phone numbers, and so on as “good” or “bad,” which is inaccurate and misleading. An email address is just an email address. It’s how it’s used that matters. Emails that end up compromised through account takeover (ATO), for instance, are not problematic in themselves. They just went through a “bad patch,” so to speak. Similarly with physical addresses, the fact that a place was used by fraudsters for a little while says nothing about the place itself. Maybe it’s an office or large apartment building with a concierge—most people in the building are legitimate customers, and you don’t want to tar them with the fraudsters’ brush. Maybe it was a drop house for a while (a place used by criminals as a delivery point or storage location for stolen or illegal goods), but now the fraudsters have moved on and legitimate customers live there. Perhaps the phone number used to belong to a criminal. It’s even possible that a credit card that has been compromised is still being used by the real customer, whose orders should be accepted. And so on.

In general, you can risk-score email addresses and phone numbers to help detect obvious risk issues (e.g., the email address is only one day old, or the phone number is a nonfixed Voice over IP [VoIP] number—i.e., just a free internet number).
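
A rough scoring pass over those signals might look like the following. The signals, weights, and thresholds are examples only—an assumption for illustration, to be tuned against your own data rather than adopted as given.

```python
# Illustrative risk scoring of email/phone signals: each signal adds
# points, and the total feeds into the wider decision.

def risk_score(signals):
    score = 0
    if signals.get("email_age_days", 9999) < 7:
        score += 40   # freshly created address
    if signals.get("phone_is_voip"):
        score += 30   # free internet number, trivial for a fraudster to get
    if signals.get("email_domain_disposable"):
        score += 30   # throwaway email provider
    return score

risk_score({"email_age_days": 1, "phone_is_voip": True})   # scores 70
```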

Data points are not bad. Users can be bad. Identities can be bad. It’s those you need to watch out for and identify. Consortium data can help you do that, as long as you don’t get confused about what it’s giving you. See more on this topic in Chapter 15.

There is also a third point regarding the consortium model that is more of a limitation than a caveat. Consortiums are useful for sharing decline list data: the records associated with fraudulent activity. In terms of privacy considerations and legal or regulatory restrictions, this usually falls comfortably into the category of preventing illegal activity (though not always). These same considerations, however, prevent most companies from sharing information relating to good customers in a similar fashion—even if they were willing to do so, which for competitive reasons most would not be.

The difference between the consortium model and the more standard data enrichment model lies in where the data goes. With data enrichment, when companies share their users’ data in order to learn more in connection with it, the data goes to a trusted third party: the data broker or third-party provider. In a consortium, it is shared more directly with other online businesses, some of which may be competitors. It is a virtue of the fraud industry that competing companies are willing to share fraud data with one another to collaborate against their common enemy, the fraudsters—but it does limit the nature of the collaboration, since it’s also important not to hand a competitor an advantage by sharing data not directly related to fraudsters.

Providerless consortiums

An interesting alternative has been developed very recently as part of what technology research and consulting company Gartner calls the privacy-enhancing computation trend, so called because it draws on privacy-enhancing technology (PET). In this model, the consortium can pool all kinds of knowledge—regarding both good and bad users—because none of the personal user data is actually shared with other companies or any third party. For this reason, the trend is sometimes referred to as providerless since the third-party provider is removed from the equation. The sensitive user data does not leave the possession of the company trying to verify it.

This form of consortium relies on a privacy-enhancing technique such as homomorphic encryption, multiparty computation, or zero-knowledge proofs. An interesting paper from the World Economic Forum goes into the details of how each of those techniques works and gives examples of their uses in financial services, so you can check that out for more information. But the basic idea is not hard to grasp.

Imagine that you and a friend wanted to see whether your bank cards had the same CVV number (the three-digit security code on the back). There’s something like a 1:1,000 chance that you do, so it’s by no means impossible. You don’t want to tell each other what your number is, since you are fraud analysts and know how risky this would be. You could tell a trusted third party—but you really would have to trust them, and being fraud analysts, you err on the side of caution when it comes to trust and safety.

One idea would be for you to roll dice together a number of times, and add or multiply the rolls to come up with a huge random number. You use a calculator to add that huge number to your CVV, resulting in an even larger number. You can now both tell that very large number to a third party, who can tell you whether you have a match.

The third party gets no information beyond match/no match; they cannot learn your CVV numbers, because they do not know the random number you and your friend got from the dice rolls. You and your friend cannot learn each other’s CVVs (unless it is a match, of course), because you don’t tell each other your final number. This is an admittedly much simplified version of the kinds of privacy-enhancing technologies that can enable companies to see whether user data they’re seeing is trusted by—or conversely, considered fraudulent by—the other companies in the consortium.
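
The dice-roll scheme can be written out in a few lines. Keep in mind that it is simplified to the point of being insecure—the third party here could, for instance, learn the difference between the two secrets—so treat it as an illustration of blinding with a shared random offset, not a real protocol.

```python
# Toy blinding sketch of the CVV example: both parties add the same
# privately agreed random offset, and the third party compares only
# the blinded values.

import secrets

def blind(cvv, shared_offset):
    return cvv + shared_offset

def third_party_compare(blinded_a, blinded_b):
    # Sees only blinded values; learns match/no-match and nothing else
    return blinded_a == blinded_b

offset = secrets.randbelow(10**12)   # the "dice rolls" agreed in private
third_party_compare(blind(123, offset), blind(123, offset))   # True
third_party_compare(blind(123, offset), blind(456, offset))   # False
```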

The providerless consortium model is still new, but it has already found real-life expression in Identiq, an identity validation network that enables companies to leverage one another’s data to validate identities without sharing any personal user data at all. Other companies are also considering the ways PETs may be used within identity validation or fraud prevention. (Full disclosure: Shoshana Maraney, one of the authors of this book, currently works at Identiq and is intrigued by the collaborative possibilities the providerless trend represents for the fraud prevention community.)

The providerless approach is an interesting refinement on the data enrichment and consortium tools, particularly in the context of increasing data privacy regulation around the world. It also offers interesting possibilities with regard to pooling knowledge about which customers can be trusted rather than just which can’t.

Building a Research Analytics Team

To make the most of the solutions and tools you choose, you’ll need a capable research analytics team to make sure you’re always tailoring to the specific needs of your business and the behaviors of your customers. Even for a fairly small team, you need to start off with a couple of real domain experts: people who have been fighting fraud for some time and have a good, broad understanding of both the granular level—what to look for when reviewing individual transactions—and the macro level—seeing which trends have wide impact and putting that knowledge to use to protect the business. With fraud research and analytics, two is always better than one; fraud analysts benefit enormously from being able to check intuitions against each other, brainstorm, and work through challenges together.

As long as your team is guided by experienced professionals, you can recruit other team members for junior positions. Experience working with data is a plus, but statistical expertise isn’t necessary as long as candidates show aptitude and are willing to learn. Over time, you can train them to spot anomalies in the data your company sees and develop an intuition for when something isn’t right with a transaction.

It’s a good idea to start new employees off with manual reviews so that they build up an understanding of typical transactions and interactions with the site, as well as get to know the profile of your customers—in addition, of course, to getting a sense for the fraud attacks and fraudsters your team faces. However, it’s equally important to train them in gap analysis—that is, comparing true results with predictions, and sampling and then reviewing to find a root cause for any blind spots that caused gaps in performance. Encourage the team to think about what could be changed in your models to improve the system’s ability to both catch fraud and avoid friction. Fraud analysis is not rote work; you want to train analysts to look for patterns outside individual transactions, seek out ways to corroborate and leverage that knowledge, and build the insights gained into your system.
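
A gap-analysis pass can be as simple as comparing predicted labels with actual outcomes (e.g., chargebacks received) and pulling a sample of the misses for analyst review. The field names and sampling approach below are illustrative.

```python
# Gap-analysis sketch: measure both kinds of miss, then queue a sample
# of each for root-cause review by analysts.

def gap_analysis(decisions, sample_size=5):
    """decisions: list of dicts with 'id', 'predicted', and 'actual' labels."""
    missed_fraud = [d for d in decisions
                    if d["predicted"] == "legit" and d["actual"] == "fraud"]
    false_positives = [d for d in decisions
                       if d["predicted"] == "fraud" and d["actual"] == "legit"]
    return {
        "missed_fraud_rate": len(missed_fraud) / len(decisions),
        "false_positive_rate": len(false_positives) / len(decisions),
        # A sample of each gap goes to analysts to hunt for blind spots
        "review_queue": (missed_fraud + false_positives)[:sample_size],
    }

results = gap_analysis([
    {"id": 1, "predicted": "legit", "actual": "fraud"},
    {"id": 2, "predicted": "fraud", "actual": "legit"},
    {"id": 3, "predicted": "legit", "actual": "legit"},
    {"id": 4, "predicted": "fraud", "actual": "fraud"},
])
# Both gap rates here come out to 0.25, with two cases queued for review
```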

In terms of team culture, encouraging creativity is as important as the more obvious virtues of data analysis and careful investigation. You want your team to think about different kinds of data sources they could use to confirm or reject hypotheses, brainstorm new ways to put existing data or tools to use, and be able to balance diverse possibilities in their minds at once.

For this reason, it’s important not to insist that fraud analysts be consistently conservative. It’s true that chargebacks must be kept low, but there’s always a small amount of maneuvering room to try out new tools or techniques that could, if successful, improve your results, even if sometimes you’re unlucky and they backfire. Equally, if you consistently make analysts focus on the transactions they miss—the chargebacks they didn’t stop—they’ll become very conservative and your approval rate will go down. (Fraud managers, you can experiment to see whether this holds true for your team, if you like. Anecdotally, the results seem pretty consistent. An exclusive focus on chargebacks for your team is not good for a company’s sales.) Teams must focus on preventing false positives as well as chargebacks to keep the balance.

In the same way, team structure should be kept as flat as possible; stringent hierarchies limit employees’ willingness to experiment and suggest new ways of doing things. It’s also important to remind team members of the positive side of the job (helping smooth customer journeys, protecting the business from loss, solving tough problems) if the negative side of seeing so much criminal activity seems to get them down. This is most relevant in companies that are more likely to hear from victims of fraud, including banks, cryptocurrency companies, and gift card companies, but can be a challenge in other industries as well.

Within this context, it’s important to mention the value of bottom-up analysis, as explained in Ohad Samet’s book Introduction to Online Payments Risk Management (O’Reilly). The world of online payments has evolved considerably since that book was published, but the key tenets of training and approach described for fraud teams are just as relevant today as they were when the book was written. Samet lays out the importance of inductive research and reasoning, with fraud analysts being taught to sample many case studies (of transactions, logins, account creations, etc.—whatever is relevant for your business) and then to tease out both the legitimate and the fraudulent stories for each one, matching the data that can be seen. Finding more sources to support or refute each possibility is the natural next step. From there, fraud analysts can draw on their case study experience to suggest large-scale heuristics that can be checked against the company’s database.

It’s particularly important to draw attention to this bottom-up kind of analysis because the top-down model, using a regression-based approach, is in many ways more instinctively obvious within the fraud-fighting use case. Companies, after all, have so much data—so what does it tell you? What does it lead you to deduce or plan? The top-down approach is necessary, of course, and we’ll mention it in Chapter 5. But the fact is that often, fraud happens in small volumes, and fraudsters are always trying to mimic the behavior of good customers.

You need to balance out both of those challenges, and the best way to do it is by using your human resources most effectively, including their creative abilities. As Uri Arad, vice president of product at Identiq, puts it in his interview on the Fraudology podcast, drawing on nearly a decade of fighting fraud at PayPal:2

The data-based approach, with machine learning and statistics, is great at giving you the big picture. And the story-based approach, with people digging into individual cases, is great at giving you the insight into the details that we need to really understand what’s going on. When you put the two together, that’s extremely powerful.

Collaborating with Customer Support

Working in sync with and supporting other departments in your organization is important generally speaking, but the fraud prevention team often has a special relationship with customer support—and where it doesn’t, it probably should.

Customer support is on the front lines of consumer interaction with your business. That also means they’re the most likely to be in direct contact with the fraudsters trying to steal from your business. Customer support training is more likely to focus on company policy and customer enablement, ensuring customers get a good experience, than it is on identifying and blocking fraudsters. Fraud departments should ensure that this important element is covered as well, and updated regularly in line with developing fraud trends.

There are two parts to this collaboration. First, fraud fighters can help customer support representatives understand the tricks fraudsters are likely to play on them, from calling up to change an address after a transaction has been approved, to professional refund fraud, to ATO attempts. Representatives who aren’t trained not to give away sensitive user information or even company information, such as which systems are used internally, may become a weak link in the security and fraud prevention chain. Hardening against attacks at the customer support level protects the whole business from fraud and from security attacks more generally.

Second, if a tight feedback loop is set up, customer support experiences can feed into fraud teams’ knowledge of customers and trends. Companies that are not set up to make the connections in this way may go for months or even years without realizing that they’re suffering a serious refund fraud attack, for example, because the information that shows it (which may include representatives being able to recognize certain fraudsters’ voices on the phone and the scripts they use) stays within customer support and isn’t integrated into the systems of knowledge belonging to the fraud prevention team.

Measuring Loss and Impact

As we said in Chapter 3, once upon a time, fraud prevention teams were measured on how low they could keep the company’s fraud chargebacks. The only relevant key performance indicator (KPI) was the number of fraud chargebacks received—usually measured in dollars, though sometimes by percentage of transactions. There’s a compelling logic to it. These chargebacks are the most obvious fraud cost to an ecommerce business in particular. The rules from the card networks support this approach as well; companies that see their chargebacks rise above 1% are, in ordinary circumstances, likely to see consequences leading to probationary terms, fines, or even an inability to process certain card brands.

In fact, though, measuring the company’s true fraud losses and the impact of the fraud-fighting team is more complex, as many companies have come to realize in recent years. This has made setting KPIs, measuring loss, and measuring the fraud prevention team’s impact all the more difficult—not least because part of doing this effectively involves ensuring that upper management decision-makers understand fraud, fraud prevention, and the relevant context.

Companies nowadays usually don’t want to keep chargebacks to an absolute minimum. Of course, it’s crucial to stay well below the chargeback thresholds set by the card companies, with a comfortable margin of error in case you’re suddenly hit by an unexpected fraud ring or something similar, but there’s still a wide gap between this and aiming for absolute zero chargebacks. Overly focusing on minimizing chargebacks implies stringent policies that are likely causing high false positives—another form of loss to the business, and one widely agreed to be larger than fraud chargebacks, sometimes by quite some margin. False positives are, unfortunately, notoriously difficult to calculate, and doing so requires continual research into declined transactions and the willingness to let some “gray area” test cases through to see whether they turn out to be fraudulent.
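
A back-of-the-envelope comparison makes the trade-off vivid. The volumes, rates, and margin below are invented for illustration; the point is only that even a modest false positive rate can rival or exceed chargeback losses.

```python
# Illustrative loss comparison: chargebacks vs. false positives.

def fraud_costs(orders, avg_order, chargeback_rate, false_positive_rate,
                margin=0.15):
    chargeback_loss = orders * chargeback_rate * avg_order
    # Declined good customers cost you at least their margin; lifetime
    # value losses from alienated customers are usually worse.
    false_positive_loss = orders * false_positive_rate * avg_order * margin
    return chargeback_loss, false_positive_loss

cb, fp = fraud_costs(orders=100_000, avg_order=80,
                     chargeback_rate=0.004, false_positive_rate=0.03)
# Roughly $32,000 lost to chargebacks vs. $36,000 in margin lost to declines
```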

Tip

It’s crucial that upper management understand the trade-off between chargebacks and false positives, and that this understanding feed into the process of setting reasonable KPIs and measuring the team’s impact. Some education may be necessary here, and fraud prevention leaders should consider it an intrinsic part of their job. If avoiding false positives is to be a KPI for your team, the calculation behind it must be clearly defined.

When choosing which metrics the department should focus on, bear in mind that you can’t set KPIs in a vacuum. Does your company value precision or speed more highly? That will shape your policy for manual reviews. What balance should you strike between chargebacks and false positives? That’s intimately connected to the company’s focus on customer experience, as well as the nature of your product. What level of friction is acceptable? That depends on your market and your vertical. Setting realistic targets that match the priorities of the company as a whole requires educating upper management about how fraud and fraud prevention fit into the wider business, as well as discussing how they see your team’s role in supporting wider priorities.

Benchmarking against industry averages is also important. The fraud rates, challenges, and even chargebacks seen by a gift card marketplace will be very different from those seen by an apparel retailer. A bank would have an entirely different profile again—and neobanks (aka internet-only banks) versus traditional banks may have different norms and expectations. Anti–money laundering (AML) is another story altogether (and has a different kind of calculation regarding mistakes, relating to the regulatory requirements involved). You can’t realistically measure your own loss unless you understand it in the context of the wider industry of which you’re a part. If you have 0.6% fraud chargebacks in an industry that typically sees 0.3%, you’re in a very different position than a team with 0.6% chargebacks in an industry that typically sees 0.8% to 0.9%.

Unfortunately, benchmarks are often difficult to assess, since much of this information is the kind companies prefer to keep private. Surveys such as the Merchant Risk Council’s Global Fraud Survey (often carried out in conjunction with CyberSource) or the LexisNexis True Cost of Fraud report can give you reasonable insight into metrics across different industries, though there is a limit to how granular these surveys can be. Informal discussions with fraud fighters from other companies will also give you a useful sense of where you stand. This type of information is equally important when talking to and educating upper management.

Measuring impact is tricky as well. The value of a prevented fraudulent transaction is not the only amount involved here. Here are some other factors to consider:

  • Your fraud rate is lower than it would be if you were not protecting the business effectively. If your entire team went on holiday (or became worse at their jobs), fraudsters would discover it quickly and the fraud rate would climb; think of how fast fraud rings jump on a vulnerability once it’s been discovered, then imagine that at a larger scale and without correction. In other words, you’re saving the business far more than the amount of fraud you’re visibly stopping, because a great deal of potential fraud never comes your way when you make the business a hard target. This is difficult to measure, but extrapolating from examples of fraud rings can illustrate the point. So can chatter on fraudster forums: when word spreads that a company is trying to fill a number of open fraud prevention positions, fraudsters take it as a sign that the department is overstretched, and they’re likely to attack.

  • There are additional costs associated with most fraudulent transactions, particularly for physical goods, including logistics, the cost of replacing the item, and the item’s unavailability for other customers. This is what LexisNexis captures in its multiplier, which the company uses to calculate how much each dollar of fraud (or prevented fraud) actually represents to the business; it’s usually at least three times the amount of the direct loss through chargebacks. The same logic applies to online bank account opening: closing bogus accounts can cost far more in operational losses than the dollar amount of the direct fraud losses involved.

  • If your team protects against account-level fraud, such as account takeover, fake reviews, or collusion in a marketplace, you’re protecting the business’s reputation in meaningful and valuable ways and are undoubtedly having an impact, even if it’s one that’s hard to measure with a number. You can, however, provide numbers relating to fake reviews prevented, accounts protected from being hacked, and so on, and it is crucial that you do present these figures to upper management. When your impact extends well beyond checkout, it’s important that this is visible. There may be related KPIs you want to consider that are based on these sorts of metrics.
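The multiplier point above lends itself to a quick back-of-the-envelope sketch. The 3.0 figure below is a placeholder assumption, not a published number; the actual multiplier varies by industry and year, so it should be replaced with the current value for your vertical from the relevant report:

```python
# Back-of-the-envelope "true cost" of fraud using a cost multiplier.
# The multiplier of 3.0 is a placeholder assumption, not a published figure.

def true_cost_of_fraud(chargeback_dollars, multiplier=3.0):
    # Shipping, fulfillment, replacement inventory, card network fees,
    # and staff time all compound the face value of each fraud dollar.
    return chargeback_dollars * multiplier

# $50,000 in chargebacks may represent roughly three times that in real loss:
estimated_loss = true_cost_of_fraud(50_000)
```

Presenting prevented fraud in multiplied terms, with the multiplier’s source stated, is usually far more persuasive to upper management than raw chargeback dollars.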

Justifying the Cost of Fraud Prevention Investment

It can be frustrating, but the reality is that you’re always going to have to justify your team’s head count, budget, and investment in new tools or technologies. Even if you know you’re doing a great job and compare favorably to the rest of your industry, your upper managers likely don’t know that. And even if they do, they’ll need to be able to defend that to board members, shareholders, and other stakeholders.

First and foremost, you need numbers. Here are just some of the essential figures:

  • Your fraud rate, or the number of attacks you’re seeing as a percentage of overall transactions or activities. You may want to break this down into types of attack.

  • The number of fraudulent attempts you stop, both as a percentage of the total number of attacks and in dollar value.

  • The exposure dollar amount versus the actual losses (e.g., you had $5 million in potential transaction losses, but thanks to your fraud team and your tools, actual losses were only $75,000).

  • Your chargeback rate.

  • Your successful chargeback dispute rate.

  • Your manual review rate, or how many transactions or activities you manually review, as a percentage of total transactions or activities.

  • The percentage of manually reviewed cases that are approved.

  • The average speed of manual review.

  • If relevant, figures relating to account-level abuses such as coupon abuse, wire fraud losses, peer-to-peer (P2P) fraud losses, fake reviews, and more that harm the business’s bottom line and/or reputation.
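Most of the figures above fall straight out of aggregation over your decision logs. As a minimal sketch, here is one hypothetical way to compute a few of them; the record fields (status, amount, reviewed, approved) are assumptions about your logging schema, not a standard:

```python
# Sketch: deriving a few headline KPIs from a list of decision records.
# Record fields are hypothetical; adapt them to your own schema.

def kpi_summary(transactions):
    total = len(transactions)
    reviewed = [t for t in transactions if t["reviewed"]]
    chargebacks = [t for t in transactions if t["status"] == "chargeback"]
    return {
        "chargeback_rate": len(chargebacks) / total,
        "chargeback_dollars": sum(t["amount"] for t in chargebacks),
        "manual_review_rate": len(reviewed) / total,
        "review_approval_rate": (
            sum(1 for t in reviewed if t["approved"]) / len(reviewed)
            if reviewed else 0.0
        ),
    }
```

However you compute them, the important thing is that the definitions are written down and stable, so the numbers you present each quarter are comparable to the last.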

What you want to do is convey how much you’re saving the business every year (or quarter, as the case may be). You need your execs to see your work in the context of the bigger picture. Get them to imagine what life would be like without a fraud team—because it wouldn’t be pretty.

Once you’ve set that scene, you can tie it back to your tools and head count. If your manual review team is working quickly and furiously (and accurately), present numbers for how it would be if that team were smaller or, if you’re angling for increasing the head count, larger. If your hybrid system enabled you to support the company’s entering a new market with low chargebacks, low friction, and low false positives, make sure credit is given to that system (and to your team for choosing it). If you want a new tool, measure what you estimate the results would be in the relevant area if you did have it.

Some of this is an annual, biannual, or quarterly exercise. But to lay the groundwork for success, you need to make sure there’s an ongoing educational effort reaching out to the whole company, and especially to upper management. You can’t afford to let that slip.

Interdepartmental Relations

Fraud prevention departments often operate in something of a silo within their organization. The approach and the kind of work that’s integral to the job can seem very foreign to other parts of the company. The exceptions may be the Trust and Safety department and the Cybersecurity department, and it may be worth investing in close relationships with these teams, which face similar opponents, concerns, and attacks. As Uri Lapidot, senior product manager of risk at Intuit, said:

Fraud teams may well have valuable context to share which can help cybersecurity and trust and safety teams, and vice versa. More than that, the teams often have overlapping interests and priorities, and working together can make [the] best use of each department’s resources and knowledge. Keeping in close contact, with regularly scheduled syncs, is important for everyone involved.

The distance between fraud prevention and departments outside the cybersecurity or trust and safety realms is a problem, though. You can’t stay in your comfort zone. Fraud fighters should not underestimate the importance of interdepartmental relations.

If others in your organization don’t understand your work, what you do, or how you do it, that’s an opportunity for education. You can run lunch and learn sessions, or teach Fraud 101 in company onboarding classes for new employees. Fraud is fascinating, and not just to fraud analysts, as long as you relate the topic to your audience and the company and use some real-life stories to illustrate your points effectively.

As we’ve said, much of fraud prevention involves finding the right balance (or the right compromise) between customer experience and aversion to risk. There are a lot of other departments that are affected by that decision, including marketing, sales, product, and customer support. If you ignore them, they’ll continue to grumble and think you’re a bunch of naysayers. If you involve them in the challenges and trade-offs and ensure over time that they understand the bigger picture from your perspective, they’ll join you in finding workable solutions. They’ll also start remembering to loop the Fraud Prevention department into discussions like entering a new market, or let you know in advance about a coupon program they’re rolling out or a flash sale they’re contemplating.

We’ve heard from a lot of fraud analysts that they’re often the last to know about this sort of thing, and that sometimes they learn about it too late, when false positives (or new fraud) have already spiked. To tweak Hanlon’s razor: never attribute to malice that which is adequately explained by ignorance. They didn’t let you know because they didn’t realize you needed to know. It’s part of your job to make sure they do realize it going forward.

Other departments don’t know much about fraud prevention. You need to educate them so that they can understand how their work relates to yours, believe that you’re working toward the same ultimate goal as they are, and want to work together to achieve success for the company.

As an illustration of why it’s so vital to develop rich, collaborative interdepartmental partnerships, Tal Yeshanov, head of risk and financial operations at Plastiq, talked about her experiences of how working with marketing teams has been so valuable to her fraud team’s success:

At the end of the day, your marketing and fraud teams are both trying to answer the question, “Who is this user?” The end goal is different, and the KPIs are different, which can obscure this important truth, but the fact is that sharing some data around the users such as IP, device, time zone, language settings, email, phone, account age, transaction volume, transaction velocity, etc., can help both teams excel at what they do. Once teams come to see that their work, and goals, are really not so different, they’re often eager to see what they can achieve together.

Fraud and marketing teams just have to work together. Marketing’s job is to bring awareness of product and feature offerings to users. When marketing teams launch campaigns, especially successful ones, it’ll mean users will come and transact. The marketing team will decide on the message/discount/promotion to offer, the timing of when to launch (around holidays usually), and the scope/size of the campaign (how often, how long, and to whom). All of these things will affect risk teams, and they should plan to work together to have a strategy in place to handle the increased transaction volume.

Tal also offered a few things to watch out for, to show how interlaced the marketing and fraud concerns really are:

The overall count of transactions will increase
This means more users will place orders. Make sure fraud teams are staffed and trained appropriately.
The dollar value of each transaction may increase
This means users may choose to spend more money to take advantage of/qualify for a promotion. Make sure to adapt rules, models, and workflows to account for this so that false positives are kept to a minimum.
The behavior of users will change
Maybe instead of buying one item they’ll buy three, instead of shipping to their own home they’ll ship to a friend, or instead of placing the order from their home, they might be doing it from the airport or a hotel (especially if it’s around the holidays at a time when folks are traveling). As we mentioned, fraud teams need to look at past trends and speak to marketing to make sure they account for these changes so that legitimate users don’t fall prey to the systems meant to catch the fraudsters.
The type of transaction may be different
Perhaps a user has only ever bought one sort of item, but now with an incentive the user may choose to branch out and buy different types of things. Make sure your fraud team and marketing team are both aware of what the other is doing.

Data Analysis Strategy

Data analysis makes or breaks fraud prevention efforts. It’s easy to focus on the immediate analysis needs: the transactions, logins, and so on flowing through the system now. There’s a vital place for that, and in many cases it’s what most members of the team will spend the most time on. But if you never look beyond that, it will take over your horizon, leaving you continually fighting fires. Strategy is important so that you understand the structure of how your team is approaching the challenges they face, and how to improve it for better results. It’s something you need to take time for, even when things are busy, because it might not be urgent, but it’s very important.

Depending on your priorities as a department and a company, you may take a different approach to data analysis strategy. But there are two points we want to highlight that are broadly relevant. The first is that you need to build into your quarterly plans the right connections between your automation and your human expertise.

These should not be run separately; you’ll get the best results if each one guides the other. For example, you will get the most out of your machine learning models if you have a domain expert regularly review the features they surface. Many of those features will be meaningful, but sometimes a feature will be random noise, or will reflect a trend you understand within a cultural or social context that the machine lacks. In those cases, you need the machine to either ignore the feature or modify it appropriately if it’s going to stop fraud without adding false positives.

Similarly, teams should schedule regular brainstorming sessions to explore, generate, and then test complex features related to cases too rare or too complicated for the model to notice. Too rare is self-explanatory; the material for noticing such cases may come from your own random sampling of cases or from collaboration with customer support teams. Too complicated is best shown by example: take internet cafes in developing nations. Flights booked from them are usually more likely to be fraudulent. But what if the person at the keyboard matches the persona you’ve built up of an international traveler? Then it’s actually a good sign. People are complicated. A model that receives too many mixed signals (both good and bad) will simply balance them out as nonindicative one way or the other, but a human expert can understand the information in context and make sure it’s used appropriately.

The second point we want to highlight is the importance of working with customer support teams as a part of your data analysis strategy specifically. With the right trusting relationship and regular contact, these teams can give you the best direction possible when you want to look for developing fraud trends. If you hear that customer support had a spate of customers who are supposed to be 80-year-old females but their phones are always answered by a young male, you can feed that knowledge back into your system and flag those transactions.

Work with the customer support team to agree on a list of suspicious occurrences and the fraud indicators they hear or see. Then add a button to their system that lets them report in real time whenever something like this happens by choosing the appropriate indicator from a simple drop-down menu. The easier you make it for them to help you, the more help you’ll get and the more data you’ll get. Your team can then look for patterns. It won’t be a huge amount of data and it won’t be the kind you could plug in to an automated model, but it will be enough and it will be chosen carefully enough to be worth your fraud team’s time. A domain expert will be able to work out whether it’s just a coincidence or the tip of a fraud iceberg.
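One hypothetical way to turn those button presses into leads is a simple tally that compares this period’s indicator counts against a rough baseline. The indicator names, baseline values, and spike threshold below are all illustrative assumptions:

```python
# Sketch: surfacing customer support indicators that spike above baseline.
# Indicator names, baselines, and the 2x threshold are invented examples.
from collections import Counter

def flag_spikes(reports, baseline, ratio=2.0):
    """reports: indicator names logged this week via the drop-down menu.
    baseline: typical weekly count for each indicator."""
    counts = Counter(reports)
    return {name: count for name, count in counts.items()
            if count >= ratio * baseline.get(name, 1)}

this_week = ["voice_age_mismatch"] * 9 + ["caller_unsure_of_order"] * 3
spikes = flag_spikes(this_week, baseline={"voice_age_mismatch": 2,
                                          "caller_unsure_of_order": 3})
# voice_age_mismatch, at 9 reports against a baseline of 2, merits a
# closer look from a domain expert; the other indicator is within norms.
```

The output of something this simple is a queue of leads for a human analyst, not an automated verdict, which is exactly the right scale for data this small.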

Tip

Make sure you notify the customer support team when their contributions have helped you. It’s great for building up a good relationship and makes it more likely they’ll want to help in the future. Plus, you never know: it might inspire one or two of them to become fraud analysts in time.

Fraud Tech Strategy

Your fraud tech strategy will vary enormously depending on your budget and the resources available to you for onboarding new tools. As with data analysis strategy, there are a few broadly relevant points we want to highlight.

First, your fraud tech strategy should be strategic. Don’t integrate a new tool just because it sounds fun and shiny and clever. It may be all of those things, but if it does something your business doesn’t really need, you’re just making your system more complicated for no reason. Even if your organization is willing to test new tech simply to see what’s out there, you should analyze the real weaknesses of your current setup and look for solutions to those, rather than for whatever sounds most exciting.

By contrast, teams who struggle to find the budget or resources for new technologies shouldn’t let that limitation stop them from being equally focused on where their weaknesses lie and investing time and research in tools that could have a measurable impact on them. Even if it takes well over a year to get the tool you knew you needed months ago, if it’s the right one for you, having to wait doesn’t make it less relevant once you’ve got it. And you do need to keep an eye on what’s out there so that once you have the opportunity, you can get what you need.

Second, make sure that when you’re designing your tech strategy you cover all the bases in your system. You need to be able to engage in rule-based tactical work so that you can stop specific fraudsters or rings right away or adapt on the fly to fast-changing circumstances. You may also want to have machine learning–powered tech, in which case you should make sure you also plan for the maintenance that goes with it, identifying new trends and related attributes. Within this context, remain agile. For instance, if you’ve invested heavily in your machine learning team and technology, and it’s working really well and your data science partners are helping you solve problems you’ve been worrying about for years, that’s great. But don’t forget that you also need the ability to use rules to adapt in the short term (as, for example, at the beginning of the COVID-19 pandemic, when things changed so quickly in so many ways). It’s better to write a simple rule to tide you over until your model can be trained than to rely entirely on your normally excellent machine learning system and be blindsided.
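That interplay can be sketched as a simple decision layer in which tactical rules run first and may short-circuit the model’s verdict. Everything here (the thresholds, the field names, and the example rule) is invented for illustration:

```python
# Sketch of a hybrid decision layer: a model score handles the steady
# state, while tactical rules (easy to add and retire) catch sudden
# shifts the model hasn't been retrained on. All values are invented.

def decide(txn, model_score, rules):
    # Tactical rules run first, so a fast-moving attack can be stopped
    # (or a sudden legitimate behavior shift allowed) before retraining.
    for rule in rules:
        verdict = rule(txn)
        if verdict is not None:
            return verdict
    if model_score > 0.9:
        return "decline"
    if model_score > 0.6:
        return "review"
    return "approve"

# A temporary rule written during a hypothetical attack wave:
def block_new_ring(txn):
    if txn.get("bin") == "999999" and txn.get("shipping_country") == "XX":
        return "decline"
    return None
```

The key design property is that the rule list is cheap to change the same day a trend appears, and just as cheap to remove once the model catches up.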

Make sure you have a variety of tools and approaches available to you so that you can use whichever tool is most appropriate for the task at hand. You may love your hammer, but that doesn’t mean every problem is a nail.

Third, when you’re considering your tech needs, remember to think about the whole customer journey, from account creation to login to transaction or action and more. If necessary, prioritize which elements are most crucial for extra support or tooling and address them first (but don’t forget the rest of them).

This is just as relevant for the parts of a payment that your company doesn’t control; make sure you understand the situation with your authentication flows, and if you’re losing out there, explore solutions that can help. As usual, there’s often a trade-off in terms of user friction, with greater friction being added to reduce false positives, and that should be a part of your wider understanding of company priorities regarding customer experience. Relatedly, if you’re interested in frustrating fraudsters by forcing them to provide (and thus burn) more data through authentication processes, that may be relevant to your tech strategy too.

Data Privacy Considerations

Fraud prevention and other work designed to detect and block criminal activity are exempted from many of the data privacy considerations that constrain other industries and departments. For example, Recital 47 of the EU’s General Data Protection Regulation (GDPR) notes, “The processing of personal data strictly necessary for the purposes of preventing fraud also constitutes a legitimate interest of the data controller concerned.” In a similar vein, the California Privacy Rights Act (CPRA) maintains that fraud prevention is an exception to the right to delete “to the extent the use of the consumer’s personal information is reasonably necessary and proportionate for those purposes.”

The lawmakers are, of course, following a compelling logic; failing to exempt fraud prevention from restrictions on sharing data would play into fraudsters’ hands by making it far harder to identify thieves at work, since each company would only be able to work with their own data, plus whatever additional information or clarification they could gain by working collaboratively with other groups using privacy enhancing technology of some form. Data enrichment tools, and in many cases, even third-party solution providers, would no longer be of use, crippling fraud detection efforts. In the same way, the right to insist that a company holding your data deletes it, as introduced by legislation like GDPR, is reasonably mitigated by fraud prevention needs, since if fraudsters could successfully demand the deletion of their data, they would be much harder to catch on their next attempt. Regulators are not in the business of making life easier for fraudsters (at least not intentionally).

For all these reasons, it seems likely that future data privacy legislation, including legislation governing data transfers between jurisdictions, will follow similar patterns in exempting fraud prevention from many of their demands. However, this does not mean fraud prevention departments are unaffected. As teams preparing for GDPR will recall, the structure and searchable nature of the databases used must be amenable to right-to-access requests, right-to-delete requests when appropriate, and so on. Procedures must also be put in place to enable fraud teams to determine when the right-to-delete requests may safely be complied with. Moreover, identity verification processes, necessary to ensure that the person requesting their data really is the person in question, may fall to the lot of the fraud prevention team.

Beyond this, you’ll note the words strictly necessary and reasonably necessary and proportionate used in the regulations quoted earlier. The interpretation of these words, and others like them elsewhere in the regulations, is hugely important in marking out what fraud fighters can and can’t do with and about users’ data. That’s a field day for lawyers, but as part of that discussion, fraud prevention teams need to be able to explain what data they use, why they need it, and what they do with it. This is also worth bearing in mind when considering new tools for data enrichment.

It is important that fraud-fighting teams work together with their company’s legal team to ensure that everything is being done not only in compliance with the relevant local laws, but also in ways that are likely to be future-proofed against coming changes. A thorough audit of which data is sent to which vendors and a shrewd analysis of the necessity of various relationships with data brokers may also be valuable. It’s important to know how your team stands with regard to data sharing and legislation in order to make your case convincingly should it be necessary to defend your data-sharing practices.

Identifying and Combating New Threats Without Undue Friction

Much of a fraud team’s day-to-day work is focused on the immediate moment: which activities or transactions are fraudulent, what patterns or fraud trends or techniques are showing up this week, and how to find the balance between friction and fraud prevention for the customers you have now.

However, research into new threats remains essential. First, if left undetected for long, a new trick can be enormously costly for the business. Second, panicked reactions to surprise threats when they are eventually discovered often result in high friction and lost business. Third, remembering the importance of demonstrating expertise to upper management, it’s important to show that you are on top of developments and not learning about a new threat months after the rest of the community has begun talking about it.

Research is all very good in theory, but it can fall by the wayside under the pressure of more urgent problems. To avoid this, it’s worth choosing specific individuals whose job is to investigate certain areas of your own data a set number of times per month or quarter, and nominating other team members to keep track of forums, articles, and newsletters in the fraud-fighting community. Regular weekly meetings for those who perform manual reviews are also valuable, enabling discussion that can bring new patterns to light. Making these activities fixed events on the team’s calendar will prevent their quiet disappearance.

When new threats are identified, of course, it is important to remember that detection and prevention should be carried out with as little friction as possible for good users. It is always tempting in these situations to overreact and go too far on the risk-averse side of the spectrum, so establishing a procedure for dealing with new threats, which includes the consideration of friction, may be worthwhile.

Keeping Up with New Fraud-Fighting Tools

Just as new fraud techniques evolve over time, vendors are continually developing new fraud-fighting tools to combat them. As with new threats, it’s worth having one or more people on your team whose job includes regularly researching new tools so that your team doesn’t miss out on the latest option that precisely matches the need you’ve recently discovered. Some particularly large fraud teams have one person dedicated to this role.

It’s important, of course, to assess each tool carefully both alone and in comparison to your existing tools and systems. How much will this new tool improve your accuracy in either increasing fraud detection or reducing friction? Always test the results against your current setup. No matter how good a tool is, even if it comes highly recommended by trusted comrades in the industry, it might not be a good fit for your needs and risk profile.

Summary

This chapter outlined aspects of the framework that underlie a successful fraud prevention team. The system you use, the structure of your team, your relationships with other departments, data privacy considerations, and keeping up with new developments in both fraud and fraud prevention are important elements of creating and running effective fraud-fighting efforts in your company. The next chapter is, in a sense, a companion chapter to this one, exploring fraud prevention modeling options, challenges, and solutions. For many teams, this will be an equally essential fraud-fighting necessity, though the kind of model you use will depend on the challenges you face and the industry in which you’re operating.

1 John Kander and Fred Ebb, “Money,” in Cabaret, music by John Kander, lyrics by Fred Ebb, book by Joe Masteroff (1966).

2 Karisse Hendrick, “A 21st Century Approach to Enabling Merchant Collaboration (w/ Uri Arad at Identiq)”, June 10, 2021, in Fraudology, produced by Rolled Up Podcast Network, podcast.
