opinion

Artificial Intelligence Treats Risk Like Cancer

An embarrassing thing happened to me in Amsterdam. I’d just finished dinner with a new partner at a nice restaurant. OK … more expensive than nice, but you know what I mean. I grade food in Amsterdam on a curve.

We were getting to know each other, talking about where we came from and where we’re going. After the dessert the waiter brought the check. We split the bill: 167.35 euros for me, 167.35 euros for him. His card worked. Mine didn’t. WTF!

Bear in mind there was wine with each course … so I wasn’t at my sharpest when the bill arrived. I checked my balance on my bank’s mobile app. There was plenty of money in the account. Whatever. Not one of those euros was helping me.

I gave the waiter my Amex. It went through because … it always goes through.

It’s probably happened to you, too. A risk system prevents you from making a purchase. You go from enjoying yourself to rapid problem-solving mode. Not fun.

One of the biggest complaints I hear from our new partners is, “My old biller was scrubbing too hard!” In other words, the biller was stopping good transactions and preventing sales. It can happen. It was the reason my card wasn’t accepted at the restaurant in Amsterdam.

This summer Visa changed its rules. If “scrubbing too hard” to be under a 2 percent limit used to be annoying, then scrubbing to be under 1 percent can kill your business. How does a biller know which transactions to accept and which to block?

The early approach involved looking for patterns in data. Specialists would look at their data and come up with ideas to identify risk. “It looks like people in France chargeback a lot.” Programmers would query databases to find patterns. “Yes, it’s true. People in France chargeback more than average.” Then the programmers would write algorithms to identify and block those transactions.

Large billers also have risk analysts who manually review transactions looking for suspicious signs. Perhaps they could see that the same IP had been used to make 10 transactions with different cards in a short period of time. Then they could check whether those users had opened the confirmation email with the login data. If the emails had not been opened, the risk analyst could cancel those transactions.

The goal of a risk department is this: Find the smallest group with the highest percentage of bad guys. That may not make immediate sense. Risk wants to block as few transactions as possible. Ideally risk systems find all of the risky transactions in less than 1 percent of the total. Then they would not be blocking any good, non-risky transactions. Ideal. No one has reached that ideal but the best risk teams are moving closer towards it each day.
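
To make that goal concrete, here’s a toy calculation. The scores and labels below are invented for illustration; in practice a model assigns a risk score to every transaction:

```python
# Toy illustration of the risk goal: block the smallest share of
# transactions that captures the largest share of the bad ones.
# Scores and labels are invented for this example.

def block_stats(scores, is_bad, threshold):
    """Share of transactions blocked, and share of fraud caught, when
    everything scoring at or above `threshold` is blocked."""
    blocked = [s >= threshold for s in scores]
    caught = sum(1 for b, bad in zip(blocked, is_bad) if b and bad)
    return sum(blocked) / len(scores), caught / sum(is_bad)

scores = [0.90, 0.80, 0.20, 0.10, 0.70, 0.30, 0.05, 0.15, 0.60, 0.10]
is_bad = [1,    1,    0,    0,    1,    0,    0,    0,    0,    0]

blocked_share, fraud_caught = block_stats(scores, is_bad, threshold=0.65)
# Here blocking 30 percent of transactions catches 100 percent of the
# fraud. The ideal risk system would catch it all in under 1 percent.
```

The better the model’s scores, the smaller the blocked share needed to catch the same amount of fraud.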

One of our partners at Vendo comes from a long line of innovative doctors. His great-grandfather invented a dye that surgeons use to identify cancerous cells during an operation. It’s called “Terry's Polychrome Methylene Blue.” Before this dye, doctors would start cutting and they would cut out too much healthy tissue … just to be sure they had removed all of the cancer. Once they applied the dye, however, the cancer cells would identify themselves by changing color. The surgeon could make sure that he only cut out the cancer leaving as much of the healthy body as possible. That’s what risk is trying to do. Only cut out the cancer.

A false positive is identifying a good transaction as risky and either blocking it or refunding it. You want to do that as little as possible. That’s me in Amsterdam, not being able to buy with my regular card and switching to Amex. That’s the surgeon before the dye. That’s the biller doing risk the old way in a world that has changed completely.

A friend of mine died of cancer a few years ago. Her doctor told me that we don’t yet understand the disease. He said, “Once we do then we will be able to write down the cure on a single sheet of paper.” Today we have lots of treatments for risk. Many different approaches. But we don’t really understand it well enough to write the solution on one sheet of paper. Or do we?

Perhaps we do have a way of managing it that is as inexplicable and difficult to understand as the thing itself. A large insurance company recently spent tens of millions of dollars, hundreds of thousands of man hours and not a small amount of computing power to find a better way of evaluating medical risks and setting prices for their customers. A machine learning technique produced 20 percent better results than the next best approach.

In the end they went with the second best approach. Why? Because they wanted to be able to understand their model and they couldn’t understand what the machine was doing. It used a kind of alien intelligence. The humans couldn’t figure it out. So they destroyed the machine they feared. In the process they turned their backs on a 20 percent increase that would have made them the market leader.

How does artificial intelligence (AI) become intelligent? How does machine learning learn?

Just like a child. It senses its environment and tries to get what it wants. A baby wants food. It cries. It gets food. It learns that crying brings food.

In contrast, AI doesn’t want anything naturally. It has to be told what to want. You could think of this like instilling values in a child. We teach kids the golden rule, “Do unto others as you would have them do unto you.”

We tell the risk AI that it should maximize revenue within constraints. Low reversals (refunds, chargebacks, stolen card alerts, etc.) and high throughput of good transactions. It learns by trying different approaches. When it finds one that works it does more of it.
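
In miniature, “maximize revenue within constraints” looks like a threshold search: approve as much revenue as possible while keeping reversals under a limit. All the numbers below are invented, and a real risk AI searches a far richer space than a single score threshold:

```python
# Minimal sketch of "maximize revenue within constraints": try blocking
# thresholds and keep the one that approves the most revenue while the
# reversal ratio of approved transactions stays under a limit.
# All data is invented for illustration.

def pick_threshold(txns, reversal_limit=0.01):
    """txns: list of (risk_score, amount, was_reversed) tuples."""
    best = None
    for t in sorted({score for score, _, _ in txns}) + [1.1]:
        approved = [(amount, rev) for score, amount, rev in txns if score < t]
        if not approved:
            continue
        reversal_ratio = sum(rev for _, rev in approved) / len(approved)
        revenue = sum(amount for amount, rev in approved if not rev)
        if reversal_ratio <= reversal_limit and (best is None or revenue > best[1]):
            best = (t, revenue)
    return best

txns = [(0.1, 50, 0), (0.2, 30, 0), (0.3, 40, 0), (0.8, 20, 0), (0.9, 60, 1)]
threshold, revenue = pick_threshold(txns)
# Blocks only the reversed 0.9-score transaction: threshold 0.9, revenue 140.
```

The learning part is what the sketch leaves out: the AI keeps adjusting how it scores transactions as new results come in.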

What are some of the ways we trained the risk AI to perform risk tasks?

We started with linear regression. This one is familiar to anyone who has sold their home. A linear regression model compares your house with homes that have recently sold. It gives you the value of your house based on its features.

If your house has three bedrooms, was built less than 10 years ago and you have recently renovated your kitchen then your house would be worth X. Improving your landscaping would increase the price of your house by $20,000. If it only costs $10,000 you would do it. If you add a fourth bedroom it would add $30,000 to the value but the cost would be $50,000. Linear regression tells you not to do it.
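
As a sketch, a fitted linear model is just a weighted sum of features. The coefficients below are invented to mirror the example; a real model learns them from recent sales:

```python
# Toy linear model: price = intercept + weighted sum of features.
# All coefficients are invented; a real model would learn them from
# recent home sales.

BASE = 150_000  # intercept: baseline value with none of the features
COEF = {
    "bedrooms": 30_000,
    "renovated_kitchen": 15_000,
    "good_landscaping": 20_000,
    "built_last_10_years": 25_000,
}

def predict_price(features):
    return BASE + sum(COEF[name] * value for name, value in features.items())

house = {"bedrooms": 3, "renovated_kitchen": 1,
         "good_landscaping": 0, "built_last_10_years": 1}

before = predict_price(house)
after = predict_price({**house, "good_landscaping": 1})
uplift = after - before  # 20,000: worth doing if landscaping costs 10,000
```

The same arithmetic says no to the fourth bedroom: 30,000 of added value against 50,000 of cost.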

The primary advantage of the linear regression model is that it is understandable. However, the results weren’t that good when we tried the algorithm on past data. There were too many clean transactions that were seen as risky. When we used linear regression on 18 months of transactions it found 50 percent of risk in 30 percent of transactions.

That means that if you had a chargeback ratio of 1.4 percent (over the limit) and wanted to be at 0.7 percent (comfortably under the limit) then linear regression would cut your sales by 30 percent. Do you have 100 sales a day? With this approach you would be left with only 70 sales a day. No, that wasn’t going to work. The results on historical data were so bad we never even tested it on live transactions. We had to keep looking for smarter solutions.

We tried gradient boosting machines. Here’s Wikipedia on gradient boosting: “Gradient boosting is a machine learning technique for regression and classification problems which produces a prediction model in the form of an ensemble of weak prediction models, typically decision trees. It builds the model in a stage-wise fashion like other boosting methods do, and it generalizes them by allowing optimization of an arbitrary differentiable loss function.”
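
The stage-wise idea in that definition can be shown in miniature: each new weak model is fit to the residual errors of the ensemble built so far. The toy below uses one-split “stumps” and squared error on invented one-dimensional data; real libraries are far more sophisticated:

```python
# Bare-bones gradient boosting on invented 1-D data: each round fits a
# one-split "stump" to the residuals of the ensemble so far, then adds
# it with a small learning rate. Squared-error loss.

def fit_stump(xs, residuals):
    """Weak learner: the split on x that best reduces squared error."""
    best = None
    for split in xs:
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        err = sum((r - lmean) ** 2 for r in left) + sum((r - rmean) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x: lmean if x <= split else rmean

def boost(xs, ys, rounds=20, lr=0.5):
    stumps, pred = [], [0.0] * len(xs)
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

xs = [1, 2, 3, 4, 5, 6]
ys = [0, 0, 0, 1, 1, 1]   # say 1 = transaction was reversed
model = boost(xs, ys)     # predictions converge toward 0 and 1
```

Each round corrects what the previous rounds got wrong, which is the “boosting” in the name.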

It sounds good and complicated (it is!), but it still didn’t produce the results we wanted.

Next we tried random forest. This also uses a collection of decision trees. You’ve seen decision trees before. They have goofy ones in the back of every issue of Wired magazine. Your customer support people use them to decide when to give a refund or escalate. Here’s Wikipedia’s definition: “A decision tree is a decision support tool that uses a tree-like graph or model of decisions and their possible consequences, including chance event outcomes, resource costs, and utility. It is one way to display an algorithm.”

The “random” part is designed to avoid overfitting the data, that is, building an algorithm that works really well on past data but isn’t street smart about new data. We want a system that is constantly learning, and random forest looks at the results of collections of different decision trees to deal more flexibly with the changing reality of risk.

A decision tree relies on patterns that a human can spot. Having large numbers of decision trees that are built by the machine enables the AI to identify patterns that no human could ever see. This is the approach to AI we use today. However, it competes with other approaches and will certainly be replaced with new, improved AI-driven solutions in the future. It’s a never-ending process. We’ve been investing deeply in our risk AI for over three years and we’re still learning a lot. It’s a very long learning curve.
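
The bagging-and-voting idea can also be sketched in a few lines. Each tiny tree below trains on a bootstrap sample (a random resample of the data) and the forest predicts by majority vote; the data is invented, and real forests build deep trees over many features and also randomize which features each split considers:

```python
import random

# Toy random forest on invented 1-D data: many one-split trees, each
# trained on a bootstrap sample, vote on whether a transaction is risky.

def best_stump(sample):
    """One-split tree: the threshold that best separates the labels."""
    def split_result(split):
        left = [y for x, y in sample if x <= split]
        right = [y for x, y in sample if x > split]
        lvote = round(sum(left) / len(left)) if left else 0
        rvote = round(sum(right) / len(right)) if right else 0
        mistakes = sum(y != lvote for y in left) + sum(y != rvote for y in right)
        return mistakes, lvote, rvote
    split = min((x for x, _ in sample), key=lambda s: split_result(s)[0])
    _, lvote, rvote = split_result(split)
    return lambda x: lvote if x <= split else rvote

def train_forest(data, n_trees=101):
    trees = [best_stump([random.choice(data) for _ in data]) for _ in range(n_trees)]
    return lambda x: round(sum(tree(x) for tree in trees) / n_trees)  # majority vote

random.seed(0)
data = [(1, 0), (2, 0), (3, 0), (4, 1), (5, 1), (6, 1)]
model = train_forest(data)
```

No single tree is reliable, but the vote across many differently-trained trees is, which is why the ensemble resists overfitting.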

It is very costly to build a system that goes beyond human intelligence. There are three upfront costs. You have to gather large amounts of relevant data. You have to build teams that can work with it. You have to create tools and access tremendous amounts of computing power. All of those costs can be understood upfront, before starting the project. However, there is a fourth cost that is hidden. It is the cost of ignorance, of giving up control.

But how much conscious control do we exercise generally? Our brains perform a massive amount of unconscious calculations each day. When we are driving a car we look at oncoming traffic and decide whether to enter the lane. We measure the speed of oncoming cars, we estimate our car’s ability to accelerate, etc. We do all of this unconsciously. A self-driving car also does millions of calculations before deciding to enter traffic. We can’t explain the information we are processing fully… neither can the AI driving the self-driving car.

No one fully understands how AI makes each decision. We can’t understand it because it is beyond human understanding. We design it, we feed it data and we measure the results it produces. What happens inside the servers where the AI lives is a black box, literally and figuratively.

It’s nerve-wracking. We would much rather work with a system that we can understand fully. Other billers have simpler systems that they can understand. However, those systems produce inferior results. In today’s world of tighter risk restrictions we cannot afford the comfort of old ways.

Google has gone through a similar transition. They used to rely on algorithms that they could understand. In recent years they switched to AI. Why? Search results were 15 percent better. The choice was clear. Switch to AI or no longer be the king of search, de-throned by an AI upstart.

Why do we feel comfortable sharing our hard-won intellectual property? Because there’s little risk in sharing. Billers always keep their risk rules close to their chests. Have you ever wondered why? Because they don’t want fraudsters figuring out their rules and working around them to defraud clients.

Our head of analytics is French. He lives in Barcelona. Recently he had to make a payment for his French mobile phone account. He tried from Barcelona with a French credit card and was blocked. He used a proxy so that he would appear to be in France, re-attempted the transaction, and it was successful. Clearly the risk algorithm used by his French mobile carrier checked for card/location mismatch but not for proxy. That is exactly the kind of thing that billers don’t want you to know.

An AI doesn’t have fixed rules so we’re happy to talk about it. We used to have those rules. Back then we kept our mouths shut about what we were doing, for obvious reasons. Fraudsters focus their energies on systems they can reverse engineer. That’s only possible with simple, understandable risk systems. The best way for our industry to advance is with cutting-edge treatments for maximum health.

Thierry Arrondo is the managing director of Vendo, which develops artificial intelligence systems that allow merchants to dynamically set prices for each unique shopper.

Copyright © 2024 Adnet Media. All Rights Reserved. XBIZ is a trademark of Adnet Media.
Reproduction in whole or in part in any form or medium without express written permission is prohibited.
