For several years, some banks and online lenders have used automated customer service functions to help their customers with specific problems. One of the biggest traditional banks relying on A.I. for customer service functions is Capital One, the tenth largest United States bank, which has lately tried to cast itself as a subprime alternative to some of the other behemoths (the Spirit Airlines of the banks, perhaps): a place willing to offer credit cards to people with lower credit scores and checking accounts to small-dollar depositors. Part of keeping its own costs low has involved automating some customer service functions and encouraging customers to do most of their banking business online rather than at Capital One branches.
The customer service A.I. function Capital One implemented was the first of its kind. Its developer, Dr. Tanushree Luke, patented its design. It was named Eno, giving it a vaguely human spirit that was supposed to make real humans feel more comfortable interacting with it.
Eno was designed to engage customers in digital chats and use the information the customers provided to route them to specific services. If someone had a question about a credit card bill, for instance, the chat program could analyze that person’s account information, speech, and customer profile and decide exactly what to do next. Did the customer need to be sent to an internal collections team, or to a sales team that could handle some sort of service upgrade?
It might have seemed simple to create a program like this, but it was not. Like the voice automation that many companies adopted years earlier, the Eno chat function had to understand the wide array of words and phrases that different customers chose to use in their chats. Then it had to process that information and create its own responses to the customers’ questions. The responses had to feel like more than crudely matched pairings of answers and questions that would leave the customer trapped in a digital world of “frequently asked questions.” They had to feel like real responses. They had to make the whole interaction seem smooth, effective, and pleasant. The program had to send people to the right places, and it had to avoid inadvertently refusing service to particular kinds of customers, lest the bank run afoul of equal credit and anti-discrimination laws.
And in deciding how to answer customers’ questions, the algorithm did not just look at a small sample of customer histories or credit scores. It drew on data that connected customers’ preferred devices (smartphones versus laptops), the models of those devices, the kinds of cars the customers drove, even the colors of those cars. An ocean of data went into predicting what kinds of financial decisions each customer was most likely to make and how Capital One could maximize its own revenue accordingly. Dr. Luke and her team succeeded in this complex task, and other banks rushed to develop competing versions of their creation.
Capital One had taken a bold step in hiring Dr. Luke and getting her to design its product. Her background was in government work. She’d had jobs at the Department of Homeland Security and the Defense Department, where she was a technical lead on a project developed under the Defense Advanced Research Projects Agency, the laboratory for ultra-powerful new military technologies. She was, in short, no slouch, and she was proud of her work at Capital One.
In public appearances, Dr. Luke, who had a PhD in theoretical and mathematical physics from George Mason University, seemed, above all else, fearless. She was confident in her own brainpower, but it was more than that. She wasn’t afraid to talk about things that others around her didn’t seem to want to talk about.
Maybe it was because she was a woman in the vastly male world of computing and programming. Maybe it was because she was a Brown woman. Whatever it was, she cared about whether the things she was doing were right, just, and fair in a way that many other people working in her industry simply did not. She had long been outspoken about the dangers of hidden bias in algorithms and had emphasized that proper testing, along with diversity among the people actually writing new machine-learning programs, was essential.
At Capital One, she realized that the banking industry wasn’t just far behind other industries when it came to developing its own A.I. tools; A.I. was the banking industry’s veritable Wild West. Regulators did not know how to police it. Banks did not know whom to hire to create it or monitor it. Some of the people writing new code for banks had taught themselves to program by reading about it on the internet, which meant they understood far better how to get a program to follow steps A, B, and C than how to go back through a completed program and design a test that would reliably show whether it was working: doing exactly what it was supposed to do, and nothing more or less.
Wherever she went, Dr. Luke warned her listeners, whether they were coworkers, students, or peers in the tech industry, that, on the whole, not enough was being done to ensure that companies used A.I. for ethical purposes only, with plenty of safeguards against harmful unintended consequences.
That was her reputation when, in November 2019, another big bank, the Minneapolis-based U.S. Bank, announced with great fanfare that it had lured Dr. Luke away from Capital One to be its new head of A.I.
This is an excerpt from The White Wall: How Big Finance Bankrupts Black America by Emily Flitter. Copyright © 2022 by Emily Flitter. Reprinted by permission of Atria/One Signal Publishers.