In today’s episode, we cover the recent news that one of the biggest names in facial recognition AI, IBM, has put research in this area on hold because of concerns about the negative implications of this technology.

[Auto-generated transcript]

IBM Puts Facial Recognition on Hold

David Zweifler: [00:00:00] Artificial intelligence is impacting our lives in all kinds of ways. But to me, the most obvious impacts are happening in the area of facial recognition. Whether it’s sensors making personalized recommendations when you walk through the doors of a store, unlocking your car without ever having to touch your keys, or using your phone camera to figure out where you met that person whose face you just can’t put a name to. Put it all together and you realize that what was, until very recently, science fiction is now turning into science fact.

[00:00:38] Given that so much of the incredible promise of this technology is about to be realized, you have to wonder: why is a pioneer in this space, IBM, walking away? As with any great technology, along with the benefits, there are some very serious issues that need to be considered before there is widespread adoption of facial recognition, especially in the private sector.

[00:01:04] To provide a cybersecurity and fraud prevention perspective on this technology, today I’m speaking with Rajeev Yadav, who has more than 25 years of cybersecurity experience, to discuss IBM’s ethical concerns about the technology and how it might be used and misused in both private and public hands.

[00:01:25] Thanks for joining us today, Raj. So, right now there’s a lot of consideration being given to leveraging facial recognition and publicly available information, things like social media, to create, you know, a national or even a global identity database using facial recognition along with other data. What do you think are the long-term considerations there? Is that something we should be doing? Is that something that’s going to be helpful for preventing fraud?

Facial Recognition and The Private Sector

[00:02:03] Rajeev Yadav: [00:02:03] Yes, I think there are definitely good uses for it, especially when you look at the government or state space. You obviously have passports. We all put photos in our passports because the person at the check-in point has to look at your face and say it is indeed you. Right?

[00:02:21] And we use faces in so many documents to prove who we are. Motor vehicle records, school IDs, you name it. It’s part and parcel of identity proofing anywhere you go. So I think, you know, from a national security perspective, or a national registration database perspective, if governments have it and they need to use it, then it’s worth it. I think it’s definitely out there now.

[00:02:52] Can we do the same in the private sector? That’s where I think deep questions arise, especially given the drawbacks of these technologies. So the whole idea is: how do you get very accurate facial recognition, so that you can avoid pitfalls and serious consequences?

[00:03:19] Governments definitely have big uses for it, and quite rightfully in many cases, but not so much in the private sector.

[00:03:31] David Zweifler: [00:03:31] Looking at governments, I mean, the advantages of countries using it for visa control and things like that are big, but there are already governments that are using it for more nefarious purposes, like China, in terms of being able to identify protesters from facial recognition, or even from parts of their faces. Before we even get into the private sector, it seems like there are some big ethical questions that governments have to answer. Do you agree, and are these questions being addressed?

Challenges and Opportunities for State Use

[00:04:22] Rajeev Yadav: [00:04:22] Yeah, I agree. I think China has about 170 million CCTV cameras, if I’m not mistaken. It’s an outlier in our typical, you know, democratic versus non-democratic split, if you will. So this really begs the question of civil liberties and to what degree you want to protect them.

[00:04:44] In the Western world, you tend to see that they get protected quite a bit, and government overreach into people’s civil liberties is a big issue. And this is not to deny that governments can misuse it too, if it goes into the wrong hands. But there are also legitimate use cases, right?

[00:05:06] They still have to do anti-terrorism work. They still have to find missing persons. They still have to do drug enforcement. There are a lot of practical, good use cases where this does help government agencies. But of course, as with anything, it has drawbacks if you don’t control it properly.

[00:05:30] You know, the argument is that if you don’t control that data, government can become, you know, very big. But again, if you rewind history a hundred years, when photography became prevalent, we had the same challenges, right? Oh, gee, somebody can take my picture outside and put it in a newspaper, and so on.

[00:05:53] So, yeah, civilization has been maturing with these technologies as they have come in front of us. I don’t think it’s stoppable. I only think that we need to evaluate it and regulate it properly and look at its pros and cons. At the end of the day, it’s risk management, right? With any technology you have risks, and how effectively you manage those risks is really the bottom line.

Facial Recognition as a Retroactive Threat

[00:06:22] David Zweifler: [00:06:22] I’m not really buying photography as a comparable here. You might have had concerns about having your photo taken when that technology first became widespread, but now we’ve got hundreds, and in some cases thousands, of our photos online, and our children have had their photos put online, in many cases without consent, and that information can be used against them with technology that has been introduced much more recently. The photos are out there, and the technology, a lot of this AI, is already in the public domain for those who have a working knowledge of how to use it.

[00:07:17] We’re waiting on clearer ethical guidelines, but isn’t the genie already out of the bottle on this?

[00:07:25] Rajeev Yadav: [00:07:25] Amazon, Microsoft, and IBM, the big guns, recently pulled out of facial recognition technology, right? Or, more accurately, they have put a moratorium on it. You might have heard that news. They have done so primarily to avoid biases, civil liberties problems, and, you know, human resources issues, because these technologies don’t deliver a hundred percent accurate results.

Scary Efficient Facial Recognition in China

[00:07:48] So that’s the primary driver. Now, having said that, there are some very good use cases for this. You might have heard of the Chinese use case where they spotted a fugitive on the run in a stadium of 60,000 people.

[00:08:04] He was literally shocked when they caught him. He had been on the run for economic crimes for, like, 20 years or so, and they spotted him amid 60,000-plus people.

[00:08:17] And, you know, now you can extend the same thing to retail, to stores, for example. So, in our case, what do we try to prevent? Fraud. And there are shoplifters out there.

[00:08:31] Now assume that, just the way you have a registry of pedophiles, you have a record of shoplifters. Now these shoplifters go shopping. I’m not saying that you have to discourage them from buying. But potentially, as a shop owner, you have a right to an elevated level of alert, so that you do some extra due diligence to monitor the person. You could just be more vigilant. Again, I’m kind of treading into the privacy realm here…

[00:09:03] David Zweifler: [00:09:03] Yeah, sure. And civil liberties. I mean, right now, even if you’re a convicted felon, you can walk into a drugstore, pay cash, and go about your business, having paid your debt to society. But you could have been flagged for shoplifting, maybe not even convicted of shoplifting, 15 or 20 years prior, and you’re still walking around with the same face… I mean, it sounds a little scary.

Criminal History and Facial Recognition

[00:09:33] Rajeev Yadav: [00:09:33] No, it is, but it is a very good debate. It gets very hairy when you have repeat offenders. Have they recovered enough that they’re not going to commit the same offense again? Be it pedophilia, drug abuse, or shoplifting, people have certain habits, and you have to tread with caution. If they’re repeating the offense multiple times, don’t shop owners have the right to protect their assets, while at the same time not infringing on the civil liberties of the buyer in question?

[00:10:06] So I would do that, right? If there is somebody walking in with a very bad record from the past, I don’t want to discourage them. I will give the benefit of the doubt, but don’t you want to be alert about it, knowing what it is, and make sure it doesn’t do any undue harm to your environment and your other (inaudible) shoppers?

[00:10:30] That is a very good debate. It’s an ethical debate. I don’t know where the exact answer lies, to be honest with you. I think they both have liberties and freedoms, one to protect their assets and one to protect their choice to buy anything they want without judgment. So where do you draw the line?

[00:10:46] So it’s a very good debate. I don’t know the exact answer, to be honest. We should be able to empower both sides, so that they can exert their rights and protect those rights. But this is a very good use case.

Facial Recognition Benefits

[00:11:04] The other use case would be missing persons, identifying missing persons. We have thousands and thousands of missing persons across the globe on a daily basis.

[00:11:14] Same thing with, say, a blind person or someone with special needs. You want to bring them into social forums to engage in healthy conversation, and you can give them a device so that if somebody is smiling, the device responds differently. It vibrates in a certain mode, so they understand that, yes, the person on the other side is smiling. A blind person technically can’t see them, but now at least they have a cue to go on, right? I mean, these are very advanced use cases.
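As a rough illustration of that assistive use case, the sketch below maps a detected facial expression to a distinct vibration pattern. The detect_expression function is a hypothetical placeholder for a real expression classifier, and the haptic patterns are made up for the example.

```python
# Sketch of the assistive idea described above: translate a detected
# expression into a distinct haptic pattern a blind user can feel.
# detect_expression() is a hypothetical stand-in for a real
# facial-expression model; the patterns are illustrative only.
HAPTIC_PATTERNS_MS = {
    "smile":   [100, 50, 100],        # two short pulses
    "neutral": [200],                 # one steady pulse
    "frown":   [400, 100, 400, 100],  # longer, repeated pulses
}

def detect_expression(frame: bytes) -> str:
    """Hypothetical placeholder: a real device would run an expression
    classifier on the camera frame. Returns a fixed label here so the
    example stays self-contained."""
    return "smile"

def haptic_pattern_for(frame: bytes) -> list[int]:
    expression = detect_expression(frame)
    return HAPTIC_PATTERNS_MS.get(expression, HAPTIC_PATTERNS_MS["neutral"])

print(haptic_pattern_for(b""))  # [100, 50, 100] -> vibrate short, pause, short
```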

[00:11:44] David Zweifler: [00:11:44] Yeah, real-world use cases today. Identifying wanted felons who walk into a drugstore: that technology exists, it’s there. Being able to identify missing persons who might have been abducted as children: that’s there. There are all kinds of things it’s being used for today. And also some very scary use cases today.

[00:12:10] I was listening to you talk about China identifying the wanted criminal in a crowd of 60,000 people, and I was like, ‘Oh, please let it be a pedophile. Please let it be a murderer.’ But no, economic crimes. I mean, if police could look at a crowd of protesters and use facial recognition to identify people who have posted online that they want to abolish police funding, and single those people out for special treatment, that’s presumably a real-world use case today. Right?

Why The Pioneers Are Stepping Back

[00:12:45] Rajeev Yadav: [00:12:45] Yeah, totally. And those are the reasons why I think the bigger players are pulling out, because they see the downstream impact. You could easily start abusing that, especially with no clear ethical guidelines in place for those things.

[00:13:00] But staying on the topic of benefits: you talked about police use cases, so think about car chases. Typically they’ve had license plate readers through cameras, not facial recognition, right? So you know which car is passing which checkpoints.

Car Theft and Thumbprints

[00:13:17] So now, if people have stolen the car and replaced the license plate, you’re really out of luck, right? Because you don’t really know who is in that car. So facial recognition becomes another tool for you to correlate the story better. It does help in those cases.

[00:13:41] That’s why I said those are government use cases. They have genuine reasons to use it. But if not controlled, it could really get very bad. So, yeah, as with anything, right? Same thing with fingerprints, too.

[00:13:55] By the way, that’s one of the negatives. I realized it personally, but a lot of you may have experienced it, because facial recognition actually looks at your face. So I’m a big proponent of thumbprint readers. I think the fingerprint is much, much better, and it leaves your ears and eyes out of the equation in authentication and verification. I think it is much better technology, it is much more proven, and it’s much easier to use than facial recognition.

New Data to Steal

[00:14:29] David Zweifler: [00:14:29] Here’s a technical question. Ultimately a camera, computer vision, is looking at a thumbprint or a face and establishing mathematical relationships within the topography of a thumbprint or a face print or a retinal scan. It’s ultimately math, a very, very complex mathematical relationship.

[00:14:54] And so, if I have that right, presumably you have a very, very complex piece of identifying data that’s connected to your body, but ultimately it’s a piece of information that could be stolen, just like anything else. Right?
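To make that point concrete, here is a minimal sketch, in Python, of the idea David is describing: a face (or thumbprint) gets reduced to a stored template of numbers, and verification is just a similarity check against that template. The embed_face function is a hypothetical stand-in for a real embedding model, not any specific vendor’s implementation; the stored template is exactly the kind of data that could be stolen and replayed.

```python
# Minimal sketch: a biometric "template" is just a vector of numbers, and
# verification is a similarity check against the enrolled template.
import numpy as np

def embed_face(pixels: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a face-embedding model. A real system
    would run a trained neural network; here we derive a deterministic
    unit vector from the input so the example is self-contained."""
    rng = np.random.default_rng(seed=int(pixels.sum()) % (2**32))
    vec = rng.normal(size=128)
    return vec / np.linalg.norm(vec)

def verify(probe: np.ndarray, enrolled: np.ndarray,
           threshold: float = 0.6) -> bool:
    """Accept the probe if its cosine similarity to the stored template
    clears the threshold. The stored template is the sensitive artifact:
    if it leaks, it can be replayed like any other credential."""
    return float(np.dot(probe, enrolled)) >= threshold

enrolled_template = embed_face(np.ones((112, 112)))   # enrollment capture
later_capture = embed_face(np.ones((112, 112)))       # same face later
print(verify(later_capture, enrolled_template))       # True
```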

[00:15:12] Rajeev Yadav: [00:15:12] Totally, exactly. In facial recognition, the challenge, David, is how do you detect a live face? Liveness of the face is the key thing. For example, if you look at the threat vectors, people using high-definition photos, makeup, heat changes, infrared, this or that, all the ways you can dodge facial recognition. Can you still detect a fake versus a live face?

Proof of Liveness

[00:15:43] So liveness detection algorithms are the key instrument, and this is where the biases come into play. Are they really mature enough? Are they really advanced enough that they’re damn accurate, so we can use them with affirmative, beyond-doubt kind of reasoning? And the answer is no, it’s not there yet.

[00:16:03] Right. So that’s why the big guns have pulled out. And the reason why I like thumbprints better is because they leave your face and ears out of the equation. If you’re driving, you’re not going to get into an accident because you had to open the camera and look at the phone, and boom, there was a car stopped in front of you and you rear-ended it, right?
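Since liveness keeps coming up, here is a rough, illustrative sketch of the idea: combine several weak anti-spoofing cues, none authoritative on its own, and require a minimum combined score. The cues, weights, and thresholds below are assumptions for illustration, not a production detector.

```python
# Hedged sketch of liveness detection: several weak cues are combined,
# and a presented face is treated as live only if the total score clears
# a threshold. Cue values are assumed inputs; real systems derive them
# from video frames, depth sensors, or challenge-response prompts.
from dataclasses import dataclass

@dataclass
class LivenessCues:
    blink_detected: bool      # eyes closed and reopened across frames
    texture_variance: float   # printed photos tend to look flatter
    depth_range_mm: float     # a flat screen has near-zero depth range

def liveness_score(cues: LivenessCues) -> float:
    score = 0.0
    if cues.blink_detected:
        score += 0.4
    if cues.texture_variance > 15.0:   # illustrative threshold
        score += 0.3
    if cues.depth_range_mm > 20.0:     # illustrative threshold
        score += 0.3
    return score

def is_live(cues: LivenessCues, threshold: float = 0.7) -> bool:
    return liveness_score(cues) >= threshold

# A high-definition photo held up to the camera: no blink, flat, no depth.
print(is_live(LivenessCues(blink_detected=False,
                           texture_variance=4.0,
                           depth_range_mm=1.0)))   # False
```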

[00:16:23] David Zweifler: [00:16:23] I mean, presumably you could have a dash cam that’s pointed at you all the time behind the wheel, right? It could be mounted on the hub of your steering wheel, looking at you the entire time you’re driving, and you’d never have to look up. But, yeah, I hear what you’re saying.

Facial Recognition and Fraud Prevention

[00:16:42] Obviously, there are a lot of immediate benefits from a fraud prevention point of view, not the least of which is knowing whether the face of the person in your store who is handing you a credit card matches the identity connected to that credit card. I’m sure we could rattle off dozens of other benefits.

[00:17:03] Is there a special hazard around online fraud that is currently connected to biometrics, and specifically facial recognition, or is it still early days for that?

[00:17:20] Rajeev Yadav: [00:17:20] It’s at least early days in North America, you can say, but remember CNP, the card-not-present situation, which is pretty much online, right?

[00:17:27] So we’re all buying online without physically displaying our credit cards to the merchants, right? We just type in our credit card number, boom, you get the items shipped to you. China has a merchant, a provider, where basically you just walk in and you pay by facial recognition, right?

[00:17:47] So they don’t have the card, et cetera. They are removing the barrier so that you don’t have to carry credit cards with you. Those kinds of situations will pop up. And same thing, you’re seeing Amazon do it with its Go grocery model, with Whole Foods, et cetera, where you walk in, you pick up the goods, and you walk out. No transaction happens at a register; automatically you get a bill based on what you took out of the store.

A New Way to Pay

[00:18:13] So what you’re looking at in the store is really driven by a bunch of algorithms behind the shelves. And they’re all doing pattern recognition, or picture recognition, along with weights, for example, in grocery stores.

[00:18:28] Right? So both of those transactions are happening, but staying focused on that… they did pretty amazing testing on it. You could try to hack around the shelf, looking at the cereal box, swapping it with another one, mixing and matching, but there are so many variables to dodge on the shelves.

[00:18:49] It still figured out that you walked out with this product, as opposed to whatever you tried in playing around to defeat the system. So that has gotten pretty accurate, to be honest, and it’s driven by pattern matching, pixel matching, and weights, because every product has a certain weight, and if that shelf’s weight changes, you know exactly what’s going on over there and you can corroborate it.
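A hypothetical sketch of that corroboration step: the vision system’s guess about which item left the shelf is cross-checked against the weight the shelf actually lost. The catalog, weights, and tolerance below are illustrative assumptions, not Amazon’s actual logic.

```python
# Sketch of vision-plus-weight corroboration: bill the product whose
# catalog weight best matches the measured shelf change, and flag the
# event for review if neither camera nor scale gives a clean answer.
CATALOG_WEIGHTS_G = {
    "cereal_box_large": 500.0,
    "cereal_box_small": 350.0,
    "granola_bar_pack": 210.0,
}

def corroborate(vision_guess: str, shelf_weight_delta_g: float,
                tolerance_g: float = 25.0) -> str:
    best = min(CATALOG_WEIGHTS_G,
               key=lambda p: abs(CATALOG_WEIGHTS_G[p] - shelf_weight_delta_g))
    if best == vision_guess:
        return vision_guess
    if abs(CATALOG_WEIGHTS_G[best] - shelf_weight_delta_g) <= tolerance_g:
        return best            # trust the scale when the camera was fooled
    return "needs_review"      # neither signal is conclusive

# Shopper swaps boxes: the camera saw the small box picked up, but the
# shelf lost roughly 500 g, so the large box is the one billed.
print(corroborate("cereal_box_small", 498.0))  # "cereal_box_large"
```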

[00:19:13] But it’s still early days, I would say. I think the biggest risk is, just like credit card data banks, right? You can go on the dark web and start buying credit cards for, let’s say, ten bucks or two bucks, depending upon how much net value they hold.

[00:19:28] Similarly, you’re going to start being able to buy face prints. We call them face print databases, because with 3D printing and all, you could start spoofing David’s face, for example. And if that Chinese merchant use case becomes prevalent worldwide, now you can buy goods just by walking into a store.

Losing Face At The Cash Register

[00:19:47] Right. So I just spoofed David, and boom, David gets the bill. Those things will become prevalent, but the reality is that, if you look at current data, the benefits far outweigh the risks right now with facial recognition.

[00:20:04] So, for example, if you go to an ATM, you have ATM skimming. All these fraudsters try to skim, because they have superficial card readers installed on top of the real ATM card reader, and they’re skimming credit card numbers. Now with facial recognition, they have another barrier to skimming, right? And if your liveness detection is pretty accurate, the algorithm will catch that and deter it.

[00:20:33] So now you have reduced ATM skimming fraud quite a bit. I think it’s still too early, to your point, but there are more benefits being observed on the finance side than drawbacks. It’s too early in some use cases, I would say, yes; the Chinese merchant model is not prevalent yet. But as more 3D printers become available, face prints will join credit cards as another target attack vector.

[00:21:08] David Zweifler: [00:21:08] I don’t know whether this just seems more accessible because I see it online a lot, but deepfakes. Right now, it seems like somebody would have the ability to use something as simple as a photograph, plus a video where I’m speaking or one of my podcasts, to create a video message that looks and sounds like David Zweifler and could be sent to a relative asking for funds. Right? I mean, does…

[00:21:42] Rajeev Yadav: [00:21:42] Defamation. Extortion. Identity fraud. They’re all very probable frauds, or scams, if you will. One leads into another. The whole idea is to get your money through fraudulent activities. So they’re going to try to scam people.

Spoofing Faces and Voices

[00:21:58] Now imagine, with David’s voice and his face, I’m actually able to spoof David in a very compromised position, and I’m going to extort him for money. And I can use his identity, where he works, what he does. I start profiling him much more deeply, customize the attack vectors, and go after all his loved ones, saying, this is what David is doing.

[00:22:24] That leads into defamation and extortion, along with very customized scams, along with harassment, right? You could just as easily harass people based on these deepfakes.

[00:22:36] David Zweifler: [00:22:36] Is the anti-fraud technology ready for this?

[00:22:42] Rajeev Yadav: [00:22:42] Ours is, and not to sell only ours, but the idea is that you just don’t rely on one element of identity verification.

[00:22:50] You’ve got to have thousands, and we’ve got more than 2,000, 3,000 identity attributes in real time. So even though David is getting spoofed here, we know that it’s not David, because we can see so many other data attributes of David showing that, oh, he’s still at home, or he’s still here. This person is not really David; he’s an imposter. So it’s here today.

[00:23:15] David Zweifler: [00:23:15] So in a way, the data complexity… I mean, the AI is potentially creating very convincing forms of identity verification. But at the same time, the complexity, all of the different data points taken together, still makes it relatively straightforward, not easy, but straightforward, to identify fraud and fraudsters.

Data Complexity Is Your Friend

[00:23:42] Rajeev Yadav: [00:23:42] Yes. I would say don’t just rely on facial recognition to stop fraud. You just can’t; that’s just one element. You’ve got to look at the whole trail of the identity. The more variables you throw at it, the harder it is for fraudsters to spoof all of them in real time.

[00:24:01] Remember, even with face detection, the whole idea is liveness of the face. Can you confirm that the face you’re looking at is live? That’s the whole challenge in facial recognition, because the more liveness you establish, the less chance that it’s fraudulent, because you’re removing photographs or mockups or other things from the equation. Well, guess what: we are trying to establish that liveness through the cumulative attributes of a personal identity, as opposed to just facial identity.
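A simple illustrative sketch of that “cumulative attributes” idea: several independent identity signals are weighted and combined, so a spoofed face alone cannot clear the bar. The attribute names, weights, and threshold are hypothetical, chosen only to show the shape of the approach; they are not Fraud.net’s actual model or attribute set.

```python
# Hedged sketch: weight several independent identity signals and decide
# only on the combined score, so spoofing one signal (e.g. the face) is
# not enough. Names and weights are illustrative assumptions.
SIGNAL_WEIGHTS = {
    "face_match":       0.30,  # biometric similarity cleared its threshold
    "known_device":     0.25,  # device fingerprint seen before for this user
    "geo_consistent":   0.25,  # location fits the user's recent history
    "behavior_typical": 0.20,  # purchase/typing pattern looks like the user
}

def identity_confidence(signals: dict[str, bool]) -> float:
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def is_likely_genuine(signals: dict[str, bool], threshold: float = 0.7) -> bool:
    return identity_confidence(signals) >= threshold

# A convincing deepfake passes the face check, but nothing else lines up.
spoof_attempt = {"face_match": True, "known_device": False,
                 "geo_consistent": False, "behavior_typical": False}
print(is_likely_genuine(spoof_attempt))  # False: the face alone is not enough
```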

[00:24:32] David Zweifler: [00:24:32] That’s kind of the saving grace. You could have a perfect representation of me asking for money on video, but if that’s originating from an internet cafe in Nepal, even though it’s a very, very convincing fake, you can be reasonably confident that it’s a phony if I’m continuing to make purchases in upper New York.

[00:25:03] This is David Zweifler for the Fraud.net podcast. Today I’ve been speaking with Rajeev Yadav, Chief Information Security Officer at Fraud.net, about the opportunities and special risks that facial recognition AI presents from a fraud and fraud-prevention perspective.

[00:25:21] Fraud.net uses collective intelligence and machine learning to make digital transactions safe. Learn more at fraud.net.