
How do fraudsters see Identity Verification?

21 Mar, 2022
John Marsden

OCR Labs wanted to get an unbiased opinion on the Identity Verification industry from all those it touches. To facilitate this we reached out to John Marsden (previously of Equifax, iovation and TransUnion, and now of We Fight Fraud) to interview his industry contacts, with no commercial interest involved, and get an honest opinion of how practitioners, consumers and fraudsters see Identity Verification technology and the industry as a whole. In this third part of a four-part series, John explores how fraudsters see the technology and how they attempt to deceive it.

Over to you, John...


For some, the term ‘fraudster’ is used too liberally, collapsing a wide variety of criminal activity into a single overarching label. Whilst pulling this piece together I found that model of understanding far too simplistic: I expected some of the responses I received, but had totally missed the criminal point of view.

Firstly, my key point of access was Tony Sales, at one time branded ‘Britain’s Greatest Fraudster’ by the UK press. For over 10 years Tony has helped businesses to understand the threats they face through We Fight Fraud, which leverages criminal, enforcement and academic insight to help businesses understand and defend against the criminal threats facing them.

Other sources were provided anonymously. In this whole exercise I still do not know who was real and who was fake, albeit in some cases that is an easier judgement than in others.

As I write this for OCR Labs, it is important to note that we did not identify any compromise of their technology.

The criminals contacted didn’t discern between IDV providers, acknowledging only that some brands (banks, for example) offered a higher challenge than others. They did not demonstrate a detailed understanding of the technologies in play; their approach was to “give it a go and if it works we’ll do it again”.


The Landscape for Identity Fraud

Before we dive into the approach to identity and verification, we need to understand how criminals approach the need for a facility, because this is what segments them. The application might be for a loan, with the funds removed and cycled through several accounts at speed, making the loan itself the criminal source of funds. More often, though, an account facility is a layer in the laundering journey, or the honey pot for a consumer scam. Post pandemic, the global crisis opened so many channels for fraudulent intent that the use of a facility for money laundering has become the main threat in identity crime, according to Sontiq, a TransUnion company. This appears to be a common stance across western economies, as fraud took advantage of the opportunities created by the pandemic to defraud state funds.

The accounts can, and often do, look genuine and operate without any issue. After all, one of the most valuable tools in a criminal’s kit bag is the ability to launder and exit funds regardless of the criminal activity that sourced them. The criminal’s objective is not to break an IDV system, nor to evade fraud controls; doing these things is a means to an end. The criminal is not brand loyal, nor do they particularly care about the identity being used; rather, they need control of the facility. One word of current warning: anecdotally, I am aware of growing concern that accounts are being charged with a final bitcoin purchase and then disappearing, which may be the last action taken by a fraudster once the account has served its purpose.


Another big factor at play is that criminal activity has always had an underworld market. Identity is for sale, and it’s not always stolen or created synthetically. To quote Tony Sales, ‘£3,000 would buy someone’s bank account and there is no shortage of people willing to pass the facility over’. A quick scan over some of the dark web forums bears this out: £2.5k-£3k buys a fully functional bank account.

This does not exclude operations that carry out the actual act of opening a fake bank account, or those willing to attempt to fool the systems. There is, without doubt, an opportunity to impersonate, regardless of the countless moves to prove a biometric match to a living person.

Known Fraud attempts

We need to start with the basics; to start here is not to dismiss the technological attempts to break IDV technologies. The less sophisticated attack is currently effective, depending on the capabilities of the IDV provider in play, and seems to be the predominant activity.

The basic attack uses replica, fake or fraudulently obtained genuine (FOG) documents alongside a compromised identity record or a synthetically created identity. Next, most systems ask for a form of ‘selfie’ to verify the face is live and belongs to the identity document.

The ability to obtain documents, obtain data and create a selfie image with all the metadata correlating is relatively simple. Beyond that, even when vendors have claimed a robust solution, I have witnessed cases where they authenticate the wrong face. Those following this blog series should refer to our first blog: there is an acceptance of false positives and false negatives within the current technology, and the reality is that small gaps in accuracy allow bad actors through.

It’s fair to say IDV is not the only anti-fraud measure used by many businesses; reliance on a single form of fraud prevention is not best practice, and the intention of many providers to expand their offerings clearly shows the value of prevention technologies beyond the IDV process. It would be fair to ask why this is not an absolute screen. In reality that is not achievable: quite simply, even first-party applications carry risk, and although a robust way to confirm identity is needed, it has to work in the real world and cater for the full variety of environments.


In my research, my respondents claimed to have spoofed major banking brands. The common approach was to use original documents, poor lighting and a similar face. When asked about the risk of exposing the face used to potential prosecution, the attitude was relatively brazen: ‘if they open the account, we’re in; if they stop it, we just try again or go elsewhere’. The respondent had not used their own face, they claimed, because of the need to look similar, and stated they were ‘not worried’ about showing their face and leaving this record. When asked how successful this operation was, the respondent claimed it worked ‘most of the time’ and, when pushed for a percentage, claimed ’around 80%’. It should be noted that facial biometrics should be relatively unique to an individual regardless of physical appearance, and the technology should be able to detect problems with lighting, hats, glasses and generally poor image quality. I can only hope that the success rate was exaggerated, or the result of poor technology in use at the time of application which has since been moved on.

Another contact offered technical support to fraudsters; this individual suggested using replica document images and static selfies, the technical element being the adaptation of the metadata in the images. It’s good to see that reliance on such static images is decreasing. This type of attack has been relatively common, but it is predicated on a simple exchange of images (by upload rather than on-the-spot capture) and is not as sophisticated as what’s to come from the criminals. This individual stopped responding as soon as I moved to ask about their ability to fool liveness detection.
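
To make the metadata angle concrete, the sketch below (Python, illustrative only, and not any particular vendor’s implementation) shows the kind of basic EXIF sanity screen a verifier might run over an uploaded image; the specific tags and ‘suspicious’ rules are my own assumptions.

```python
# Illustrative only: a naive EXIF sanity check of the kind a verifier might
# run on an uploaded image. Tag choices and "suspicious" rules are assumptions.
from PIL import Image, ExifTags

def exif_warnings(path: str) -> list[str]:
    exif = Image.open(path).getexif()
    tags = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

    warnings = []
    if not tags:
        # Capture devices normally embed camera metadata; editors often strip it.
        warnings.append("no EXIF metadata at all")
    if not tags.get("Make") and not tags.get("Model"):
        warnings.append("no camera make/model recorded")
    software = str(tags.get("Software", ""))
    if any(editor in software for editor in ("Photoshop", "GIMP", "Snapseed")):
        warnings.append(f"image touched by editing software: {software}")
    return warnings

if __name__ == "__main__":
    print(exif_warnings("uploaded_selfie.jpg"))
```

Of course, metadata is trivially edited or stripped, which is exactly why the industry is moving away from trusting uploaded images and towards on-the-spot capture and liveness.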

On the edge…

So many technologies and concepts have been presented around document authentication and proving a face match for live individuals. Any sensible purchaser of such technology should give a degree of thought to the system’s potential compromise.

Whilst ‘deep fakes’ have been used in multiple frauds on consumers, causing shame, embarrassment or coercion, they have not yet been used in any meaningful numbers against the many vendors and technologies deployed at the point of application. Is this about to change? As I write, I am sure more criminal research is being practically deployed to gain this advantage. It is very clear that control of account facilities is a vital tool in the criminal’s tool kit, so what’s being talked about?

Firstly, in the types of IDV service we are discussing, a copy of a document is needed. This offers less of a challenge to the criminal than proving a living person, as the person is represented on the document. Fraudulently obtained genuine (FOG) documents or forged (replica) documents are available to them.

Whilst the criminals consulted opted for original documents, replicas and images of documents can be of very high quality and capable of passing the 2D evaluations in play. Reading the eChip over NFC is a response to forgery, but not a comprehensive one: we need to accept that it has less than universal coverage, whether that is the technology in the consumer’s hand or the consumer’s ability to get what can be a tricky operation right.
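
For a sense of what a 2D check can and cannot prove, the sketch below shows the standard ICAO 9303 check-digit calculation used in the machine-readable zone (MRZ) of a passport. A correct check digit only proves the printed data is internally consistent, not that the document is genuine, which is why a well-made replica can sail through such checks and why chip reading and biometric checks matter.

```python
# Illustrative: the ICAO 9303 check-digit calculation used in passport MRZ lines.
# A correct check digit only proves the printed data is internally consistent,
# not that the document is genuine.
def mrz_check_digit(field: str) -> int:
    weights = (7, 3, 1)

    def value(ch: str) -> int:
        if ch.isdigit():
            return int(ch)
        if ch == "<":
            return 0
        return ord(ch.upper()) - ord("A") + 10

    return sum(value(ch) * weights[i % 3] for i, ch in enumerate(field)) % 10

# Example from the ICAO 9303 specimen data: document number "L898902C3"
# has check digit 6.
assert mrz_check_digit("L898902C3") == 6
```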

Impersonation then becomes either a game of chance or a technical hack.

Chance seems to be a challenge we can rise to; it’s a question of the technology used, albeit with any deployment of such technology the balance between accepted and declined/referred cases drives behaviours which might lean towards accepting applicants. As an industry we do need good technology.

So, what is being discussed by criminals when it comes to ‘hacking’ the array of liveness tests and technological approaches? The categorisation below is entirely the author’s own.

Data Injection - Essentially controlling the flow of data back to the IDV service by intercepting the signal between the originating device and the system. This allows the injection of a pre-prepared video or, increasingly, a deep-faked image; the deep fake allows the operator to respond to actions required by the IDV system. Many providers detect liveness by asking the consumer to perform actions, e.g. smile, look left, look up or stick out your tongue. The prevailing defence currently looks to confirm that the device signature and its characteristics are consistent, ensuring an end-to-end connection with no breaks in flow.
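
As an illustration of the ‘no breaks in flow’ defence, the sketch below shows one way (purely hypothetical, in Python) a capture SDK and server could bind each frame to a session so that injected or replayed media fails verification; real deployments combine this kind of signal with device and app integrity checks.

```python
# Illustrative only: binding frames to a session so injected media is rejected.
# The scheme and names are hypothetical, not any vendor's actual protocol.
import hmac, hashlib

def sign_frame(session_key: bytes, session_id: str, frame_index: int, frame_bytes: bytes) -> str:
    message = session_id.encode() + frame_index.to_bytes(4, "big") + frame_bytes
    return hmac.new(session_key, message, hashlib.sha256).hexdigest()

def verify_frame(session_key: bytes, session_id: str, frame_index: int,
                 frame_bytes: bytes, signature: str) -> bool:
    expected = sign_frame(session_key, session_id, frame_index, frame_bytes)
    # Frames replayed from another session, or spliced into the stream,
    # fail because session_id / frame_index no longer match what was signed.
    return hmac.compare_digest(expected, signature)
```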

Screen Replay - This ranges from the simplest recording of a screen during the application through to educated, well-resourced and technically capable attacks. The simple attack should not be possible, but sadly, with certain vendors it appears to be. The mitigations are multiple: vendors look to detect such things as 3D depth, moving shadows and the subtle millisecond refresh of the image on a screen, while others play patterns across the screen to produce unique reflections and to assist in 3D modelling. The criminals are catching up fast; the software and techniques being shared show how the patterns are captured and replayed, with shadows rendered as a native element of the software, and monitors with finer definition and faster refresh rates being used. The vendors continue to innovate in proving a live, matching individual, with heartbeat analysis (photoplethysmography) being added to the array of depth, shadow and movement detection.
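
One of the simpler screen-detection signals is the pixel grid itself: a screen re-photographed by a camera tends to leave periodic moiré peaks in the image’s frequency spectrum. The sketch below is a crude, illustrative heuristic only (the threshold is an assumption); production systems combine many such signals with trained models.

```python
# Illustrative heuristic only: screens re-captured by a camera often show a
# periodic pixel-grid / moiré pattern, which concentrates energy in isolated
# high-frequency peaks of the 2D Fourier spectrum. Threshold is an assumption.
import numpy as np

def looks_like_screen(gray_frame: np.ndarray, peak_ratio_threshold: float = 50.0) -> bool:
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_frame)))
    h, w = spectrum.shape
    # Ignore the low-frequency centre, which dominates any natural image.
    cy, cx = h // 2, w // 2
    spectrum[cy - h // 8: cy + h // 8, cx - w // 8: cx + w // 8] = 0

    positive = spectrum[spectrum > 0]
    if positive.size == 0:
        return False
    # A strong isolated peak relative to the background suggests a periodic grid.
    return (spectrum.max() / np.median(positive)) > peak_ratio_threshold
```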

Masks - Creating a latex/silicone or paper face mask is certainly feasible, but it comes with a degree of cost, and scalability is difficult for the criminal. The mask certainly disguises the criminal’s identity. A paper face mask is easily spotted by liveness detection (or should be); the latex versions do allow for movement and pose a more difficult-to-detect attack, albeit one requiring a huge effort to construct, which would indicate targeted attacks of significant value. We can see a future for 3D printing here. The response lies in the detail of liveness detection, and this threat underlines the innovation from the vendors. As with screen replays, the use of heartbeat analysis and other live human factors should highlight any mask use.
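
To illustrate the heartbeat idea, remote photoplethysmography looks for the tiny periodic colour change in skin with each pulse, something latex, paper and screens generally lack. The toy sketch below assumes RGB face crops and a fixed frame rate; the band limits and threshold are assumptions, and real implementations are considerably more robust.

```python
# Toy sketch of remote photoplethysmography (rPPG): live skin reflects a small
# periodic colour change with each heartbeat; masks and screens generally don't.
# Assumes RGB face crops (H x W x 3) and a known, steady frame rate.
import numpy as np

def has_plausible_pulse(face_rois: list[np.ndarray], fps: float) -> bool:
    # Mean green-channel intensity per frame (green carries the strongest pulse signal).
    signal = np.array([roi[..., 1].mean() for roi in face_rois], dtype=float)
    signal -= signal.mean()

    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2

    band = (freqs >= 0.7) & (freqs <= 3.0)   # roughly 42-180 beats per minute
    if not band.any() or power.sum() == 0:
        return False
    # Require the heart-rate band to carry a meaningful share of the energy.
    return power[band].sum() / power.sum() > 0.5
```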

Lighting - IDV systems obviously need to cater for all types of environment, and they do tend to instruct the consumer to be well lit during the operation. Poor lighting is therefore being used to frustrate the system’s ability to measure facial characteristics, and it was certainly a feature of the impersonation attacks described by the criminal. It is possible to detect poor lighting and instruct the user to find a better-lit spot; however, asking a consumer to find the right place is friction, and it’s clear this balance between friction and convenience needs to be perfected.
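
A basic exposure gate can deliver that instruction before a frame is ever accepted. The sketch below is illustrative only; the thresholds and wording are assumptions, and tuning them is exactly the friction-versus-convenience balance described above.

```python
# Illustrative only: a basic exposure gate of the kind a capture flow might
# apply before accepting a selfie frame. All thresholds are assumptions.
import numpy as np

def lighting_feedback(gray_frame: np.ndarray) -> str | None:
    """Return a user-facing prompt if the frame is too dark/bright, else None."""
    mean = gray_frame.mean()
    dark_fraction = (gray_frame < 30).mean()
    bright_fraction = (gray_frame > 225).mean()

    if mean < 60 or dark_fraction > 0.4:
        return "It looks quite dark. Please move somewhere brighter and try again."
    if mean > 200 or bright_fraction > 0.4:
        return "The image is overexposed. Please move away from direct light."
    return None
```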

Voice Recognition - Often restricted at the point of application to proving liveness, this is usually conducted by recording a set of words or a phrase, which can be screened to ensure it’s human and compared with voiceprints from other applications and negative lists. Whilst it offers very little challenge to the criminal, it does hold the potential to recognise repeat attacks, assuming the fraud is detected. Within our research, the facilities were largely used for money laundering, and I have no doubt that eventually these networks will be embroiled in some suspicion. Whether they are clearly identified as fraud, and whether the voiceprint can in fact be found, marked and reused over a long timeframe, is a question. Companies change suppliers, the technology changes, and with so many proprietary formats in play the odds are in favour of the criminals. The challenge for IDV companies will be how to work with voiceprints and ‘faceprints’ while keeping user privacy protected.
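
For illustration, the matching step against a negative list often comes down to comparing embeddings (the ‘voiceprints’) with a similarity measure. The sketch below assumes a hypothetical embedding model and an arbitrary threshold; the proprietary formats mentioned above are precisely why such prints rarely travel between suppliers.

```python
# Illustrative only: matching a new recording against a negative list of
# voiceprints by cosine similarity. The embeddings are assumed to come from
# some speaker-embedding model; the threshold is an arbitrary assumption.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def matches_negative_list(new_embedding: np.ndarray,
                          negative_list: list[np.ndarray],
                          threshold: float = 0.75) -> bool:
    # Real systems calibrate the threshold against measured error rates.
    return any(cosine_similarity(new_embedding, known) >= threshold
               for known in negative_list)
```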

Where does this leave us now?

Security needs to be appropriate to risk, and it needs to be recognised that many other data points are used to spot fraud where risk and regulation demand a multi-faceted approach. However, IDV systems are becoming more prevalent and are at times being used as the primary fraud detection and prevention technology; they are establishing a level of trust which is often a distinct benefit for users.

The market for document-based IDV is booming; a huge number of vendors have entered it and the slideware has been flowing. It’s very clear that not all the technologies are equal: some vendors are investing heavily in staying ahead of the spoof, whilst relatively poor technologies are also to be found.

Digital identities and the assurance frameworks offer potentially significant steps forward in proving identity; however, in the coming years the ability to take and prove identity through a government-issued ID document will prevail. It therefore needs a robust response to criminals. That is possible, but the work behind this article clearly demonstrates there are issues with the technologies in play.


Thank you, John. Looking forward to the final roundup.

P.S. If you are experiencing any of the issues raised by the interviewees in this blog, please feel free to reach out to us at hello@ocrlabs.com or request a demo. We’re more than happy to chat about the state of the industry and, if it’s a good fit, show you how our technology could overcome the challenges you are facing.


John Marsden

For over 20 years John has been at the leading edge of e-commerce, helping clients to navigate the risks and customer experience challenges involved in safely transacting with customers in digital channels. John continues the career theme of 'making the internet a safer place'.

