The good old (Internet banking vulnerability) days!

So a long time ago (25 years ago, actually!) in a research centre not so far away, I helped to write a paper that described an Internet banking vulnerability. The paper outlined how the authentication systems used by browser-based Internet banking applications could be bypassed, and presented some options for reducing the corresponding level of risk.

Now, 25 years on, I’ve just finished reading a really good report (written just a few months ago, actually!) about some recent attacks that use keyboard loggers to steal Internet banking credentials. The report, which I summarise below, is great, and memories of the good old days have come flooding back!

Internet banking was a new and big thing in the 90s and was also a prime and obvious target for anyone in the field of information security. So Dean, a colleague of mine, decided to come up with a proof-of-concept that would steal the credentials from an Internet-banking browser client. As I remember, we had a lot of fun preparing to demonstrate a proof-of-concept attack to a representative of one of the larger banks, even going to the extent of having an image of Darth Vader pop up on a remote computer screen to display the login credentials that had been “stolen”, with a talky-bubble from Darth saying that the funds would be used to build a new Death Star.

Anyhow, I guess we were a bit naive at the time: rather than thanking us for demonstrating the vulnerabilities (as we had hoped), the bank rep became quite angry, apparently thinking the online (LAN-only and air-gapped) Darth Vader demo had actually stolen money from his account.

Ah well, the follies of youth.

You can read the paper here, but in summary, my co-author Dean wrote code that used two techniques to capture Internet banking users’ credentials:

  1. The first technique used demo-attack code that “scraped” the authentication details from the Card Number and PIN edit boxes that were used to collect the login credentials and returned the cleartext contained in the buffers of these controls. (And yes, you authenticated to your Internet banking site with your PAN and PIN back then :-))

  2. But it was the second technique that I reckon was the coolest! The Internet banking application that we were targeting at the time had an on-screen keypad, and the user would enter their credentials by clicking on the virtual keys with their mouse. The keypad danced about on the screen, changing its location after every mouse click, presumably to prevent an attacker from mapping mouse-click coordinates to particular keypad buttons. However, the demo attack code got past that measure by simply capturing a bitmap image of the keypad window after each mouse click and drawing a red dot at the coordinates of the mouse pointer within that window.
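The capture-and-mark step can be sketched in a few lines. To be clear, this is a hypothetical reconstruction, not Dean’s original code (which used Win32 screen-capture functionality from Microsoft sample code); here the “capture” is just an in-memory grid of RGB pixel tuples, and the function name and dimensions are invented for illustration:

```python
def mark_click(bitmap, click_x, click_y, radius=2):
    """Given a window capture as a 2D grid of RGB tuples, return a copy
    with a red dot stamped at the (window-relative) click coordinates."""
    height, width = len(bitmap), len(bitmap[0])
    marked = [row[:] for row in bitmap]
    for y in range(max(0, click_y - radius), min(height, click_y + radius + 1)):
        for x in range(max(0, click_x - radius), min(width, click_x + radius + 1)):
            if (x - click_x) ** 2 + (y - click_y) ** 2 <= radius ** 2:
                marked[y][x] = (255, 0, 0)  # pure red
    return marked

# Stand-in for a real screen capture: a 40x20 grey "keypad" bitmap.
grey = (200, 200, 200)
capture = [[grey] * 40 for _ in range(20)]
stamped = mark_click(capture, click_x=17, click_y=9)
print(stamped[9][17])  # (255, 0, 0) -- the clicked pixel is now red
```

The real code would grab the keypad window’s bitmap via the operating system’s screen-capture API each time a mouse-click event fired; everything after that is just the pixel-stamping shown above.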

Remember that the keypad could be anywhere on the screen for any given mouse click, but even so, the final captures looked something like this:

Each circle showed an image of the screen, centred on the location of the cursor when the user clicked their mouse button; in the above example, the user’s PIN would have been 4435. I expect that it would have been possible to push the images through some image-processing scripts to convert the captures to a string, but that wasn’t the point: the point was that we had the credentials, and it was very cool indeed!
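For what it’s worth, that final image-processing step would mostly have been grid arithmetic: once you know where the keypad window sat in a given capture, mapping a red dot to a digit is trivial. A hypothetical sketch (the layout, key dimensions and coordinates below are all invented for illustration):

```python
# Hypothetical keypad layout: rows of keys, top-left key at the keypad origin.
LAYOUT = (("1", "2", "3"),
          ("4", "5", "6"),
          ("7", "8", "9"),
          (None, "0", None))
KEY_W, KEY_H = 40, 40  # invented key dimensions, in pixels

def click_to_key(click_x, click_y, keypad_origin):
    """Map the screen coordinates of a click to the keypad key under it.

    The keypad jumped around between clicks, so keypad_origin (the
    window's top-left corner) would have to be recovered separately for
    each capture, e.g. by locating the keypad bitmap in the screenshot.
    """
    ox, oy = keypad_origin
    col = (click_x - ox) // KEY_W
    row = (click_y - oy) // KEY_H
    if 0 <= row < len(LAYOUT) and 0 <= col < len(LAYOUT[0]):
        return LAYOUT[row][col]
    return None  # click landed outside the keypad

# One invented capture sequence: four clicks with the keypad at (100, 50).
clicks = [(110, 100), (112, 95), (190, 60), (150, 100)]
pin = "".join(click_to_key(x, y, (100, 50)) for x, y in clicks)
print(pin)  # "4435"
```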

Now what was I on about?

And so you may be asking what has brought on this wistful reminiscing?  Well in fact, I started dusting off my memories when I read this report, “Screentime: Sometimes It Feels Like Somebody’s Watching Me” only a few weeks ago.  As the article describes, Proofpoint Threat Research has detected malware that takes screenshots of infected devices, and uses those screenshots as the first stage of a (quite tricky!) attack chain. The malware seems to be much more bloated than Dean’s code; he just used some Microsoft-provided example source code for the screenshot-capture functionality, whereas this malware seems to use an entire imaging program.  Whatever the case though, the Proofpoint report notes that the attackers use the Screenshotter malware to gather information on a compromised host before deploying additional payloads such as the Rhadamanthys Stealer.
And as the Threatmon report notes, the Rhadamanthys Stealer malware collects data and sends it to the attacker, typically targeting the credentials used in email, FTP accounts and… online banking services! And upon reading that I thought: The good old days are here again! I wonder what has changed over time, and to what extent today’s recommendations differ from the ones that we thought about 25 years ago? Let’s first look at the Proofpoint report, which notes that:
  1. The attackers distribute malware via email or sometimes third-party sites including (apparently) Google ads.

  2. The victim user has to first receive the malicious link (i.e. the link is not filtered or blocked by content filters upstream) and then click on it to get the ball rolling. After that, the user will download a JavaScript file, do some more clicking, and then download and run additional payloads, all of which need to execute on the user’s computer (presumably) without being caught by antivirus, [E|M|X]DR software and services.
Having noted those two caveats, we can focus just on the credential stealing capabilities of the Rhadamanthys Stealer, and we can be pretty confident that credentials that the user enters via their keyboard or mouse are likely to be stolen and misused.  Of course, other payloads (like crypto lockers and data exfiltrators) could also be downloaded and executed but let’s just focus on authentication and credential theft for now, since that’s what we were worried about in ‘98.  The three main risk-mitigation strategies we thought about back then were One-Time Passwords, the use of specialised hardware devices like smart cards to perform sensitive security operations, and an “atomic authentication scheme” which would create a mini security kernel of sorts to support functions such as authentication. But how would these ideas stack up today?
  1. Looking at today’s landscape, we can see that micro-kernels and separation kernels don’t feature in the domestic computing world, but they do have an important place in security-critical applications for industries that are extremely failure-intolerant: automotive, defence and avionics, for example.

  2. However, Multi-Factor Authentication (MFA) is of course in common use, and while smart cards never really took off for general use, other tokens such as YubiKeys, using protocols such as U2F, are widely used.

  3. Similarly, dedicated devices like Hardware Security Modules or secure enclaves within the CPU (e.g., Intel SGX or ARM TrustZone) are widely available and can be used to create a mini security kernel.
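As an aside, the one-time passwords we considered back then are now most commonly seen as the six-digit TOTP codes generated by authenticator apps, and the whole scheme fits in a few lines of standard-library Python. Here is a minimal sketch of the two RFCs involved; the secret below is the published RFC 6238 test key, not anything real:

```python
import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, unix_time, step=30, digits=6):
    """RFC 6238 time-based OTP: HOTP over a 30-second time counter."""
    return hotp(secret, int(unix_time) // step, digits)

# RFC 6238 test vector: at t=59s the 8-digit SHA-1 TOTP is 94287082.
print(totp(b"12345678901234567890", 59, digits=8))  # "94287082"
```

An intercepted code is stale within 30 seconds, which is exactly why OTPs blunt replay of screen-scraped credentials (though not a live, real-time relay).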

And although we did not mention web browsers in our paper, today’s browsers are leaps and bounds ahead of those of the late 90s and typically support features like Web Cryptography APIs, Content Security Policy (CSP), Same-Origin Policy (SOP), and Sandboxed iframes.

What is old shall... well... stay old

So considering all these solutions, why are we, 25 years later, still talking about screen-scraping credential stealers?

I think it’s because security “solutions” that focus on just one vulnerability have never worked (except in some really special, physically-controlled environments), and never will work. Our paper focused solely on strong authentication (and supporting) mechanisms which is fine because that’s all we were interested in at the time. But no single mechanism (strong authentication included) will prevent the kind of attack described in the Proofpoint report. 

So what will work? Well, funnily enough, the same thing that worked in 1998! A risk-driven approach that relies on well-accepted and well-understood, holistic security frameworks, standards or guidelines, all of which cover topics like:

  1. Secure software development practices. For example, designing software that didn’t rely on gimmicks like jumping keypads, but which (again, for example) integrated Multi-Factor Authentication.

  2. Regular security audits and penetration testing, blue and red teaming, and security maturity assessments (followed of course by improvements). I guess we really did a bit of a security audit back in the day, which is how we could show the risk associated with password-only authentication.

  3. MFA and related strong authentication methods. OK, so we’ve covered that already, so everyone knows this is important, right? Here is some useful guidance if you’re looking for more direction.

  4. Endpoint security, although now of course with less of a focus on the signature-based antivirus that we considered at the time, and more of a focus on operating-system process management and EDR.

  5. Thorough and comprehensive log collection, analysis, reporting and alerting of all systems within the target computing environment. 

Of course, a complete framework will also include other important considerations such as encryption for data both in transit and at rest; regular updates and patching; security awareness training and testing; and intrusion detection and prevention systems using extensive log-collection and analysis.  

But which framework to choose? 

My PhD supervisor was Professor Kerry Raymond, a great teacher and mentor. And as she cheerfully explained back in about ‘94, “The best thing about standards is that there are so many to choose from!” Aside: She also had this sign behind her, facing the door, so you could not miss it when you approached her desk.  I loved that sign!

Anyhow, back on track: for our purposes there are several common standards and frameworks to choose from, and DotSec is of the view that:
  1. The CIS Controls are very comprehensive, are easily understood and so can be implemented without too much faffing about. What’s more, they come with three Implementation Groups (IGs), which can be thought of as analogous to maturity levels. An organisation can review these IGs and, with reference to its risk identification and management plan, prioritise the order in which it implements its controls, and can even chart a framework-backed path towards maturity improvement over time.

  2. The CIS Controls don’t really have much to say about GRC or privacy. Our work in the ISO/IEC 27001 space has reinforced our view that organisations only improve their level of security maturity if GRC roles, responsibilities and processes are resourced and openly supported by management. Similarly, the CIS Controls don’t really dive into privacy. It’s hard to know where to go with this since, as the Optus, Medibank and Latitude debacles have shown, the Australian Notifiable Data Breaches (NDB) scheme has teeth like a jellyfish, but I guess something is better than nothing (maybe), so a good control set should include a reference to some formal privacy standard, framework or legislation.

So, what we recommend is to use the CIS Essential Controls, in conjunction with selected controls from the Australian Privacy Principles (APPs), and some GRC-related controls from Annex A of ISO/IEC 27001:2022.  It’s our view that this approach will allow an organisation to take advantage of the best features of the standards and frameworks listed above, while also overcoming any shortcomings that exist in each individual case.

But that’s enough for now, and so we’ll come back to discuss our preferred mix of controls in another blog post.


It was certainly interesting to read the Proofpoint report and remember the old DSTC paper from 1998. DSTC was a great organisation full of very clever people, and it was nice to think about it again and to find scraps of the old days still there on the WayBack Machine!  

Which brings me to the end of this post: I got to start off with some reminiscing about credential-theft demos, and then finish up with some sermonising about control frameworks.  

What a day!  There have to be some benefits to growing older you know! 🙂 

Me with my password manager!