DNS records – abandon at your peril Wed, 15 Jul 2020 02:43:43 +0000

Recently, there has been some interesting news describing how attackers have been able to take over various subdomains by taking advantage of abandoned DNS records.

To recap, this is a security mis-configuration issue:

  1. A victim organisation sets up (perhaps in a testing scenario) a service on a public cloud provider such as Azure.
  2. The organisation then creates a CNAME pointing an entry in the organisation’s DNS records to the cloud-provider endpoint.
  3. Some time later the organisation then deletes the cloud provider service (it was only temporary after all), but forgets to delete the CNAME.
  4. An attacker comes along, finds the abandoned DNS record, and creates a service in the same cloud provider with the same endpoint DNS name.
Voila! The attacker now owns and controls an endpoint (web site, service, whatever) that is pointed to by the victim’s legitimate DNS records.
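The pre-conditions in the steps above boil down to one check: does any CNAME in your zone point at a target that no longer resolves? A minimal sketch of that check follows; the zone data and hostnames are made up for illustration, and the resolver can be injected so the logic is testable without network access:

```python
import socket

def find_dangling_cnames(cname_records, resolve=None):
    """Return the CNAME records whose targets no longer resolve.

    cname_records: dict mapping names in your zone to their CNAME targets.
    resolve: optional function returning True if a hostname resolves;
    defaults to a socket.getaddrinfo() lookup.
    """
    if resolve is None:
        def resolve(name):
            try:
                socket.getaddrinfo(name, None)
                return True
            except socket.gaierror:
                return False

    return {name: target
            for name, target in cname_records.items()
            if not resolve(target)}

# Example with a stubbed resolver: only 'live.azurewebsites.net' still exists.
records = {
    "app.example.com": "live.azurewebsites.net",
    "test.example.com": "deleted-app.azurewebsites.net",
}
live = {"live.azurewebsites.net"}
dangling = find_dangling_cnames(records, resolve=lambda n: n in live)
print(dangling)  # {'test.example.com': 'deleted-app.azurewebsites.net'}
```

Run periodically against your real zone data, a check like this flags abandoned records before an attacker finds them.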

So what? Why should I care?

So, what is the harm in that you might say? Well, just ask Epic Games!

Back in March 2020, some of the Epic Games subdomains were hijacked to serve poisoned PDF files. From the user’s perspective, they were downloading documents from a legitimate Epic Games web site, but malicious code in the documents (along with other vulnerabilities in the Epic Games infrastructure) may have led to compromised user accounts for an affiliated mobile app.

The use of a hijacked subdomain for phishing purposes provides a number of clear advantages for attackers:

  • Rather than setting up a completely new (but related) look-alike domain name (e.g. a variant of <target_org>.com) to attempt to trick users, you can take advantage of target users having innate trust in their own domain. Those security-awareness training courses never told you to be suspicious of your own domain, right?
  • It’s likely that the target organisation’s mail and web content filters are going to be lenient on content containing URLs using their own domain – indeed they may have explicit policies to whitelist such URLs, lest IT security starts interfering with their business processes!
  • Common ‘low-risk’ application vulnerabilities in the target organisation’s web applications, such as weak Content-Security-Policy headers or use of common domain cookies, suddenly become a whole lot more serious when an attacker controls an application which uses your domain.

So what kind of cloud services are vulnerable, how does the issue arise and what can you do to prevent it?

I want to try this! What should I do?

To take a well-known example (Azure App Services), suppose we want to create a test Azure web application in the Azure portal. The first thing to do is choose a name:

Bummer – ‘test6’ is already taken by someone. What about ‘test61’ ?

Perfect! Now what?

We now proceed to develop our web application and deploy it to Azure under this name. But I don’t want my users to have to hit the default azurewebsites.net address; instead, I’d prefer them to visit a custom domain of our own; it’s all about aesthetics you know 🙂

In order to serve your Azure app under a custom domain, you need to prove you own that domain. The typical way to do this is to create a TXT record in your DNS zone with the value provided to you by the Azure portal. The Azure portal will then look up this record using a public DNS server and, if it exists, the domain is considered validated. A similar DNS validation procedure will allow us to generate a certificate from a Certification Authority such as DigiCert.

Once we have all this set up, the final step is to create a CNAME record in our DNS zone which points our custom hostname at the app’s default address, test61.azurewebsites.net.
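Concretely, the validation and aliasing records might look something like the following BIND-style zone-file fragment. The custom hostname, TTLs and verification value here are illustrative placeholders (Azure’s convention of an `asuid`-prefixed TXT record holding the portal-supplied verification ID is assumed), not values from the post:

```
; illustrative only: shop.example.com is a placeholder custom hostname
asuid.shop.example.com.  3600  IN  TXT    "<custom-domain-verification-id from the Azure portal>"
shop.example.com.        3600  IN  CNAME  test61.azurewebsites.net.
```

Note that it is the second record, the CNAME, that becomes dangerous if it outlives the app it points at.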

Now our app is all set up under our custom domain and our users are happy.
After a while, we decide that we no longer need that web app (it was a test application after all), so we delete it in our Azure Portal (to save money) and continue on with our next important IT project.

However, we have now just created the perfect conditions for a hostile subdomain takeover!

Enter the dragon!

Attackers targeting your organisation will be constantly trying to enumerate all the hostnames in your DNS zone: they won’t attempt a zone transfer (since it’s almost certain that operation is not possible with your DNS provider – it’s not 1995 after all); rather, they will use a bunch of open-source sites and freely available tools to find valid DNS records in your zone.

At the end of the day these tools will notice that since you deleted your Azure web app, the CNAME record for your custom hostname still points to test61.azurewebsites.net, but that hostname no longer resolves. Bingo! All the attacker now needs to do is create their own Azure app with the name ‘test61’ and host a phishing site there, which will be hit when your users visit your custom hostname again! Of course, the attacker must also add the ‘custom domain’ to their Azure App Service, and to do that they need to prove domain ownership, right? Well, no – according to our tests, once that particular App Service name has been validated by the original (target) organisation, the attacker does not need to perform any further validation. Thanks Azure!

Won't someone think of the certificates?

The attacker would prefer those potential victims to visit an https:// version of the hijacked hostname, not plain http – their security awareness training has taught them to look for the padlock, right? The attacker cannot use DNS verification to prove ownership of the domain because they don’t control the DNS records.

Fortunately (for the attacker), most Certificate Authorities also allow you to prove control of a site by placing a text file with well-known content on that site. Since the attacker already controls the site, they can put the required content in place and get their certificate that way.

How do you prevent all this happening in the first place? You just need to make sure you clean up those ‘dangling’ DNS records that are no longer pointing to a real resource. This will prevent attackers from cyber-squatting on your real estate and putting themselves in prime position to attack your users. Microsoft’s own advice on this matter makes it pretty clear.

The end.... ?

Now, as we come to the end of our post, you may be thinking that Azure (with its simple naming scheme for App Services) is alone in facilitating subdomain takeovers. Unfortunately, there are many cloud services which may potentially be vulnerable. Stay tuned for part 2 of this series of blog posts, where we take a look at some vulnerable AWS services which can be targeted by similar methods.

The sky is falling! Sun, 21 Jun 2020 02:49:08 +0000

As you will be aware by now, the Prime Minister warned Australians of a “sophisticated, state-based cyber actor” targeting Australian organisations and all tiers of government.

But is the sky really falling and if it is, will we all be equally devastated when it crashes down?  And what are the risks associated with the reported attacks?  This post aims to provide you with some of that information. 

How sophisticated?

According to the ACSC documents (there is a summary one and a more detailed one) that were referenced in the press conference, the attackers are reportedly targeting a number of vulnerabilities in various commercial software, in order to gain initial access to systems:

  • Telerik UI – CVE-2019-18935
  • VIEWSTATE handling in Microsoft IIS servers
  • Citrix products – CVE-2019-19781
  • Microsoft SharePoint – CVE-2019-0604

Patches have been available for all of these vulnerabilities for between 3 and 7 months.  For example, the Telerik UI vulnerability is described in CVE-2019-18935; a patch was released for this vulnerability in 2019, and the vulnerability can be freely demonstrated and exploited with Metasploit.  An organisation that has not applied these patches doesn’t just need to worry about “significant state-based cyber actors”; any attacker with the slightest clue can exploit these vulnerabilities with very little effort.

Let's go phishing!

In the event that the attackers are unable to exploit the above vulnerabilities, they are apparently falling back to good old spear-phishing attacks. It is reported that the attackers are using various methods for this, such as:

  • Sending emails to targets which contain links to credential harvesting websites (i.e. phishing sites). The attackers are reportedly masking these URLs by exploiting open redirect vulnerabilities.
  • Sending emails to targets which contain links to download malicious Microsoft PowerPoint documents from OneDrive and DropBox, as well as simply attaching the Microsoft PowerPoint document to the email.
  • Sending emails to targets which contain links to OAuth token theft applications.
  • Sending emails to targets that contain images that allow the attackers to identify users who have opened the email, thereby marking them as more susceptible targets.

It is reported that the attackers are making use of compromised Australian websites for command and control servers. It is suspected that this is being done to bypass geo-IP blocking mechanisms and to appear innocuous to administrators monitoring DNS and proxy traffic.

Once again, none of this is sophisticated or uncommon. While the press conference might imply that other governments are responsible for some or even most of these attacks, they’re not the only ones in the game. Just review this year’s news reports for evidence of organised criminal attacks on businesses in sectors as varied as logistics, transport, brewing, cloud, finance and wool sales. If a business does not implement application whitelisting, privileged account management and user education, phishing is probably the easiest and most sure-fire way for an attacker to get into their organisation.

So is the sky really falling?

Not all of it… but some fairly heavy chunks have been crashing down for a while now, and some of the newer threats (like ransomware attackers now leaking stolen data as well as encrypting it) have resulted in consequences (think Toll, MyBudget, Lion and Landmark White) that have been both high-profile and expensive.

To some extent, it doesn’t matter if the attackers are random individuals, organised criminals or overseas governments: If we consider the vulnerabilities and attacks that were described in the PM’s press conference, then the likelihood and consequences (a.k.a. risk) of a successful attack could easily be reduced with some foresight and planning. The press conference referred to known vulnerabilities for which patches exist, and to tactics and techniques that are not terribly sophisticated. An organisation that has solid, documented and verified security policies and processes in place should be alert, but not alarmed.

If, on the other hand, the press conference has made you realise that you’re behind the eight-ball, then that’s a good thing, because now you can make some (in many cases quite simple) improvements to your organisational security standing.

1) Patch!

Two of the ASD Essential 8 controls relate to patching.  If you’re reading this because you haven’t patched the vulnerabilities described in the government announcement, then you may need to act quickly: You need to patch all Internet-facing software, operating systems and devices as soon as possible (i.e. within a time that is measured in hours and days, not weeks and months).  Once that’s done however, you need to plan, document, implement and review (monthly) your Patch Management and Vulnerability Management policies and procedures.  You should separate administration (applying the patches and managing the vulnerabilities) from compliance (ensuring the patches and vulnerabilities are managed according to policy) and you should report on compliance as discussed in Point 3 below.

2) Implement two-factor authentication. Everywhere. Especially on Cloud services!

Two-factor authentication is the most common example of MFA, or Multi-Factor Authentication. The idea is to reduce the attacker’s opportunities by reducing the total reliance on passwords, the most commonly used single factor of authentication. DotSec strongly recommends that all Internet-accessible systems should be configured to accept only two-factor authentication, according to documented identity and access management policies and procedures. This includes, but is not limited to:

• Email and file sharing services,
• Remote access connections,
• Company portals, and
• Office 365 and similar “cloud” services.

Two-factor authentication is (again) a part of the ASD’s Essential Eight strategies for mitigating information security incidents.

3) Get your SIEM (alerting and reporting) in order

Without proper alerting and reporting, it is difficult, if not impossible to detect and respond to attacks from internal or external adversaries. We understand that trialling, developing and implementing such a system in a timely manner is not practical for a lot of businesses, so please give us a call today if you require assistance. DotSec has deployed alerting and reporting security solutions (SIEM) for many national customers.

4) Work through the other ASD Essential Eight strategies

While most people in IT are familiar with the ASD Essential Eight strategies for mitigating information security incidents, a lot do not implement them. We covered off three already in points one and two! No time like the present to get started on the rest. Note that we list “the rest” here because implementation of controls like Application Control (whitelisting) and administrative account management will require a bit of planning, and so will take longer to implement.  If you are worried that you might fall foul of the attacks that were discussed in the government’s press conference, you should get started with the low-hanging fruit (patching, MFA and logging) right away. But don’t forget to put in place a plan that will bring you back to the remaining controls in a timely manner.

Call a mature 20-year-old!

With over 20 years of experience, DotSec can help you plot a calm, rational course that takes into account your risks, budget and in-house skills.  We can help you understand and comply with security frameworks, we can manage your info/cyber security services, and we can help you to develop a prioritised, risk-based approach to securing your organisation’s assets. 

Please see here on our website for more information on DotSec’s informed organisation security assessments, or give us a call.

Scareware v1 – Just silly… probably Tue, 11 Jun 2019 06:11:53 +0000

Along with lots of other people on the Internet, you’ve probably received an unsolicited email, not only threatening you but claiming to have stolen your password and hacked your web cam.  The emails generally go along the following lines:

While poorly worded, the email can certainly appear alarming and indications are that perhaps the attacker does have a password, and could really carry out their threat.

My first thought however was, “what rubbish!”  I use two-factor authentication and even if I was worried about people’s perceptions of my browsing habits, my laptop camera doesn’t seem to be working…

…but still… that password in the email looks good and random… could it be one that I use or have used?   A quick check on the Have I Been Pwned site and voila!  There it is!  It has been stolen!  Now, was it mine and if so, where did I use it?  Another quick check, this time through the backups of my password-manager database, and there it is again!  It’s a password that I used on a brewing site that I frequented a couple of years ago; the site must have been compromised since I stopped using it, but my account details must still have been lying around… unexpired and unencrypted… thanks, site owners!
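That “quick check” can be automated without the password ever leaving your machine: Have I Been Pwned’s Pwned Passwords range API accepts only the first five characters of the password’s SHA-1 hash and returns all breached hash suffixes sharing that prefix. A minimal sketch follows; the actual HTTPS request to `api.pwnedpasswords.com/range/<prefix>` is left out to keep it self-contained, so the response body is passed in as a string:

```python
import hashlib

def hibp_prefix_suffix(password):
    """Split a password's SHA-1 hash for HIBP's k-anonymity range API."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def times_pwned(suffix, range_response):
    """Search a range-API response body (lines of 'SUFFIX:COUNT')
    for our hash suffix; 0 means the password wasn't in the set."""
    for line in range_response.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

prefix, suffix = hibp_prefix_suffix("password")
print(prefix)  # prints 5BAA6 – only these five characters are ever sent
```

The point of the k-anonymity design is that the service never learns which password (or even which full hash) you were checking.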

I’m stuffed!

So I know that the password was only used on an old brewing site which contains no PII or payment details, and I know that the attacker cannot access my accounts or my web cam.  I can therefore be confident that this is just silly scareware which can safely be deleted.

But I might not be so cocky if I realised that I had reused the brew-site password elsewhere, especially if I had used it on a site that I really cared about like work, or perhaps Office 365.  Why would I be more worried?  Because then the attacker’s claim might be true.  But even more worryingly, because when my username and password are stolen from one compromised site, they can be reused across multiple other sites in an attack known as credential stuffing.

Credential stuffing is a really common attack and in December and January this year (2019) we assisted three separate businesses who were all defrauded of around $40K, and at least one (perhaps all, it’s hard to be certain without proper logs) of those frauds started life as a credential stuffing attack. Basically, the victim had reused his/her username and password when setting up a range of on-line accounts, including personal and social-media sites, and his/her work Office 365 account.

Eventually, one of the sites on which the victim had an account was compromised, the attacker was able to steal the victim’s username and password for that site.  The victim’s employer did not enforce two-factor authentication on the organisation’s Office 365 service so it was trivial for the attacker to log onto Office 365 with the reused credentials and masquerade as the victim, eventually defrauding the victim’s employer of around $40K.

That’ll do

To conclude, here are few take-away messages that are worth remembering:

  • Don’t reuse passwords across different web sites and servers. The more I reuse a password, the more likely I am to suffer from a credential theft and stuffing attack. From once-off brewing-site breaches to real jackpots like the Collection1 example, we’ve seen password reuse result in outcomes ranging from mild inconvenience through to fraud worth over $40K.   You need not just take our word for it though; other researchers have conducted extensive studies that show: if you reuse your password (and user ID) across multiple sites, you’re going to be done over… it’s just a matter of when. And they’ve also shown that when you get done over, you’re probably gonna pay… big time!
  • Do use a password manager, and use it properly (where that includes secure backups, strong manager keys and/or passwords, and use on a secured host) so that it remains secure.  Done correctly, a password manager precludes the need to remember or insecurely record or reuse passwords, greatly reducing the effectiveness of password-reuse (and silly scareware) attacks.
  • If you run a business, move to two-factor authentication (2FA) and Single Sign-On (SSO).  Seriously, the mechanisms and procedures to support 2FA and SSO have been around now for 20 years and it’s not a big deal… even social-media sites do it!
The KeePass password manager – available for almost every platform

In a subsequent post, we’ll have a look at some not-so-silly scareware which has been used to try to extort money with the threat of destroying an entire organisation’s on-line reputation.

Until then, safe browsing!

It’s not what you know… Fri, 17 May 2019 03:11:23 +0000

(Actually, that’s exactly what it is!)

Monitoring eCommerce sites for compromise

DotSec knows that securing eCommerce sites properly can be tricky. Various best-practice guides to securing eCommerce software such as Magento do exist (see [1], [2] below) but despite the efforts of all concerned (including system owners, third-party providers, developers and administrators) system compromises are fairly common.

Furthermore, the consequences of a compromise are generally serious, and can include loss of Personally Identifiable Information (PII), site defacement, and loss of cardholder/payment details.

As you’ll have seen from previously publicised site compromises, one of the key shortcomings that allows an attack to be successful is the lack of visibility and awareness on the part of the site owner. In many recent attacks, the target site has been compromised for weeks or months before the site-owner becomes aware of the damage. Consider just a couple of recent examples:

This is the kind of advertising that money can’t buy!

Had the owners/operators of these (and dozens of other compromised) sites been aware of what was happening, the magnitude and consequences (including international publicity and fame!) of the attacks would have been far less. But constantly watching for small (and relevant) signs of malicious activity is hard work: And that is why one of the key components of DotSec’s managed, secure-hosting services is pro-active logging, reporting and alerting!

DotSec provides fully-managed, highly available hosting that addresses relevant requirements from the PCI DSS for a number of leading Australian national retailers. Our customers’ marketing and web-dev teams need to operate autonomously as they organise new product catalogues, sale events, and new marketing tools and features. DotSec cannot (and should not) interfere with those operations, since the business depends upon their timely completion, but DotSec can keep an eye on things and alert the marketing and web-dev teams when their changes make the shopping site vulnerable to a bit of credit-card swiping. Here are just a few examples of how we work.

Case #1 – The Russians are coming here!

Now this was one of the more interesting incident identification and response cases for a long while! Some time back, DotSec notified one of our managed-services customers that a desktop within the customer internal network was probably compromised with malware; the malware appeared to be logging user activities, and sending logs of those activities to overseas attackers.

The customer in question only uses our Cirrus WAF, so we don’t have a complete SIEM/SOAR infrastructure in place in the customer’s computing environment. Nonetheless, Cirrus does its job well, and the logs that the WAF generated showed some interesting activity:

  • The Cirrus WAF logs indicated that on multiple occasions, a user within the customer’s internal network was making requests to a very specific URL inside of the administrator interface of Magento. By way of example, here is one of the requested URLs:
  • All requests (irrespective of their source) to the customer’s web site go through the Cirrus WAF, and so we could determine that a couple of days after a request to the admin URL was made from the internal desktop, a Russian-based IP address made the exact same request! The keys within such a URL are essentially random, so it is highly unlikely (let’s say, as-good-as impossible) that someone in Russia could simply “guess” the URL correctly.
  • To add to the unlikely nature of someone in Russia guessing the URL, the Russian-based addresses always duplicated a request that was made by the internal desktop, and always a couple of days after the desktop had first made the request.

The patterns that emerged over a couple of days indicated that an internal desktop had become infected with some kind of monitoring malware, and malicious attackers were retrieving (or being sent) data from that desktop whenever it requested sensitive (admin) URLs.

The following chart depicts the occasions where a Russian-based IP address requested a Magento administrator URL that was previously requested by a user within the customer’s internal network.

The Cirrus WAF had been configured to only allow access to the Magento administrator interface from a white list of source IP addresses, so the requests from the Russian-based IP addresses were blocked and the repeated attack attempts were unsuccessful.
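The allow-list check that the WAF performed can be sketched in a few lines. The CIDR ranges below are documentation-reserved placeholders, not the customer’s real addresses, and a real WAF rule would of course be configured in the WAF itself rather than in application code:

```python
import ipaddress

# Hypothetical trusted office/VPN ranges (documentation-reserved CIDRs)
ADMIN_ALLOW_LIST = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.10/32"),
]

def admin_access_allowed(source_ip):
    """Mimic the WAF rule: admin URLs are reachable only from the whitelist."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ADMIN_ALLOW_LIST)

print(admin_access_allowed("203.0.113.45"))   # True  (trusted range)
print(admin_access_allowed("185.220.101.7"))  # False (blocked, but logged!)
```

The crucial operational point from the case above: a blocked request is not a discarded request, and the denied attempts are exactly the log entries that revealed the compromised desktop.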

While it’s good that the attacks failed, the logs that were generated by the attack attempts were still valuable, because they illustrated the fact that a desktop on the internal network was compromised. Having realised this, DotSec could alert the customer to the compromise and ensure that the desktop in question was investigated and addressed immediately.

Case #2:  Just take my creds!

As required by PCI, and because we are genuinely curious, DotSec was performing routine log analysis, looking for any anomalies in web requests to one of our customer’s web sites. A couple of examples of things that we like to check for when performing log analysis of our customer sites are:

  • Requests for unusual files. This may include files with “unusual” file extensions (such as .zip, .backup, .sql, .xml, .txt, and even .php to try and catch any shells).
  • An unusually high number of, or unusual pattern of:
    • HTTP GET requests for pages or files.
    • HTTP POST requests to any given URL.
    • Requests from overseas based clients.
    • Distinct clients requesting a single URL.
    • Distinct clients making multiple requests over an extended period of time.
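The checks above can be sketched as a small log-scanning routine. The log lines, paths and thresholds below are illustrative only; real analysis (we use Splunk over the Cirrus WAF logs) is considerably richer:

```python
import collections
import re

SUSPECT_EXTENSIONS = (".zip", ".backup", ".sql", ".xml", ".txt", ".php")

# Minimal common-log-format pattern: client IP, method, path, status.
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*" (\d{3})')

def scan_log(lines, post_threshold=100):
    """Flag requests for suspect file types, and clients making an
    unusually high number of POST requests (threshold is illustrative)."""
    suspect_requests = []
    posts_per_client = collections.Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if not m:
            continue
        client, method, path, _status = m.groups()
        if path.lower().endswith(SUSPECT_EXTENSIONS):
            suspect_requests.append((client, path))
        if method == "POST":
            posts_per_client[client] += 1
    noisy = {c: n for c, n in posts_per_client.items() if n >= post_threshold}
    return suspect_requests, noisy

# Two made-up log lines: a config-file grab attempt and an API login POST.
sample = [
    '10.0.0.5 - - [01/Jan/2019:00:00:01 +0000] "GET /media/local.xml HTTP/1.1" 200',
    '10.0.0.9 - - [01/Jan/2019:00:00:02 +0000] "POST /api/login HTTP/1.1" 401',
]
suspects, noisy = scan_log(sample, post_threshold=2)
print(suspects)  # [('10.0.0.5', '/media/local.xml')]
```

The same counters, aggregated per client and per URL over days rather than per batch, are what surface the brute-force and credential-theft patterns described in the cases below.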

While performing our log review, DotSec was alerted to the fact that an attacker had crafted a request that was designed to exploit a vulnerability in a plugin that was used by the web-dev and marketing team; the aim of the exploit was to allow the attacker to download the local.xml configuration file for the Magento application.

The local.xml configuration file contained credentials for the production database and so when we saw what the attacker was attempting, DotSec promptly alerted the customer’s web-development team to the issue.

Furthermore, DotSec took immediate action by restricting access to the plugin via the Cirrus web application firewall (WAF), re-issuing new credentials, and conducting an investigation using Splunk to determine if/how the vulnerable component had been abused in the past; these activities prevented exploitation of the vulnerable component while the web-dev team worked on a longer-term fix.

Case #3:  Oh, you brute!

On a separate occasion we were analysing various HTTP POST requests made to a customer’s web server, and we began to see some unusual patterns emerge. Namely, a handful of foreign IP addresses were making hundreds of HTTP POST requests to various API endpoints.


The logs indicated that attackers (well, probably attack bots rather than humans) were attempting to brute-force user credentials via these endpoints. We analysed the traffic to determine whether or not there were any “valid” requests to these endpoints (which there weren’t) and locked down access to the endpoints using our Cirrus WAF.

Had these requests not been noticed then the attacker could have continued their brute-force attempts forever… or at least until they had managed to achieve their goal and recover one of the target passwords. Once that was done, the real attack would have taken place, with much more dire consequences!


You cannot defend against the unknown, and so awareness is key!  Furthermore, most frameworks and standards, such as the PCI DSS and ISO 27001, state that formalised procedures need to be followed in order to detect and respond to anomalous and threatening events in a timely and effective manner.

DotSec provides log collection, aggregation, analysis, reporting and alerting services as part of our managed information security practice. In the examples above, we’ve described how we were able to assist our retail customers by detecting and preventing malicious activity using our logging and monitoring, and incident-response services. Please contact us today if you would like a hand setting up and/or managing a logging, reporting and alerting platform for your own eCommerce site.


You’re invited to breakfast! Thu, 28 Feb 2019 04:46:53 +0000

Join us for breakfast and hear about the kinds of security measures you can use to securely deploy your on-line services, either in-house or in the cloud. We’ll have plenty of time for questions and discussions, and we’ll cover off three main topics:


Securely deploy your on-line services.
Hear how automation and dev-ops help with the secure deployment of on-line environments, as well as with the ongoing security and administration of real-world, national-brand web sites.

Shield your on-line services.
Gain a good understanding of Web Application Firewalls (WAFs), and see how this essential component can be used to help secure your on-line hosting environment.

Monitor and report on your on-line services.
Find out how to most effectively keep an eye on the security and general operations of your on-line service, and how to use monitoring and alerting to support pro-active service security.


Register now!



Date: Wednesday, April 3rd 
Time: 8am – 9:30am
Venue: Sofitel Brisbane Central. 249 Turbot St, Brisbane City

Yes, there is such a thing as a free breakfast!  But you’ll need to RSVP for catering purposes before 5pm March 29!   We look forward to meeting you there!

A recent Splunk presentation Fri, 07 Dec 2018 02:05:47 +0000

What the hell was that?!?

We recently delivered a presso that described how DotSec has used Splunk for a number of interesting projects.  (In preparing the presso, I was a bit shocked to discover that we’ve actually been using Splunk now for over 10 years!  Fun times!)  Anyhow, our presentation was quite interactive, and it covered off four projects which pretty well summarise work that we do at DotSec on a fairly regular basis:

  1. Splunk for compliance.  Lots of our customers have compliance requirements, especially regarding PCI DSS, IRAP and ISO 27001.  Other customers are keen to align their computing environment with accepted infosec best practice. Logging, monitoring, reporting and alerting is a big part of achieving compliance with almost any framework or best-practice guideline, and this part of the presso showed how easily DotSec has used Splunk to help in meeting our customers’ compliance goals.

  2. Splunk for due diligence.  As shown in at least one news article almost every week, attackers are often successful in their goal of compromising and misusing an organisation’s information systems.  When this worst-case event happens, directors and C-level officers need to be able to show that the compromise was not a result of negligence. Furthermore, insurance underwriters are increasingly including questions in their coverage applications that seek to understand how effectively an organisation manages and secures its corporate computing environment.   This part of the presso discusses Splunk in the context of insurance coverage and obligations.

  3. Splunk for incident prevention.  Anyone remember an incident at Equifax?  Of course we do, and we also remember that the attackers exfiltrated stolen information over a period of 76 days before they were detected.  It’s imperative that organisations use automated tools to monitor all aspects of their computing environment, so that it’s possible to detect and respond quickly to anomalous and/or threatening activities. Without this kind of proactive approach, an organisation will only know that it’s been hosed once the damage has already been done.  And of course, this part of the presso shows how DotSec has used Splunk to assist with this kind of incident-prevention work.

  4. Splunk for incident response.  Knowing that something bad is about to happen (or has just happened) is useful, but it’s obviously also important to contain a security event once it has been identified.  The questions that are often asked are, “How many systems were hit? How much did we lose? Are the attackers still in there?”  This section of the presso describes how DotSec has used Splunk to analyse in-progress (or past) security incidents so that the most effective incident-response measures could be enacted.

All in all, it was a good presso, and we received lots of interesting questions.   The slides from the presso are available here; please have a look through and let us know if you have any questions or comments.

Until next time!

PCI DSS confusion: These are not the patches you’re looking for Wed, 24 Oct 2018 00:56:53 +0000

Or, are they? In the course of our PCI DSS-related work, we’ve noticed one issue that often causes confusion for many clients:  Do missing operating system or application patches need to be applied, even if those missing patches are only flagged by the internal vulnerability scan as medium or low risk? It’s an important question which needs to be answered carefully in order to ensure that the client remains compliant with the DSS without incurring unnecessary cost and overhead.

The short (and useless) answer is that they may do!  For the longer (and more useful) answer read on.

Patching activities and vulnerability remediation activities can overlap; however, they are actually quite separate beasts.  Let’s consider patching first:  From a purely patching perspective, PCI DSS requirement 6.2 states that you should:

“Ensure that all system components and software are protected from known vulnerabilities by installing applicable vendor-supplied security patches. Install critical security patches within one month of release.”

The testing procedures and guidance for this control go on to state that:

  • “Applicable critical vendor-supplied security patches are installed within one month of release.”

  • “All applicable vendor-supplied security patches are installed within an appropriate time frame (for example, within three months).”

This means that, regardless of any internal vulnerability scan findings, all systems must have vendor-supplied security patches installed within a month (for critical patches) or “an appropriate time frame” (for all non-critical patches).
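These two windows are easy to encode. The sketch below is illustrative only: the 30-day figure follows the requirement's "one month" wording, and the 90-day figure follows the "three months" example given in the guidance; your own policy may define different windows.

```python
from datetime import date, timedelta

# Indicative windows based on PCI DSS requirement 6.2: critical patches within
# one month of release; all other applicable patches within "an appropriate
# time frame", for which the guidance gives three months as an example.
PATCH_WINDOWS = {True: timedelta(days=30), False: timedelta(days=90)}

def patch_deadline(released, critical):
    """Return the date by which a patch released on `released` should be installed."""
    return released + PATCH_WINDOWS[critical]

def is_overdue(released, critical, today):
    return today > patch_deadline(released, critical)

print(patch_deadline(date(2018, 1, 1), critical=True))
print(is_overdue(date(2018, 1, 1), False, date(2018, 3, 1)))
```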

Now, let’s consider remediating vulnerabilities that were discovered as a result of a vulnerability scan, using a tool such as Nessus.  From an internal vulnerability scan perspective PCI DSS requirement 11.2.1 states:

Perform quarterly internal vulnerability scans. Address vulnerabilities and perform rescans to verify all “high risk” vulnerabilities are resolved in accordance with the entity’s vulnerability ranking.

This means that in order to meet requirement 11.2.1, an organisation only has to remediate “high risk” vulnerabilities identified in the internal vulnerability scan results.  And here’s where the confusion lies:  Even though requirement 11.2.1 only mandates remediation of high-risk vulnerabilities,  lower-risk findings will still need to be addressed if they result in non-compliance with other PCI DSS requirements.

Let’s consider two examples:

  1. If a vulnerability scan identifies that a system is missing medium-risk vendor-supplied security patches, these patches must still be applied in order to be compliant with PCI DSS requirement 6.2, as described above. The fact that a vulnerability scan identified the issue and reported it as only a medium risk has no bearing as to whether or not the patches must be applied.

  2. Another example is the internal vulnerability scan finding that is sometimes produced by Nessus: “SMB signing not required”. This is a medium-risk finding and, as discussed above, medium-risk findings do not have to be fixed to meet requirement 11.2.1. However, this finding is still relevant as it indicates an issue with the application of an organisation’s system configuration standards on the identified systems. PCI DSS requirement 2.2 deals with system configuration and hardening standards, and it states:  “Develop configuration standards for all system components. Assure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards.”  SMB signing is an industry-accepted best practice, as described in this document from Microsoft, and so this vulnerability would need to be addressed.
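The triage logic from the two examples above can be summarised in a short sketch. The finding names and attributes are invented for illustration (a real Nessus export would need proper parsing), but the mapping from finding type to PCI DSS requirement follows the reasoning above.

```python
def remediation_drivers(finding):
    """Return the PCI DSS requirements that make a scan finding actionable."""
    drivers = []
    if finding.get("risk") == "high":
        drivers.append("11.2.1")   # high-risk scan findings must be resolved
    if finding.get("missing_patch"):
        drivers.append("6.2")      # vendor patches apply regardless of scan severity
    if finding.get("config_issue"):
        drivers.append("2.2")      # configuration and hardening standards
    return drivers

# Invented example findings of the kinds discussed above.
findings = [
    {"name": "Missing vendor security patch", "risk": "medium", "missing_patch": True},
    {"name": "SMB signing not required",      "risk": "medium", "config_issue": True},
    {"name": "Remote code execution",         "risk": "high",   "missing_patch": True},
]

for f in findings:
    print(f["name"], "->", remediation_drivers(f) or ["no PCI DSS driver"])
```

Note that neither medium-risk finding is caught by 11.2.1, yet both still have a remediation driver elsewhere in the DSS.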

So now you have it!

So, in summary, while only high-risk internal vulnerability scan findings need to be remediated to meet requirement 11.2.1, medium and low findings may indicate compliance issues in other areas, such as patching or configuration management, which need to be addressed to meet separate PCI DSS requirements.


DotSec joins the Amazon Partner Network Tue, 16 Oct 2018 05:20:52 +0000 Overview

We’re excited to announce that DotSec is now a member of the Amazon Partner Network (APN), a global partnering program for Amazon Web Services (AWS).

DotSec has been designing, deploying and managing secure computing environments on AWS for over 4 years now; joining the APN allows us to further help our clients to securely manage their cloud-hosted businesses. 

Boost control and visibility of your data on AWS


DotSec has a strong history in the development, hosting and integration of secure systems, and AWS and DotSec can help you to create a highly secure environment on the AWS Cloud.

AWS provides all of its customers with an infrastructure that was built from the ground up with security in mind. However, assuring the security of your application stack on the AWS Cloud is your responsibility. This means leveraging APN security solutions to protect and manage your application workloads and satisfy your compliance requirements such as PCI DSS, SOC2, HIPAA/HITECH, and FISMA.

Discover the MSP Advantage

As an AWS Managed Service Provider (MSP), DotSec is capable of building and migrating large-scale computing environments to the AWS Cloud, as well as managing workloads and services being hosted on AWS. By leveraging us to manage your security and compliance on AWS, you can simplify this effort and focus on your core business. 

Our case studies page provides details on a number of relevant projects.   As described there, DotSec has integrated a wide range of AWS services to meet our clients’ requirements, including:

  • AWS Auto Scaling groups, launch configurations and Lambda functions for automated resource scaling, and automated backups and rotations of AWS storage devices.
  • AWS CloudFront for content delivery.
  • AWS RDS for database services, and AWS EC2 reserved instances for reducing hosting costs.
  • Automation and dev-ops for zero-downtime deployments and patching across all environments.

DotSec  continues to design, configure and maintain hosting infrastructure with information security at its core. New infrastructure hosting on AWS commonly includes:

  • Hardened EC2 instances, secured to an extent that exceeds the requirements dictated by standards such as the PCI DSS.
  • Regular patching of all environments using automated, zero-downtime controlled deployments.
  • Cirrus, a Web Application Firewall (WAF) that protects all Internet-accessible assets.
  • Host-intrusion detection software (HIDS).
  • Secure and customer-specific AWS IAM policies and roles.
  • Customer-specific AWS Security Groups across AWS Virtual Private Clouds (VPCs).
  • Continuous monitoring, reporting and alerting on all components including AWS CloudTrail, EC2 hosts, HIDS and WAF.

DotSec’s secure hosting and management experience ensures that our clients can stay secure while focussing on their core on-line businesses, confident of the security, robustness and manageability of their AWS-hosted environment. Contact us and we’ll show you just how secure and cost-effective your cloud-hosting can be!

CONTACT US TO START!

Testing and assessment methodologies Mon, 20 Aug 2018 06:36:20 +0000 Overview

DotSec specialises in testing applications and services for its online retail, government, finance and banking, legal, investment, online gaming, education, online payments, insurance, telco and data centre clients.

At DotSec, we pride ourselves on our independence, and on our ability to bring together the skills of experts who do not just test and assess systems, but who have developed, integrated and maintained information systems for nearly 20 years. When it comes to assessment and testing, DotSec works with you to understand your business processes, identify your assets, and assess and then manage your risks. You can be certain of receiving a complete and concise report since our assessments are not clouded by any partner, reseller or vendor relationships.

Methodology and tools

DotSec security professionals conduct a wide range of Security Audits and Threat and Risk Assessments (TRAs) which can be uninformed (blind) or informed, and which can include Penetration Tests (pen tests), code reviews and design reviews. Our core process is consistent and is based on a number of standards, primarily AS/NZS ISO 31000:2009, AS ISO/IEC 27001, the Australian Government Information Security Manual (ISM) and IS18/IT&T-14 (State). We are also highly experienced at performing internal and external Cardholder Data Environment (CDE) penetration tests, in line with requirement 11.3 of the PCI DSS.  Of course, most customers will have some unique requirements (generally relating either to scoping and the availability of scoping information, or to custom reporting requirements) and we are very happy to accommodate those needs.

Whatever the case, our customers are always presented with a detailed report which includes the following sections:

  • Executive summary, which includes summaries of the target of assessment, key findings and key recommendations.
  • Scope and asset list, which describes the target(s) of assessment in detail.
  • Findings and recommendations, which includes a list of discovered vulnerabilities, the risk associated with each vulnerability, and a summary of related risk-mitigation recommendations.
  • Threat and risk assessment, which includes detailed descriptions of the vulnerabilities or shortcomings that were discovered, the techniques that were used in the discovery, and a description of how each vulnerability could be exploited in a successful attack.
  • Recommendations, which describes how the level of risk associated with each vulnerability may be reduced to an acceptable level.

We are often asked what tools we use when completing an assessment. The fact of the matter is that any tool is only as good as its owner, and an assessment that is based on the use of a particular tool will always fall short. For the record, tools that we have used in the past include nmap, wireshark, Nessus, various Retina products, Microsoft policy tools, Splunk, various proxies, most command-line network utilities, airsnort, most automated hashing tools and so on. However, it is our assessors’ experience and insight, not the tools, which allow us to consistently deliver high-quality and valuable results.

Informed or blind?

There are a range of alternatives that you can consider when deciding on the kind of assessment that best suits your goals. Keep in mind of course that these broad categories are not set in stone, and that it may suit you to undertake an assessment which includes elements of each of the categories described below.

Uninformed (blind)

Uninformed or blind assessments are assessments for which the assessor has no information about the target of assessment, other than (in most cases) its location. Blind assessments are somewhat limited because they are generally conducted within a fixed time frame. An uninformed assessor may not discover all vulnerabilities within that fixed time frame, whereas an attacker with more time on their hands may in fact find them at a later date.

Some clients prefer blind assessments because they require little preparation and have the appearance of being done from the perspective of an Internet-based attacker. This is fine, but it is worth keeping in mind that just because an uninformed assessor does not find a vulnerability in n days, that does not mean that a real attacker will not discover it in n+1 days.


Informed

Informed assessments are assessments for which the assessor understands the details of the target of assessment, and for which the project takes the form of an audit. The assessor will generally begin an informed assessment by reviewing design, policy, and/or as-built implementation documentation. In addition, the assessor will generally have access to the target; the access may be privileged (root, admin, etc.) or unprivileged (for example, a general user of a web application). The exact kinds of accounts, documentation and other information that can be used during an informed assessment will depend upon the scope and aims of the assessment (remember we noted above that many clients have specific goals and requirements).


Informed assessments may offer better value for money than uninformed assessments, since the reconnaissance and vulnerability discovery phases of the assessment should be much more effective and efficient than is the case in an uninformed assessment. This is because the assessor can spend time reviewing the correctness and completeness of design and implementation documentation, and clarifying details with the client.

Fit for purpose

Assessments which include elements from the above core groups include source-code assessment (perhaps the most informed assessment of all), audits (where policies and procedures are evaluated) and others. Each client has their own goals and requirements. The key to a successful assessment is early agreement and documentation of the scope, cost, goals, methodology and desired outcomes.

IRAP compliance for national service provider Wed, 15 Aug 2018 04:05:56 +0000 We’ve compiled a case study that summarises 18 months of very challenging, rewarding and ultimately successful work, guiding the development of an IRAP-compliant information security management practice. 

Our client was an international service-provider to governments in Australia and overseas. In order to be able to provide services to the Australian federal government, our client needed to comply with the Australian government’s requirements for protective security and standardised information security practices. These requirements were defined within the Australian government’s IRAP framework.

Our work involved the development of policies, procedures and infrastructure, and we had to engineer organisational change without interruption to the client’s national business-as-usual activities. All the work needed to be completed within an aggressive time frame that was defined by the federal government.

We have provided plenty of information about the IRAP program in a previous post so we won’t re-hash it here. Instead, we’ve compiled a case study that summarises 18 months of very challenging, rewarding and ultimately successful work. You can read more about this, and other case studies, on our Case Studies page.

Feel free to contact us if you need some assistance in defining or executing your IRAP program of work.