DNS records (part 3) – The final pluck
https://www.dotsec.com/2020/11/13/dns-records-part-3/ – Fri, 13 Nov 2020

In the final (?) part of our investigation into abandoned DNS records and the risk that they present to organisations, we’ll review ‘elastic’ IP addresses as supported by the various cloud providers including AWS, Azure and Google Cloud. For the most part (because we’re most familiar with it) we will be using AWS as our reference cloud platform.

Elastic IPs (in AWS terminology; Azure and GCP call them static public IPs) are ‘floating’ IP addresses which can be assigned to a compute node, a load balancer or a NAT gateway. Elastic IPs are bound to a particular geographic region but not to an ‘availability zone’, so a single IP address can be assigned to instances in any of the availability zones defined in your Virtual Private Cloud (VPC).

Why would you want to assign a specific public IP address to a compute node instead of using load balancers or random platform-assigned hostnames? Some possible reasons include:

  • You need shell/terminal access to the compute node over the public Internet, and setting up a VPN and/or DirectConnect between on-premises and AWS networks is overkill.
  • You have a low-traffic site and/or no high availability requirements, so a load balancer is also overkill.
  • You need fine-grained control over the firewall (e.g. the instance ‘security group’) to permit traffic from specific source IP addresses and/or to specific destination ports on the node, a feature which may not be available with other public-facing services.

Whatever the reason, elastic IP addresses are commonly used by organisations in their cloud portfolios. Of course, client applications and human users rarely address endpoints using (hard-coded) public IP addresses, so it is common to set up A records in the organisation’s DNS zone, mapping a human-readable hostname to that elastic IP address.

Just as was described in Part 1, organisational risks arise when the resources these records point to are later decommissioned but the records themselves are not. Just like hand-me-down clothes, unused elastic IP addresses can be recovered and handed out to other users of the cloud platform at a later date. An attacker who obtains an elastic IP address in this way, and who can discover the hostname that maps to their new IP address, can use that address to mount fairly realistic phishing (or other) attacks.

Note that these elastic IPs can appear in different types of DNS records, such as A or MX records. If an organisation loses control of an elastic IP address appearing in one of its A records, an attacker can use that IP address not only to host a malicious web application, but also to conduct realistic phishing attacks using the organisation’s own domain!

How does that work then, eh?

Recall that RFC 5321 specifies that, if the domain associated with an email address has no MX record, Mail Transfer Agents (such as Exchange or O365) should fall back to the A record for that domain. This is historical behaviour which pre-dates the existence of MX records and which continues to this day for compatibility reasons. Let’s look at an example: say an A record for sub.example.com.au points to an elastic IP that an attacker controls; the attacker can then send email to potential victims using the “From” address of reply@sub.example.com.au. If a victim replies, the attacker will receive that response at the elastic IP address they control (assuming there is no MX record for sub.example.com.au, which is a reasonable assumption). Why go to the bother of performing Business Email Compromise scams by hacking O365 mailboxes and performing man-in-the-middle attacks with malicious INBOX filter rules, when you can just send an email from a legitimate subdomain and get the reply sent straight to your inbox?
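
To make the fallback concrete, here is a minimal sketch using the dnspython library; the domain below is illustrative only.

    # Sketch of the RFC 5321 fallback: if a name has no MX record, mail is
    # delivered to its A record instead -- exactly what an attacker holding
    # the elastic IP behind that A record wants.
    import dns.resolver

    def mail_delivery_target(domain: str) -> str:
        try:
            answers = dns.resolver.resolve(domain, "MX")
            best = min(answers, key=lambda r: r.preference)  # lowest preference wins
            return str(best.exchange).rstrip(".")
        except dns.resolver.NoAnswer:
            # Domain exists but has no MX record: fall back to the A record.
            return dns.resolver.resolve(domain, "A")[0].address

    print(mail_delivery_target("sub.example.com.au"))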

And how widespread is it?

Very! We have built a tool that continually allocates and releases Elastic IP addresses in AWS, and that looks up each newly allocated Elastic IP address in the Project Sonar data to see if there are one or more A records pointing to that IP. If there are no A records pointing to the elastic IP address, the address is released back into the pool.

The tool is designed to operate within some limits:

  • The tool must operate within the default AWS limits, which allow a maximum of five elastic IPs to be allocated per region.
  • It is necessary to wait between allocation attempts; otherwise you’ll just be reallocated the addresses you were allocated previously.

Even within these limits, our tool allows us to consistently obtain several elastic IPs in the ap-southeast-2 region per hour. “That’s not many!”, you may say, but run the tool over a couple of weeks and you’ll have your hands full of abandoned A records! Of course, you have no control over which A records are pointing to your elastic IP, but for most attackers that probably doesn’t matter: if a business is pointing its DNS records to a cloud provider (IaaS) IP address, it’s likely to be big (read: lucrative) enough to make it worth the attacker’s time.
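
For the curious, the core of the tool is no more complicated than the following heavily simplified sketch; sonar_a_records() is a hypothetical stand-in for a lookup against a local copy of the Project Sonar forward-DNS dataset.

    # Allocate an elastic IP, keep it if any A records point at it,
    # otherwise release it back into the pool and wait before retrying.
    import time
    import boto3

    ec2 = boto3.client("ec2", region_name="ap-southeast-2")

    def sonar_a_records(ip):
        """Hypothetical: return hostnames whose A records point at ip."""
        raise NotImplementedError

    while True:
        alloc = ec2.allocate_address(Domain="vpc")
        ip, alloc_id = alloc["PublicIp"], alloc["AllocationId"]
        hostnames = sonar_a_records(ip)
        if hostnames:
            print(f"Keeping {ip}: dangling A records -> {hostnames}")
        else:
            ec2.release_address(AllocationId=alloc_id)
        # Waiting reduces the chance of being handed the same address again.
        time.sleep(600)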

Of course, it would be great to show you some example sites that can be built in order to take advantage of the corresponding abandoned records but we can’t…

…since doing so would leave the discovered records open to misuse. We have found that even after we inform domain owners that they are at risk, the records tend to be left dangling indefinitely, so providing any further details would just contribute to an already long-term problem.

What to do then?

To begin with, it may seem we are picking on AWS here, but rest assured that the same risk lies in the static (recycled) IP addresses offered by other cloud providers. Some providers offer a bring-your-own (BYO) IP address service, which should eliminate this particular risk.

Whatever the provider, the root cause of this issue is that DNS records for cloud-provider-assigned IP addresses are not cleaned up after resources are decommissioned. Such a clean-up process should be part of your Change Management documents (you do those, right?) which describe the decommissioning process for unused cloud resources.

For an organisation’s existing DNS records, the easiest way to check if you are affected by this issue is to iterate through the records in your DNS zones (using whatever API your DNS provider offers) and search for any AWS (https://ip-ranges.amazonaws.com/ip-ranges.json) or Azure (https://www.microsoft.com/en-au/download/details.aspx?id=56519) IP addresses therein. Once you have your target list, you will then need to manually confirm that the record points to an existing resource managed by your organisation. If it doesn’t, you should delete the record to mitigate this kind of risk.
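
As a rough illustration of that check, the following sketch pulls the published AWS ranges and flags matching records; the records dictionary is a placeholder for whatever your DNS provider’s API returns.

    # Flag any A record in the zone whose value falls inside a published
    # AWS range; each hit must then be confirmed manually.
    import ipaddress
    import requests

    data = requests.get("https://ip-ranges.amazonaws.com/ip-ranges.json").json()
    aws_nets = [ipaddress.ip_network(p["ip_prefix"]) for p in data["prefixes"]]

    records = {"www.example.com": "52.62.0.1"}  # placeholder zone data

    for name, ip in records.items():
        if any(ipaddress.ip_address(ip) in net for net in aws_nets):
            print(f"{name} -> {ip} is in an AWS range; confirm it is still yours")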

Still dangling! (DNS records – part 2)
https://www.dotsec.com/2020/09/17/dns-records-part-2/ – Thu, 17 Sep 2020

In our previous post, we examined the risks of leaving ‘dangling’ CNAME records pointing to DNS zones which are not under the domain-owner’s sole control. The consequences include increased risk of successful phishing attacks as well as reputational damage. The examples given in that post focused on Azure ‘App Services’ as those services are vulnerable to the kinds of subdomain takeover attacks previously described.

Lest we be accused of picking on Azure, let us now focus on some AWS services which are also prone to such attacks.

Elastic Beanstalk

Elastic Beanstalk is a platform-as-a-service (PaaS) offering from AWS which allows you to quickly deploy an application running on a variety of platforms, such as Java, ASP.NET or Python, without having to bother yourself with setting up VPCs, EC2 instances or load balancers. The service does all that for you (under the covers) with a few button clicks, allowing you to focus on the development of the application itself.

When creating a Beanstalk application, you can supply a custom subdomain, and your application will be available under the URL:

   https://<custom_domain>.ap-southeast-2.elasticbeanstalk.com

In the above URL, “ap-southeast-2” refers to the AWS region which you have configured to host the application.

If you choose a custom domain which is already taken, you will receive an error:

If however you choose a domain that is available, things will work out just fine. Here’s one that someone prepared earlier:
That domain is available, so you will be able to deploy your application under the Beanstalk URL https://mafat-prod.ap-southeast-2.elasticbeanstalk.com. But of course, if anyone has a CNAME record pointing to the above subdomain of elasticbeanstalk.com, then your Beanstalk application will also be accessible via that alternative URL.

Of course, it’s impossible to tell directly which domains have a CNAME record pointing to an ‘available’ Elastic Beanstalk subdomain: there is no equivalent of the PTR record for CNAMEs. The best you can do is try to enumerate them all and find the ones you are interested in.

Fortunately, with a bit of scripting, you can do that yourself! Rapid7’s Project Sonar provides a wealth of data from their efforts to ‘scan’ the Internet. This includes a 20GB (uncompressed) file of approximately 193 million CNAME records which they have obtained from the hostnames seen in various sources such as Certificate Transparency logs as well as any SubjectAltNames appearing in endpoint certificates encountered during their scans.

Of these 193 million CNAME records, approximately 90K point to a subdomain of elasticbeanstalk.com, and about 9K of those are ‘dangling’ (i.e. available for takeover). If we restrict ourselves to subdomains of ap-southeast-2.elasticbeanstalk.com (i.e. the ones associated with the AWS Sydney region), there are 421 domains available for takeover.
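
A simplified sketch of that filtering follows; it assumes the Sonar CNAME dataset has been downloaded and decompressed to one JSON object per line, each with name and value fields (the file name here is illustrative).

    # Find CNAME records pointing into the Sydney Beanstalk namespace whose
    # targets no longer resolve (i.e. are available for takeover).
    import json
    import dns.resolver

    def is_dangling(target):
        try:
            dns.resolver.resolve(target, "A")
            return False
        except dns.resolver.NXDOMAIN:
            return True
        except dns.resolver.NoAnswer:
            return False

    with open("sonar_cnames.json") as f:
        for line in f:
            rec = json.loads(line)
            if rec["value"].endswith(".ap-southeast-2.elasticbeanstalk.com"):
                if is_dangling(rec["value"]):
                    print(f"{rec['name']} -> {rec['value']} (takeover candidate)")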

WorkMail works well!

WorkMail and WorkDocs are AWS services offering end-user applications to organisations, in the form of managed email and online file storage respectively.

Similar to the Elastic Beanstalk case, attempting to register an access point which has already been taken generates an error:

And similar to the Elastic Beanstalk case, there are organisations who have created CNAME records pointing to a subdomain of .awsapps.com which can be taken over by an attacker. Using the Project Sonar data we were able to find 13 such domains which were vulnerable to ‘takeover’ of their WorkDocs/WorkMail services.

And it works... in Cognito (yep, we wrote that 🙂

AWS Cognito is a service which allows you to set up an (OAuth2-based) identity provider, which you can then use to provide authenticated access to service providers under your control.

Again, the Cognito service allows you to create a custom subdomain which you can use to direct your users to login:

Similar to the WorkMail case above, if an organisation has a CNAME record pointing to an unclaimed Cognito subdomain, an attacker who registers that subdomain can take advantage of it. In this particular case, Cognito URLs are inherently related to authentication, and hence would be ideal for use in phishing campaigns to steal user credentials.

Having said all that, a search of the Project Sonar data did not turn up any dangling CNAME records for Cognito services – which is good! 

Conclusion

While we have noted that several AWS services are potentially vulnerable to takeover via CNAME records, we also emphasise that the vast majority are not, including favourites such as:

  • EC2
  • ELB
  • CloudFront
  • RDS
  • Elasticache
  • API Gateway

This is because the subdomain names created when you create an instance of one of those services include enough randomness, for example:

    myelb-2d1234ce2e3a2g8b.elb.ap-southeast-2.amazonaws.com

This makes creating a load balancer with just the right name practically infeasible.

Having said that, the widely-used Elastic Beanstalk service is vulnerable to subdomain takeovers. So the conclusion of the previous post remains: clean up those ‘dangling’ DNS records before someone cleans you up!

 

DNS records – abandon at your peril
https://www.dotsec.com/2020/07/15/dns-records-abandon-at-your-peril/ – Wed, 15 Jul 2020

Recently, there has been some interesting news describing how attackers have been able to take over various subdomains by taking advantage of abandoned DNS records.

To recap, this is a security misconfiguration issue:

  1. A victim organisation sets up (perhaps in a testing scenario) a service on a public cloud provider such as Azure.
  2. The organisation then creates a CNAME pointing an entry in the organisation’s DNS records to the cloud-provider endpoint.
  3. Some time later, the organisation deletes the cloud provider service (it was only temporary after all), but forgets to delete the CNAME.
  4. An attacker comes along, finds the abandoned DNS record, and creates a service in the same cloud provider with the same endpoint DNS name.
Voila! The attacker now owns and controls an endpoint (web site, service, whatever) that is pointed to by the victim’s legitimate DNS records.

So what? Why should I care?

So, what is the harm in that you might say? Well, just ask Epic Games!

Back in March 2020, some of the Epic Games subdomains were hijacked to serve poisoned PDF files. From the user’s perspective, they were downloading documents from a legitimate Epic Games web site, but malicious code in the documents (along with other vulnerabilities in the Epic Games infrastructure) may have led to compromised user accounts for an affiliated mobile app.

The use of a hijacked subdomain for phishing purposes provides a number of clear advantages for attackers:

  • Rather than setting up a completely new (but related) domain name (e.g. <target_org>-verification.com instead of <target_org>.com) to attempt to trick users, you can take advantage of target users having innate trust in their own domain. Those security-awareness training courses never told you to be suspicious of your own domain, right?
  • It’s likely that the target organisation’s mail and web content filters are going to be lenient on content containing URLs using their own domain – indeed they may have explicit policies to whitelist such URLs, lest IT security starts interfering with their business processes!
  • Common ‘low-risk’ application vulnerabilities in the target organisation’s web applications, such as weak Content-Security-Policy headers or use of common domain cookies, suddenly become a whole lot more serious when an attacker controls an application which uses your domain.

So what kind of cloud services are vulnerable, how does the issue arise and what can you do to prevent it?

I want to try this! What should I do?

To take a well-known example (Azure App Services), suppose we want to create a test Azure web application in the Azure portal. The first thing to do is choose a name:

Bummer – ‘test6’ is already taken by someone. What about ‘test61’?

Perfect! Now what?

We now proceed to develop our web application and deploy it to Azure under this name. But I don’t want my users to have to hit “https://test61.azurewebsites.net”; instead, I’d prefer them to visit “https://test.dotsec.com”; it’s all about aesthetics you know 🙂

In order to serve your Azure app under a custom domain (such as dotsec.com) you need to prove you own the domain. The typical way to do this is to create a TXT record in your DNS zone with the value provided to you by the Azure portal. The Azure portal will then look up this record using a public DNS server and, if it exists, the domain is considered validated. A similar DNS validation procedure will allow us to generate a certificate from a Certification Authority such as DigiCert.
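
The validation check itself is straightforward; here is a minimal sketch using dnspython, where the record name and token are illustrative only.

    # Look up a TXT record and compare it to the expected validation token.
    import dns.resolver

    def txt_token_present(name, expected):
        try:
            answers = dns.resolver.resolve(name, "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return False
        # A TXT rdata may be split into multiple character strings.
        return expected in (b"".join(r.strings).decode() for r in answers)

    print(txt_token_present("asuid.test.dotsec.com", "EXAMPLE-TOKEN"))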

Once we have all this set up, the final step is to create a CNAME record in our DNS zone which points test.dotsec.com to test61.azurewebsites.net:

Now our app is all set up under https://test.dotsec.com and our users are happy.
 
After a while, we decide that we no longer need that web app (it was a test application after all), so we delete it in our Azure Portal (to save money) and continue on with our next important IT project.

However, we have now just created the perfect conditions for a hostile subdomain takeover!

Enter the dragon!

Attackers targeting your organisation will be constantly trying to enumerate all the hostnames in your DNS zone. They won’t attempt a zone transfer (since it’s almost certain that operation is not possible with your DNS provider – it’s not 1995 after all); rather, they will use a bunch of open source sites and freely available tools to find valid DNS records in your zone.

At the end of the day, these tools will notice that since you deleted your Azure web app, the DNS CNAME record for test.dotsec.com still points to test61.azurewebsites.net, but the latter hostname no longer resolves. Bingo! All the attacker now needs to do is create their own Azure app with the name ‘test61’, and then host a phishing site at test61.azurewebsites.net, which will be hit when your users visit test.dotsec.com again! Of course, the attacker must also add the ‘custom domain’ test.dotsec.com to their Azure app service, and to do that they need to prove domain ownership, right? Well, no – according to our tests, once that particular App Service name has been validated by the original (target) organisation, the attacker does not need to perform any further validation. Thanks Azure!
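
The core of what those tools do can be sketched in a few lines of Python; the hostnames below are illustrative.

    # For each hostname of interest, follow its CNAME and flag targets
    # that no longer resolve -- the classic dangling-record signature.
    import dns.resolver

    for host in ["test.dotsec.com", "www.dotsec.com"]:
        try:
            target = str(dns.resolver.resolve(host, "CNAME")[0].target).rstrip(".")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            continue  # no CNAME record at this name
        try:
            dns.resolver.resolve(target, "A")
        except dns.resolver.NXDOMAIN:
            # CNAME exists but its target is gone: candidate for takeover.
            print(f"{host} -> {target} is dangling!")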

Won't someone think of the certificates?

The attacker would prefer that potential victims visit https://test.dotsec.com, not http://test.dotsec.com – their security awareness training has taught them to look for the padlock, right? The attacker cannot use DNS verification to prove domain ownership of dotsec.com because they don’t control the DNS records.

Fortunately (for the attacker), most Certificate Authorities alternatively allow you to prove control of a site by placing a text file with well-known content on that site. Since the attacker already controls the site, they can put the required content there and get a certificate that way.

How do you prevent all this happening in the first place? You just need to make sure you clean up those ‘dangling’ DNS records that are no longer pointing to a real resource. This will prevent attackers from cyber-squatting on your real estate and putting themselves in prime position to attack your users. Microsoft’s own advice on this matter makes it pretty clear.

The end.... ?

Now, as we come to the end of our post, you may be thinking that Azure (with its simple naming scheme for App Services) is alone in facilitating subdomain takeovers. Unfortunately, there are many cloud services which may potentially be vulnerable. Stay tuned for part 2 of this series of blog posts, where we take a look at some vulnerable AWS services which can be targeted by similar methods.

The sky is falling!
https://www.dotsec.com/2020/06/21/the-sky-is-falling/ – Sun, 21 Jun 2020

As you will be aware by now, the Prime Minister warned Australians of a “sophisticated, state-based cyber actor” targeting Australian organisations and all tiers of government.

But is the sky really falling and if it is, will we all be equally devastated when it crashes down?  And what are the risks associated with the reported attacks?  This post aims to provide you with some of that information. 

How sophisticated?

According to the ACSC documents (there is a summary one and a more detailed one) that were referenced in the press conference, the attackers are reportedly targeting a number of vulnerabilities in various commercial software, in order to gain initial access to systems:

  • Telerik UI – CVE-2019-18935
  • VIEWSTATE handling in Microsoft IIS servers
  • Citrix products – CVE-2019-19781
  • Microsoft SharePoint – CVE-2019-0604

Patches have been available for all of these vulnerabilities for between three and seven months. For example, the Telerik UI vulnerability is described in CVE-2019-18935; a patch was released for this vulnerability in 2019, and the vulnerability can be freely demonstrated and exploited with Metasploit. An organisation that has not applied these patches doesn’t need to worry just about “significant state-based cyber actors”; any attacker with the slightest clue can exploit these vulnerabilities with very little effort.

Let's go phishing!

In the event that the attackers are unable to exploit the above vulnerabilities, they are apparently falling back to good old spear-phishing attacks. It is reported that the attackers are using various methods for this, such as:

  • Sending emails to targets which contain links to credential harvesting websites (i.e. phishing sites). The attackers are reportedly masking these URLs by exploiting open redirect vulnerabilities.
  • Sending emails to targets which contain links to download malicious Microsoft PowerPoint documents from OneDrive and DropBox, as well as simply attaching the Microsoft PowerPoint document to the email.
  • Sending emails to targets which contain links to OAuth token theft applications.
  • Sending emails to targets that contain images which allow the attackers to identify users who have opened the email, in turn identifying those users as more susceptible targets.

It is reported that the attackers are making use of compromised Australian websites for command and control servers. It is suspected that this is being done to bypass geo-IP blocking mechanisms and to appear innocuous to administrators monitoring DNS and proxy traffic.

Once again, none of this is sophisticated or uncommon. While the press conference might imply that other governments are responsible for some (or lots) of these attacks, they’re not the only ones in the game: just review this year’s news reports for evidence of organised criminal attacks on businesses in sectors as varied as logistics, transport, brewing, cloud, finance and wool sales. If a business does not implement application whitelisting, privileged account management and user education, phishing is probably the easiest and most sure-fire way for an attacker to get into their organisation.

So is the sky really falling?

Not all of it… but some fairly heavy chunks have been crashing down for a while now, and some of the newer threats (like ransomware attackers now leaking stolen data as well as encrypting it) have resulted in consequences (think Toll, MyBudget, Lion and Landmark White) that have been both high-profile and expensive.

To some extent, it doesn’t matter if the attackers are random individuals, organised criminals or overseas governments: If we consider the vulnerabilities and attacks that were described in the PM’s press conference, then the likelihood and consequences (a.k.a. risk) of a successful attack could easily be reduced with some foresight and planning. The press conference referred to known vulnerabilities for which patches exist, and to tactics and techniques that are not terribly sophisticated. An organisation that has solid, documented and verified security policies and processes in place should be alert, but not alarmed.

If, on the other hand, the press conference has made you realise that you’re behind the eight-ball, then that’s a good thing, because now you can make some (in many cases quite simple) improvements to your organisational security standing.

1) Patch!

Two of the ASD Essential 8 controls relate to patching.  If you’re reading this because you haven’t patched the vulnerabilities described in the government announcement, then you may need to act quickly: You need to patch all Internet facing software, operating systems and devices as soon as possible (i.e. within a time that is measured in hours and days, not weeks and months).  Once that’s done however, you need to plan, document, implement and review (monthly) your Patch Management and Vulnerability Management policies and procedures.  You should separate administration (applying the patches and managing the vulnerabilities) from compliance (ensuring the patches and vulnerabilities are managed according to policy) and you should report on compliance as discussed in Point 3 below.

2) Implement two-factor authentication. Everywhere. Especially on Cloud services!

Two-factor authentication is the most common example of MFA, or Multi-Factor Authentication. The idea is to reduce the attacker’s opportunities by reducing the total reliance on passwords, the most commonly used single factor of authentication. DotSec strongly recommends that all Internet-accessible systems be configured to accept only two-factor authentication, according to documented identity and access management policies and procedures. This includes, but is not limited to:

  • Email and file sharing services,
  • Remote access connections,
  • Company portals, and
  • Office 365 and similar “cloud” services.

Two-factor authentication is (again) a part of the ASD’s Essential Eight strategies for mitigating information security incidents.

3) Get your SIEM (alerting and reporting) in order

Without proper alerting and reporting, it is difficult, if not impossible to detect and respond to attacks from internal or external adversaries. We understand that trialling, developing and implementing such a system in a timely manner is not practical for a lot of businesses, so please give us a call today if you require assistance. DotSec has deployed alerting and reporting security solutions (SIEM) for many national customers.

4) Work through the other ASD Essential Eight strategies

While most people in IT are familiar with the ASD Essential Eight strategies for mitigating information security incidents, a lot do not implement them. We covered off three already in points one and two, and there’s no time like the present to get started on the rest. Note that we list “the rest” here because implementation of controls like Application Control (whitelisting) and administrative account management will require a bit of planning, and so will take longer to implement. If you are worried that you might fall foul of the attacks that were discussed in the government’s press conference, you should get started with the low-hanging fruit (patching, MFA and logging) right away. But don’t forget to put in place a plan that will bring you back to the remaining controls in a timely manner.

Call a mature 20-year-old!

With over 20 years of experience, DotSec can help you plot a calm, rational course that takes into account your risks, budget and in-house skills.  We can help you understand and comply with security frameworks, we can manage your info/cyber security services, and we can help you to develop a prioritised, risk-based approach to securing your organisation’s assets. 

Please see here on our website for more information on DotSec’s informed organisation security assessments, or give us a call.

Scareware v1 – Just silly… probably
https://www.dotsec.com/2019/06/11/scareware-v1-just-silly/ – Tue, 11 Jun 2019

Along with lots of other people on the Internet, you’ve probably received an unsolicited email, not only threatening you but claiming to have stolen your password and hacked your web cam. The emails generally go along the following lines:

While poorly worded, the email can certainly appear alarming and indications are that perhaps the attacker does have a password, and could really carry out their threat.

My first thought however was, “what rubbish!”  I use two-factor authentication and even if I was worried about people’s perceptions of my browsing habits, my laptop camera doesn’t seem to be working…

…but still… that password in the email looks good and random… could it be one that I use or have used? A quick check on the Have I Been Pwned site and voila! There it is! It has been stolen! Now, was it mine, and if so, where did I use it? Another quick check, this time through the backups of my password-manager database, and there it is again! It’s a password that I used on a brewing site that I frequented a couple of years ago; the site must have been compromised after I stopped using it, but my account details must still have been lying about… unexpired and unencrypted… thanks, site owners!
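
If you’d like to perform the same check without pasting a live password into a web form, the Pwned Passwords range API supports a k-anonymity lookup; a small sketch (assuming the requests library) follows.

    # Only the first five characters of the SHA-1 hash leave your machine.
    import hashlib
    import requests

    def times_pwned(password):
        sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}")
        resp.raise_for_status()
        for line in resp.text.splitlines():
            candidate, count = line.split(":")
            if candidate == suffix:
                return int(count)  # seen this many times in breach corpora
        return 0

    print(times_pwned("hunter2"))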

I’m stuffed!

So I know that the password was only used on an old brewing site which contains no PII or payment details, and I know that the attacker cannot access my accounts or my web cam.  I can therefore be confident that this is just silly scareware which can safely be deleted.

But I might not be so cocky if I realised that I had reused the brew-site password elsewhere, especially if I had used it on a site that I really cared about like work, or perhaps Office 365.  Why would I be more worried?  Because then the attacker’s claim might be true.  But even more worryingly, because when my username and password are stolen from one compromised site, they can be reused across multiple other sites in an attack known as credential stuffing.

Credential stuffing is a really common attack and in December and January this year (2019) we assisted three separate businesses who were all defrauded of around $40K, and at least one (perhaps all, it’s hard to be certain without proper logs) of those frauds started life as a credential stuffing attack. Basically, the victim had reused his/her username and password when setting up a range of on-line accounts, including personal and social-media sites, and his/her work Office 365 account.

Eventually, one of the sites on which the victim had an account was compromised, and the attacker was able to steal the victim’s username and password for that site. The victim’s employer did not enforce two-factor authentication on the organisation’s Office 365 service, so it was trivial for the attacker to log onto Office 365 with the reused credentials and masquerade as the victim, eventually defrauding the victim’s employer of around $40K.

That’ll do

To conclude, here are few take-away messages that are worth remembering:

  • Don’t reuse passwords across different web sites and servers. The more you reuse a password, the more likely you are to suffer from a credential theft and stuffing attack. From once-off brewing-site breaches to real jackpots like the Collection1 example, we’ve seen password reuse result in outcomes ranging from mild inconvenience through to fraud worth over $40K. You need not just take our word for it though; other researchers have conducted extensive studies which show that if you reuse your password (and user ID) across multiple sites, you’re going to be done over… it’s just a matter of when. And they’ve also shown that when you get done over, you’re probably gonna pay… big time!
  • Do use a password manager, and use it properly (where that includes secure backups, strong manager keys and/or passwords, and use on a secured host) so that it remains secure.  Done correctly, a password manager precludes the need to remember or insecurely record or reuse passwords, greatly reducing the effectiveness of password-reuse (and silly scareware) attacks.
  • If you run a business, move to two-factor authentication (2FA) and Single Sign-On (SSO).  Seriously, the mechanisms and procedures to support 2FA and SSO have been around now for 20 years and it’s not a big deal… even social-media sites do it!
The KeePass password manager – available for almost every platform

In a subsequent post, we’ll have a look at some not-so-silly scareware which has been used to try to extort money with the threat of destroying an entire organisation’s on-line reputation.

Until then, safe browsing!

It’s not what you know…
https://www.dotsec.com/2019/05/17/its-not-what-you-know/ – Fri, 17 May 2019

(Actually, that’s exactly what it is!)

Monitoring eCommerce sites for compromise

DotSec knows that securing eCommerce sites properly can be tricky. Various best-practice guides to securing eCommerce software such as Magento do exist (see [1], [2] below) but despite the efforts of all concerned (including system owners, third-party providers, developers and administrators) system compromises are fairly common.

Furthermore, the consequences of a compromise are generally serious, and can include loss of Personally Identifiable Information (PII), site defacement, and loss of cardholder/payment details.

As you’ll have seen from previously publicised site compromises, one of the key shortcomings that allows an attack to be successful is the lack of visibility and awareness on the part of the site owner. In many recent attacks, the target site has been compromised for weeks or months before the site-owner becomes aware of the damage. Consider just a couple of recent examples:

This is the kind of advertising that money can’t buy!

Had the owners/operators of these (and dozens of other compromised) sites been aware of what was happening, the magnitude and consequences (including international publicity and fame!) of the attacks would have been far less. But constantly watching for small (and relevant) signs of malicious activity is hard work, and that is why one of the key components of DotSec’s managed, secure-hosting services is pro-active logging, reporting and alerting!

DotSec provides fully-managed, highly available hosting that addresses relevant requirements from the PCI DSS, for a number of leading Australian national retailers. Our customers’ marketing and web-dev teams need to operate autonomously as they organise new product catalogs, sale events, and new marketing tools and features. DotSec cannot (and should not) interfere with those operations, since the business depends upon their timely completion, but DotSec can keep an eye on things and alert the marketing and web-dev teams when their changes make the shopping site vulnerable to a bit of credit-card swiping. Here are just a few examples of how we work.

Case #1 – The Russians are coming here!

Now this was one of the more interesting incident identification and response cases for a long while! Some time back, DotSec notified one of our managed-services customers that a desktop within the customer internal network was probably compromised with malware; the malware appeared to be logging user activities, and sending logs of those activities to overseas attackers.

The customer in question only uses our Cirrus WAF, so we don’t have a complete SIEM/SOAR infrastructure in place in the customer’s computing environment. Nonetheless, Cirrus does its job well, and the logs that the WAF generated showed some interesting activity:

  • The Cirrus WAF logs indicated that on multiple occasions, a user within the customer’s internal network was making requests to a very specific URL inside of the administrator interface of Magento. By way of example, here is one of the requested URLs:
/index.php/secureadmindrs3d222ff32f/order_order/detailreport/orderId/12237/incrementId/13f43358/storeId/19/key/c1d9aadf...[etc]...92623f0523daa93f344a704d
  • All requests (irrespective of their source) to the customer’s web site go through the Cirrus WAF and so we could determine that a couple of days after a request to the admin URL was made from the internal desktop, a Russian-based IP address made the exact same request! As you can see above, the keys within the URL are essentially random, so it is highly unlikely (let’s say, as-good-as impossible) that someone in Russia could simply “guess” the URL correctly.
  • To add to the unlikely nature of someone in Russia guessing the URL, the Russian-based addresses always duplicated a request that was made by the internal desktop, and always a couple of days after the desktop had first made the request.

The patterns that emerged over a couple of days indicated that an internal desktop had become infected with some kind of monitoring malware, and malicious attackers were retrieving (or being sent) data from that desktop whenever it requested sensitive (admin) URLs.
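
In essence, the correlation reduces to the following sketch, with log parsing boiled down to a list of (timestamp, source, URL) tuples; the addresses and URLs are illustrative.

    # Flag admin URLs first requested from the internal network and later
    # requested, verbatim, from an external address.
    from datetime import datetime
    from ipaddress import ip_address, ip_network

    INTERNAL = ip_network("10.0.0.0/8")  # illustrative internal range

    events = [  # normally parsed from the Cirrus WAF logs
        (datetime(2019, 3, 1, 9, 0), "10.1.2.3", "/index.php/secureadmin/order/12237"),
        (datetime(2019, 3, 3, 2, 14), "203.0.113.50", "/index.php/secureadmin/order/12237"),
    ]

    first_seen = {}
    for ts, src, url in events:
        if ip_address(src) in INTERNAL:
            first_seen.setdefault(url, ts)
        elif url in first_seen:
            lag = ts - first_seen[url]
            print(f"External replay of internal admin URL after {lag}: {src} {url}")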

The following chart depicts the occasions where a Russian-based IP address requested a Magento administrator URL that was previously requested by a user within the customer’s internal network.

The Cirrus WAF had been configured to only allow access to the Magento administrator interface from a whitelist of source IP addresses, so the requests from the Russian-based IP addresses were blocked and the repeated attack attempts were unsuccessful.

While it’s good that the attacks failed, the logs that were generated by the attack attempts were still valuable, because they illustrated the fact that a desktop on the internal network was compromised. Having realised this fact, DotSec could alert the customer to the compromise, and ensure that the desktop in question was investigated and addressed immediately.

Case #2:  Just take my creds!

As required by PCI, and because we are genuinely curious, DotSec was performing routine log analysis, looking for any anomalies in web requests to one of our customers’ web sites. A couple of examples of things that we like to check for when performing log analysis of our customer sites are listed below (a simplified sketch of the first check follows the list):

  • Requests for unusual files. This may include files with “unusual” file extensions (such as .zip, .backup, .sql, .xml, .txt, and even .php, to try and catch any shells).
  • An unusually high number of, or unusual pattern of:
    • HTTP GET requests for pages or files.
    • HTTP POST requests to any given URL.
    • Requests from overseas based clients.
    • Distinct clients requesting a single URL.
    • Distinct clients making multiple requests over an extended period of time.
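
As promised, here is a simplified sketch of the unusual-file check; the log file name and the combined log format are assumptions.

    # Flag requests for files with extensions that should rarely, if ever,
    # be served by an eCommerce site.
    import re

    SUSPICIOUS = (".zip", ".backup", ".sql", ".xml", ".txt", ".php")
    request_re = re.compile(r'"(?:GET|POST) (\S+) HTTP')

    with open("access.log") as f:
        for line in f:
            m = request_re.search(line)
            if m and m.group(1).lower().endswith(SUSPICIOUS):
                print(line.rstrip())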

While performing our log review, DotSec was alerted to the fact that an attacker had crafted a request that was designed to exploit a vulnerability in a plugin that was used by the web-dev and marketing team; the aim of the exploit was to allow the attacker to download the local.xml configuration file for the Magento application.

The local.xml configuration file contained credentials for the production database and so when we saw what the attacker was attempting, DotSec promptly alerted the customer’s web-development team to the issue.

Furthermore, DotSec took immediate action by restricting access to the plugin via the Cirrus web application firewall (WAF), re-issuing new credentials, and conducting an investigation using Splunk to determine if/how the vulnerable component had been abused in the past; these activities prevented exploitation of the vulnerable component while the web-dev team worked on a longer-term fix.

Case #3:  Oh, you brute!

On a separate occasion we were analysing various HTTP POST requests made to a customer’s web server, and we began to see some unusual patterns emerge. Namely, a handful of foreign IP addresses were making hundreds of HTTP POST requests to various API endpoints, such as:

  https://site/index.php/api/v2_soap
  https://site/downloader/
  https://site/api/xmlrpc/
  https://site/index.php/rss/

The logs indicated that attackers (well, probably attack bots rather than humans) were attempting to brute-force user credentials via these endpoints. We analysed the traffic to determine whether or not there were any “valid” requests to these endpoints (which there weren’t) and locked down access to the endpoints using our Cirrus WAF.
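
Spotting that pattern is, at its core, just counting; a sketch follows (again assuming combined log format, with an illustrative threshold).

    # Count POST requests per (source address, endpoint) pair and report
    # the outliers that suggest brute-forcing.
    import re
    from collections import Counter

    post_re = re.compile(r'^(\S+) .*"POST (\S+) HTTP')
    counts = Counter()

    with open("access.log") as f:
        for line in f:
            m = post_re.match(line)
            if m:
                counts[(m.group(1), m.group(2))] += 1

    for (src, endpoint), n in counts.most_common():
        if n > 100:  # illustrative threshold
            print(f"{src} made {n} POST requests to {endpoint}")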

Had these requests not been noticed then the attacker could have continued their brute-force attempts forever… or at least until they had managed to achieve their goal and recover one of the target passwords. Once that was done, the real attack would have taken place, with much more dire consequences!

Summary

You cannot defend against the unknown, and so awareness is key! Furthermore, most frameworks and standards, such as the PCI DSS and ISO 27001, state that formalised procedures need to be followed in order to detect and respond to anomalous and threatening events in a timely and effective manner.

DotSec provides log collection, aggregation, analysis, reporting and alerting services as part of our managed information security practice. In the examples above, we’ve described how we were able to assist our retail customers by detecting and preventing malicious activity using our logging and monitoring, and incident-response services. Please contact us today if you would like a hand setting up and/or managing a logging, reporting and alerting platform for your own eCommerce site.

References
[1] https://magento.com/security/best-practices
[2] http://docs.magento.com/m1/ce/user_guide/magento/magento-security-best-practices.html


You’re invited to breakfast!
https://www.dotsec.com/2019/02/28/youre-invited-to-breakfast/ – Thu, 28 Feb 2019

Join us for breakfast and hear about the kinds of security measures you can use to securely deploy your on-line services, either in-house or in the cloud. We’ll have plenty of time for questions and discussions, and we’ll cover off on three main topics:

  

Securely deploy your on-line services.
Hear how automation and dev-ops help with the secure deployment of on-line environments, as well as with the ongoing security and administration of real-world, national-brand web sites.

Shield your on-line services.
Gain a good understanding of Web Application Firewalls (WAFs), and see how this essential component can be used to help secure your on-line hosting environment.

Monitor and report on your on-line services.
Find out how to most effectively keep an eye on the security and general operations of your on-line service, and how to use monitoring and alerting to support pro-active service security.

 

Register now!

 

  

Date: Wednesday, April 3rd 
Time: 8am – 9:30am
Venue: Sofitel Brisbane Central. 249 Turbot St, Brisbane City

Yes, there is such a thing as a free breakfast!  But you’ll need to RSVP for catering purposes before 5pm March 29!   We look forward to meeting you there!

A recent Splunk presentation
https://www.dotsec.com/2018/12/07/a-recent-splunk-presentation/ – Fri, 07 Dec 2018

What the hell was that?!?

We recently delivered a presso that described how DotSec has used Splunk for a number of interesting projects. (In preparing the presso, I was a bit shocked to discover that we’ve actually been using Splunk now for over 10 years! Fun times!) Anyhow, our presentation was quite interactive, and it covered off four projects which pretty well summarise work that we do at DotSec on a fairly regular basis:

  1. Splunk for compliance.  Lots of our customers have compliance requirements, especially regarding PCI DSS, IRAP and ISO 27001.  Other customers are keen to align their computing environment with accepted infosec best practice. Logging, monitoring, reporting and alerting is a big part of achieving compliance with almost any framework or best-practice guideline, and this part of the presso showed how easily DotSec has used Splunk to help in meeting our customers’ compliance goals.

  2. Splunk for due diligence.  As shown in at least one news article almost every week, attackers are often successful in their goal of compromising and misusing an organisation’s information systems.  When this worst-case event happens, directors and C-level officers need to be able to show that the compromise was not a result of negligence. Furthermore, insurance underwriters are increasingly including questions in their coverage applications that seek to understand how effectively an organisation manages and secures its corporate computing environment.  This part of the presso discusses Splunk in the context of insurance coverage and obligations.

  3. Splunk for incident prevention.  Anyone remember an incident at Equifax?  Of course we do, and we also remember that the attackers exfiltrated stolen information over a period of 76 days before they were detected.  It’s imperative that organisations use automated tools to monitor all aspects of their computing environment, so that it’s possible to detect and respond quickly to anomalous and/or threatening activities. Without this kind of proactive approach, an organisation will only know that it’s been hosed once the damage has already been done.  And of course, this part of the presso shows how DotSec has used Splunk to assist with this kind of incident-prevention work.

  4. Splunk for incident response.  Knowing that something bad is about to happen (or has just happened) is useful, but it’s obviously also important to contain a security event once such an event has been identified.  The questions that are often asked are: “How many systems were hit; how much did we lose; are the attackers still in there?”  This section of the presso describes how DotSec has used Splunk to analyse in-progress (or past) security incidents so that the most effective incident-response measures could be enacted.

All in all, it was a good presso, and we received lots of interesting questions.  The slides from the presso are available here; please have a look through and let us know if you have any questions or comments.

Until next time!

PCI DSS confusion: These are not the patches you’re looking for
https://www.dotsec.com/2018/10/24/pci-dss-confusion-these-are-not-the-patches-youre-looking-for/ – Wed, 24 Oct 2018

Or, are they? In the course of our PCI DSS-related work, we’ve noticed one issue that often causes confusion for many clients: do missing operating system or application patches need to be applied, even if those missing patches are only flagged by the internal vulnerability scan as medium or low risk? It’s an important question which needs to be answered carefully in order to ensure that the client remains compliant with the DSS, without incurring unnecessary cost and overhead.

The short (and useless) answer is that they may!  For the longer (and more useful) answer, read on.

Patching activities and vulnerability remediation activities can overlap; however, they are actually quite separate beasts.  Let’s consider patching first.  From a purely patching perspective, PCI DSS requirement 6.2 states that you should:

“Ensure that all system components and software are protected from known vulnerabilities by installing applicable vendor-supplied security patches. Install critical security patches within one month of release.”

The testing procedures and guidance for this control go on to state that:

  • “Applicable critical vendor-supplied security patches are installed within one month of release.”

  • “All applicable vendor-supplied security patches are installed within an appropriate time frame (for example, within three months).”

This means that, regardless of any internal vulnerability scan findings, all systems must have vendor-supplied security patches installed within a month (for critical patches) or “an appropriate time frame” (for all non-critical patches).

Now, let’s consider remediating vulnerabilities that were discovered as a result of a vulnerability scan, using a tool such as Nessus.  From an internal vulnerability scan perspective PCI DSS requirement 11.2.1 states:

Perform quarterly internal vulnerability scans. Address vulnerabilities and perform rescans to verify all “high risk” vulnerabilities are resolved in accordance with the entity’s vulnerability ranking.

This means that in order to meet requirement 11.2.1, an organisation only has to remediate “high risk” vulnerabilities identified in the internal vulnerability scan results.  And here’s where the confusion lies:  Even though requirement 11.2.1 only mandates remediation of high-risk vulnerabilities,  lower-risk findings will still need to be addressed if they result in non-compliance with other PCI DSS requirements.

Let’s consider two examples:

  1. If a vulnerability scan identifies that a system is missing medium-risk vendor-supplied security patches, these patches must still be applied in order to be compliant with PCI DSS requirement 6.2, as described above. The fact that a vulnerability scan identified the issue and reported it as only a medium risk has no bearing as to whether or not the patches must be applied.

  2. Another example is the internal vulnerability scan finding that is sometimes produced by Nessus: “SMB signing not required”. This is a medium-risk finding and as discussed above, medium-risk findings do not have to be fixed to meet requirement 11.2.1. However this finding is still relevant as it indicates an issue with the application of an organisation’s system configuration standards on the identified systems. PCI DSS requirement 2.2 deals with system configuration and hardening standards and it states:  “Develop configuration standards for all system components. Assure that these standards address all known security vulnerabilities and are consistent with industry-accepted system hardening standards.”  SMB signing is an industry-accepted best-practice, as described in this document from Microsoft and so this vulnerability would need to be addressed.

So now you have it!

So, in summary: while only high-risk internal vulnerability scan findings need to be remediated to meet requirement 11.2.1, medium and low findings may indicate compliance issues in other areas, such as patching or configuration management, which need to be addressed to meet separate PCI DSS requirements.

 

DotSec joins the Amazon Partner Network
https://www.dotsec.com/2018/10/16/dotsec-joins-the-apn/ – Tue, 16 Oct 2018

Overview

We’re excited to announce that DotSec is now a member of the Amazon Partner Network (APN), a global partnering program for Amazon Web Services (AWS).

DotSec has been designing, deploying and managing secure computing environments on AWS for over 4 years now; joining the APN allows us to further help our clients to securely manage their cloud-hosted businesses. 

Boost control and visibility of your data on AWS

 

DotSec has a strong history in the development, hosting and integration of secure systems, and AWS and DotSec can help you to create a highly secure environment on the AWS Cloud.

AWS provides all of its customers with an infrastructure that was built from the ground up with security in mind. However, assuring the security of your application stack on the AWS Cloud is your responsibility. This means leveraging APN security solutions to protect and manage your application workloads and satisfy your compliance requirements such as PCI DSS, SOC2, HIPAA/HITECH, and FISMA.

Discover the MSP Advantage

As an AWS Managed Service Provider (MSP), DotSec is capable of building and migrating large-scale computing environments to the AWS Cloud, as well as managing workloads and services being hosted on AWS. By leveraging us to manage your security and compliance on AWS, you can simplify this effort and focus on your core business. 

Our case studies page provides details on a number of relevant projects.  As we describe there, DotSec has integrated a wide range of AWS services to meet our clients’ requirements, including:

  • AWS Auto-scaling Groups, Launch Configurations and Lambda functions for automated resource-scaling, automated backups and rotations of AWS storage devices.
  • AWS CloudFront for content delivery.
  • AWS RDS for database services, and AWS EC2 reserved instances for reducing hosting costs.
  • Automation and dev-ops for zero-downtime deployments and patching across all environments.

DotSec continues to design, configure and maintain hosting infrastructure with information security at its core. New infrastructure hosted on AWS commonly includes:

  • Hardened EC2 instances, secured to an extent that exceeds the requirements dictated by standards such as the PCI DSS.
  • Regular patching of all environments using automated, zero-downtime controlled deployments.
  • Cirrus, a Web Application Firewall (WAF) that protects all Internet-accessible assets.
  • Host-intrusion detection software (HIDS).
  • Secure and customer-specific AWS IAM policies and roles.
  • Customer-specific AWS Security Groups across AWS Virtual Private Clouds (VPCs).
  • Continuous monitoring, reporting and alerting on all components including AWS CloudTrail, EC2 hosts, HIDS and WAF.

DotSec’s secure hosting and management experience ensures that our clients can stay secure while focussing on their core on-line businesses, confident of the security, robustness and manageability of their AWS-hosted environment. Contact us and we’ll show you just how secure and cost-effective your cloud-hosting can be!

CONTACT US TO START!