DNS Misc

Hard-to-categorise DNS-related stuff

DNS data integrity

The purpose of every DNS server is to deliver correct data, in an orderly manner, in response to every query.
To maintain the integrity of DNS zone content, it has to be monitored; otherwise "data integrity" is just a phrase without meaning.
Monitoring DNS data integrity is one of the two main features of our service.


Due to ever-increasing hacker activity, and the fact that the exploits seem never-ending, we need to up the ante on security monitoring to keep pace with current events.

Even though we take good care of our systems with regular patching and security updates, we will never be entirely secure. It's the name of the game. Our DNS servers are on the front line, always exposed to the brute force of whatever hackers all over the Internet can throw at them.

We have always known that the DNS is a highly valued target for hackers, mainly due to its exposure on the Internet. We also know that DNS servers are rarely the end target for any hacker, but by gaining control of a DNS server a hacker can create a platform from which he can mount numerous attacks against a target, often bypassing perimeter security systems. This became alarmingly obvious when the details of the 2019 DNS hijacking campaign were exposed.

The threat from hackers should never be taken lightly. Many of them have more in-depth knowledge about networks and systems than the people running those systems do. Combined with funding and motivation, our adversaries make our job of protecting our computer environments a monumental task. Phrases like "It's never going to happen to us" and "We have nothing of value or interest to hackers" are not only naive but also very dangerous, because they are often used by organisations looking to avoid spending time and money on security. It's bound to end very badly.

The DHS CISA issued Emergency Directive 19-01 when the DNS hijacking campaign was exposed in early 2019. The directive gave American authorities and agencies only ten working days to complete the following four steps to mitigate the effects of the attack:

  1. Audit all DNS resource records
  2. Change DNS account passwords
  3. Add MFA (multi-factor authentication) to DNS accounts
  4. Monitor Certificate Transparency logs

This article will focus on the first step. Steps 2 to 4 are more or less basic security hygiene and should be fairly easy to implement. The first step is a bit more challenging.

Auditing all DNS resource records is quite a bit of work. First of all, you need to know the exact IP addresses that go with every record. Second, you need to repeat this process on every name server used. And third, which is not explicitly stated in the Emergency Directive, you have to repeat this audit. Regularly. With the lessons learned from the hijacking campaigns, at least a couple of times every hour.
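The audit in step 1 boils down to comparing live answers against a known-good baseline, once per authoritative server. Here is a minimal sketch of that comparison; the record data is illustrative, and a real audit would fetch the observed records by querying each name server (for example with a DNS library) instead of using static dictionaries.

```python
# Hypothetical sketch of step 1: auditing DNS resource records against a
# known-good baseline. Repeat this for every authoritative name server.

def audit_records(baseline, observed):
    """Return (name, expected, actual) tuples for every record that
    differs from the baseline or is missing entirely."""
    findings = []
    for name, expected in baseline.items():
        actual = observed.get(name)
        if actual != expected:
            findings.append((name, expected, actual))
    return findings

# Baseline captured when the zone was known to be clean.
baseline = {"www.example.com.": "192.0.2.10",
            "mail.example.com.": "192.0.2.25"}

# Answers just fetched from one authoritative server (simulated here).
observed = {"www.example.com.": "198.51.100.66",   # changed -> suspicious
            "mail.example.com.": "192.0.2.25"}

for name, expected, actual in audit_records(baseline, observed):
    print(f"MODIFIED: {name} expected {expected}, got {actual}")
```

The hard part in practice is not the comparison but keeping the baseline authoritative and running the loop against every name server, every hour.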

Why do the records need to be audited several times every hour? To explain this we need to go into more detail about what DNS hijacking is and what the hackers typically did after successfully hijacking a DNS server.

DNS hijacking basically means that an unauthorised person or entity (hacker, adversary, threat actor) gains administrative access to a system that runs a DNS service. The hacker is then free to do whatever he wants on that system with his administrative privileges, like changing host resource records to point to new IP addresses, usually of systems under the hacker's control. One of the first tasks a hacker performs once he gains control over a system is to make sure he maintains that control, for example by installing backdoors that allow him continuous access to the system. So, once in, he is hard to get rid of.

By modifying host resource records to point to systems under the hacker's control, all traffic intended for the original host is redirected to the hacker-controlled system. That system is usually set up to record incoming traffic and then pass it on to the original target. This is called a man-in-the-middle (MITM) attack and is extremely hard to detect. The goal is of course to record information that will help the hacker venture further into the target's environment, like account names and passwords, but it could very well be other information, like credit card numbers, depending on the hacker's intentions and the target's line of business. But it gets worse!

By issuing new certificates through services like Let's Encrypt and installing them on his server, a hacker is able to trick web browsers (i.e. trick the users behind the web browsers) into thinking that the new IP address is legitimate, which gives a user little or no chance to identify the ongoing attack. Validation of the certificate request is usually done by entering a TXT string into the zone file on the DNS server (which the hacker controls). By modifying resource records like MX (mail exchanger) and NS (name server), the hacker is able to redirect incoming email and DNS queries to or through his own servers. And there is more!

To avoid detection, the researchers found, the hijackers modified resource records only for a short period of time before reverting the changed records to their original form. This period could be as short as one hour or even less. When the hijackers modify a resource record they usually manipulate the TTL of that record so it will either remain in resolver caches longer or not be cached at all, depending on circumstances. Most resolvers allow TTLs of up to one week (even though the DNS protocol specification allows TTLs of up to approximately 68 years). A resource record with a TTL of 0 will not be cached at all.

Mobile phones usually connect to their mail server several times every hour, sending their credentials every time...


DNSSEC protects us from these types of unauthorised modifications, doesn't it?

The answer is both "Yes it does" and "No, it doesn't".

DNSSEC will protect against this type of unauthorised modification if:

  • the zone is signed (only 3% are) - and -
  • the resolver used is DNSSEC aware and will discard invalid responses - and -
  • the hijacked DNS server is not a master

DNSSEC is of no use if:

  • the zone is unsigned (97% are) - or -
  • the resolver is unaware of DNSSEC - or -
  • the hijacking happened on a master

If DNSSEC were properly deployed on all zones and all DNS resolvers were DNSSEC aware, the outcome would be very different! But with the current situation on the Internet, the conclusion is that DNSSEC is very unlikely to be of any help.

Read my article on DNSSEC here!

How to detect DNS hijacking

The key to detecting whether your DNS servers have been hijacked is to constantly look for unauthorised modifications of your DNS resource records. DNS monitor checks for these types of changes every 5 minutes* on all authoritative name servers. And we do it from outside the organisation!

When a domain is set up and configured, DNS monitor looks up (among other things) the MX and NS resource records, as well as the corresponding A and AAAA records for each NS, and stores the initial result in the configuration database. These records are then checked every 5 minutes* and compared with the stored initial values in the database.

The administrator (the account holder) can complete each domain with additional host resource records, which are stored and checked the same way as the MX, NS and corresponding A/AAAA RRs.

Since an unauthorised modification of a resource record could indicate a successful DNS hijacking, the DNSmonitor service will issue an ERROR message if a modified resource record is detected.
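The check cycle described above (store a baseline at setup, then periodically compare and raise an ERROR on any deviation) can be sketched as follows. The lookup function, record values and class are illustrative assumptions; DNSmonitor's actual implementation is not public.

```python
# Minimal sketch of the baseline-and-compare cycle. `lookup` stands in
# for a real DNS query function: (name, rrtype) -> set of record values.

class DomainMonitor:
    def __init__(self, lookup):
        self.lookup = lookup
        self.baseline = {}          # (name, rrtype) -> set of values

    def setup(self, domain, extra_hosts=()):
        # Store the initial NS/MX records plus any admin-supplied hosts.
        for rrtype in ("NS", "MX"):
            self.baseline[(domain, rrtype)] = self.lookup(domain, rrtype)
        for host in extra_hosts:
            self.baseline[(host, "A")] = self.lookup(host, "A")

    def check(self):
        # Run every 5 minutes; any deviation becomes an ERROR message.
        errors = []
        for (name, rrtype), expected in self.baseline.items():
            current = self.lookup(name, rrtype)
            if current != expected:
                errors.append(f"ERROR: {rrtype} {name} changed: "
                              f"{sorted(expected)} -> {sorted(current)}")
        return errors

# Simulated zone data (a real monitor would query the name servers).
records = {("example.com", "NS"): {"ns1.example.com.", "ns2.example.com."},
           ("example.com", "MX"): {"mail.example.com."},
           ("www.example.com", "A"): {"192.0.2.10"}}

mon = DomainMonitor(lambda name, rrtype: records[(name, rrtype)])
mon.setup("example.com", extra_hosts=["www.example.com"])

records[("www.example.com", "A")] = {"198.51.100.66"}   # simulate a hijack
for error in mon.check():
    print(error)
```

Running the check from outside the organisation, against every authoritative server, is what turns this simple comparison into hijack detection.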

*) Premium and Basic domain subscriptions are checked every 5 minutes. Free domain subscriptions are checked once every hour.

Posted by Henrik Dahlberg in DNS Misc
Monitoring DNS availability

Pretty much all Internet communication relies on the availability of the DNS system. If the DNS is down, the Internet seems down.
DNS monitor checks every Internet-facing authoritative DNS server for the monitored domain, making sure that the domain is responding to queries.


All connections between computer hosts, with very few exceptions, are preceded by a DNS call that tells the connecting host which IP address to connect to. If the DNS call fails, no connection will be attempted. That is why DNS availability is key!

In today's world, where everything relies on fast connections and short load times, you need to make sure that the DNS is not slowing anything down.

The DNS protocol was designed back in the '80s. Even though the protocol has evolved over the years, the availability and load balancing implementations remain virtually untouched.

Resolver cache

To limit the number of queries directed to authoritative servers (to save bandwidth and increase speed), a DNS resolver will always try to respond with cached data. The resolver iteration process and resolver cache are explained and illustrated in DNS Tutorial part 1, so instead of repeating myself, please take a look there if you need a refresher on the subject.

A resolver will only query an authoritative server if it can't find the response in its cache. On a normally utilised resolver, query responses will most likely originate from the resolver cache. Let us not forget that cached data may exist on the local machine as well.
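The caching behaviour described here (serve from cache until the TTL expires, never cache a record with TTL 0) can be illustrated with a toy cache. This is a simplified model for illustration, not how any particular resolver is implemented.

```python
import time

# Toy resolver cache illustrating TTL handling: answers are served from
# cache until the TTL expires, and a record with TTL 0 is never cached.

class ResolverCache:
    def __init__(self):
        self._store = {}                  # key -> (value, expiry time)

    def put(self, key, value, ttl, now=None):
        now = time.time() if now is None else now
        if ttl > 0:                       # TTL 0: do not cache at all
            self._store[key] = (value, now + ttl)

    def get(self, key, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(key)
        if entry and entry[1] > now:
            return entry[0]               # still fresh: answer from cache
        self._store.pop(key, None)        # expired or absent: evict
        return None                       # caller must query authoritative
```

The `now` parameter is only there so the expiry logic can be demonstrated without waiting for real time to pass; a real cache would use the clock directly.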

With this in mind, even if you can reach your intended host, the authoritative DNS server or servers for that host/domain could actually be down. You wouldn't know for sure until the cached data in your stub and recursive resolvers has been dropped because the record TTL was exhausted. The only way to know for sure whether the authoritative DNS servers are available is to query them directly and bypass cached data altogether.
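Querying an authoritative server directly means sending it a query with the "recursion desired" (RD) flag cleared, so no resolver or cache sits in between (the command-line equivalent is roughly `dig @ns1.example.com example.com A +norecurse`). As a sketch, here is such a query packet built by hand with the standard library; on a real network it would be sent over UDP port 53 to the name server's address.

```python
import struct

# Build a non-recursive DNS query by hand. Leaving the flags word at 0
# means the RD (recursion desired, 0x0100) bit is cleared, which is how
# you ask an authoritative server directly and bypass caches.

def build_query(name, qtype=1, qclass=1, txid=0x1234):
    # Header: ID, flags (RD left unset), QDCOUNT=1, AN/NS/AR counts = 0.
    header = struct.pack(">HHHHHH", txid, 0x0000, 1, 0, 0, 0)
    # Question: name as length-prefixed labels, terminated by a zero byte,
    # followed by query type (1 = A) and class (1 = IN).
    qname = b"".join(struct.pack("B", len(label)) + label.encode()
                     for label in name.rstrip(".").split(".")) + b"\x00"
    return header + qname + struct.pack(">HH", qtype, qclass)

packet = build_query("example.com")
flags = struct.unpack(">H", packet[2:4])[0]
print(f"flags=0x{flags:04x}, RD set: {bool(flags & 0x0100)}")
```

Sending this packet to each authoritative server in turn, rather than to your resolver, is exactly the "bypass cached data" check described above.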

DNS Availability

Two or more authoritative DNS servers are normally used for redundancy, to mitigate risks and enable load balancing. This in turn brings up another issue: how do the resolvers know if a server is available or not? Remember that the DNS protocol for the most part uses UDP as transport and doesn't rely on heart-beats like other modern network equipment (unless you have built your DNS infrastructure behind load balancers)!

When the resolver iterates through the domain infrastructure to resolve a query, it receives and builds a list of name servers authoritative for the relevant domains. The servers are listed in random order and the resolver will query the first name server from that list. If that server is unresponsive for any reason, the resolver will proceed with a query to the next server in the list, and so on, until a response is received or the list is exhausted.

This is where things get interesting. The DNS protocol doesn't really go into detail on how this failover should work, only that it should work. Much has been left to the vendors to figure out and implement on their own. Some resolver vendors wait only a couple of seconds for a response before continuing to the next server in the list, while others leave it to the TCP/IP stack to work out, usually with a delay of 30 seconds or more before continuing. A delay of 30 seconds or more caused by a non-responsive DNS server is simply not OK. In an interactive session, a user would definitely click on the next Google link. If you run a web shop or similar, this would probably mean a lost customer.
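The failover behaviour the vendors had to invent amounts to: try each server with a short timeout, and move on as soon as one doesn't answer. A minimal sketch, with the actual network query injected as a function so the logic can be shown without sockets:

```python
# Sketch of resolver failover: try each name server in order with a short
# per-server timeout, instead of waiting on the TCP/IP stack's 30+ second
# default. The `query` callable stands in for a real UDP DNS exchange.

def resolve_with_failover(servers, query, timeout=2.0):
    """Try servers in order; return the first answer, or None if the
    whole list is exhausted without a response."""
    for server in servers:
        try:
            answer = query(server, timeout=timeout)
            if answer is not None:
                return answer
        except TimeoutError:
            continue                  # unresponsive server: move on quickly
    return None

# Simulated servers: the first one times out, the second one answers.
def fake_query(server, timeout):
    if server == "ns1.example.com":
        raise TimeoutError
    return "192.0.2.10"

print(resolve_with_failover(["ns1.example.com", "ns2.example.com"],
                            fake_query))
```

Whether a dead first server costs you two seconds or thirty comes down entirely to how small that `timeout` value is in the resolver you happen to be using.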

DNSmonitor makes a huge difference in this area! Because we monitor the publicly available authoritative DNS servers for a customer domain, we will find out if a server is unresponsive and/or prone to time-outs. This is processed and reported to the monitor dashboard. And since we aren't biased by providing the actual DNS service, we simply provide the information in a neutral fashion.

Internet view

Most organisations use a split-horizon setup, where the internal and external DNS name spaces are different and usually reside on different DNS servers. Problems with the external name space and DNS infrastructure can be hard to detect unless monitored closely. On the internal network you can rely on your users detecting any DNS or resolver problems before your monitoring system does. On the Internet you normally don't have that luxury...

Posted by Henrik Dahlberg in DNS Misc