Public DNS Servers

The Domain Name System (DNS) is one of the services we take for granted every day. It works behind the scenes to resolve names to IP addresses, and it works so well that we can accept the defaults without ever clearly understanding it. Most ‘computer guys’ and even many IT professionals don’t have a good grasp of this topic. Simply ask someone to define root hints, and the answer will quickly show you the depth of a technician’s knowledge.

The biggest reason it is overlooked is that it simply works — until it doesn’t. But beyond that, a question remains: can it work better?

This article is about public DNS name resolution — that is, resolving names outside of your local environment. We’ll save local name resolution, such as your Active Directory domain, for another day.

So let’s take a quick look at what happens when you type a website name into a browser — perhaps the easiest example. Your local computer uses the following method to resolve the name, going down the list until it finds a match. At each step it’s looking for a hit, which is typically a cached result.

  1. Your local computer first checks a local file called the hosts file to see if there is a static IP configured.
  2. Then it checks its local DNS cache, so it doesn’t constantly have to ask another source.
  3. It then queries the DNS server configured for your network interface. That could be the DNS server for your local network (an AD server), or perhaps just your home wireless router. (In some very rare cases this step is skipped and your ISP’s DNS server is used directly.) Sticking with the local DNS server: it will also check its own cache before going out to its upstream server, which is likely your ISP’s DNS server.
  4. Your ISP’s DNS server also checks its cache; if that fails, it will either forward the query to another upstream server or, ideally, use root hints.
    1. Root hints are a sort of master directory: the list of root servers that can tell your server whom to ask for authoritative information about a TLD, such as .com or .net.
    2. Once it has the answer from the root servers, it queries the TLD servers to see specifically which DNS servers are authoritative for the next level, such as microsoft.com.
    3. Then it queries that server for the actual DNS hostname, such as www.microsoft.com.
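The client-side portion of this lookup order can be sketched in a few lines of Python. This is a toy model, not how any real resolver is implemented: `parse_hosts` and the cache check mirror steps 1–3, and `ask_upstream` is a hypothetical stand-in for querying whatever DNS server your interface is configured with.

```python
# A minimal sketch of the client-side lookup order described above.
# parse_hosts and the cache check mirror steps 1-3; ask_upstream is a
# hypothetical stand-in for the DNS server configured on your interface.

def parse_hosts(text):
    """Parse hosts-file content into a {hostname: ip} map (step 1)."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            mapping[name.lower()] = ip
    return mapping

def resolve(name, hosts, cache, ask_upstream):
    """Walk the hosts file, then the local cache, then the upstream server."""
    name = name.lower()
    if name in hosts:        # step 1: static entry in the hosts file
        return hosts[name]
    if name in cache:        # step 2: local DNS cache hit
        return cache[name]
    ip = ask_upstream(name)  # step 3: ask the configured DNS server
    cache[name] = ip         # remember the answer for next time
    return ip
```

Each miss falls through to the next, slower source — exactly the behavior the numbered list walks through.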

As you can see, once you hit step 4 you’re talking to a lot more servers, with distance and latency added at each step, which is why we have DNS caching. Each hop along this line introduces latency. Now, there is a lot that could be said here, but I want to talk about a few things:

  1. Caching is essential for timely name resolution; however, it comes at the cost of stale records. This is especially important for IT professionals to know because there is inherent latency involved with any DNS change. While local network DNS changes can propagate quickly, especially AD-integrated changes, on the public internet a simple hostname change can take 24-72 hours to propagate, because each cache along the way will hold on to the old data for a certain length of time, known as the TTL, or Time-To-Live.
  2. Public DNS servers vary widely in quality, from the amount of data in their cache to response time. DNS service is really a required afterthought for most internet service providers: as long as it works, they don’t care. As a result, response times can be significant when you query your ISP’s DNS servers. Additionally, your ISP often doesn’t use a geographically near DNS server, so you might be traversing the internet to the other side of the continent for a simple DNS response. And regional ISPs might not have a very good cache of DNS names, forcing them to walk down from the root hints, which is time consuming, to build their cache.
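The TTL behavior described in point 1 is easy to see in a toy model. This is a minimal sketch, not any real resolver’s cache; the injectable clock exists only to make the expiry easy to demonstrate:

```python
import time

class TTLCache:
    """Toy DNS cache: answers expire after their record's TTL (seconds)."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock   # injectable clock, handy for testing
        self._store = {}      # name -> (ip, expires_at)

    def put(self, name, ip, ttl):
        self._store[name] = (ip, self._clock() + ttl)

    def get(self, name):
        entry = self._store.get(name)
        if entry is None:
            return None
        ip, expires_at = entry
        if self._clock() >= expires_at:
            del self._store[name]  # record went stale; evict it
            return None
        return ip
```

Until the TTL runs out, every lookup returns the old answer. Multiply that by every caching resolver between a user and your authoritative server, and you get the 24-72 hour propagation window.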

There can be a huge performance improvement in migrating away from your ISP’s DNS servers. I have been experimenting with many different options over the decades.

  • Many years ago Verizon had public DNS servers at 4.4.4.4 that were extremely popular, fast, and reliable. The address was so easy to remember that many IT professionals pointed their networks at it instead of their ISP’s servers. The resulting flood of traffic impacted performance, so Verizon eventually closed it to Verizon customers only.
  • In 2009 Google released its public DNS servers at 8.8.8.8 and 8.8.4.4, which quickly became a popular replacement for the Verizon servers. As of this writing they’re still publicly available.
  • Around the same time, I was introduced to OpenDNS, which has since been acquired by Cisco for being awesome at DNS resolution. Beyond being a very fast, reliable, responsive DNS service, it also provides basic DNS filtering, which helps IT professionals by keeping the really, really bad stuff from resolving at all. It offers DNS-based content filtering as well, letting businesses get basic filtering of objectionable content at low cost.
  • Starting in 2018, another company with deep expertise in DNS resolution, Cloudflare, entered the public DNS space with servers at 1.1.1.1 and 1.0.0.1. These are anycast addresses, so you are automatically routed to the DNS servers geographically closest to you. Benchmark testing shows that the 1.1.1.1 servers are significantly faster than anything else within North America, not only for cached records but also for non-cached results.
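If you’d rather compare resolvers yourself than rely on published benchmarks, a small timing harness is enough. This sketch times any resolver callable you hand it; a lookup via `socket.getaddrinfo` would exercise whatever resolver your OS is configured with, while a raw query against 1.1.1.1 or 8.8.8.8 would slot in the same way.

```python
import time
from statistics import mean

def benchmark(resolve, names, repeats=3):
    """Average the wall-clock time of a resolver over a list of hostnames.

    `resolve` is any callable that takes a hostname, e.g.
    lambda n: socket.getaddrinfo(n, None) for the system resolver.
    """
    samples = []
    for _ in range(repeats):
        for name in names:
            start = time.perf_counter()
            resolve(name)  # time each individual lookup
            samples.append(time.perf_counter() - start)
    return mean(samples)
```

Run it once against your ISP’s resolver and once against a public one, using names unlikely to already be cached, and the difference in non-cached performance shows up quickly.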

Today, when choosing a public DNS server for my clients, it comes down to either Cloudflare or OpenDNS. In environments where we have no other source of content filtering, I prefer OpenDNS; but if the client already has some form of content filtering on their firewall, the answer is Cloudflare’s 1.1.1.1 network.

One important thing to note: after Cloudflare started using the 1.1.1.1 address, it came to light that some hardware vendors had been improperly using 1.1.1.1 as a local address, against RFC standards. So in some isolated cases 1.1.1.1 doesn’t work for certain clients, but that is because the systems they’re using actually violate the RFCs. This isn’t Cloudflare’s fault; it is vendors disregarding the standards by building their systems to use unregistered space for their own purposes.

As for how I use this personally: at home we use OpenDNS with content filtering to keep a bunch of bad stuff off of our home network; it even helps by filtering ‘objectionable ads’ before they pop up.

On my mobile devices, I have a VPN tunnel that I use on any network that will allow a VPN, like at Starbucks; you can find more about this setup in my Roadwarrior VPN Configuration article. But sometimes I cannot connect to the VPN due to firewall filtering, such as at holiday markets or on my kids’ school guest network, so in those cases I use the 1.1.1.1 DNS profile for my iPhone.

One other closing issue: there have been various ISPs which force all DNS resolution through their servers. In fact, there is one which, on each subsequent request for a record, artificially increases the TTL, basically trying to get your system to cache the results longer. You’re pretty stuck if you run into this, but I would suggest complaining to your sales rep at that ISP. You can also look into DNS over TLS or DNS over HTTPS, but as of right now Windows doesn’t natively support them without third-party software; some very modern routers support them, and I know the DD-WRT aftermarket wireless firmware does. So you might have a bit more work to do to get that working.
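As an illustration of the DNS-over-HTTPS option: Cloudflare publishes a JSON API at cloudflare-dns.com/dns-query that answers DNS queries over ordinary HTTPS on port 443, which an ISP intercepting port 53 never sees. A minimal sketch using only the Python standard library (the endpoint and the `accept: application/dns-json` header are Cloudflare’s documented interface; other DoH providers differ):

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Cloudflare's documented JSON endpoint for DNS over HTTPS.
DOH_ENDPOINT = "https://cloudflare-dns.com/dns-query"

def doh_url(name, rtype="A"):
    """Build the DoH query URL for a hostname and record type."""
    return DOH_ENDPOINT + "?" + urlencode({"name": name, "type": rtype})

def doh_query(name, rtype="A"):
    """Perform the lookup; requires outbound HTTPS on port 443."""
    req = Request(doh_url(name, rtype),
                  headers={"accept": "application/dns-json"})
    with urlopen(req) as resp:
        return json.load(resp)  # answers appear under the "Answer" key
```

Because the transport is plain HTTPS, this works even where classic DNS is being rewritten, though for whole-network use you’d want a router or stub resolver doing this on behalf of every client.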

 

Microsoft DNS Scavenging in mixed DC environments

After encountering this problem at two different clients, I figured it should be written up for better understanding:

First off, starting with Windows Server 2008, Microsoft considers it a best practice to enable DNS scavenging, the process that automatically cleans out stale (not recently updated) dynamic DNS records. I have used this multiple times before with great success in same-version DNS/DC environments. However, at two of our clients we have experienced problems where static entries were deleted. While this is not how DNS scavenging is designed to behave, it MIGHT be an incompatibility in environments with both 2003 and 2008 DNS servers/domain controllers on the same network, perhaps having something to do with the aging or timestamp method. I haven’t been able to reliably reproduce it (I don’t want to test in a production environment), nor find any documentation to back up this theory. But after it occurred at my second client with a mixed DNS version environment, I figured it was worth mentioning as something to watch out for.

Tech Note: Port Conflict leading to RADIUS / IAS / Wireless issues

Apparently there is a chance that a security patch (MS08-037) can lead to port conflict issues.

There was an issue at one of my clients this morning stemming from this. The DNS Server was using a port that was needed for the IAS (RADIUS) server, so the IAS service would not stay running. As a result, wireless clients could not authenticate.

Most of the details are here:  http://support.microsoft.com/kb/953230

There is a registry key, “MaxUserPort”, that behaves differently on XP/2000/2003 than on Vista/2008. (My assumption is that’s why this is an issue: someone set it to a value appropriate for a newer OS, but it applied to all of them and ended up breaking some.) On 2000/2003 it defines the top of the range of ports available for dynamic use. On the affected server this registry key was set to 65535, with the implication that the entire port range from 1024-65535 was available for dynamic usage, so IAS could not get its reserved ports because they were in use by DNS. Deleting the registry key set the dynamic port range back to the default of 49152-65535 and resolved the issue. I restarted both services multiple times without conflicts.
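For reference, the check and fix described above can be done from an elevated command prompt on the affected server; the key lives in the standard Tcpip parameters path. (These are ordinary `reg.exe` commands; adjust to your environment before running.)

```
rem Inspect the current MaxUserPort value, if present
reg query HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort

rem Delete the value to restore the default dynamic port range
reg delete HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /f
```

After removing the value, restart the affected services (DNS and IAS) so they re-bind their ports.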

MaxUserPort

On Windows Server 2003 and Windows 2000 Server, the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\MaxUserPort registry subkey is defined as the maximum port up to which ports may be allocated for wildcard binds. The value of the MaxUserPort registry entry defines the dynamic port range.

Anti-Spam via SPF: Sender Policy Framework

SPF is an excellent method of preventing email spoofing, protecting your users from having their domain show up in spam throughout the world. SPF, however, is only as effective as you make it, as it requires changes to your DNS servers for each domain you host email for.

It is in the best interest of all email users everywhere that domain administrators add SPF records to their domain that indicate what servers are authorized to send email for their domain. Encouraging your domain administrators to adopt SPF protects them from being the victims of spoofing, and reduces the spam threat on not only your server, but others throughout the world as well.

More information can be found at http://www.openspf.org/.
