Public DNS Servers

The Domain Name System (DNS) is one of the services we take for granted every day. It works behind the scenes to resolve names to IP addresses, and it works so well that we can accept the defaults without clearly understanding how it operates. Most ‘computer guys’ and even IT professionals don’t have a good grasp of this topic. Simply ask someone to define root hints and the answer will quickly reveal the depth of a technician’s knowledge.

The biggest reason it is overlooked is that it simply works — until it doesn’t. But beyond that, a question remains: can it work better?

This article is about public DNS name resolution — that is, for things outside of your local environment. We’ll save local domain resolution for another day — such as your Active Directory domain name resolution.

So let’s take a quick look at what happens when you type a website name into a browser — perhaps the easiest example of this. Your local computer uses the following method to resolve names, going down the list until it finds a match. At each step it’s looking for a hit, which is typically a cached result.

  1. Your local computer first checks a local file called the hosts file to see if there is a static IP configured.
  2. Then it checks its local DNS cache — so it doesn’t constantly have to ask another source.
  3. It then asks the DNS server configured for your network interface. That could be the DNS server for your local network (such as an AD server), or perhaps just your home wireless router. (In some rare cases this step is skipped and your ISP’s DNS server is used directly.) That local DNS server will also check its own cache before going out to its upstream server, which is likely your ISP’s DNS server.
  4. Your ISP’s DNS server also checks its cache. If that fails, it will likely either forward the query to another upstream server or, hopefully, use root hints.
    1. Root hints are essentially a master list of the root name servers, which tell your server which servers are authoritative for each TLD, such as .com or .net.
    2. Once it has the TLD servers, it queries them to find which DNS servers are authoritative for the next level down, such as microsoft.com.
    3. Then it queries that server for the actual DNS hostname, such as www.microsoft.com.
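
If you’d like to watch this whole chain happen yourself, the dig utility (bundled with the BIND tools on most Linux and macOS systems) can bypass your resolver’s cache and walk the tree from the root servers down. A quick sketch, using the same example hostname:

    # Walk the delegation chain: root servers, then the .com TLD
    # servers, then Microsoft's authoritative servers.
    dig +trace www.microsoft.com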

As you can see, once you hit step 4 you’re talking to a lot more servers, with distance and latency added at each step — which is why we have DNS caching. Each hop along this chain introduces latency. Now, there is a lot that could be said here, but I want to focus on two things:

  1. Caching is essential for timely name resolution, but it comes at the cost of stale records. This is especially important for IT professionals to know because there is inherent latency involved with any DNS change. While local network DNS changes can propagate quickly, especially for AD-integrated zones, on the public internet a simple hostname change can take 24-72 hours to propagate, because each caching location holds on to the record for a set length of time, known as its TTL or Time-To-Live.
  2. Public DNS servers vary enormously in quality, from the amount of data in their cache to their response time. DNS service is really a required afterthought for most internet service providers: as long as it works, they don’t care. As a result, response times can be significant when you query your ISP’s DNS servers. Additionally, your ISP often doesn’t use a geographically near DNS server, so your simple DNS query might have to traverse the internet to the other side of the continent. Regional ISPs might not have a very good cache of DNS names, forcing them to reach out via root hints — which is time consuming — to build their cache.

There can be a huge performance improvement in migrating away from your ISP’s DNS servers. I have been experimenting with different options over the decades.

  • Many years ago Level 3 ran public DNS servers at 4.2.2.1 through 4.2.2.6 that were extremely popular, fast, and reliable. However, they became flooded by IT professionals directing their networks at them, which impacted performance, so access was eventually restricted. The addresses were so easy to remember that they were often used over ISP DNS servers for that reason alone.
  • In 2009 Google released its public DNS servers at 8.8.8.8 and 8.8.4.4, which quickly became a popular replacement for the Level 3 servers. As of this writing they’re still publicly available.
  • Around the same time, I was introduced to OpenDNS, which has since been acquired by Cisco. Beyond being a very fast, reliable, responsive DNS service, it also provided basic DNS filtering, which helped IT professionals by keeping the really, really bad stuff from resolving at all. It offers DNS-based content filtering as well, which lets businesses get basic filtering of objectionable content at low cost.
  • Starting in 2018, another company expert at DNS resolution, Cloudflare, entered the public DNS space with servers at 1.1.1.1 and 1.0.0.1. These are anycast addresses, so you’ll automatically be routed to the DNS servers geographically closest to you. Benchmark testing shows that the 1.1.1.1 servers are significantly faster than anything else within North America, not only for cached records but also for non-cached results.
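
If you want to test such benchmark claims on your own connection, dig reports a query time for every lookup, so you can point it at each resolver and compare. A rough sketch (208.67.222.222 is OpenDNS’s public resolver; the hostname is just an example):

    for server in 1.1.1.1 8.8.8.8 208.67.222.222; do
      echo "== $server =="
      dig @$server www.microsoft.com | grep "Query time"   # lower is better
    done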

Today when choosing a public DNS server for my clients, it comes down to either Cloudflare or OpenDNS. In environments where we have no other source of content filtering, I prefer OpenDNS; if the client already has some form of content filtering on their firewall, then the answer is the Cloudflare 1.1.1.1 network.

One important thing to note: after Cloudflare started using the 1.1.1.1 address, it became apparent that some hardware vendors were improperly using 1.1.1.1 as a local address, against RFC standards. So in some isolated cases 1.1.1.1 doesn’t work for certain clients, but that is because the systems they’re using are violating the standards. This isn’t Cloudflare’s fault; the vendors disregarded the RFCs when they built systems that used this then-unassigned space for their own purposes.

As for how I use this personally: at home we use OpenDNS with content filtering to keep a bunch of bad stuff off of our home network. It even helps by keeping ‘objectionable ads’ from popping up as often.

On my mobile devices, I have a VPN tunnel which I use on any network that will let me, like at Starbucks; you can find more about this in my Roadwarrior VPN Configuration article. But sometimes I cannot connect to the VPN due to firewall filtering, such as at holiday markets or on my kids’ school guest network, so in those cases I use the 1.1.1.1 DNS profile for my iPhone.

One other closing issue: various ISPs have been known to force all DNS resolution through their own servers. In fact, there is one that artificially increases the TTL on each subsequent request for a record, essentially trying to force your system to keep the cached result. You’re pretty stuck if you run into this, though I would suggest complaining to your sales rep at that ISP. You can also look into DNS over TLS or DNS over HTTPS, but as of right now Windows doesn’t natively support them without third-party software. Some very modern routers support them, and I know the DD-WRT aftermarket wireless firmware does, so you might have a bit more work to do to get it working.
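
As a quick illustration of DNS over HTTPS, Cloudflare’s 1.1.1.1 service exposes a JSON endpoint you can query with curl — a handy way to confirm DoH works even when an ISP meddles with ordinary port-53 traffic:

    # Ask Cloudflare's DoH endpoint for an A record, over HTTPS:
    curl -s -H 'accept: application/dns-json' \
      'https://cloudflare-dns.com/dns-query?name=www.microsoft.com&type=A'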

 

Dad needs a new computer?!

One of the banes of most IT Professionals is when family members ask for help with purchasing a computer, or worse yet, they just purchased something from a big-box retailer and need help.

This is a multi-part story inspired by my dad who called me recently for a computer question he had. It made me realize that 13 years ago I helped him purchase the computer he currently has. I couldn’t believe it’s been that long! I’m thankful that after he received the catalog for home computers from Dell that he immediately came to me to ask for advice…

Now I’ll get back around to which computer I helped him select, but I want this to sink in for just a moment…

My dad has a desktop computer,

that was purchased 13 years ago,

that he is still using…

And as for performance, it is working just as well today as it did when it was first purchased… Almost unbelievable! Oh, and he has no plans to replace it either!

Okay, now as the commercials for miracle weight loss say, “results are not typical”… but they are not wholly unexpected. Let’s talk about this a bit.

My first piece of advice to anyone purchasing a computer for home use is to skip the big-box stores, and even anything seemingly consumer grade. Everything in this realm seems to be designed with a short lifespan in mind: cheaper parts, poorer construction, etc. Not to mention all of the consumer bloatware that seems to come preinstalled. So the first thing I tell anyone and everyone is to go straight to a major computer seller’s “enterprise” tab on their page, be it Dell or HP or whomever. Normally anybody can still order these, and the benefits are more solid construction, longer MTBF, and usually far less bloatware. In this case, 13 years ago I had my dad purchase a Dell OptiPlex workstation.

Now, if you simply do that, it shouldn’t be surprising to get 6+ years out of the hardware; to get over 10 years is really getting your money’s worth. Truth be told, he did have to replace the power supply once, but that was likely caused by a series of lightning storms in his area that his little power-strip surge protector couldn’t really protect against.

But okay, let’s talk about performance… There are really two prongs to why this thing performs so well…

First, he uses his computer for just word processing — and printing — nothing else. Nothing online; he wanted his computer to be as secure as possible from such threats. So that makes things really easy. Realize that if the computer is an island — no external connectivity, no internet, no USB drives — then it really is an island. What are the threat vectors in this case? None, really. So do you need patch management? Not if the system is working. Most ‘bugs’ patched these days are about vulnerabilities, not functionality. And honestly, after 13 years, if there are any functionality quirks, he doesn’t see them as such; he just works around them. It really is surprising to see how much stopping patching can improve system performance and reliability!

For the record, I’m a huge proponent of patch management — but that is because in virtually all cases you have threat vectors you need to account for. Still, pause for a moment and think: are there places or situations where you can vastly improve security and performance by outright removing a threat vector such as the internet? It’s also worth mentioning that because of this lack of patching, the 2007 Daylight Saving Time adjustment was never applied to his computer, though there are ways to patch that manually on such systems.

But beyond that, let’s talk about the statement that it runs at the same performance level. That is a true statement, although perhaps a bit misleading. Do you remember having to wait for Windows XP to boot up? I sure do. Although, if you think back, XP made a lot of waves because it booted much faster than the other operating systems of its day. Windows 10, by contrast, boots almost instantly, and that is what end users expect these days; my iPhone is instant-on, and the concept of having to wait befuddles us nowadays. So by today’s comparison, the computer is slloooooowwwww. But that is just my modern comparison. It works just as fast as it always has… After all, the processor is still ticking away at the same speed, and the software hasn’t changed at all.

The biggest reason it isn’t a problem for him is that he has no point of comparison. He is retired, the computer works the way it always has. He hasn’t worked on more modern, faster computers.

It’s also probably a mindset. My parents have hundreds of VHS movies. Sure, they have DVD and the latest Blu-ray discs, mostly because it’s virtually impossible not to buy a Blu-ray player these days. So sure, they’ve got the latest and greatest, and the quality is better than VHS, although who knows how well they actually see the difference with their aging eyes. But why throw out thousands of dollars’ worth of working (inferior) VHS movies and buy the same titles again in higher quality, when at the end of the day it is the exact same movie: same story, same actors, same lines? And most of those movies were filmed using the inferior camera equipment of their day. Is there really a big difference in Gone with the Wind on Blu-ray when it was captured with 70-year-old, non-digital camera technology?

In the end it’s a bit of a philosophical discussion. Perhaps.

But what’s the takeaway from this article, if any? I would propose a few points:

  • Purchasing: realize that enterprise gear is often worth it even for personal use; while it can be marginally more expensive, it can last far longer. I think his tower cost under $500.
  • Security: Consider how, in every environment, security and performance can be improved by mitigating threat vectors. Remember that patch management is one tool we have to address threats, not a panacea in itself.
  • Performance: Performance is relative, and subjective. Every use case is different — purchasing or upgrading in blanket terms is wasteful. Each user, department, or situation can be different and unique. Address them as such.

Installing Vagrant

In this series, I’ll demonstrate some of the web development tools I use. Today we’ll cover Vagrant — a virtual environment management tool.

Vagrant is a tool for building and managing virtual machine environments in a single workflow. With an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup time, increases production parity, and makes the “works on my machine” excuse a relic of the past.

If you are a developer, Vagrant will isolate dependencies and their configuration within a single disposable, consistent environment, without sacrificing any of the tools you are used to working with (editors, browsers, debuggers, etc.). Once you or someone else creates a single Vagrantfile, you just need to vagrant up and everything is installed and configured for you to work. Other members of your team create their development environments from the same configuration, so whether you are working on Linux, Mac OS X, or Windows, all your team members are running code in the same environment, against the same dependencies, all configured the same way. Say goodbye to “works on my machine” bugs.

 

Let’s get into how to set up Vagrant by HashiCorp:

  1. First, make sure you have your virtualization software installed. For this example, we’re running Oracle’s VirtualBox, as it’s an excellent and easy-to-use open-source option. See my VirtualBox Installation Guide here.
  2. Find the appropriate package for your system and download it.
  3. Run the installer for your system. The installer will automatically add vagrant to your system path so that it is available in terminals.
  4. Verify that it is installed by running the command vagrant from the command line – it should run without error, and simply output the options available. If you receive an error, please try logging out and logging back into your system (this is particularly necessary sometimes for Windows).
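
Once the vagrant command responds, a quick way to exercise the whole toolchain is to boot a disposable box. A minimal sketch using one of HashiCorp’s publicly published base boxes (the box name here is just an example):

    mkdir vagrant-test && cd vagrant-test
    vagrant init hashicorp/bionic64   # writes a Vagrantfile pointing at the box
    vagrant up                        # downloads the box and boots the VM
    vagrant ssh                       # opens a shell inside the running machine
    vagrant destroy                   # tears the VM down when you're finished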

That’s it, you’re all set. Now go ahead and take a look at my Introduction to Scotch Box, which is a great web development option that uses a Vagrant box.

 

Footnote: It’s also worth mentioning that Docker has recently gained a lot of attention, and for some web developers it’s a great option. I’ve only looked into it a bit, and will probably create a series using that tool later this year.

 

Version Disclosure: This document was written while the current version of Vagrant is 2.2.4 and VirtualBox is 6.0.4 – different versions might behave slightly differently.

Installing VirtualBox

In this series, I’ll demonstrate some of the web development tools I use. Today we’ll cover VirtualBox — an Open Source Virtualization product for your local machine.

Oracle VM VirtualBox (formerly Sun VirtualBox, Sun xVM VirtualBox, and Innotek VirtualBox) is a free and open-source hosted hypervisor for x86 computers and is under development by Oracle Corporation. VirtualBox may be installed on a number of host operating systems, including Linux, macOS, Windows, Solaris, and OpenSolaris. There are also ports to FreeBSD and Genode.  It supports the creation and management of guest virtual machines running versions and derivations of Windows, Linux, BSD, OS/2, Solaris, Haiku, OSx86 and others, and limited virtualization of macOS guests on Apple hardware.

In general, our application for web development is to emulate our production web server environment which is often a LAMP or WIMP stack. For our examples in this series, we’re going to look at the most popular, the LAMP stack (Linux, Apache, MySQL, and PHP).

 

The installation and setup of VirtualBox are very simple:

  1. Verify that you have a supported host operating system – that is, the desktop operating system that you’re on right now. https://www.virtualbox.org/manual/UserManual.html#hostossupport
  2. Navigate to https://www.virtualbox.org/wiki/Downloads and download the version that is right for your host operating system.
  3. Host OS Specific Steps:
    1. For Windows installations, double-click on the downloaded executable file. Select either all or partial component installation; for web development make sure the network components are selected (USB and Python support are optional).
    2. For Mac installations, double-click on the downloaded dmg file and follow the prompts.
    3. For Linux – see this link: https://www.virtualbox.org/manual/UserManual.html#install-linux-host
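
Once the installer finishes, you can confirm everything is wired up from a terminal using the VBoxManage command-line tool that ships with VirtualBox:

    VBoxManage --version   # prints the installed version, e.g. 6.0.4r...
    VBoxManage list vms    # lists registered VMs (empty on a fresh install)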

For most people, that is just about it; you’re installed and all set with VirtualBox. The next step for most web developers will be to install Vagrant, which makes managing virtual images super easy!

 

In some situations, your host machine’s BIOS settings need to be changed because your manufacturer has turned off the required settings by default. You don’t need to worry about this unless you get an error when trying to use a virtual machine. You might see a message like:

  • VT-x/AMD-V hardware acceleration is not available on your system
  • This host supports Intel VT-x, but Intel VT-x is disabled
  • The processor on this computer is not compatible with Hyper-V

This issue can occur regardless of the virtualization technology you use (VMware, XenServer, Hyper-V, etc.).
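
Before rebooting into the BIOS, you can check whether the extensions are present and enabled. A hedged sketch — exact output varies by OS and hardware:

    # Linux: a count above 0 means the CPU advertises VT-x (vmx) or AMD-V (svm)
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # Windows: look for the "Virtualization Enabled In Firmware" line
    systeminfo | find "Virtualization"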

How to configure Intel-VT or AMD-V:

  1. Reboot the computer and open the system’s BIOS menu. Depending on the manufacturer, this is done by pressing the delete key, the F1 key or the F2 key.
  2. Open the Processor submenu (it may also be listed under CPU, Chipset, or Configuration).
  3. Enable Virtualization, Intel-VT, or AMD-V. Depending on the firmware, you may instead see options like Virtualization Extensions, Vanderpool, Intel VT-d, or AMD IOMMU.
  4. Select Save & Exit.

You should now be all set; reboot into your host operating system and try again.

Version Disclosure: This document was written while the current version of VirtualBox is 6.0.4 – different versions might behave slightly differently.

 

Scotch Box – Dead simple Web Development

In this series, I’ll demonstrate some of the web development tools I use. Today we’ll cover Scotch Box — a virtual development environment for your local machine.

Many people begin development by working directly on live, production web servers. Sometimes they’ll work in a sub-directory or a different URL. However, there are several drawbacks to this approach.

  1. Performance: Every update of your files requires them to be sent over the internet, and equally your tests also need to come back over the internet. While each of these is probably only an extra second of latency for each file, it can quickly add up over the lifetime of development.
  2. Security: Let’s face it, development code isn’t the most secure out of the gate. I was recently developing a custom framework and, while writing the code for displaying images, introduced a bug that would dump any file to the browser, even PHP code or environment variables.
  3. Debugging: Debugging tools such as Xdebug shouldn’t be installed on production servers as it can accidentally expose sensitive data.
  4. Connectivity: You must be connected to the internet to develop. No internet connection, no development.

So for most of my projects, I develop first on my laptop. But instead of installing a full LAMP stack directly on my machine (where I’d have a database and web server running full time in the background), I use a virtual machine through Oracle’s free VirtualBox hypervisor. And instead of having one virtual machine host multiple projects, which might have different development needs (specific PHP versions, databases, etc.), I spin up a new virtual instance for each project. This is made super easy through a tool called Vagrant. As they say:

Development Environments Made Easy

This post assumes you already have both Oracle’s VirtualBox and Vagrant installed on your local machine.

My favorite development stack is Scotch Box — perhaps this is because I love scotch, but more likely because it’s (in their own words): THE PERFECT AND DEAD SIMPLE LAMP/LEMP STACK FOR LOCAL DEVELOPMENT

It’s three simple command line entries and you get access to:

  • Ubuntu 16.04.2 LTS (Xenial Xerus) OS
  • Apache Web Server
  • PHP v7.0
  • Databases: MySQL, PostgreSQL, MongoDB, SQLite
  • NoSQL/Cache: Memcached, Redis
  • Local Email Testing: MailHog
  • Python v2.7
  • Node.js
  • Go
  • Ruby
  • Vim
  • Git
  • Beanstalkd
  • And much more.

Within PHP it includes tools like Composer, PHPUnit, and WP-CLI. And since this is designed for development, PHP errors are turned on by default. It works with most frameworks out of the box, with the exception of Laravel, which needs just a bit of tweaking. All major CMSes are supported, like WordPress, Drupal, and Joomla.

And if you want access to more updated versions, such as PHP 7.2 or Ubuntu 17.10.x, you can pay just $15 for their pro version which comes with so much more!

So how do you install it?

  • From the command line, go to your desired root directory, such as Documents
  • git clone https://github.com/scotchio/scotchbox myproject
  • cd myproject
  • vagrant up                    (learn how to install vagrant)

You can replace “myproject” with whatever you want to name this specific development project.

After you run “vagrant up” it will take several minutes to download everything from the internet. Then you’ll be all set; you can browse to http://192.168.33.10/

For shell access, SSH to 127.0.0.1:2222 with the username vagrant and password vagrant.
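
From the project directory, either of these gets you that shell (the second works from anywhere, since Vagrant forwards port 2222 to the box):

    vagrant ssh                       # from inside the myproject directory
    ssh vagrant@127.0.0.1 -p 2222     # password: vagrant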

You’re all set.

Configuring a basic Road Warrior OpenVPN Virtual Private Network Tunnel

If you’re a road warrior like me, you’re often accessing the internet from insecure hotspots. All traffic that traverses an open wireless connection is subject to inspection, and even on untrusted secured wireless networks your activity is subject to monitoring by whoever provides the internet connection (trusted or otherwise), as well as their upstream ISPs.

To help keep what you’re doing private, I suggest always using a secure VPN tunnel for all your roaming activity. This guide will show you how to set up your own VPN tunnel using Linode for only $5 per month! Why pay a third-party VPN company more for your privacy when you can get unlimited usage for yourself, and whoever else you decide to give access, for less?

Now, to be clear up front: the purpose of this setup is to provide secure tunneling when you’re on the road on untrusted networks such as hotels or coffee shops. Some people use VPNs for general internet privacy, which this setup will NOT provide. It does, however, allow you to appear to be connecting to the internet from another geographical location. Linode has 8 datacenters spanning the US, Europe, and Asia Pacific, so you can configure things to appear as if you’re connecting from somewhere other than where you’re actually located. There are other benefits as well, such as a fixed WAN IP address: when you’re configuring security for your services, you can lock down remote access to that single IP. Think of only allowing remote connections to your server/services/etc. from one address. That provides much stronger security than leaving remote access open.

 

Let’s get started with the configuration:

This post is going to assume you already have a basic Linode set up. Here is how to install the OpenVPN server in a very simple way; these instructions will work with any Ubuntu Linux server. Leave comments if you’d like a full setup guide and I’ll throw it together for you.

  1. Remotely connect to your server (such as SSH)
  2. Login as root (or someone with sudo rights)
  3. Run the following from the command prompt:

    wget https://git.io/vpn -O openvpn-install.sh && bash openvpn-install.sh
  4. When prompted I suggest the following configuration:
    1. UDP (default)
    2. Port 1194 (default)
    3. DNS of 1.1.1.1 (see this link for more info)
    4. Enter a name for your first client – this is unique for each client. So, for example, I’ll call my first one Laptop
  5. The file is now available at /root/ under the filename equal to the client name you specified in step 4.4 — in our example /root/Laptop.ovpn
  6. Download that file to your local computer using the transfer method best for your system (see the example after this list):
    1. Linux/macOS: use scp
    2. Windows: use WinSCP
  7. You’ll want to download the OpenVPN client from https://openvpn.net/community-downloads/
  8. Install the Laptop.ovpn file you downloaded into the OpenVPN client. On Windows, right-click on the systray icon and choose Import > From File, then choose the Laptop.ovpn file you copied from the server. The import may take a minute or so, and you should see a notice that the file was imported successfully. Check the systray icon again and you’ll now see the server’s WAN IP address listed. Simply click that IP address, then Connect, and you’re all set.
    1. The first time you initiate a connection you may be prompted to trust this unverified connection, this is because you’re using a self-signed certificate. For basic road warriors, this is sufficient. If you’re a corporate IT department, you might want to consider using your own certificate, either trusted or enterprise certs.
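
For step 6, a typical scp invocation looks like the following. The IP address is a placeholder for your server’s address, and the filename matches the client name from step 4.4:

    scp root@203.0.113.10:/root/Laptop.ovpn .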

To add more devices, simply repeat steps 1-3 above; at step 4 you’ll only be prompted for a new client name. Do this for every device and/or user that needs to remotely access this server. I use a separate key for my laptop, phone, and tablet: if devices will be connected at the same time, they need separate keys. You can also run through the same steps to revoke certificates, so you want to name them something logical, such as myAndroid, kidsiPhone, wifesLaptop, etc.

Enjoy!


Configure Plesk as OpenVPN Server with Windows 10 as Client

Plesk is a powerful web server management tool. Among its included features is an OpenVPN server, so when you’re working remotely you can connect directly and securely to your server. This can be very helpful if you’re a developer who works from insecure locations like a Starbucks or another public space. The instructions provided by Plesk are not really clear on this topic, and not fully up-to-date either; the included client download package is a legacy version of the OpenVPN client.

TL;DR: if you’re the only person who manages the Plesk server and uploads files, and you want a really locked-down setup, read on. Otherwise, you can just stop here, because this is NOT going to give you any real-world benefits.

As of the writing of this post, Plesk only supports a single remote host at a given time, and if you configure multiple devices, they all use the same encryption key. Additionally, you’re limited to traffic destined for the Plesk server directly; it does not route traffic more broadly to either the server’s LAN or the WAN. This results in a configuration known as split tunneling: only traffic for the remote server is sent over the tunnel, and all other traffic still goes out your local internet connection. The net result is a secure connection to your Plesk server, but nothing else. If you’re already using FTPS and SSH, then this really provides NO benefit for you. There are feature requests to extend the Virtual Private Networking features of Plesk, but as of this writing they have not been implemented.

Also, because technology changes quickly, please note the following – this documentation is based on the following software versions:

  • Plesk Onyx Version 17.8.11 Update #38
  • OpenVPN Windows Client 2.5.0.136 (link)
  • Windows 10 Enterprise, Version 10.0.17134.523

Let’s get started on how to configure the OpenVPN Server.

  1. Start by installing the Plesk Extension: Virtual Private Networking
  2. Then open the Extensions shortcut via the navigation pane > Virtual Private Networking.
  3. On the Preferences page that opens, specify the following parameters:
    1. Remote Address: Leave this blank as you’re intending to remotely connect TO the Plesk server.
    2. Remote UDP port: You can leave this field blank if you have not specified the remote address above.
    3. Local UDP port: your server will listen for incoming VPN traffic on this local UDP port. The default port is 1194.
    4. Local peer address and Remote peer address: Usually leave the defaults. This needs to be a separate address space from both the WAN and LAN of the server, and ideally it should not overlap with the local network you’ll be connecting from either.
    5. Click OK.
  4. The Plesk VPN component is initially disabled. To use the VPN functionality, enable the component by clicking the “Switch On” button.
  5. Click on “For a Windows Client” button to download the package. BUT DO NOT use the OpenVPN client included.
  6. Extract the package to any location.
  7. Open the extracted files and copy the vpn-key file to your C:\ directory.
  8. Then open the openvpn.conf file using any text editor, such as Notepad, or my preferred editor, Notepad++
    1. Change the line: secret system/vpn-key
      To read: secret c://vpn-key
    2. Save the file as openvpn.ovpn
  9. Then move the file from its current location to c:\ — in Windows 10 usually the security permissions will prohibit you from directly saving-as to the c: directory.
  10. From the start menu, run OpenVPN Client — not the OpenVPN GUI.
  11. Right-click on the sys-tray icon and select Import > From File. Point it to your c:\openvpn.ovpn file
  12. In a few seconds (but not immediately), it will show the VPN in the listing when you right-click on the OpenVPN Client sys-tray icon. Click on the Plesk Server, then select Connect.

You should be all set. You can test your connection by pinging the server from the command line at the IP address selected above, typically 172.16.0.1; if it responds, your VPN is set up properly. You can also go to http://www.WhatIsMyIP.com and verify that all other web traffic is routing through your local internet connection and not your server.

You’re now configured to access your server over the VPN tunnel.

 

Now, you’ll need to access your Plesk server using that IP address, which can itself be problematic. Sure FTP/FTPS to 172.16.0.1 will work just fine, but if you try to navigate to the Plesk Web Console, at https://172.16.0.1 you’ll get a certificate error because the certificate is signed for the FQDN (Fully Qualified Domain Name) such as Plesk.example.com

You could modify your hosts file, but then you’ll have all sorts of problems connecting when you’re not connected via the VPN tunnel.
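
For reference, such a hosts file entry would simply map the certificate’s FQDN to the tunnel address (hostname from the example above):

    # C:\Windows\System32\drivers\etc\hosts on Windows, /etc/hosts on Linux/macOS
    172.16.0.1    Plesk.example.com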

 

So this begs the question: why even bother with this? The only reason I can think of is if you’re using Plesk as a GUI to manage your web servers and you want to really keep the server closed off. With the VPN set up, you can close down the FTP/FTPS ports, as well as Plesk ports like 8443, to the outside world. That creates a much more secure setup and is a good idea if you’re the only one who will manage this server. Otherwise, if other people need to use FTP or the console, there is no reason to implement this.


PuTTY – Accessing a Linode Server

PuTTY is a free and open source SSH client for Windows and UNIX systems. It provides easy connectivity to any server running an SSH daemon, so you can work as if you were logged into a console session on the remote system.

  1. Download and run the PuTTY installer from here.
  2. When you open PuTTY, you’ll be shown the configuration menu. Enter the hostname or IP address of your Linode. PuTTY’s default TCP port is 22, the IANA-assigned port for SSH traffic. Change it if your server is listening on a different port. Name the session in the Saved Sessions text bar if you choose, and click Save:

    Saving your connection information.

  3. Click Open to start an SSH session. If you have never previously logged into this system with PuTTY, you will see a message alerting you that the server’s SSH key fingerprint is new, and asking if you want to proceed.

    Do not click anything yet! Verify the fingerprint first.


  4. Use Lish to log in to your Linode. Use the command below to query OpenSSH for your Linode’s SSH fingerprint:
    ssh-keygen -E md5 -lf /etc/ssh/ssh_host_ed25519_key.pub
    

    The output will look similar to:

    
    256 MD5:58:72:65:6d:3a:39:44:26:25:59:0e:bc:eb:b4:aa:f7 root@localhost (ED25519)
    

    Note

    For the fingerprint of an RSA key instead of elliptical curve, use: ssh-keygen -lf /etc/ssh/ssh_host_rsa_key.pub.
  5. Compare the output from Step 4 above to what PuTTY is showing in the alert message in Step 3. The two fingerprints should match.
  6. If the fingerprints match, then click Yes on the PuTTY message to connect to your Linode and cache the host fingerprint.

    If the fingerprints do not match, do not connect to the server! You won’t receive further warnings unless the key presented to PuTTY changes for some reason. Typically, this should only happen if you reinstall the remote server’s operating system. If you receive this warning again from a system you already have the host key cached on, you should not trust the connection and investigate matters further.

How to compress files and directories on Ubuntu

One of the most common ways to quickly and effectively compress files on a Linux server such as Ubuntu is the combination of tar and gzip. When moving directories between servers, it is far faster to compress, transfer, and expand than to transfer the raw files.

Here is an example of how I used the command recently to move some files between web hosting servers.

Source server:

tar -czvf name-of-archive.tar.gz /path/to/directory-or-file

Then using normal FTP I copied this file to my local machine before uploading it to my destination server.

Destination server:

tar -xzvf archive.tar.gz
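
For reference, the flags used here: -c creates an archive, -x extracts one, -z runs everything through gzip, -v prints each file as it is processed, and -f names the archive file. A quick end-to-end sketch with placeholder paths:

    tar -czvf site-backup.tar.gz /var/www/html        # pack a directory
    tar -xzvf site-backup.tar.gz -C /var/www/restore  # unpack into a target directory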
