COBOL 2020 – why are we still using it?

The 2020 Covid-19 pandemic has brought to light just how old many of our government computer systems are. It is shocking for people to learn that today’s systems are still running COBOL, a 60+ year old programming language, mostly on IBM mainframes.

What is more shocking is just how much the newspapers are getting wrong. But it isn’t entirely their fault, when we have college professors stating:

“There’s really no good reason to learn COBOL today, and there was really no good reason to learn it 20 years ago,” says UCLA computer science professor Peter Reiher. “Most students today wouldn’t have ever even heard of COBOL.”

FastCompany article: What is COBOL

The reality is that many companies still use COBOL. And while Java, JavaScript, and Python are the “hot” languages right now, many, many Fortune 500 companies still run COBOL for a lot of their critical systems. Further, FastCompany was mistaken when it stated that “[o]ne key reason for the migration is that mobile platforms use newer languages, and they rely on tight integration with underlying systems to work the way users expect.” The overall trend in technology has been to de-couple your code, not tightly couple it.

Further evidence of this is that most legacy airlines are also running equally legacy code, yet they still have performant Web 2.0 and mobile interfaces. They do what everyone else has been doing, which is layering modern technology on top of older frameworks using APIs. Today we use fancy things like GraphQL and REST APIs, but the concept of an API is nothing new. SOAP interfaces have been around a long time (1998). Or how about POSIX (Portable Operating System Interface – aka IEEE 1003) from 1988.

Before I get started, let me say that I’ve been involved in technology since 1990 to one degree or another, and I remember fondly the days of working on those ‘green screen’ dumb terminals. I’ve personally done work on COBOL and other mainframe-style systems like the AS/400, some of which were written in a variant called COBOL/400. I have experience in mainframe systems from airlines to manufacturing/ERP, in more modern operating systems from Microsoft Windows to Red Hat Linux, and in newer web development frameworks and stacks (PHP, JavaScript, etc.).

Why do we continue to use COBOL? Because it is ‘relatively’ rock solid compared to most programs you see today. The uptime on these systems is measured in years – not days or hours. We’re going well beyond things like five-nines uptime (99.999%). And this isn’t achieved using fancy cloud-based, fault-tolerant systems, but rather just one clunky old IBM mainframe. The software simply works, and works well. However, what it doesn’t do is scale all that well. And often, what we’re seeing isn’t the failure of COBOL per se, but the failure of the modern interfaces that people have layered on top of COBOL.

Reliable technology is essential to businesses that expect to be operating for decades and that invest millions or billions into their software.

And it’s not just “old stuff” that runs on older hardware and software. We can look at things being built brand new this year, such as the Boeing 737 MAX series, which is running hardware roughly equivalent to a 1990s NES (Nintendo Entertainment System). The reason is that it is battle tested and extremely reliable. It isn’t broken, and it has more than enough computing power for the task.

Forget about tech startups for a moment. If you were building a new system that needed to still be working 20 years from now – without ‘patching bugs’, simply continuing to perform exactly the same tasks – would you choose a system that is new and novel and may or may not still be supported, or would you go with a system that has literally been supported for decades and is propped up in part by the fact that most Fortune 500 companies are in the same boat as you?

Perhaps now, it starts to make a lot of sense.

And for that reason, it seems that Mr. Reiher is rather out of touch with reality. Yes, there isn’t a huge growth market for COBOL engineers – if anything the year-over-year need is probably shrinking – but the programmers who know COBOL are retiring even faster. That creates not only a great need, but also a fantastic pay opportunity with nearly zero competition.

COBOL is also a really simple language to learn, and modern versions of the standard even support object orientation – a paradigm most people will recognize from today’s popular languages. The real challenge in an emergency is the experience needed to understand and reverse engineer someone else’s code. A short hello-world program is easy in just about any language; what is actually needed is mastery. Many, many businesses have tried to migrate away from old mainframe technologies, without success. There is just too much built-in business logic sitting there, unrealized but extremely important. When they try to reverse engineer it and rewrite it in a more modern language, features always drop away.

And it isn’t just COBOL shops that are stuck. Here are a few other examples:

  • Microsoft has tried, and failed, to get away from “DLL Hell” since day one – even the latest Windows 10 still has lingering legacy code harking back to Windows 95.
  • Adobe tried to reinvent their products as web-based instead of purely installed applications – yet even after five years of development, products like Photoshop and Lightroom on the web have only a small fraction of the legacy features. Sure, there are some neat new things, but a lot of the old functionality is lost.
  • Airlines spend millions of dollars each year on licensing for GDSs (Global Distribution Systems), which also run legacy code, and are trapped using ancient COBOL-like technology. It is the primary reason why in 2020 you are still limited to buying no more than 9 seats at a time – the underlying ticketing system can only accept a single-digit number.
  • State Farm Insurance was built on COBOL – when I was 16 years old I worked on their old green-screen terminals. Over the last 30+ years they have been working to transition to a modern tech stack. For a period in the early 2000s, they brought PCs into agent offices, with access to what was basically the mainframe system via a separate terminal window. In the 2010s they introduced a more modern web interface, but at the end of the day, not only is COBOL still the underlying database and business logic, there are still certain things that can only be done by going back into the dumb terminal.

One way to look at it is this — for the last 20, 30, 40 years a company has been investing in feature enhancements and tweaks. That is a LOT of code and business logic that has changed. It is muddled in with a lot of bad legacy code that might not do anything anymore. Worse, over the years bad developers have come along and, instead of fixing or addressing an issue properly, wrote an obscure bit of code to work around something they didn’t understand.

Has anyone successfully migrated?

There is one company that comes to mind which did successfully and completely rewrite their system – around 2000, Apple completely replaced the operating system for the Mac. When it changed from the classic Mac OS (versions 1 through 9) to OS X, it was never the same again. And along the way it broke just about everything. Apple changed both the hardware and the software, so older pre-OS X hardware couldn’t run OS X, and most existing software was not compatible either. It was basically a cut-your-losses move, and there were many losses. And Apple hardly started from scratch – OS X was based on Unix. So it wouldn’t count as a migration, but they did make the change.

Can’t we do that with our legacy unemployment systems?

Absolutely, it is possible, but extremely expensive. Every state has different custom rules, so a software company with a competitive alternative will charge not only a big price tag, but an even larger cost to customize it to match your existing system. Often these costs exceed 10 years of the operating expenses of continuing to use COBOL.

What would I do if I was the Director of Technology for a company still using a COBOL based system?

As someone who has experience both maintaining legacy code and running projects to completely re-write a system — here is what I would do. To ensure the greatest possible uptime and reliability, I would first decide on the language and framework I’m going to use. It would likely be Objective-C or something similar, possibly Java (not to be confused with JavaScript) or maybe PHP. I would build out a decoupled system with a modern front-end framework (Vue, Angular, React, etc.), and use that to access my “modern” controller/model layer, which would start by simply passing requests transparently through to the “legacy” system. I would then progressively move the business logic from the legacy system to the modern one, until eventually everything has been moved over.

This looks a lot like what I believe State Farm Insurance is doing currently. I would expect this project to easily be a decade-long process or longer – something no politician would like, and it wouldn’t win any popularity votes as a way of being seen to ‘address the problem’. But IMHO it is the best route forward.

The alternative is to throw ungodly amounts of money at purchasing a new system outright and then customizing it, with a LOT of broken things along the way. I’d rather take years to move over each part of an unemployment system and get it right, versus trying to flip the switch on a new system and messing up people’s unemployment checks.

The end result is a more affordable, reliable, and stable change that takes time, versus another expensive quick fix.

But what about the people suffering now?

What people are looking for is a reaction instead of a response. The reality is that in a few days to weeks all of the backlog will be worked through. Realize, too, that one of the biggest reasons for the backlog isn’t the technology but the staffing levels. Regardless of the various reasons, it will be worked out in days to weeks. However, as someone who has implemented large-scale systems serving millions of end users, something new cannot be implemented overnight – it would be a months-long project. Therefore, throwing money at the problem will not make a meaningful difference for individuals right now. The same goes for onboarding double the number of COBOL programmers nationwide: you would see only an incremental increase in the processing of claims. Rather, the focus today should be on how to respond to this situation, not react: what do I need to do so that 10, 20, 30 years from now, the choices made today will ensure continued success?

Installing Vagrant

In this series, I’ll demonstrate some of the web development tools I use. Today we’ll cover Vagrant — a virtual environment management tool.

Vagrant is a tool for building and managing virtual machine environments in a single workflow. With an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup time, increases production parity, and makes the “works on my machine” excuse a relic of the past.

If you are a developer, Vagrant will isolate dependencies and their configuration within a single disposable, consistent environment, without sacrificing any of the tools you are used to working with (editors, browsers, debuggers, etc.). Once you or someone else creates a single Vagrantfile, you just need to vagrant up and everything is installed and configured for you to work. Other members of your team create their development environments from the same configuration, so whether you are working on Linux, Mac OS X, or Windows, all your team members are running code in the same environment, against the same dependencies, all configured the same way. Say goodbye to “works on my machine” bugs.
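To make that concrete, a Vagrantfile is just Ruby-flavored configuration. Here is a minimal sketch – the box name, forwarded port, and provisioning line are illustrative assumptions, not defaults of Vagrant itself:

```ruby
# Minimal illustrative Vagrantfile (box name and port are example values)
Vagrant.configure("2") do |config|
  # base image to boot; "ubuntu/bionic64" is an example box name
  config.vm.box = "ubuntu/bionic64"

  # expose the guest webserver on the host at http://localhost:8080
  config.vm.network "forwarded_port", guest: 80, host: 8080

  # optionally run a one-time shell provisioner on first boot
  config.vm.provision "shell", inline: "apt-get update -y"
end
```

With a file like this checked into a project, every team member gets the same environment from a single vagrant up.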


Let’s get into how to set up Vagrant by HashiCorp:

  1. First, make sure you have your virtualization software installed. For this example, we’re running Oracle’s VirtualBox as it’s an excellent and easy to use open source option. See my VirtualBox Installation Guide here.
  2. Find the appropriate package for your system and download it.
  3. Run the installer for your system. The installer will automatically add vagrant to your system path so that it is available in terminals.
  4. Verify that it is installed by running the command vagrant from the command line – it should run without error and simply output the available options. If you receive an error, try logging out and logging back in to your system (this is sometimes necessary, particularly on Windows).
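The verification in step 4 can also be scripted. Here is a small guarded sketch (the box name in the comments is just an example) that prints the version when Vagrant is available, and a hint when it isn’t:

```shell
#!/bin/sh
# Check that vagrant is reachable from the current shell.
if command -v vagrant >/dev/null 2>&1; then
  vagrant --version                  # e.g. "Vagrant 2.2.4"
  # Typical first-run workflow from here:
  #   vagrant init ubuntu/bionic64   # writes a Vagrantfile
  #   vagrant up                     # downloads the box and boots the VM
  #   vagrant ssh                    # log in to the guest
else
  echo "vagrant is not on your PATH - try logging out and back in"
fi
```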

That’s it, you’re all set. Now go ahead and take a look at my Introduction to ScotchBox, which is a great web development option that uses a Vagrant box.


Footnote: It’s also worth mentioning that Docker has recently gained a lot of attention, and for some web developers it’s a great option. I’ve only looked into it a bit, and will probably create a series using that tool later this year.


Version Disclosure: This document was written while the current version of Vagrant is 2.2.4 and VirtualBox is 6.0.4 – different versions might behave slightly differently.

Installing VirtualBox

In this series, I’ll demonstrate some of the web development tools I use. Today we’ll cover VirtualBox — an Open Source Virtualization product for your local machine.

Oracle VM VirtualBox (formerly Sun VirtualBox, Sun xVM VirtualBox, and Innotek VirtualBox) is a free and open-source hosted hypervisor for x86 computers and is under development by Oracle Corporation. VirtualBox may be installed on a number of host operating systems, including Linux, macOS, Windows, Solaris, and OpenSolaris. There are also ports to FreeBSD and Genode.  It supports the creation and management of guest virtual machines running versions and derivations of Windows, Linux, BSD, OS/2, Solaris, Haiku, OSx86 and others, and limited virtualization of macOS guests on Apple hardware.

In general, our application for web development is to emulate our production web server environment which is often a LAMP or WIMP stack. For our examples in this series, we’re going to look at the most popular, the LAMP stack (Linux, Apache, MySQL, and PHP).


The installation and setup of VirtualBox are very simple:

  1. Verify that you have a supported host operating system – that is, the desktop operating system that you’re on right now.
  2. Navigate to the VirtualBox downloads page and download the version that is right for your host operating system.
  3. Host OS Specific Steps:
    1. For Windows installations, double-click on the downloaded executable file. Select either all or partial component installation – for web development make sure the network components are also selected; USB and Python support are optional.
    2. For Mac installations, double-click on the downloaded dmg file and follow the prompts.
    3. For Linux – see this link:

For most people that is just about it – VirtualBox is installed and you’re all set. The next step for most web developers is to install Vagrant, which makes managing virtual images super easy!


In some situations, your host machine’s BIOS settings need to be changed because the manufacturer has turned off the required settings by default. You don’t need to worry about this unless you get an error when trying to use a virtual machine. You might see a message like:

  • VT-x/AMD-V hardware acceleration is not available on your system
  • This host supports Intel VT-x, but Intel VT-x is disabled
  • The processor on this computer is not compatible with Hyper-V

This issue can occur regardless of the virtualization technology you use (VMware, XenServer, Hyper-V, etc.).
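Before rebooting into the BIOS, a Linux host can be checked from the terminal. This one-liner is a quick sketch that counts the relevant CPU flags (vmx for Intel VT-x, svm for AMD-V); a count of 0 means the feature is unsupported or hidden by the firmware:

```shell
# Count hardware-virtualization CPU flags; vmx = Intel VT-x, svm = AMD-V.
# A count of 0 (or a missing /proc/cpuinfo on non-Linux hosts) falls
# through to the message after ||.
grep -E -c '(vmx|svm)' /proc/cpuinfo 2>/dev/null || echo "no virtualization flags found"
```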

How to configure Intel-VT or AMD-V:

  1. Reboot the computer and open the system’s BIOS menu. Depending on the manufacturer, this is done by pressing the delete key, the F1 key or the F2 key.
  2. Open the Processor submenu (it may also be listed under CPU, Chipset, or Configuration)
  3. Enable Virtualization, Intel-VT, or AMD-V. Depending on the firmware, the options may also be labeled Virtualization Extensions, Vanderpool, Intel VT-d, or AMD IOMMU.
  4. Select Save & Exit.

You should now be all set – reboot into your host operating system and try again.

Version Disclosure: This document was written while the current version of VirtualBox is 6.0.4 – different versions might behave slightly differently.


Configuring a basic Road Warrior OpenVPN Virtual Private Network Tunnel

If you’re a road warrior like me, you’re often accessing the internet from insecure hotspots. All traffic that traverses an open wireless connection is subject to inspection, and even on secured wireless networks you don’t control, your activity is subject to monitoring by whoever provides the internet connection (trusted or otherwise), as well as by ISPs, etc.

To help keep what you’re doing private, I suggest always using a secure VPN tunnel for all your roaming activity. This guide will show you how to set up your own VPN tunnel using Linode for only $5 per month! That’s right – why pay more money to a third-party company for your privacy, when you can get unlimited usage for yourself and whoever else you decide to give access?

Now to be clear upfront, the purpose of this setup is to provide secure tunneling when you’re on the road on untrusted networks such as hotels or coffee shops. One of the reasons people use VPNs is general internet privacy, which this setup will NOT provide. It does, however, allow you to appear to be connecting to the internet from another geographical location. Linode has 8 datacenters spanning the US, Europe, and Asia Pacific, so you can configure things so that it appears you’re connecting from a different location than where you’re actually located. There are other benefits as well, such as giving you a fixed WAN IP address: when you’re configuring security for your services, you can lock down access to that specific remote IP. Think of allowing remote connections to your server/services only from a single IP address. That provides much stronger security than leaving remote access open.


Let’s get started with the configuration:

This post assumes you already have a basic Linode set up, and here is how to install the OpenVPN server in a very simple way – these instructions will work with any Ubuntu Linux server. Leave comments if you’d like a full setup guide and I’ll throw it together for you.

  1. Remotely connect to your server (such as SSH)
  2. Login as root (or someone with sudo rights)
  3. Run the following from the command prompt: wget -O && bash
  4. When prompted I suggest the following configuration:
    1. UDP (default)
    2. Port 1194 (default)
    3. DNS of (see this link for more info)
    4. Enter a name for your first client – this is unique for each client, so for example I’ll call my first one Laptop
  5. The file is now available at /root/ under the filename equal to the client name you specified in step 4.4 — in our example /root/Laptop.ovpn
  6. Download that file to your local computer using the transfer method best for your system:
    1. Linux/MacOS use SCP
    2. Windows: use WinSCP
  7. You’ll want to download the OpenVPN client from
  8. Install the Laptop.ovpn file you downloaded into the OpenVPN client – for Windows, right-click on the systray icon and choose Import – From File. Choose the Laptop.ovpn file you copied from the server. After you choose the file it might take a minute or so, and you should see a notice that the file was imported successfully. Check the systray icon again and you’ll now see the server’s WAN IP address listed. Simply click that IP address, then Connect, and you’re all set.
    1. The first time you initiate a connection you may be prompted to trust this unverified connection, this is because you’re using a self-signed certificate. For basic road warriors, this is sufficient. If you’re a corporate IT department, you might want to consider using your own certificate, either trusted or enterprise certs.

You can simply repeat steps 1–3 above, and at step 4 you’ll only be prompted for the client name. Do this for every device and/or user that needs to remotely access this server. For me, I use a separate key for my laptop, phone, and tablet – if devices will be connected at the same time, they need separate keys. You can also run through the same steps to revoke certificates, so you want to name them something logical, such as myAndroid, kidsiPhone, wifesLaptop, etc.







Configure Plesk as OpenVPN Server with Windows 10 as Client

Plesk is a powerful web server management tool. Among the included features is an OpenVPN server, so when you’re working remotely you can connect directly to your server. This can be very helpful if you’re a developer who works from insecure locations like a Starbucks coffee shop. The instructions provided by Plesk are not really clear on this topic, nor fully up-to-date, and the included client download package is a legacy version of the OpenVPN client.

TLDR (in summary): if you’re the only person who manages the Plesk server and uploads files, and you want a really secure setup, read on. Otherwise, you can just stop here, because this is NOT going to give you any real-world benefit.

As of the writing of this post, Plesk only supports a single remote host at a given time, and if you configure multiple devices they all use the same encryption key. Additionally, you’re limited to traffic intended for the Plesk server directly; it does not route traffic more broadly within either the server’s LAN or out to the WAN. This results in a network configuration known as split tunneling, meaning only traffic for the remote server is sent over the tunnel, and all other traffic still goes out your own internet connection. So the net result is a secure connection just to your Plesk server, but nothing else. If you’re already using FTPS and SSH, then this really provides NO benefit for you. There are feature requests to extend the Virtual Private Network features of Plesk, but as of this writing they have not been implemented.

Also, because technology changes quickly, please note the following – this documentation is based on the following software versions:

  • Plesk Onyx Version 17.8.11 Update #38
  • OpenVPN Windows Client (link)
  • Windows 10 Enterprise, Version 10.0.17134.523

Let’s get started on how to configure the OpenVPN Server.

  1. Start by installing the Plesk Extension: Virtual Private Networking
  2. Then open the Extensions shortcut via the navigation pane > Virtual Private Networking.
  3. On the Preferences page that opens, specify the following parameters:
    1. Remote Address: Leave this blank as you’re intending to remotely connect TO the Plesk server.
    2. Remote UDP port: You can leave this field blank if you have not specified the remote address above.
    3. Local UDP port: your server will listen for incoming VPN traffic on this local UDP port. The default port is 1194.
    4. Local peer address and Remote peer address: usually leave the defaults. This needs to be a separate address space from both the existing WAN and LAN of the server, and ideally it should not overlap with the local IP range you’ll be connecting from.
    5. Click OK.
  4. The Plesk VPN component is initially disabled. To use the VPN functionality, enable the component by clicking the “Switch On” button.
  5. Click on “For a Windows Client” button to download the package. BUT DO NOT use the OpenVPN client included.
  6. Extract the package to any location.
  7. Open the extracted files and copy the vpn-key to your c:\ directory
  8. Then open the openvpn.conf file using any text editor, such as Notepad, or my preferred editor, Notepad++
    1. Change the line: secret system/vpn-key
      To read: secret c://vpn-key
    2. Save the file as openvpn.ovpn
  9. Then move the file from its current location to c:\ — in Windows 10 usually the security permissions will prohibit you from directly saving-as to the c: directory.
  10. From the start menu, run OpenVPN Client — not the OpenVPN GUI.
  11. Right-click on the sys-tray icon and select Import > From File. Point it to your c:\openvpn.ovpn file
  12. In a few seconds (but not immediately), it will show the VPN in the listing when you right-click on the OpenVPN Client sys-tray icon. Click on the Plesk Server, then select Connect.

You should be all set. You can test your connection by pinging the server from the command line at the IP address selected above – if it responds, then your VPN is set up properly. You can also go to a what-is-my-IP site and verify that all other web traffic is routing through your local internet connection and not your server.

You’re now configured to access your server over the VPN tunnel.


Now, you’ll need to access your Plesk server using that IP address, which can itself be problematic. Sure, FTP/FTPS will work just fine, but if you try to navigate to the Plesk web console you’ll get a certificate error, because the certificate is signed for the FQDN (Fully Qualified Domain Name) rather than the IP address.

You could modify your hosts file, but then you’ll have all sorts of problems connecting when you’re not connected via the VPN tunnel.


So this begs the question: why even bother with this? The only reason I can think of is if you’re using Plesk as a GUI to manage your web servers and you want to keep the server really closed off. With the VPN set up, you can close down the FTP/FTPS ports, as well as Plesk ports like 8443, to the outside world. That creates a much more secure setup and is a good idea if you’re the only one who is going to manage this server. But otherwise, if other people need to use FTP or the console, then there is no reason to implement this.



How to compress files and directories on Ubuntu

One of the most common ways to quickly and effectively compress files on a Linux server such as Ubuntu is the combination of tar and gzip. When moving directories between servers, it is far faster to compress, transfer, and expand – compared to a raw transfer of the files.

Here is an example of how I used the command recently to move some files between web hosting servers.

Source server:

tar -czvf name-of-archive.tar.gz /path/to/directory-or-file

Then, using normal FTP, I copied this file to my local machine before uploading it to the destination server, where I extracted it:

tar -xzvf archive.tar.gz
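Putting the two commands together, here is a self-contained round trip you can try locally – the /tmp/tar-demo paths and file contents are made up for the example:

```shell
# 1. Create a small sample directory to archive
mkdir -p /tmp/tar-demo/site
echo "hello" > /tmp/tar-demo/site/index.html

# 2. Compress it: -c create, -z gzip, -v verbose, -f archive file name
#    (-C changes directory first so the archive stores relative paths)
tar -czvf /tmp/tar-demo/site.tar.gz -C /tmp/tar-demo site

# 3. Optionally list the archive contents without extracting
tar -tzf /tmp/tar-demo/site.tar.gz

# 4. Extract into a separate directory, as you would on the destination server
mkdir -p /tmp/tar-demo/restore
tar -xzvf /tmp/tar-demo/site.tar.gz -C /tmp/tar-demo/restore
cat /tmp/tar-demo/restore/site/index.html   # prints "hello"
```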
