Top 5 Ways to Speed Up Desktop Computers

In the old days, computers often performed very slowly because of a lack of preventative maintenance. Those days are mostly behind us, because Windows 10 now handles a lot of these things automatically. Instead, our computers run slow either because of the junk that came preinstalled or because of the junk we put on them ourselves.

This article is based on a recent pro-bono job I did to help out a local non-profit in Redding, California. The purpose here is to make a typical home or small-business computer perform better, not to take an already good machine and make it faster through tweaking (like overclocking or messing with the registry). Everything here will be pretty basic.

  1. Go through the computer and remove all unneeded software that came preinstalled. This means games, trial software, etc. In most cases, 90% of the software that comes from the manufacturer (Gateway, Dell, HP, etc.) can also be uninstalled without consequence.
  2. Remove all anti-virus software unless you’re using an enterprise-level product. Most of them significantly slow down performance and have very little benefit. Most of the time I’ve gone in to remove a virus from a computer, it was already running antivirus software! Also, virtually no “free” version of anti-virus is licensed for business use (yes, that means non-profits as well). For example, see this article on Malwarebytes. If you’re on Windows 10, you can rely upon Windows Defender (built in) to do a good enough job. If you’re running an older version, you should either upgrade or manually install Windows Defender (free). Also be sure to check out my Top 5 Virus Tips (a bit old).
  3. Disable unused browser extensions: disable anything that you don’t actively use or need. These can cause problems ranging from privacy to performance issues.
  4. Use CCleaner Portable – do a one-time scan and cleanup of the PC’s files, removing unneeded files and cleaning up the registry. The portable version can be found here.
  5. Use AutoRuns (advanced) – this advanced tool lets a technician see much of what is running in the background on the computer; when that list becomes bloated, it can really hurt performance. But if you don’t know what you’re doing here, you can easily leave your computer unable to work, or it might fail when you reboot. Don’t use this tool lightly. https://docs.microsoft.com/en-us/sysinternals/downloads/autoruns

That is about it as far as things that will make a difference. While I’m there I’ll also check to make sure that Windows Update, Windows Defender, and Disk Defragmentation are working properly. In the old days, doing a disk defragment was critical to performance and easy, low-hanging fruit, but those days are over. There have been so many improvements to the operating system that old tips from before 2010 no longer apply. I might also check to make sure that the disk has enough free space (at least 20% free), but with the capacity of hard drives nowadays, I cannot recall the last time I saw a small business computer with performance issues due to storage limitations.

Finally, while that article still needs updating, be sure to look at the First 10 things I do with a new computer.


[FIXED/SOLVED] scotch/box

scotch/box and scotch/box-pro have been discontinued for over 2 years! Version 3.5 was released with Ubuntu 16.04 and Pro version 1.5 was released with Ubuntu 17 – both are out of support with Ubuntu, and running them can be very challenging!

Common errors include:

  • Unable to run apt-get update without errors
  • Running apt-get upgrade doesn’t upgrade anything
  • Unable to run or install modern frameworks like Laravel or Symfony on ScotchBox
  • PHP 7.0 is no longer supported

THE SOLUTION/FIX:

Frustrated with the workarounds, I decided to rebuild the box completely from scratch using Nick’s Scotchbox as a baseline. My iteration is called Cognac Box.

Installation and use are just as simple, but with a much more modern tech stack!

Ubuntu 18.04 LTS, latest PHP, MySQL, Redis, etc. as of March 2020

To use:

git clone https://github.com/reddingwebpro/cognacbox.git my-project 
cd my-project
vagrant up

That’s it, you’re all set. Enjoy!


The following is mostly so people looking for solutions can find this page:


The following are common errors when working with ScotchBox and ScotchBoxPro in 2020:

$ sudo apt-get update
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://dl.yarnpkg.com/debian stable InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 23E7166788B63E1E
W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.2 Release: The following signatures were invalid: KEYEXPIRED 1507497109
W: Failed to fetch https://dl.yarnpkg.com/debian/dists/stable/InRelease The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 23E7166788B63E1E
W: Failed to fetch http://repo.mongodb.org/apt/ubuntu/dists/xenial/mongodb-org/3.2/Release.gpg The following signatures were invalid: KEYEXPIRED 1507497109
W: Some index files failed to download. They have been ignored, or old ones used instead.


$ composer create-project symfony/website-skeleton my_project_name
Could not delete /var/www/my_project_name/vendor/symfony/flex/src/Command:
Stderr from the command:

E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/l/linux/linux-headers-4.13.0-21_4.13.0-21.24_all.deb  404  Not Found [IP: 91.189.88.24 80]
E: Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/l/linux/linux-headers-4.13.0-21-generic_4.13.0-21.24_amd64.deb  404  Not Found [IP: 91.189.88.24 80]
E: Unable to fetch some archives, maybe run apt-get update or try with --fix-missing?
$ sudo apt-get update
E: The repository 'http://ppa.launchpad.net/longsleep/golang-backports/ubuntu artful Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.


Cognac Box — the Scotchbox Alternative

I have been using ScotchBox for Vagrant for many years for web development. It’s been a great tool. Certainly, there are other methods out there like Docker, WAMP, etc., but this tool works well for me. Nick over at Scotch.io built it as an excellent tool, but he is no longer maintaining it.

Scotchbox and Scotchbox Pro are no longer running a supported Ubuntu version, and they might just outright fail to boot anymore because of that lack of support.

In 2019 I began looking for a replacement/alternative for Scotch.io’s Scotchbox and Scotchbox Pro. Unable to find anything as elegant, I rebuilt my own Vagrant box based on the ScotchBox model, with Ubuntu 18.04 LTS (long-term support) and a fully updated stack of tools. And I’m releasing it for FREE:

Cognac: The Modern Vagrant Development Environment (LAMP)

The installation could not be simpler. Assuming you have Git, Vagrant, and VirtualBox installed on your system:

From the command line:

git clone https://github.com/reddingwebpro/cognacbox.git my-project
cd my-project
vagrant up

That’s it, you’re all set. In a few minutes you’ll be able to browse to:

http://192.168.33.10

If you have any questions, please let me know.


Forgot my password and spam

On this month’s topic of security and passwords, the discussion of lost passwords comes to mind today. When you need to reset your password on a website, the most common thing it asks you for is your email address.


Seems simple and harmless enough, but can it be a gateway for spam? You might think not, but depending on how the website responds to the email address you put in, it might!

Some systems will reply in a way that makes clear whether that account exists or not. For example, a site might reply that the password will be emailed to you, or that the account doesn’t exist.

Think about that for a moment: what is the implication?

Simply that a malicious person can write a very basic script that will attempt to “password reset” random email addresses, and your response will verify whether that email address actually exists.

The threat here is really threefold:

  1. First, it gives a hacker specific knowledge that your account exists, so they can try to brute-force their way into it;
  2. Instead of having to figure out where you have accounts, now I can simply send you a spoofed email saying you need to change your password at your-bank.com because, well, I know you have an account there, and that makes the attack that much more legitimate-sounding;
  3. Finally, it lets them sell your email address as a known-good address since, obviously, you use it to access some online service.


For the end-user/consumer/professional: take a moment and see which websites leak your private email address. If they do, direct them to read this article so they can protect your privacy better.

For developers, this is a call to action to stop leaking this data accidentally. The preferred method is to simply say “if your account exists, we’ll send you a password reset”. That stops the leak dead in its tracks, because the same message goes out for every email attempt. Also, be sure to check out this article about account security overall.
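
To make that concrete, here is a minimal PHP sketch of a non-enumerating reset endpoint. The findUserByEmail() and saveResetToken() helpers are hypothetical stand-ins for your own data layer:

<?php
// Minimal sketch: a password reset endpoint that never reveals
// whether an account exists. Helper functions are hypothetical.
function handlePasswordReset(string $email): string
{
    $user = findUserByEmail($email); // hypothetical: returns null if no account

    if ($user !== null) {
        // Send a single-use, expiring token -- never the password itself.
        $token = bin2hex(random_bytes(32));
        saveResetToken($user['id'], hash('sha256', $token), time() + 3600); // hypothetical
        mail($email, 'Password reset', "Reset link: https://example.com/reset?token=$token");
    }

    // The exact same response goes out whether or not the account exists.
    return 'If your account exists, we will email you a password reset link.';
}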


WebDev: Password Best Practice

Based on my article earlier today about Password Reset Tips for Businesses, this article is about the responsibilities and best practices of web developers. The security-versus-convenience trade-off is well known, and there is no specific rule here because each business use case is different. However, it is extremely common for security to be an afterthought or a bolt-on instead of something designed in from the start.

It is important to first determine what level of security is needed. And while most of us are not designing a bank-level secure application, we often have extremely valuable and sensitive information. Among those is the simple username-password combination. We all know someone who does it, perhaps you still do, but almost certainly you used to — that is, reuse passwords between websites. And your application users are doing that too. Many of them are actually giving you their bank password! So to some degree you hold extreme trust from your end users, and you need to take that seriously. With that in mind, let’s talk about several best practices, as well as what I’ve seen in the wild:

  • Passwords:
    • should never be stored in plain text or even in an encrypted (reversible) form. In virtually NO use case is there a need for you to know their password.
    • any password sent in plain text should be a one-time-use password (i.e., initial email or password recovery). A password reset that is sent via plaintext email should never be reusable.
    • the hashing algorithm should be upgradable, because yesterday’s secure hash is today’s insecure one. I remember when MD5 was the best hash we had available. Your code should be able to accommodate changes to the hash method (see the sketch just after this list).
    • passwords should be salted. Some frameworks, like PHP’s password_hash, provide a unique, one-time salt for each hash.
    • password length and other criteria should be determined by your specific need for security – obviously, the longer the better. In fact, the evidence supports that length trumps complexity every time. NIST even removed its recommendation for special characters.
    • consider partnering with third parties like LastPass to help users adopt a password manager.
    • consider checking all passwords against the list of 500 million breached passwords via API.
    • know that single-use passwords – the practice of simply emailing or texting a user a fresh password at each login – aren’t inherently more secure.
    • while password expiry in the sense of scheduled password resets is considered a legacy practice, consider when passwords should no longer be valid. Imagine that inactive accounts (say, 1 year) have their password hashes purged, thereby requiring the user to reset their password via email. How have you improved security? If you were breached, how many fewer passwords would be exposed? Think of LinkedIn being able to say that instead of 167 million accounts compromised, only 30 million active accounts were compromised (and 137 million usernames without passwords). That would be a huge improvement. It helps the security landscape, but still might not save your job!
  • Usernames:
    • email addresses are okay, but usernames are better; they tend to be more unique to your site and harder for an attacker to know in advance.
    • consider providing “display names” and “usernames” as distinct fields. The username can then be hashed for additional security, just like passwords. In the event your database is exposed, both the usernames and passwords are protected — ideally using a different salt than the passwords.
  • Email Addresses:
    • evaluate: do you really need to know the email address for your users? In many cases, the only time you contact them is during a password reset, during which they can provide the email address and you can compare it to your hashed value.
    • another approach I’ve seen is to keep user preferences in a separate table storing things such as notification settings (phone, email, etc.), so that notifications can still be sent out while keeping your authentication table free from unsecured email addresses.
  • Two Factor Authentication:
    • these are great technologies to employ, either at each login or when accessing highly sensitive areas. My bank, for example, requires my RSA 2FA fob when I conduct any financial transfers, but just a simple password for most “less sensitive” activities. This is a great balance of security/convenience.
    • my own personal perspective is that biometrics-based authentication must always be paired with another factor and never relied upon as a single factor (ahem, Microsoft). While password management is an issue, the problem with biometrics is that we’re barely keeping ahead of the ability to detect fraudulent use of our biometric data. And the biggest problem is that once your biometric identity has been compromised, you cannot change it like you can a password. Once fingerprint tables get into the wild (perhaps they already are), you cannot just change your biometric data. You have to trust that biometric technology gets better at detecting fraud.
  • Cookie & Session Security:
    • this is huge and cannot be simply stated, but you must not blindly trust your web server to securely handle session state. You must ensure that the person you think you’re talking to is the right person – that the session hasn’t been hijacked.
    • consider limiting the number of sessions per login (that is, two sessions cannot be active simultaneously with the same credentials).
    • understand how roaming IP addresses impact sessions (cell phone roaming, etc.). When might you need to re-prompt for authentication?
    • clearly understand how your “remember me” can be insecure, and what actions might trigger a re-authentication.
    • is there a way for users to manage and understand their active sessions? Can they flush all other sessions? Should you be doing this automatically?
  • Rate limits, brute force attacks:
    • how is your system designed to detect and prevent brute force attacks? Do you even know if this is happening? I can guarantee that it is happening right now, but can you see it? What are you doing about it?
  • Web Application Firewall (WAF):
    • are you implementing a WAF, either on your firewall or as a software-based solution? What is looking for zero-day exploits? Protecting against common threat vectors? Protecting against unexpected crashes or code dumps to the screen?
  • Cross-Site Scripting (XSS) & Cross-Site Request Forgery (CSRF):
    • what is being done in code to protect against these threats? Are you just accepting form data without any token?
    • how are you protecting against replay attacks?
    • a major airline made this mistake causing credit cards to be compromised.
  • Zero trust user-submitted data
    • be sure to apply the correct filters to all user-supplied input
    • properly prepare your data before submitting it to your databases to avoid SQL injection attacks.
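
As promised above, here is a minimal sketch of salted, upgradable password hashing using PHP’s built-in password_hash(), password_verify(), and password_needs_rehash():

<?php
// Minimal sketch of salted, upgradable password hashing in PHP.
// PASSWORD_DEFAULT lets PHP move to stronger algorithms over time.
$plainPassword = 'correct horse battery staple'; // example input

// On registration: hash it (a unique salt is generated automatically).
$storedHash = password_hash($plainPassword, PASSWORD_DEFAULT);

// On login: verify, then transparently re-hash if the stored hash
// was created with an older algorithm or weaker cost settings.
if (password_verify($plainPassword, $storedHash)) {
    if (password_needs_rehash($storedHash, PASSWORD_DEFAULT)) {
        $storedHash = password_hash($plainPassword, PASSWORD_DEFAULT);
        // ...persist the new hash to your user table here...
    }
    echo "Login OK\n";
}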

Finally, remember that usernames/passwords and the issues addressed above are primarily your “front door”. But don’t forget the other security elements that you need to account for. Are the windows, crawlspaces, and inside secure? There is a great YouTube video about physical security for server rooms — you can spend huge amounts on ‘security’ while effectively leaving the physical door unlocked! This includes your physical servers, data security at rest, data security across sessions, and how data is protected even from authorized users. Who can access sensitive or encrypted data? Your server administrators don’t need to be able to see/read/decrypt that data, nor should your web developers.

As you take a look at this, understand that there is a lot more to securely developing applications. If you’re tempted to just hand off this responsibility to OAuth or another third party, you still need to understand this list. Why? Because you need to know which parts above are handled by them, and thereby know what is on your shoulders. If your database queries aren’t properly prepared, I can still just inject code to “SELECT * from credit_cards WHERE 1=1” and all is lost! That isn’t an authentication issue, but it is a security issue. Often we think of security as just being an authentication question, but security goes hand-in-hand with authentication, and it is holistic, not just something a plug-in, add-on, or module will solve.
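
On that injection point, here is a minimal sketch of a parameterized query using PHP’s PDO; the connection details and table/column names are hypothetical:

<?php
// Minimal sketch: parameterized queries with PDO.
// The DSN, credentials, and schema below are hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    PDO::ATTR_EMULATE_PREPARES => false, // use real server-side prepares
]);

// User input is bound as data, never concatenated into the SQL string,
// so input like "1 OR 1=1" cannot change the query's structure.
$stmt = $pdo->prepare('SELECT id, display_name FROM users WHERE email = ?');
$stmt->execute([$_POST['email'] ?? '']);
$user = $stmt->fetch(PDO::FETCH_ASSOC);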


Happy coding!

Public DNS Servers

The Domain Name System (DNS) is one of the services we take for granted every day. It works behind the scenes to resolve names to IP addresses, and it works so well that we can accept the defaults without clearly understanding how it works. Most ‘computer guys’ and even IT professionals really don’t have a good grasp of this topic. Simply ask someone to define root hints, and the answer will clearly demonstrate the depth of a technician’s knowledge.

The biggest reason it is overlooked is that it simply works — until it doesn’t. But beyond that, the question exists — can it work better?

This article is about public DNS name resolution — that is, for things outside of your local environment. We’ll save local domain resolution for another day — such as your Active Directory domain name resolution.

So let’s take a quick look at what happens when you type a website name into a browser — perhaps the easiest example of this. Your local computer uses the following method to resolve names, going down the list until it finds a match. At each step it’s looking for a hit, which is typically a cached result.

  1. Your local computer first checks a local file called the hosts file to see if there is a static entry configured.
  2. Then it checks its local DNS cache — so it doesn’t constantly have to ask another source.
  3. It then uses the DNS server configured for your network interface, which could be the DNS server for your local network (AD server), or perhaps just your home wireless router… (In some rare cases it skips this and uses your ISP’s DNS server directly.) Sticking with the local DNS server: it will also check its cache first before going out to its upstream server, which is likely your ISP’s DNS server.
  4. Your ISP’s server also checks its cache; if that fails, it will likely either ask another upstream server or, hopefully, use root hints.
    1. Root hints are a sort of master directory of authoritative servers, which tells your server who to ask for authoritative information for the TLD, such as .com or .net.
    2. Once it gets the root zone, it will query those servers to see specifically which DNS servers are authoritative for the next level, such as microsoft.com.
    3. Then it will query that server for the actual DNS hostname, such as www.microsoft.com.
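
If you want to poke at this chain yourself, here is a quick PHP sketch that asks whatever resolver your system is configured with for a hostname’s A records:

<?php
// Quick sketch: ask the system-configured resolver for A records,
// mirroring the final step of the resolution chain above.
foreach (dns_get_record('www.microsoft.com', DNS_A) as $r) {
    echo "{$r['host']} -> {$r['ip']}\n";
}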

As you can see, once you hit step 4 you’re talking to a lot more servers, with distance and latency added at each step — which is why we have DNS caching. Each hop along this line introduces latency… Now, there are a lot of things that could be said here, but I want to talk about a few in particular:

  1. Cache is essential for timely name resolution; however, this comes at the cost of stale records. This is especially important for IT professionals to know because there is inherent latency involved with any DNS change. While local network DNS changes can propagate quickly (especially AD-integrated changes), when you’re talking about the public internet it can take 24-72 hours for a simple hostname change to propagate, because each cache location is going to hold on to that data for a certain length of time, often stated as TTL or Time-To-Live.
  2. Public DNS servers have extremely diverse quality… from the amount of data in their cache to response time. DNS is a required service, but really an afterthought for most internet service providers. As long as it works, they don’t care. As a result, response times can be significant if you need to query your ISP’s DNS servers. Additionally, your ISP often doesn’t use a geographically near DNS server, so you might have to traverse the internet to the other side of the continent to get a simple DNS response. And regional ISPs might not have a very good cache of DNS names, causing them to reach into the root hints (which is time-consuming) to build their cache.
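
You can actually watch TTL-driven caching at work by querying the same name twice: an answer served out of a resolver’s cache reports a TTL that counts down. A small sketch (exact behavior depends on the resolver):

<?php
// Sketch: query the same name twice and compare reported TTLs.
// An answer served from a resolver's cache counts its TTL down.
$first = dns_get_record('www.example.com', DNS_A)[0]['ttl'] ?? null;
sleep(5);
$second = dns_get_record('www.example.com', DNS_A)[0]['ttl'] ?? null;

echo "First TTL: $first; five seconds later: $second\n";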

There can be a huge performance improvement in migrating away from your ISP’s DNS servers. I have been experimenting with many different options over the decades.

  • Many years ago, Verizon had some public DNS servers at 4.4.4.4 that were extremely popular, fast, and reliable. However, they became flooded by IT professionals directing their networks at 4.4.4.4, which impacted performance, so Verizon closed them to non-customers. The IP address was so easy to remember that it was often used over ISP DNS servers for that reason alone.
  • In 2009, Google released their set of public DNS servers at 8.8.8.8 and 8.8.4.4, which quickly became a popular replacement for the Verizon servers. As of this writing, they’re still publicly available.
  • Around the same time, I was introduced to OpenDNS, which was recently acquired by Cisco for being awesome at DNS resolution. Beyond being a very fast, reliable, responsive DNS service, they also provided very basic DNS filtering, which helped IT professionals by keeping the really, really bad stuff from properly resolving. They also provide options for DNS-based content filtering, which permitted businesses to get basic filtering of objectionable content at low cost.
  • Starting in 2018, another company of DNS-resolution experts, CloudFlare, entered the public DNS space with their servers at 1.1.1.1 and 1.0.0.1. These are ANYCAST addresses, and you’ll automatically be routed to the DNS servers geographically closest to you. Benchmark testing shows that the 1.1.1.1 servers are significantly faster than anything else within North America, not only for cached records but also for non-cached results.
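
PHP’s resolver functions go through whatever DNS server your system is configured to use, so a crude way to compare resolvers is to time a batch of lookups, switch your interface’s DNS (say, to 1.1.1.1 or 8.8.8.8), and run it again. A rough sketch:

<?php
// Rough sketch: time lookups through the system-configured resolver.
// Note the first run is cold; repeat runs may be answered from cache.
$hosts = ['www.example.com', 'www.wikipedia.org', 'github.com'];

foreach ($hosts as $host) {
    $start = microtime(true);
    dns_get_record($host, DNS_A);
    printf("%-20s %6.1f ms\n", $host, (microtime(true) - $start) * 1000);
}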

Today, when choosing a public DNS server for my clients, it comes down to either CloudFlare or OpenDNS. In environments where we have no other source of content filtering, I prefer OpenDNS; if the client has some form of content filtering on their firewall, then the answer is the CloudFlare 1.1.1.1 network.

One important thing to note is that after CloudFlare started using the 1.1.1.1 address, it exposed that some hardware vendors were improperly using 1.1.1.1 as a local address, against RFC standards. So in some isolated cases 1.1.1.1 doesn’t work for some clients — but that is because the systems they’re using are violating the RFC standards. This isn’t CloudFlare’s fault, but rather vendors disregarding the standards when they built their systems to use address space that was never theirs.

As far as how I personally use this as an individual: at home we use OpenDNS with content filtering to keep a bunch of bad stuff off of our home network; it even helps by keeping ‘objectionable ads’ from popping up as often.

On my mobile devices, I have a VPN tunnel which I use on any network that will let me use a VPN, like at Starbucks, etc.; you can find more about this config in my Roadwarrior VPN Configuration article. But sometimes I cannot connect to the VPN due to firewall filtering, such as at holiday markets or on my kids’ school guest network, so in those cases I use the 1.1.1.1 DNS profile for my iPhone.

One other closing issue — there have been various ISPs in the past which force all DNS resolution through their servers. In fact, there is one which, on each subsequent request for a record, artificially increases the TTL number, basically trying to get your system to cache the results longer. You’re pretty stuck if you run into this, but I would suggest complaining to your sales rep for that ISP. You can also look into DNS over TLS or DNS over HTTPS, but as of right now Windows doesn’t natively support them without third-party software; some very modern routers might support them, and I know that the DD-WRT aftermarket wireless firmware does. So you might have a bit more work to do to get that working.


Dad needs a new computer?!

One of the banes of most IT professionals is when family members ask for help with purchasing a computer or, worse yet, when they’ve just purchased something from a big-box retailer and need help.

This is a multi-part story inspired by my dad, who called me recently with a computer question. It made me realize that 13 years ago I helped him purchase the computer he currently has. I couldn’t believe it had been that long! I’m thankful that after he received the catalog of home computers from Dell, he immediately came to me to ask for advice…

Now, I’ll get back around to which computer I helped him select, but first I want this to sink in for just a moment…

My dad has a desktop computer,

that was purchased 13 years ago,

that he is still using…

And as for performance, it is working just as well today as it did when it was first purchased… Almost unbelievable! Oh, and he has no plans on replacing it either!

Okay, now as the commercials for miracle weight loss say, “results are not typical”… but they are not wholly unexpected. Let’s talk about this a bit.

My first advice to anyone purchasing a computer for home use is to skip the big-box stores, and even anything seemingly consumer-grade. Everything in this realm seems to be designed with a short lifespan in mind: cheaper parts, poorer construction, etc. Not to mention all of the consumer bloatware that seems to come on them. So the first thing I tell anyone and everyone is to immediately go to a major computer seller’s “enterprise” tab on their page, be it Dell or HP or whomever. Normally anybody can still just order these, and the benefits are more solid construction, longer MTBF, and usually far less preinstalled bloatware. In this case, 13 years ago I had my dad purchase a Dell Optiplex workstation.

Now, if you simply did that, it shouldn’t be surprising to get 6+ years out of the hardware; to get over 10 years is to really be getting your money’s worth. Truth be told, he did have to replace the power supply once, but that was likely due to a series of lightning storms in his area that the little power-strip surge protector couldn’t really protect against.

But okay, let’s talk about performance… There are really two prongs to why this thing performs so well…

First, he uses his computer for just word processing — and printing — nothing else. Nothing online; he wanted his computer to be as secure as possible from such threats… So that makes things really easy… Realize that if the computer is an island, with no external connectivity (no internet, no USB drives, etc.), then it really is an island. What are the threat vectors in this case? None, really. So, do you need patch management? Not if the system is working. Most ‘bugs’ patched these days are about vulnerabilities, not functionality. And honestly, after 13 years, if there are any functionality quirks, he doesn’t see them as such; he just works through or around them. It really is surprising to see how stopping patching can improve system performance and reliability!

For the record, I’m a huge proponent of patch management – but that is because in virtually all cases you have threat vectors you need to account for. But let’s pause for just a moment and think about that: are there places or situations where you can vastly improve security and performance by outright removing a threat vector such as the internet? It’s also worth mentioning that because of this lack of patching, the 2007 Daylight Saving adjustment was never applied to his computer. But there are ways to manually patch this yourself on such systems.

But beyond that, let’s talk about the statement that it runs at the same performance level. That is a true statement, although perhaps a bit misleading. Do you remember having to wait for Windows XP to boot up? I sure do. Although if you think back, XP made a lot of waves because it booted much faster than the operating systems before it. That aside, Windows 10 boots almost instantly, and that is what end users expect these days; my iPhone is instant-on… The concept of having to wait befuddles us nowadays. So by today’s comparison, the computer is slloooooowwwww. But that is just by modern comparison. It works just as fast as it always has… After all, the processor is still ticking away at the same speed, and the software hasn’t changed at all.

The biggest reason it isn’t a problem for him is that he has no point of comparison. He is retired, and the computer works the way it always has. He hasn’t worked on more modern, faster computers.

It’s also probably a mindset — my parents have hundreds of VHS movies. Sure, they have DVD and the latest Blu-ray discs, mostly because it’s virtually impossible not to buy a Blu-ray player these days. So sure, they’ve got the latest and greatest, and the quality is better than VHS (although who knows how well they actually see the difference with their aging eyes). But why throw out thousands of dollars’ worth of working (inferior) VHS movies and re-buy higher-quality versions which, at the end of the day, are the exact same movie: same story, actors, lines, etc.? And most of those movies were filmed using the inferior camera equipment of their day… Is there really a big difference watching Gone with the Wind on Blu-ray when it was captured with 70-year-old, non-digital camera technology?

In the end, it’s a bit of a philosophical discussion. Perhaps.

But what’s the takeaway from this article, if any? I would propose a few points:

  • Purchasing: realize that enterprise gear is often worth it even for personal use, because while it can be marginally more expensive, it can last far longer. I think his tower cost sub-$500.
  • Security: Consider how, in every environment, security and performance can be improved by mitigating threat vectors. Remember that patch management is one tool we have to address threats and isn’t a panacea unto itself.
  • Performance: Performance is very relative, and subjective. Each use case is different – purchasing or upgrading in blanket terms is wasteful. Each user, department, or situation can be different and unique. Address them as such.


Installing Vagrant

In this series, I’ll demonstrate some of the web development tools I use. Today we’ll cover Vagrant — a virtual environment management tool.

Vagrant is a tool for building and managing virtual machine environments in a single workflow. With an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup time, increases production parity, and makes the “works on my machine” excuse a relic of the past.

If you are a developer, Vagrant will isolate dependencies and their configuration within a single disposable, consistent environment, without sacrificing any of the tools you are used to working with (editors, browsers, debuggers, etc.). Once you or someone else creates a single Vagrantfile, you just need to vagrant up and everything is installed and configured for you to work. Other members of your team create their development environments from the same configuration, so whether you are working on Linux, Mac OS X, or Windows, all your team members are running code in the same environment, against the same dependencies, all configured the same way. Say goodbye to “works on my machine” bugs.


Let’s get into how to set up Vagrant by HashiCorp:

  1. First, make sure you have your virtualization software installed. For this example, we’re running Oracle’s VirtualBox, as it’s an excellent and easy-to-use open-source option. See my VirtualBox Installation Guide here.
  2. Find the appropriate package for your system and download it.
  3. Run the installer for your system. The installer will automatically add vagrant to your system path so that it is available in terminals.
  4. Verify that it is installed by running the command vagrant from the command line – it should run without error and simply output the available options. If you receive an error, try logging out and logging back into your system (this is sometimes necessary, particularly on Windows).

That’s it, you’re all set. Now go ahead and take a look at my Introduction to ScotchBox, a great web development option that uses a Vagrant box.


Footnote: It’s also worth mentioning that Docker has recently gained a lot of attention, and for some web developers it’s a great option. I’ve only looked into it a bit and will probably create a series using that tool later this year.


Version Disclosure: This document was written while the current version of Vagrant is 2.2.4 and VirtualBox is 6.0.4 – different versions might be slightly different.

Installing VirtualBox

In this series, I’ll demonstrate some of the web development tools I use. Today we’ll cover VirtualBox — an Open Source Virtualization product for your local machine.

Oracle VM VirtualBox (formerly Sun VirtualBox, Sun xVM VirtualBox, and Innotek VirtualBox) is a free and open-source hosted hypervisor for x86 computers and is under development by Oracle Corporation. VirtualBox may be installed on a number of host operating systems, including Linux, macOS, Windows, Solaris, and OpenSolaris. There are also ports to FreeBSD and Genode.  It supports the creation and management of guest virtual machines running versions and derivations of Windows, Linux, BSD, OS/2, Solaris, Haiku, OSx86 and others, and limited virtualization of macOS guests on Apple hardware.

In general, our purpose for web development is to emulate our production web server environment, which is often a LAMP or WIMP stack. For our examples in this series, we’re going to look at the most popular: the LAMP stack (Linux, Apache, MySQL, and PHP).

 

The installation and setup of VirtualBox are very simple:

  1. Verify that you have a supported host operating system – that is, the desktop operating system that you’re on right now. https://www.virtualbox.org/manual/UserManual.html#hostossupport
  2. Navigate to https://www.virtualbox.org/wiki/Downloads and download the version that is right for your host operating system.
  3. Host OS Specific Steps:
    1. For Windows installations, double-click on the downloaded executable file. Select either all or partial component installation – for web development, make sure the network components are also selected — USB and Python support is optional.
    2. For Mac installations, double-click on the downloaded dmg file and follow the prompts.
    3. For Linux – see this link: https://www.virtualbox.org/manual/UserManual.html#install-linux-host

For most people, that’s just about it: VirtualBox is installed and you’re all set. The next step for most web developers will be to install Vagrant, which makes managing virtual images super easy!


In some situations, your host machine’s BIOS settings need to be changed because the manufacturer has turned off the required settings by default. You don’t need to worry about this unless you get an error when trying to use a virtual machine. You might get a message like:

  • VT-x/AMD-V hardware acceleration is not available on your system
  • This host supports Intel VT-x, but Intel VT-x is disabled
  • The processor on this computer is not compatible with Hyper-V

This issue can occur regardless of the virtualization technology you use (VMware, XenServer, Hyper-V, etc.).

How to configure Intel-VT or AMD-V:

  1. Reboot the computer and open the system’s BIOS menu. Depending on the manufacturer, this is done by pressing the delete key, the F1 key or the F2 key.
  2. Open the Processor submenu (it may also be listed under CPU, Chipset, or Configuration)
  3. Enable Virtualization, Intel-VT, or AMD-V. You may also see Virtualization Extensions, Vanderpool, Intel VT-d, or AMD IOMMU, if the options are available.
  4. Select Save & Exit.

You should now be all set; reboot into your host operating system and try again.

Version Disclosure: This document was written while the current version of VirtualBox is 6.0.4 – different versions might be slightly different.

