Forgot my password and spam

Continuing this month's topic of security and passwords, the discussion of lost passwords comes to mind today. When you need to reset your password on a website, the most common thing it asks you for is your email address.


Seems simple and harmless enough, but can it be a gateway for spam? You might think not, but depending on how the website responds to the email address you enter, it might be!

Some systems will reply in a way that makes it clear whether that account exists or not. For example, it might reply that the password will be emailed to you, or that the account doesn’t exist.

Think about that, what is the implication?

Simply that a malicious person can write a very basic script that attempts a “password reset” against random email addresses, and your response will confirm whether each address actually exists.

The threat here is threefold:

  1. First, it gives a hacker specific knowledge that your account exists, so they can try to brute-force their way into it;
  2. Second, instead of guessing where you have accounts, an attacker can now send you a spoofed email saying you need to change your password at your-bank.com; because they know you have an account there, the attack sounds that much more legitimate;
  3. Finally, it lets them sell your email address as a known-good address, since you obviously use it to access some online service.

 

For the end-user, consumer, or professional: take a moment and see which websites leak your private email address. If they do, direct them to this article so they can protect your privacy better.

For developers, this is a call to action to stop leaking this data accidentally. The preferred method is to simply say “if your account exists, we’ll send you a password reset”. That stops the leak dead in its tracks, because the same message is returned for every email address attempted. Also, be sure to check out this article about account security overall.
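
To make that concrete, here is a minimal sketch of a reset handler that responds identically whether or not the account exists. It’s a PHP sketch (the stack used elsewhere on this blog), and the helpers findUserByEmail() and queueResetEmail() are hypothetical placeholders for whatever your application already provides.

    <?php
    // Sketch: a password-reset handler that never reveals whether an account exists.
    // findUserByEmail() and queueResetEmail() are hypothetical application helpers.
    function handlePasswordReset(string $email): string
    {
        $user = findUserByEmail($email);   // returns a user record or null

        if ($user !== null) {
            // Generate a single-use, expiring token and send it out-of-band.
            $token = bin2hex(random_bytes(32));
            queueResetEmail($user, $token);
        }

        // The response is identical either way, so the form cannot be used
        // to probe which email addresses have accounts.
        return 'If an account exists for that address, a password reset link has been sent.';
    }

Note that the timing of the two code paths can still differ slightly; for most sites the uniform message is the big win.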

 

WebDev: Password Best Practice

Based on my article earlier today about Password Reset Tips for Businesses, this article is about the responsibilities and best practices of web developers. The security-versus-convenience trade-off is well known, and there is no single rule here because each business use case is different. However, it is extremely common for security to be an afterthought or a bolt-on instead of something designed in from the start with a security model in mind.

It is important to first determine what level of security is needed. While most of us are not designing a bank-grade application, we often hold extremely valuable and sensitive information. Among the most sensitive is the simple username-password combination. We all know someone who reuses passwords between websites; perhaps you still do, and you almost certainly used to. Your application’s users are doing it too, and many of them are effectively handing you their bank password! So to some degree you hold extreme trust from your end users and need to take that seriously. With that in mind, let’s talk about several best practices, as well as what I’ve seen in the wild:

  • Passwords:
    • should never be stored in plain text, or even in a reversible encrypted form; store only a hash. In virtually no use case is there a need for you to know a user’s actual password.
    • any password sent in plain text should be a one-time-use password (i.e., an initial email or password recovery). A password reset that is sent via plain-text email should never be reusable.
    • the hashing algorithm should be upgradable, because today’s secure hash is tomorrow’s insecure one. I remember when MD5 was the best hash we had available. Your code should be able to accommodate changes to the hash method.
    • passwords should be salted. Some frameworks, like PHP’s password_hash(), generate a unique, one-time salt for each hash (see the hashing sketch after this list).
    • password length and other criteria should be determined by your specific need for security – obviously the longer the better. In fact, the evidence consistently supports that length trumps complexity. NIST has even removed its recommendation to require special characters.
    • consider partnering with third parties like LastPass to help users adopt a password manager.
    • consider checking all passwords against the list of more than 500 million breached passwords via an API such as Have I Been Pwned’s Pwned Passwords (see the breach-check sketch after this list).
    • know that single-use passwords (the practice of simply emailing or texting a user a fresh password at each login) aren’t inherently more secure.
    • while password expiry in the sense of scheduled password resets is now considered a legacy practice, consider when passwords should no longer be valid. Imagine that inactive accounts (say, inactive for a year) have their password hashes purged, requiring the user to reset their password via email. How have you improved security? If you were breached, how many fewer passwords would be exposed? Imagine if LinkedIn, instead of announcing 167 million accounts compromised, could have said 30 million active accounts were compromised (and 137 million usernames without passwords). That would be a huge improvement. It helps the security landscape, but it still might not save your job!
  • Usernames:
    • email addresses are okay, but dedicated usernames are better; they are less likely to already be known or guessed by an attacker.
    • consider providing “display names” and “usernames” as distinct fields. The username can then be hashed for additional security, just like passwords. In the event your database is exposed, both the usernames and passwords are protected. Ideally, use a different salt than the one used for passwords.
  • Email Addresses:
    • evaluate: do you really need to know the email address for your users? In many cases, the only time you contact them is during a password reset, during which they can provide you the email address and you can compare it to your hashed value.
    • another approach is to store user notification preferences (phone, email, etc.) in a separate table, so notifications can still be sent out while keeping your authentication table free of unsecured email addresses.
  • Two Factor Authentication:
    • these are great technologies to employ, either at each login or when accessing highly sensitive areas. My bank, for example, requires my RSA 2FA fob when I conduct any financial transfers, but just a simple password for most “less sensitive” activities. This is a great balance of security and convenience.
    • my own perspective is that biometric authentication must always be paired with another factor and never relied upon as a single factor (ahem, Microsoft). While password management is an issue, the problem with biometrics is that we’re barely keeping ahead of the ability to detect fraudulent use of biometric data. And the biggest problem is that once your biometric identity has been compromised, you cannot change it the way you can a password. Once fingerprint tables get into the wild (perhaps they already are), you cannot simply change your biometric data; you have to trust that biometric technology gets better at detecting fraud.
  • Cookie & Session Security:
    • this is huge and cannot be simply stated, but you must not blindly trust your web server to securely handle session state. You must ensure that the person you think you’re talking to is the right person – that the session hasn’t been hijacked.
    • consider limiting the number of sessions per login (that is, two sessions cannot be active simultaneously with the same credentials).
    • understand how roaming IP addresses impact sessions (cell phone roaming, etc.). When might you need to re-prompt for authentication?
    • clearly understand how your “remember me” feature can be insecure, and what actions should trigger re-authentication.
    • is there a way for users to manage and understand their active sessions? Can they flush all other sessions? Should you be doing this automatically?
  • Rate limits, brute force attacks:
    • how is your system designed to detect and prevent brute force attacks? Do you even know if this is happening? I can guarantee that it is happening right now, but can you see it? What are you doing about it?
  • Web Application Firewall (WAF):
    • are you implementing a WAF, whether in your firewall or as a software-based solution? What is watching for zero-day exploits? Protecting against common threat vectors? Protecting against unexpected crashes or code dumps to the screen?
  • Cross-Site Scripting (XSS) & Cross-Site Request Forgery (CSRF):
    • what is being done in code to protect against these threats? Are you just accepting form data without any token?
    • how are you protecting against replay attacks?
    • a major airline made this mistake, causing credit cards to be compromised.
  • Zero trust for user-submitted data:
    • be sure to apply the correct filters to all user-supplied input
    • properly prepare your data before submitting it to your databases to avoid SQL Injection Attacks.
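
To illustrate the salting and “upgradable hash” bullets above, here is a minimal PHP sketch using the built-in password_hash(), password_verify(), and password_needs_rehash() functions. The storage helpers are hypothetical placeholders for your own data layer.

    <?php
    // Sketch: storing and verifying passwords with an upgradable, salted hash.
    // saveHashForUser() and getHashForUser() are hypothetical storage helpers;
    // $plainPassword comes from the submitted form, $userId from your user record.

    // On registration or password change: PASSWORD_DEFAULT applies a unique
    // random salt to each hash and tracks PHP's current recommended algorithm.
    $hash = password_hash($plainPassword, PASSWORD_DEFAULT);
    saveHashForUser($userId, $hash);

    // On login:
    $stored = getHashForUser($userId);
    if (password_verify($plainPassword, $stored)) {
        // If the default algorithm or cost has changed since this hash was
        // created, transparently re-hash now, while we briefly have the password.
        if (password_needs_rehash($stored, PASSWORD_DEFAULT)) {
            saveHashForUser($userId, password_hash($plainPassword, PASSWORD_DEFAULT));
        }
        // ...proceed with login...
    }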
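
And for the breached-password check, here is a hedged sketch assuming the service in question is Have I Been Pwned’s Pwned Passwords range API. Only the first five characters of the password’s SHA-1 hash are sent, so neither the password nor its full hash ever leaves your server.

    <?php
    // Sketch: k-anonymity lookup against the Pwned Passwords range API.
    function isPasswordBreached(string $password): bool
    {
        $sha1   = strtoupper(sha1($password));
        $prefix = substr($sha1, 0, 5);
        $suffix = substr($sha1, 5);

        $response = file_get_contents('https://api.pwnedpasswords.com/range/' . $prefix);
        if ($response === false) {
            return false; // API unreachable; decide your own fail-open/fail-closed policy
        }

        // Each response line is "HASH_SUFFIX:COUNT"
        foreach (explode("\n", $response) as $line) {
            $parts = explode(':', trim($line));
            if (count($parts) === 2 && $parts[0] === $suffix) {
                return true;
            }
        }
        return false;
    }

If the check comes back positive, prompt the user to pick a different password and explain why, rather than silently rejecting it.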

Finally, remember that usernames/passwords and the issues addressed above are primarily your “front door”. Don’t forget the other security elements you need to account for: are the windows, crawlspaces, and interior secure? There is a great YouTube video about physical security for server rooms — you can spend huge amounts on ‘security’ while effectively leaving the physical door unlocked! This includes your physical servers, data security at rest, data security across sessions, and how data is protected even from authorized users. Who can access sensitive or encrypted data? Your server administrators don’t need to be able to see/read/decrypt that data, nor do your web developers.

As you take a look at this, understand that there is a lot more to securely developing applications. If you’re tempted to just hand off this responsibility to OAuth or another third party, you still need to understand this list. Why? Because you need to know which parts above are handled by them, and therefore what remains on your shoulders. If your database queries aren’t properly prepared, I can still inject something like “SELECT * FROM credit_cards WHERE 1=1” and all is lost! That isn’t an authentication issue, but it is a security issue. We often think of security as just an authentication question, but the two only go hand-in-hand; security is holistic, not something a plug-in, add-on, or module will solve on its own.
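
As a quick illustration of what “properly prepared” means, here is a minimal PDO sketch; the DSN, credentials, and table names are placeholders, not a recommendation of any particular schema.

    <?php
    // Sketch: a parameterized query with PDO so user input is never
    // concatenated into the SQL string. Connection details are placeholders.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'app_user', 'secret', [
        PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
    ]);

    // The input is bound as data, not interpreted as SQL, so a value like
    // "' OR 1=1 --" simply matches no rows instead of rewriting the query.
    $stmt = $pdo->prepare('SELECT id, display_name FROM users WHERE username = :username');
    $stmt->execute([':username' => $_POST['username'] ?? '']);
    $user = $stmt->fetch(PDO::FETCH_ASSOC);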

 

Happy coding!

Password Tips for Businesses

This year Microsoft made a very public statement about how they’re fundamentally changing how passwords will work in Microsoft Windows 10 moving forward. Most significant is that they’re dropping the password-expiration recommendation. This brings their recommended policies closer to what NIST has also published on this topic. On one hand, this brings a collective sigh of relief from end-users who are vexed by the dreaded “you must change your password in 14 days”…13 days…11 days… Forced rotation was previously seen as ‘low-hanging fruit’ for any IT consultant performing a security audit: just point out that the company doesn’t force its users to change their passwords.

There are many reasons for the recent change in direction by both Microsoft and NIST. But the biggest reason, I propose, is that security threats to passwords have fundamentally changed in recent years. There is a good chance your email address is already known by hackers; moreover, there is a good chance your password is known by them too. As of today, over half a billion unique passwords have been compromised, and hacking or compromising a password is far easier than it has ever been.

The biggest thing these shifts by Microsoft and NIST demonstrate is that ‘good enough’ approaches to security simply aren’t. Arbitrarily forcing users to change their passwords doesn’t make them more secure. It has even been argued that it often makes them less secure, as users work harder to find ways to remember their passwords. Is ‘Th0rsHammer2’ any more secure than ‘Th0rsHammer1’? Likely not, but research consistently shows that is exactly what happens. Let’s step back and understand why we ever considered changing passwords frequently. The fundamental reason is that a password becomes exposed, known to bad actors. The theory used to be that exposure was unlikely, but just in case, changing passwords frequently would reduce the impact. Nowadays we know better: it isn’t a question of “if” but “when”. And the follow-up question is, once your password is compromised, how long do the bad guys need? Even the half-life of the typical 90-day forced password change is 45 days, more than enough time to do damage.

The new model focuses on two elements:

  1. End-user education: Which primarily focuses on identifying threat vectors such as phishing attempts. But also in how to choose a good password, and avoid password reuse.
  2. Detection of compromise: This one is more technologically involved, but it basically requires advanced threat detection to identify potentially compromised accounts or servers, and then uses that information to force a password change.

 

Recommended Action Items for SOHO (Small Office, Home Office)

  1. End-user education: Ensure that end-users receive training on how to identify and avoid phishing emails, how to choose a good password, and that business and personal passwords should never be the same.
  2. Ensure that every computer has a password required to log in — no accounts should be password exempt.
  3. Consider using a password manager like LastPass which will help create and manage your passwords. That way you can have unique passwords for every account.
  4. Consider using a Two-Factor Authentication (2FA) system whenever possible such as Microsoft Authenticator.
  5. Use OpenDNS which provides a basic level of threat protection for employee website activity.
  6. Pay attention to data breaches of large companies. Consider forcing password resets when such an event occurs, because there is a high likelihood your users are reusing passwords between those large services (LinkedIn, Yahoo, etc.) and your network.

Recommended Action Items for Small Business (10-50 employees)

  1. End-user education: Ensure that end-users receive training on how to identify and avoid phishing emails, how to choose a good password, and why business and personal passwords should never be the same. Train them to use password managers instead of sticky notes or Excel files with passwords plainly documented.
  2. All systems should be domain-joined with password policies in place, ensuring that all accounts have strong and long passwords. Remove your scheduled password-expiration policy.
  3. Audit your existing use of role accounts, automatic login accounts, shared accounts, etc. Whenever possible eliminate such accounts so there is a one-to-one audit trail back to a specific user. When role or shared accounts are needed, they should generally have far fewer rights than normal users, and policies need to be in place to reset this upon any employee change.
  4. Consider using a password manager like LastPass which will help create and manage your passwords. That way you can have unique passwords for every account. Professional versions permit the ability to share passwords when needed.
  5. Consider using a Two-Factor Authentication (2FA) system whenever possible such as Microsoft Azure AD MultiFactor Authentication.
  6. Use OpenDNS which provides a basic level of threat protection for employee website activity.
  7. Pay attention to data breaches of large companies. Consider forcing password resets when such an event occurs, because there is a high likelihood your users are reusing passwords between those large services (LinkedIn, Yahoo, etc.) and your network.

 

Recommended Action Items for Medium Business (51+ employees)

  1. All the items listed for Small Business PLUS:
  2. Ensure all public-facing websites exposing corporate resources (webmail, website, extranet, client portals, etc.) implement technologies like a WAF, Fail2Ban, and more. Those resources should be placed in your DMZ, which is isolated from your local network and uses completely different administrative credentials.
  3. Outbound traffic filtering including DLP (Data Loss Prevention), Advanced Threat Protection and Content Filtering.
  4. Consider implementing password auditing tools which compare your network passwords against the known password breaches.

 

The above lists are based purely on the topic of password-related security, and there are many additional security matters in general which need to be professionally assessed by any business. 


Public DNS Servers

The Domain Name System (DNS) is one of the services we take for granted every day. It works behind the scenes to resolve names to IP addresses. It works so well that we can accept the defaults without clearly understanding how it works. Most ‘computer guys’, and even many IT professionals, really don’t have a good grasp of this topic. Simply ask someone to define root hints and you will quickly gauge the depth of a technician’s knowledge.

The biggest reason it is overlooked is that it simply works — until it doesn’t. But beyond that, the question exists — can it work better?

This article is about public DNS name resolution — that is, for things outside of your local environment. We’ll save local domain resolution for another day — such as your Active Directory domain name resolution.

So let’s take a quick look at what happens when you type a website name into a browser — perhaps the easiest example of this. Your local computer uses the following method to resolve names, going down the list until it finds a match. At each step it’s looking for a hit, which is typically a cached result.

  1. Your local computer first checks a local file called the hosts file to see if there is a static IP configured.
  2. Then it checks its local DNS cache — so it doesn’t constantly have to ask another source.
  3. It then uses the DNS server configured for your network interface. That could be the DNS server for your local network (an AD server), or perhaps just your home wireless router. (In some rare cases this step is skipped and your ISP’s DNS server is used directly.) Sticking with the local DNS server: it will also check its own cache before going out to its upstream server, which is likely your ISP’s DNS server.
  4. Your ISP’s DNS server also checks its cache; if that fails, it will likely either query another upstream server or, hopefully, use root hints.
    1. Root hints are essentially the master directory of the root servers, which tell your server whom to ask for authoritative information for a TLD such as .com or .net.
    2. Once it has the answer from the root zone, it queries those servers to find which DNS servers are authoritative for the next level, such as microsoft.com.
    3. Then it queries that server for the actual DNS hostname, such as www.microsoft.com.

As you can see, once you hit step 4 you’re talking to a lot more servers, each adding distance and latency — which is why we have DNS caching. Each hop along this chain introduces latency… Now, there is a lot that could be said here, but I want to focus on a few things:

  1. Caching is essential for timely name resolution; however, it comes at the cost of stale records. This is especially important for IT professionals to know, because there is inherent latency involved with any DNS change. Local network DNS changes can propagate quickly, especially AD-integrated changes, but when you’re talking about the public internet it can take 24-72 hours for a simple hostname change to propagate, because each caching location holds on to that data for a certain length of time, stated as the TTL or Time-To-Live (see the sketch after this list).
  2. Public DNS servers vary enormously in quality, from the amount of data in their cache to response time. DNS service is really a required afterthought for most internet service providers; as long as it works, they don’t care. As a result, response times can be significant when you query your ISP’s DNS servers. Additionally, your ISP often doesn’t use a geographically nearby DNS server, so you might have to traverse the internet to the other side of the continent to get a simple DNS response. Regional ISPs might not have a very good cache of DNS names, causing them to reach out to the root hints, which is time-consuming, to build their cache.
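
If you want to watch the TTL count down for yourself, here is a minimal PHP sketch using dns_get_record(); the hostname is just a placeholder. Run it twice in a row and, assuming your configured resolver answers from its cache, the reported TTL will typically shrink between runs.

    <?php
    // Sketch: inspect the TTL your configured resolver reports for an A record.
    // As the resolver's cached copy ages, the remaining TTL counts down.
    $records = dns_get_record('www.example.com', DNS_A);

    foreach ($records as $record) {
        printf("%s -> %s (TTL %d seconds remaining)\n",
            $record['host'], $record['ip'], $record['ttl']);
    }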

There can be a huge performance improvement in migrating away from your ISP’s DNS servers. I have been experimenting with many different options over the decades.

  • Many years ago Verizon had public DNS servers at 4.4.4.4 that were extremely popular, fast, and reliable. The address was so easy to remember that many people used it over their ISP’s DNS servers for that reason alone. However, the servers became flooded as IT professionals directed their networks at 4.4.4.4, which impacted performance, so access was eventually closed to Verizon customers only.
  • In 2009 Google released its public DNS servers at 8.8.8.8 and 8.8.4.4, which quickly became a popular replacement for the Verizon servers. As of this writing they’re still publicly available.
  • Around the same time I was introduced to OpenDNS, which was later acquired by Cisco for being awesome at DNS resolution. Beyond being a very fast, reliable, responsive DNS service, they also provide very basic DNS filtering, which helps IT professionals by keeping the really, really bad stuff from resolving at all. They also offer DNS-based content filtering, which lets businesses get basic filtering of objectionable content at low cost.
  • Starting in 2018, another company with deep expertise in DNS resolution, Cloudflare, entered the public DNS space with servers at 1.1.1.1 and 1.0.0.1. These are anycast addresses, so you’ll automatically be routed to the DNS servers geographically closest to you. Benchmark testing shows that the 1.1.1.1 servers are significantly faster than anything else within North America, not only for cached records but also for non-cached results.

Today, when choosing a public DNS server for my clients, it comes down to either Cloudflare or OpenDNS. In environments where we have no other source of content filtering, I prefer OpenDNS; if the client already has some form of content filtering on their firewall, then the answer is the Cloudflare 1.1.1.1 network.

One important thing to note: after Cloudflare started using the 1.1.1.1 address, it became apparent that some hardware vendors had been improperly using 1.1.1.1 as a local address, against RFC standards. So in some isolated cases 1.1.1.1 doesn’t work for some clients — but that is because the systems they’re using are violating the RFC standards. This isn’t Cloudflare’s fault, but rather vendors disregarding the RFCs when they built their systems to use address space that wasn’t theirs.

As for how I use this personally: at home we use OpenDNS with content filtering to keep a bunch of bad stuff off of our home network; it even helps by keeping many ‘objectionable ads’ from popping up.

On my mobile devices I have a VPN tunnel which I use on any network that will allow a VPN, like at Starbucks; you can find more about this configuration in my Roadwarrior VPN Configuration article. But sometimes I cannot connect to the VPN due to firewall filtering, such as at holiday markets or on my kids’ school guest network, so in those cases I use the 1.1.1.1 DNS profile on my iPhone.

One other closing issue: there have been various ISPs which force all DNS resolution through their own servers. In fact, there is one which, on each subsequent request for a record, artificially increases the TTL, basically trying to get your system to cache the result. If you run into this, you’re pretty stuck, but I would suggest complaining to your sales rep at that ISP. You can also look into DNS over TLS or DNS over HTTPS, but as of right now Windows doesn’t natively support them without third-party software; some very modern routers support them, and I know that the DD-WRT aftermarket wireless firmware does. So you might have a bit more work to do to get that working.

 

Dad needs a new computer?!

One of the banes of most IT Professionals is when family members ask for help with purchasing a computer, or worse yet, they just purchased something from a big-box retailer and need help.

This is a multi-part story inspired by my dad who called me recently for a computer question he had. It made me realize that 13 years ago I helped him purchase the computer he currently has. I couldn’t believe it’s been that long! I’m thankful that after he received the catalog for home computers from Dell that he immediately came to me to ask for advice…

Now I’ll get back around to what computer I help him select because I want this to sink in for just a moment…

My dad has a desktop computer,

that was purchased 13 years ago,

that he is still using…

And as for performance, it works just as well today as it did when it was first purchased… almost unbelievable! Oh, and he has no plans to replace it either!

Okay, now as the commercials for miracle weight loss say, “results are not typical”… but they are not wholly unexpected. Let’s talk about this a bit.

My first advice to anyone purchasing a computer for home use is to skip the big-box stores, and really anything seemingly consumer-grade. Everything in this realm seems to be designed with a short lifespan in mind: cheaper parts, poorer construction, etc. Not to mention all of the consumer bloatware that seems to come on them. So the first thing I tell everyone is to go straight to a major computer seller’s “enterprise” section, be it Dell or HP or whomever. Normally anyone can still order these, and the benefits are more solid construction, longer MTBF, and usually far less bloatware preinstalled. In this case, 13 years ago I had my dad purchase a Dell OptiPlex workstation.

Now if you simply did that, it shouldn’t be surprising to get 6+ years out of the hardware; to get over 10 years is really getting your money’s worth. Truth be told, he did have to replace the power supply once, but that was likely due to a series of lightning storms in his area that the little power-strip surge protector couldn’t really protect against.

But okay, let’s talk about performance… There are really two prongs to why this thing performs so well…

First, he uses his computer for just word processing — and printing — nothing else. Nothing online; he wanted his computer to be as secure as possible from such threats… So that makes things really easy. Realize that if the computer is an island, with no external connectivity – no internet, no USB drives, etc. – then it really is an island. What are the threat vectors in this case? None, really. So do you need patch management? Not if the system is working. Most ‘bugs’ patched these days are about vulnerabilities, not functionality. And honestly, after 13 years, if there are any functionality quirks he doesn’t see them as such; he just works through or around them. It really is surprising to see how stopping patching can improve system performance and reliability!

For the record, I’m a huge proponent of patch management, but that is because in virtually all cases you have threat vectors you need to account for. Still, let’s pause for a moment and think: are there places or situations where you can vastly improve security and performance by outright removing a threat vector such as the internet? It’s also worth mentioning that because of this lack of patching, the 2007 Daylight Saving Time adjustment was never applied to his computer, though there are ways to manually patch this yourself on such systems.

But beyond that, let’s talk about the claim that it runs at the same performance level. That is a true statement, although perhaps a bit misleading. Do you remember having to wait for Windows XP to boot? I sure do. Although, thinking back, XP made a lot of waves because it booted much faster than the other operating systems of its day. Windows 10, by contrast, boots almost instantly, and that is what end users expect these days; my iPhone is instant-on. The concept of having to wait befuddles us nowadays. So by today’s standards the computer is slloooooowwwww. But that is only by modern comparison: it works just as fast as it always has. After all, the processor is still ticking away at the same speed, and the software hasn’t changed at all.

The biggest reason it isn’t a problem for him is that he has no point of comparison. He is retired, the computer works the way it always has. He hasn’t worked on more modern, faster computers.

It’s also probably a mindset. My parents have hundreds of VHS movies. Sure, they also have DVD and the latest Blu-ray discs, mostly because it’s virtually impossible not to buy a Blu-ray player these days. So sure, they’ve got the latest and greatest, and the quality is better than VHS (though who knows how well their aging eyes can tell the difference). But why throw out thousands of dollars’ worth of working (inferior) VHS movies and buy the same movies again in higher quality, when at the end of the day it is the exact same movie, story, actors, and lines? And most of those movies were filmed with the inferior camera equipment of their day. Is there really a big difference watching Gone with the Wind on Blu-ray when it was captured with 70-year-old, non-digital camera technology?

In the end it’s a bit of a philosophical discussion. Perhaps.

But what’s the takeaway from this article, if any? I would propose a few points:

  • Purchasing: realize that enterprise gear is often worth it even for personal use; while it can be marginally more expensive, it can last far longer. I think his tower cost under $500.
  • Security: Consider how in every environment security and performance can be improved by mitigating threat vectors. Remember that patch management is one tool we have to address threats and isn’t a panacea into itself.
  • Performance: Performance is relative and subjective. Each use case is different; purchasing or upgrading in blanket terms is wasteful. Each user, department, or situation can be different and unique; address them as such.


Installing Vagrant

In this series, I’ll demonstrate some of the web development tools I use. Today we’ll cover Vagrant — a virtual environment management tool.

Vagrant is a tool for building and managing virtual machine environments in a single workflow. With an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup time, increases production parity, and makes the “works on my machine” excuse a relic of the past.

If you are a developer, Vagrant will isolate dependencies and their configuration within a single disposable, consistent environment, without sacrificing any of the tools you are used to working with (editors, browsers, debuggers, etc.). Once you or someone else creates a single Vagrantfile, you just need to vagrant up and everything is installed and configured for you to work. Other members of your team create their development environments from the same configuration, so whether you are working on Linux, Mac OS X, or Windows, all your team members are running code in the same environment, against the same dependencies, all configured the same way. Say goodbye to “works on my machine” bugs.

 

Let’s get into how to set up Vagrant by HashiCorp:

  1. First, make sure you have your virtualization software installed. For this example, we’re running Oracle’s VirtualBox as it’s an excellent and easy to use open source option. See my VirtualBox Installation Guide here.
  2. Find the appropriate package for your system and download it.
  3. Run the installer for your system. The installer will automatically add vagrant to your system path so that it is available in terminals.
  4. Verify that it is installed by running the command vagrant from the command line – it should run without error, and simply output the options available. If you receive an error, please try logging out and logging back into your system (this is particularly necessary sometimes for Windows).

That’s it, you’re all set. Now go ahead and take a look at my Introduction to ScotchBox, which is a great web development option that uses a Vagrant box.
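
If you’d like a quick sanity check before moving on, here is a minimal example Vagrantfile; the box name and private IP are common illustrative defaults, not requirements of Vagrant itself.

    # Minimal example Vagrantfile: one Ubuntu VM on a private IP.
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/bionic64"
      config.vm.network "private_network", ip: "192.168.33.10"

      config.vm.provider "virtualbox" do |vb|
        vb.memory = "1024"   # RAM for the guest, in MB
      end
    end

Save it in an empty directory, run vagrant up to create the VM, vagrant ssh to get a shell inside it, and vagrant destroy when you’re done.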

 

Footnote: It’s also worth mentioning that Docker has recently gained a lot of attention, and for some web developers it’s a great option. I’ve only looked into it a bit, and will probably create a series on that tool later this year.

 

Version Disclosure: This document was written when the current version of Vagrant was 2.2.4 and VirtualBox was 6.0.4; different versions might behave slightly differently.

Installing VirtualBox

In this series, I’ll demonstrate some of the web development tools I use. Today we’ll cover VirtualBox — an Open Source Virtualization product for your local machine.

Oracle VM VirtualBox (formerly Sun VirtualBox, Sun xVM VirtualBox, and Innotek VirtualBox) is a free and open-source hosted hypervisor for x86 computers and is under development by Oracle Corporation. VirtualBox may be installed on a number of host operating systems, including Linux, macOS, Windows, Solaris, and OpenSolaris. There are also ports to FreeBSD and Genode.  It supports the creation and management of guest virtual machines running versions and derivations of Windows, Linux, BSD, OS/2, Solaris, Haiku, OSx86 and others, and limited virtualization of macOS guests on Apple hardware.

In general, our application for web development is to emulate our production web server environment which is often a LAMP or WIMP stack. For our examples in this series, we’re going to look at the most popular, the LAMP stack (Linux, Apache, MySQL, and PHP).

 

The installation and setup of VirtualBox are very simple:

  1. Verify that you have a supported host operating system – that is, the desktop operating system that you’re on right now. https://www.virtualbox.org/manual/UserManual.html#hostossupport
  2. Navigate to https://www.virtualbox.org/wiki/Downloads and download the version that is right for your host operating system.
  3. Host OS Specific Steps:
    1. For Windows installations double-click on the downloaded executable file. Select either all or partial component installation – for web development make sure the network components are also selected — USB and Python support is optional
    2. For Mac installations double-click on the downloaded dmg file. And follow the prompts.
    3. For Linux – see this link: https://www.virtualbox.org/manual/UserManual.html#install-linux-host

For most people that is just about it, you’re installed and all set with VirtualBox. The next step for most web developers will be to install Vagrant, which makes managing virtual images super easy!

 

In some situations, your host machine’s BIOS settings need to be changed because the manufacturer has turned off the required settings by default. You don’t need to worry about this unless you get an error when trying to start a virtual machine. You might see a message like:

  • VT-x/AMD-V hardware acceleration is not available on your system
  • This host supports Intel VT-x, but Intel VT-x is disabled
  • The processor on this computer is not compatible with Hyper-V

This issue can occur regardless of the virtualization technology you use (VMware, XenServer, Hyper-V, etc.).

How to configure Intel-VT or AMD-V:

  1. Reboot the computer and open the system’s BIOS menu. Depending on the manufacturer, this is done by pressing the delete key, the F1 key or the F2 key.
  2. Open the Processor submenu (it may also be listed under CPU, Chipset, or Configuration).
  3. Enable Virtualization, Intel VT, or AMD-V. You may also see options named Virtualization Extensions, Vanderpool, Intel VT-d, or AMD IOMMU, if available.
  4. Select Save & Exit.

You should be now all set, reboot into your host operating system and try again.

Version Disclosure: This document was written when the current version of VirtualBox was 6.0.4; different versions might behave slightly differently.

 

Scotch Box – Dead simple Web Development

In this series, I’ll demonstrate some of the web development tools I use. Today we’ll cover Scotch Box — a virtual development environment for your local machine.

Many people begin development by working directly on live, production web servers. Sometimes they’ll work in a sub-directory or a different URL. However, there are several drawbacks to this approach.

  1. Performance: Every update of your files requires them to be sent over the internet, and equally your tests also need to come back over the internet. While each of these is probably only an extra second of latency for each file, it can quickly add up over the lifetime of development.
  2. Security: Let’s face it, development code isn’t the most secure out of the gate. I was recently developing a custom framework and, in the process of writing the code for displaying images, introduced a bug which would dump any file to the browser, even PHP code or environment variables.
  3. Debugging: Debugging tools such as Xdebug shouldn’t be installed on production servers as it can accidentally expose sensitive data.
  4. Connectivity: You must be connected to the internet to develop: no internet connection, no development.

So for most of my projects I develop first on my laptop. But instead of installing a full LAMP stack on my desktop (where I’d have a database and web server running full-time in the background), I use a virtual machine through Oracle’s free VirtualBox hypervisor. And instead of having one virtual machine host multiple projects, which might have different development needs (specific PHP versions, databases, etc.), I spin up a new virtual instance for each project. This is made super easy through a tool called Vagrant. As they say:

Development Environments Made Easy

This post assumes you already have both Oracle’s VirtualBox and Vagrant installed on your local machine.

My favorite development stack is Scotch Box — perhaps this is because I love scotch, but more likely because it’s (in their own words): THE PERFECT AND DEAD SIMPLE LAMP/LEMP STACK FOR LOCAL DEVELOPMENT

It’s three simple command line entries and you get access to:

  • Ubuntu 16.04.2 LTS (Xenial Xerus) OS
  • Apache Web Server
  • PHP v7.0
  • Databases: MySQL, PostgreSQL, MongoDB, SQLite
  • NoSQL/Cache: Memcached, Redis
  • Local Email Testing: MailHog
  • Python v2.7
  • Node.js
  • Go
  • Ruby
  • Vim
  • Git
  • Beanstalkd
  • And much more.

Within PHP it includes tools like Composer, PHPUnit, and WP-CLI. Also, since this is designed for development, PHP errors are turned on by default. It works with most frameworks out of the box, with the exception of Laravel, which needs just a bit of tweaking. All major CMSs are supported, like WordPress, Drupal, and Joomla.

And if you want access to more updated versions, such as PHP 7.2 or Ubuntu 17.10.x, you can pay just $15 for their pro version which comes with so much more!

So how do you install it?

  • From the command line, go to your desired root directory, such as Documents
  • git clone https://github.com/scotchio/scotchbox myproject
  • cd myproject
  • vagrant up (learn how to install Vagrant)

You can replace “myproject” with whatever you want to name this specific development project.

After you run “vagrant up” it will take several minutes to download everything from the internet. Then you’ll be all set. You can browse to http://192.168.33.10/

For shell access, SSH to 127.0.0.1:2222 with the username vagrant and password vagrant (or simply run vagrant ssh from the project directory).

You’re all set.

Configuring a basic Road Warrior OpenVPN Virtual Private Network Tunnel

If you’re a road warrior like me, you’re often accessing the internet from insecure hotspots. All traffic that traverses an open wireless connection is subject to inspection, and even on untrusted secured wireless networks your activity is subject to monitoring by whoever provides the internet (trusted or otherwise), as well as by ISPs, etc.

To help keep what you’re doing private, I suggest always using a secure VPN tunnel for all your roaming activity. This guide will show you how to set up your own VPN tunnel using Linode for only $5 per month! That’s right: why pay a third-party company more for your privacy when you can get unlimited usage for yourself and whoever else you decide to give access?

Now, to be clear up front, the purpose of this setup is to provide secure tunneling when you’re on the road on untrusted networks such as hotels or coffee shops. One of the reasons people use VPNs is general internet privacy, which this setup will NOT provide. It does, however, allow you to appear to be connecting to the internet from another geographic location. Linode has 8 datacenters spanning the US, Europe, and Asia Pacific, so you can configure it to appear that you’re connecting from a different location than where you’re actually located. There are other benefits as well, such as giving you a fixed WAN IP address; when you’re configuring security for your services, you can then lock down access to that single remote IP. Think of only allowing remote connections to your server/services/etc. from a single IP address. That provides much stronger security than just leaving remote access open.

 

Let’s get started with the configuration:

This post is going to assume you already have a basic Linode set up. Here is how to install the OpenVPN server in a very simple way; in fact, these instructions will work with any Ubuntu Linux server. Leave a comment if you’d like a full setup guide and I’ll throw it together for you.

  1. Remotely connect to your server (such as SSH)
  2. Login as root (or someone with sudo rights)
  3. Run the following from the command prompt: wget https://git.io/vpn -O openvpn-install.sh && bash openvpn-install.sh
  4. When prompted I suggest the following configuration:
    1. UDP (default)
    2. Port 1194 (default)
    3. DNS of 1.1.1.1 (see this link for more info)
    4. Enter a name for your first client – this must be unique for each client. For example, I’ll call my first one Laptop
  5. The file is now available at /root/ under the filename equal to the client name you specified in step 4.4 — in our example /root/Laptop.ovpn
  6. Download that file to your local computer using the transfer method best for your system:
    1. Linux/macOS: use scp (see the example after this list)
    2. Windows: use WinSCP
  7. You’ll want to download the OpenVPN client from https://openvpn.net/community-downloads/
  8. Install the Laptop.ovpn file you downloaded into the OpenVPN client – on Windows, right-click on the systray icon and choose Import – From File, then select the Laptop.ovpn file you copied from the server. After you choose the file it might take a minute or so, and you should see a notice that the file was imported successfully. Check the systray icon again and you’ll now see the server’s WAN IP address listed. Simply click that IP address, then Connect, and you’re all set.
    1. The first time you initiate a connection you may be prompted to trust this unverified connection, this is because you’re using a self-signed certificate. For basic road warriors, this is sufficient. If you’re a corporate IT department, you might want to consider using your own certificate, either trusted or enterprise certs.

You can simply repeat steps 1-3 above, and this time you’ll only be prompted for the new client name. Do this for every device and/or user that needs to remotely access this server. For me, I use a separate key for my laptop, phone, and tablet; if they’ll be connected at the same time, they need separate keys. You can also run through the same steps to revoke certificates, so you want to name them something logical, such as myAndroid, kidsiPhone, wifesLaptop, etc.

Enjoy!
