05 Aug

Remote Desktop Protocol (RDP)

Remote Desktop Protocol (RDP) is a proprietary graphical remote desktop protocol developed by Microsoft. It is the default solution for connecting to remote Windows servers and has been packaged with every version of Windows since XP. Now on its 8th version, RDP has clients available for most popular operating systems including iOS, Android, macOS, Windows, and Linux. Though designed to work with Windows Server, RDP has become a popular choice for connecting not just to remote PCs, but to Mac and Linux servers as well.

Capabilities:

Remote Desktop Protocol is designed to allow users to connect to a computer desktop remotely and use that computer as if it were right in front of them. The remote session has access to the peripherals plugged into your local machine, just as a normal desktop session would. These peripherals can include things like:

  • Mouse and keyboard
  • Monitors – including multi-monitor setups
  • Headphones and speakers (Audio In/Out)
  • Printers
  • Storage devices

Security Concerns

Remote Desktop Protocol uses the well-known port 3389, which is a common target for attackers looking to infect computers with malware and ransomware. According to an article published by cybersecurity firm McAfee, RDP was the most common attack mechanism used to spread ransomware in Q1 2019.

For this reason, experts generally warn against exposing it to the Internet despite its encryption capabilities. Instead, it is suggested to use RDP over a private network or VPN.

Despite the security issues with port 3389, RDP does come with encryption enabled. Even peripheral signals such as mouse and keyboard input are sent encrypted, which, although it adds some input lag, helps secure sensitive input such as passwords.

Here are a few suggestions to stay safe using RDP:

  • Make sure encryption is enabled: Encryption can be turned on or off, so make sure it is enabled on your RDP session.
  • Use complex passwords: Choose passwords that will be hard to crack. RDP requires knowing a computer’s password.
  • Use over a private network or VPN: RDP should be used over a private network, especially for enterprise use. Be cognizant of which networks you are accessing your computer from. If you must access it remotely, make sure you are going through a Virtual Private Network.
  • Manage Firewall Settings: In your firewall, you can limit RDP sessions to known IPs to make sure unauthorized users are not given access (see the example rule below).
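As a rough illustration, this is what limiting port 3389 to a known IP might look like on a Linux machine running xrdp with the ufw firewall (the IP address 203.0.113.5 is only a placeholder for your own trusted address):

# Allow RDP connections only from a trusted address, then block the port for everyone else
sudo ufw allow from 203.0.113.5 to any port 3389 proto tcp
sudo ufw deny 3389/tcp
sudo ufw enable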

Open Source Forks

RDP is available on almost all platforms thanks to the open-source community and their efforts creating RDP software designed to work with operating systems like Linux.

The two leading open-source RDP projects are FreeRDP and xrdp. xrdp can be installed on most Linux distributions as an RDP server and then accessed through either Microsoft’s own Remote Desktop client on Windows or through open-source clients like rdesktop and FreeRDP on Linux.
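As a quick sketch, installing and starting the xrdp server on a Debian-based distribution such as Ubuntu typically looks like this (package and service names assume a Debian-based system):

# Install the xrdp server and start it now and on every boot
sudo apt-get install xrdp
sudo systemctl enable --now xrdp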

Using RDP

On Windows:

  1. Open the Remote Desktop Connection application
  2. Click on the Options link if you want to customize settings
  3. Input the IP address of the desktop you are trying to connect to
  4. Input your username and password

On Linux:

  1. Install a client. Popular options include rdesktop, Remmina, and xfreerdp
  2. Run your chosen RDP client
  3. Input the IP address, username, and password (see the example command below)
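For example, with the xfreerdp client the connection might look like this (the address and credentials are placeholders):

# Connect to the remote desktop at 192.0.2.10 with the given username and password
xfreerdp /v:192.0.2.10 /u:myusername /p:mypassword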

On Mac:

  1. Download the Microsoft Remote Desktop client from the App Store
  2. Click “New” to configure your connection
  3. Type the IP address, username, and password
  4. Click “Start”
29 Jul

Understanding Git

What is Git?

Git is the most widely used Version Control System in the world. It was developed by Linus Torvalds in 2005 as a way to address the challenges of remote development projects and the multiple conflicting updates that can occur during the development cycle when developers are not working closely together.

A Distributed Version Control System is a platform that allows developers to work on programs remotely as their own separate versions, then merge their changes into one master file once the developer feels changes are ready. In Git projects, there is a master project that a team is trying to develop and improve. But instead of everyone working on the master project itself, each developer can download a copy, make changes in a completely separate environment from other developers, then merge those changes into the master version which can then be globally updated for all developers to download from.

When developers work on these separate versions of an application (known as branches), those changes are not going to be seen by other developers until they are integrated into the master file. Each developer has their own version and their own repository of changes. This is useful because if a developer feels they need to make revisions or restore an old version, they are free to do so.

By working on a separate version of an application, developers aren’t disturbed by changes from another developer. It also allows developers to test their specific changes in a program to see if issues arise or they want to make corrections before integrating those changes for everyone else.
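As a minimal sketch of that branch workflow (the branch and file names here are just examples), the Git commands look like this:

# Create and switch to a separate branch for your own changes
git checkout -b my-feature
# ...edit files, then stage and commit the changes...
git add app.py
git commit -m "Describe the change"
# Switch back to master and integrate the branch when it is ready
git checkout master
git merge my-feature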

Example of Version Control

How Version Control Branches Work

In this illustration, there is an app that is going through 3 stages of development and being worked on by 2 developers.

John (developer 1) downloads App version 1 and makes several changes to it. He doesn’t disturb Dave (developer 2) who is also working on the app because both have separate version repositories after they’ve downloaded from the master project.

John makes several changes to the app and then reverts 1 step because he made an error before resubmitting his changes to the master app. John can do this with ease because Git backs up John’s app versions along the way. He ends up submitting version 3 from his repository instead of version 4.
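In Git terms, John’s step back could be done with a command like one of these (a sketch using standard Git commands, not necessarily how the illustration was produced):

# Undo the most recent commit by creating a new commit that reverses it
git revert HEAD
# Or discard the most recent commit entirely and return to the previous version
git reset --hard HEAD~1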

Dave only needs to make 2 changes. He works undisturbed then submits his changes to the master project.

After both developers have submitted their changes, the master project is updated to App version 2.

Both developers are now working from a new version (App V2) where they are both up to date on each other’s work.

They review their changes. Dave finds he doesn’t need to make any more changes, so he submits App V2 as the final version. John, on the other hand, realizes there is an error and fixes it in his copy of App V2. He then submits the changes to the final version.

Because both developers were using Git, John was able to submit his change to the final version and Git recognized Dave didn’t submit any conflicting changes and so the change was implemented in the final project.

Advantages of Using Git:

Scalability: Git is a powerful tool that allows multiple developers to work on applications remotely. Git is relied on by everyone from small development teams to large, multinational tech corporations.

Built-in Backup: Git is built with recovery in mind. Versions are saved all along the way and repositories are separate, so if a developer wants to return to an old version they can.

Centralized Development: Git projects can be enforced by an admin so that no one is pushing changes to the master file before approval. This helps foster organization and managed collaboration.

Genuine Details: Details like the commit time, commit message, and anything else stamped by Git are included in the commit ID and cannot be changed without changing that ID itself. This means details are authentic.

Speed: most operations in Git are performed on the local machine that the developer is working on. This means there is no latency or performance drop that sometimes comes from communicating with central servers.

Open Source: Git was developed by Linus Torvalds – the creator of the Linux kernel. Git is open source so the community can contribute and modify the program as they wish.

Free: Git is free, so developers and teams of any size can build projects affordably.

Optimized for Linux: Git was built by Linus Torvalds, who developed the Linux kernel. Git was designed to be a tool optimized for Linux development environments.

What is GitHub?

Many people in the technology community are more familiar with Github than with Git itself. And you might be wondering what the relationship between the two is.

GitHub and Git are separate entities. Git is the tool that provides Distributed Version Control, while GitHub is a company offering an online development community where users can publish Git projects along with other development projects.

GitHub, recently purchased by Microsoft in 2018, is a hub targeted at the developer community. It allows developers to share code, projects, collaborate on forums, ask questions, find featured tools and software, learn about open source guidelines, and more.

For example, when clicking the “Android” topic, users can find anything from designers asking questions about Google’s Material Design, to resources that others have found helpful for general Android programming knowledge, to curated collections of software and interesting projects.

Just like Git, Github is built on the premise of community and collaboration. It is designed for developers to have a forum to discuss, collaborate, and share.

What’s The Connection Between Github and Git?

While GitHub is more than just Git projects, it is still designed to work seamlessly with Git. Git projects are typically published and stored on GitHub as a way to centralize the project. From there, developers can “pull” a project down from GitHub and begin working on their own branch. When they are ready, it can then be “pushed” back up to GitHub and integrated into a downloadable master file.
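In practice, that pull/push cycle against GitHub looks roughly like this (the repository URL and branch name are placeholders):

# Download the project from GitHub and get the latest master
git clone https://github.com/example/project.git
cd project
git pull origin master
# ...make and commit changes on your own branch, then push it back up...
git push origin my-feature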

While many users of GitHub share their projects with the public, GitHub offers the choice to create private repositories that act much like a storage locker for projects. This is a great system for remote developers looking to collaborate, as GitHub acts as a safe space to store a development project. Private repositories are also heavily used by large corporations looking for a way to unite their remote workforce.

Summary and Final Thoughts

For many developers and teams worldwide, Git is an irreplaceable tool and the de facto standard when collaborating on large development projects with many distributed versions. The fact that Git is free and open source means that teams of all sizes can take advantage of it, which allows for coordinated, remote development.

Git is scalable, recoverable, and centralizes workflows between developers, making it an essential tool for the modern, remote, cloud-computing-dominated world. Developers can work on their own machines for speed, then push their creations to repositories in the cloud through services like GitHub or private cloud servers for internal organizational use. Git allows for seamless development on the Linux platform but is available on all platforms, giving it unparalleled flexibility.

If developers decide they want to collaborate openly on projects, they turn to a platform like Github where they can share their development, collaborate, ask questions or find resources through the many forums and curated lists by Github users.

If you want to try Git yourself, the Git website has details on how to install Git. There is also an optional GUI desktop program to install.
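On a Debian-based Linux distribution, for example, installing Git and setting your identity typically looks like this (the name and email are placeholders):

# Install Git, set your commit identity, and confirm the installation
sudo apt-get install git
git config --global user.name "Your Name"
git config --global user.email "you@example.com"
git --version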

22 Jul

Installing SSH on Linux and Generating Public and Private Keys

What is SSH?

SSH stands for Secure Shell and it is a way for systems to communicate with each other by sending encrypted data over a network. SSH is one of the most secure ways to send and receive data and is the method relied on by the data centers that manage tons of information and keep the Internet as we know it running. It is also relied on by many, including developers and systems admins, because it is a reliable way to access servers. Combined with SSH keys, it can even eliminate the need for passwords when remotely connecting to servers.

One reason SSH is relied on is that it is universal, working on Linux, Mac, and Windows. It can be controlled through a command prompt or through GUI-based programs such as the popular PuTTY client for Windows.

One of the first steps for setting up a remote server, whether it be through a provider like ZebraHost or a private on-premises cloud, is to enable SSH access and to generate a public and private key so that two systems can send encrypted information remotely.

SSH works by generating two keys. One public key and one private key. The private key only lives on the machine you want to SSH into your server from. The public key can be copied to any server or machine you want to access remotely. This is great for security because regardless of who has your public key, as long as only you have your private key, only you can log in with it. This even opens up opportunities like being able to securely access your server as a root user or without a password.

Install SSH Access on Linux

The first step to enabling SSH access is to install Secure Shell. We will be doing this via OpenSSH. To do this, type:

sudo apt-get install openssh-server

After pressing Enter you will be asked whether you want to continue (Y/n). Type Y.

Once OpenSSH is installed you will be ready to use basic SSH to access your machine remotely. All you have to do is type your username, IP, and password and you will be able to log into your machine remotely (see the example below). However, this is considered the least secure way to access your machine, as anyone who knows your credentials can access it via SSH.
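That basic password login looks like this (the username and IP address are placeholders):

# Log in over SSH with a username and the server's IP address; you will be prompted for the password
ssh myusername@192.0.2.10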

To make your machine more secure, you will need to generate a public and private SSH key. The advantage of using public/private keys is that because the two keys must “talk” to each other, even if someone learns what your credentials are or sees your public key, without knowing the private key there is no way they can access your server.

How to Generate a Public and Private Key on Linux

Think of a public key and a private key as a lock and key. The lock can go on any door you want, but without the key it’s inaccessible. The public key lives on any server you want to access while your private key lives on the machine that you use to access the server. This allows you to have control over your server or, if you want, multiple servers.

On your host Linux machine, you will start by generating a public and private key using either of the commands below:

Type either:

ssh-keygen

or

ssh-keygen -t rsa

Then, as shown below, you will have options such as setting up a passphrase to protect your keys. In this example, we will simply press Enter to skip entering a passphrase and go straight to generating our public and private key.

Generating SSH keys using the ssh-keygen command

Now that the keys are generated, you will want to copy your public key over to your server while your private key remains on the host machine.

To do this, first go into your .ssh directory by using:

cd .ssh

Then, type:

ssh-copy-id username@serverIP

If done correctly, you will see that 1 key has been added to the server. This is the public key that will allow you to access your server via public/private key access.

Successfully copying over the public key in Ubuntu

Try logging into your server. If your SSH keys have been generated and placed successfully, you will no longer need your password. Rather, Ubuntu will use your private key generated on your host machine to access your server.

Successful SSH login from Ubuntu

The above screenshot is an example of what your machine should look like if you have properly SSH’d into your machine using keys. Notice that after I typed ssh sshtest@192.168.56.104 I was not asked for a password and my server simply said Welcome to Ubuntu 19.10

Generating SSH Keys on Windows

Because the ability to generate public and private keys is not fully baked into Windows, we will be generating our keys via a program called PuTTY. PuTTY is a client that allows users to log into their servers via SSH.

PuTTY is GUI-based and as such will generate public/private keys using a different method.

To start, download PuTTY.

Testing SSH Without SSH Keys

Once PuTTY is installed on your system, you will open the client and first test to make sure you can SSH into your server without keys.

To do this, simply type either your server IP address or username@server address into the “Host Name (or IP address)” bar towards the top of the PuTTY program.

PuTTY session page
Where to put your private key in PuTTY

Once you click Open (or press Enter), you will be asked either which user you would like to log in as or the password for that user.

PuTTY SSH requiring username
PuTTY successful SSH login

Above is what a successful PuTTY SSH login will look like without using SSH keys.

Generating SSH Keys via PuTTY

Now that you know your server is accessible from your host machine through normal SSH, you can begin to generate your SSH keys.

To do this, exit your session if it is active, then search for PuTTY Key Generator (PuTTYgen) in your Start menu. Open this program and click “Generate” with the parameters set to RSA.

Generating PuTTY SSH Keys

Once your key is generated, save both the public and private keys to a secure folder where no one else can see them (especially the private key).

Go back to your regular PuTTY program and type your IP address into the “Host Name (or IP address)” bar. Then, without logging into your server, go to the Category window, scroll down to SSH, click “Auth”, and browse for the private key you generated.

PuTTY Authentication Settings

Once that is set, SSH into your server like normal. You will still need your password this time because you still need to upload your public key to the server.

Once you’ve logged into your server, type:

cd .ssh

This will bring you to the .ssh directory you need

Then type:

nano authorized_keys

Then paste the public key that PuTTY generated on a new line. See below for which key to copy.

Copy the public key generated in the box on PuTTY

Press CTRL+X, then Y, then Enter to save and exit

You will now have the proper public key loaded onto your server

Setting Up One-Click Login Parameters for PuTTY

In order to set up one-click login using your public/private keys, you will need to set up a session profile in PuTTY.

First, type your IP in the “Host Name (or IP address)” bar.

Then go to the Category window. Connection -> Data -> Auto-login username.

Enter your username so that you don’t need to enter your username each time you log in to your server.

Then, Category -> Connection -> SSH -> + -> Auth -> Private key file for authentication.

Click Browse and locate your private key file generated earlier.

Return to “Session” in the Category window.

Enter a name for your Session under Saved Sessions and then click Save.

Now, you can double click your session name or click the session and the load button. With SSH access enabled you will be able to log into your server with just your public/private key combination with no password or username needed.

Below is an example of a successful login with SSH keys:

Successful SSH Login with PuTTY Using SSH Keys

15 Jul

Edge Network vs CDN – and Why They Can Help Your Business

The Internet and networking technology are becoming increasingly critical for business growth. More people are shopping and working online than ever before. Companies are now using the Internet as the primary outlet for increasing brand recognition, reaching customers, and converting sales.

Although it seems like more users online should equal more sales opportunities, there is a problem. The Internet is brutally competitive. And users are demanding that data load faster.

If your company can’t produce a webpage or load an image in milliseconds, users will simply go elsewhere. And there’s little chance they are going to return to your site.

With rising globalization and increasingly connected users, data needs to travel farther and faster than it ever has before in order to reach users. Many companies, like ZebraHost, have users across the globe. Those users need to be served data at equally fast speeds or ZebraHost risks losing its global competitive edge.

The pain point of needing to deliver content to users faster has led to the rise of Edge computing. Under the umbrella of Edge computing, two solutions have emerged: Edge Networks and Content Delivery Networks. Content Delivery Networks and Edge Networks are both designed to bring content to users faster. Although the two terms are often used interchangeably, there are some differences you should be aware of.

Content Delivery Network (CDN)

A Content Delivery Network (CDN) is a network of servers located strategically close to users. The primary purpose of these servers is to cache static content like images, videos, etc. to reduce the distance that data needs to travel. With less distance to travel, data reaches users faster.

Have you ever tried to request content from a server located in a foreign country or continent? If you live in North America and have ever tried to get content from Asia, you might have noticed it loads slowly. It has nothing to do with either your internet speed or theirs. It’s simply because of latency due to the distance that data needs to travel.

Because content must travel physical distance over copper or fiber wires, the farther a user is from the server hosting the content, the longer it takes to receive it. The user will have to request the content from the origin server (where the data is originally hosted). That server will then realize a user far away wants to access it then will push the content out to that specific user. All this takes time. The amount of time it takes to request then render a data request is known as latency.

Latency is reduced by taking commonly accessed content, like images or a website homepage, and storing it on the CDN. The CDN stores the content by caching it closer to users. For example, if I want ZebraHost’s homepage to be accessed by users in Kansas City, Sydney, Australia, and Cape Town, South Africa, but my servers are only located in Altoona, Iowa, I might find a CDN beneficial. I can cache my homepage on a CDN that has servers in those locations. Then, instead of users having to download my homepage from Altoona, Iowa, they can download it from the CDN server closest to them.

Naturally, a CDN can’t place a server equally close to every single user. So, in order to deliver data effectively, the CDN places servers at strategic points such as high-traffic data centers where most internet activity in a region is accessed from. A CDN is not just about the number of servers but the strategic placement of those servers to minimize latency.
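As a side note, a site is usually put behind a CDN through DNS, with the site’s hostname pointing at the CDN provider, which then routes each user to the nearest cache. One rough way to see this is to inspect a site’s DNS records (the hostname below is only an example):

# Show where a hostname resolves; a CNAME pointing at a CDN provider's domain suggests the site is fronted by a CDN
dig +short www.example.com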

Edge Network

An Edge Network takes things a step further by placing servers not just in regionally dispersed data centers but at high-volume Internet Exchange Points (IXPs). These Internet Exchange Points are where ISPs connect with each other to exchange traffic.

An Edge Network provider will place Point of Presence (PoP) servers at Internet exchange points to further minimize latency by going through the network most direct to the user.

Typically, data must travel over the cables of multiple network providers when traveling long distances. Because an Edge Network caches information at Internet Exchange Points, a user might only have to connect to the nearest exchange point to receive data rather than have it travel over many networks. This can speed up data delivery dramatically, especially for companies with internationally dispersed users.

This might make an Edge Network just sound like a fancy CDN because that’s what it is. An Edge Network is a CDN that specifically places servers at Internet Exchange Points to bring data closer to the network Edge. The network edge is the closest that a CDN can get to users. While an Edge Network is a CDN, a CDN is not an Edge Network. A CDN simply refers to a content delivery solution that caches information in geographically dispersed data centers. The CDN may place their servers in IXPs or they may not.

Edge Computing

CDNs and Edge Networks fall under the umbrella of the Edge Computing revolution. But Edge Computing and Edge Networks are not the same thing. Edge Computing is an umbrella term that refers to the trend of bringing data as close to users as possible. And Edge Computing can encompass many things. Most commonly, Edge devices are devices that enable data to be brought closer to users. Common examples are Internet routers, connected vehicles, smartphones, etc. Anything that can act as an entrance to the network has the potential to be an Edge device. But being able to store and access information at these Edge devices is what allows them to be a part of Edge Computing and work towards reducing latency for users.

A common example of Edge Computing is found in electric vehicles. Electric vehicles are highly connected with various electronics that can store, cache, and transmit data. Technology companies can cache information in these vehicles to deliver it faster to users. They can also pull data from these vehicles to understand driving habits, safety data, and more.

The overall goal of Edge computing is the same as CDNs and Edge Networks in that it is designed to deliver information to users quickly and decrease latency.

Free vs Paid CDNs

There are numerous paid CDN services available that make CDN management easy. Examples include KeyCDN, Akamai, CloudFront, etc. But there are also free CDNs with more limited features, the most popular of which is Cloudflare.

Free CDNs typically require more manual configuration. A paid CDN will have a support team that can guide you as you set up your CDN, as well as help you if anything goes wrong. A free CDN will typically let you use the CDN network, but there is unlikely to be any kind of personalized support as part of the free plan.

Free CDNs will also often use a ‘push’ method of delivering content. The push method means that users choose which content they want on their CDN and request that it be stored there. The plus is that users have control over the content being pushed and there isn’t a wait time for the CDN to cache the content once users request it. On the other hand, if content hasn’t been pushed to the CDN, it won’t be automatically added as frequently accessed content. This can prevent the CDN from working optimally from the user’s end.

A Paid CDN will typically have more premium support as well as extra features. For example, Cloudflare offers both a free CDN and paid CDN. The free plan offers access to their global CDN. But the paid tier offers the following features:

  • Better site protection
  • More frequent site crawls
  • Mobile optimization
  • DDoS protection

As well as offering extra features as Cloudflare does, paid CDNs are typically more automated, using the ‘pull’ method. With the pull method, users request content from the CDN, which is set as the default access point; since the CDN does not yet have the content, it pulls it from the origin server and then caches it across the CDN. The downside is that the first users to request something will have to wait longer because the information isn’t already cached. However, users afterward will be able to access the content more quickly once it’s cached. And because the CDN automatically pulls frequently requested content, it leads to a better user experience, as the CDN is optimized via user requests.

Main Advantages of CDNs and Edge Networks

CDNs and Edge Networks differ slightly, but the core advantages are very similar between the two:

Fast: CDNs and Edge Networks deliver content quicker to your users than if they must send a request to the origin server. This will give your users a better experience and increase the chances of them returning to your site.

Better Bandwidth: Because geographically dispersed users request content from the server closest to them, it means that there is less bandwidth being used on the origin server. CDNs cache content in multiple places helping balance bandwidth.

Load Balancing: Having information cached in dispersed servers that users can access other than the origin server lessens the stress on each server in your network.

Cost Savings: CDNs and Edge Networks typically end up saving businesses money. Data travels shorter distances so hosts pay less to ISPs. Although CDNs and Edge Networks in most cases cost money, this cost is often less than the amount saved.

Security: Because content is cached globally and the CDN or Edge Network acts as an access point to the network, DDoS and other network attacks are mitigated or at the very least affect fewer users.

Redundancy: Because content is stored in multiple places, if a server goes down, users can simply be redirected to the next closest server that has the content they requested cached. Content is also cached so if the origin server is down, users can still access the content.

Should You Use a CDN, an Edge Network, or Neither?

Most businesses have geographically dispersed users and will likely benefit from the addition of a CDN or Edge Network to their infrastructure. However, some businesses, mostly local businesses, will have users in only one area. For these businesses, a CDN might not be useful – yet.

In order to answer the question of which content delivery option is best for your business you must look at your user base.

Do you have a large, globally dispersed userbase? Use an Edge Network. Because an Edge Network is located at Internet Exchange Points, it helps data travel by having it traverse fewer ISP networks, thus bringing content closer to your users.

Do you have a geographically dispersed userbase in only a few main regions? If the features and price are right, a CDN is right for your business. If your users are only in a few key areas having your content stored in data centers close to your users will suffice.

Do you have users only in a local area? If you are a local business or only serve one region near your origin server, a CDN or Edge Network will not make any noticeable improvement for your business. Your data doesn’t have to travel far, so the extra cost or technical setup might not be worth it. On the other hand, you should keep Edge Networks and CDNs in mind if you plan to expand beyond your local area in the future.

Conclusion

There is no doubt that we will see a rise in the usage of CDNs and Edge networks in the future. As the Internet continues to grow and users demand content to be delivered faster, Edge Networks and CDNs will be an important part of most companies’ network infrastructure.

Though CDNs typically are not free, the cost is well worth it for many businesses for the benefits they provide like security, redundancy, and faster load times.

Businesses of any size should keep the advantages of a CDN or Edge Network in mind as they expand and need to potentially cover a larger customer base. Unless your business deals strictly with the local area, understanding how these content delivery methods function could help expand your business.

ZebraHost is now on a CDN. We’ve worked with a leading CDN to make our website faster by caching our homepage so it is delivered to users quickly. We chose to work with a CDN to add additional speed to make browsing our website faster and increase user satisfaction when researching our cloud solutions.

07 Jul

Happy Birthday ZebraHost!

It’s Our Birthday!

Twenty years ago today, on July 7th, 2000, ZebraHost was established by current CEO Clive Swanepoel. Over these last 20 years, we’ve continuously strived to bring our clients best-of-breed technology and exceptional customer service.

Speaking of our clients, we couldn’t have made it this far without you! Over these last 20 years, we’ve had the pleasure of working with you to host many of your wonderful and exciting ideas. These ideas have allowed us to become more creative, respond better to the needs of developers, and position ourselves as a premier provider of infrastructure solutions.

Because a lot has happened over 20 years, we wanted to take a moment to share the story of how we grew into the customer-focused boutique provider we are today.

History of ZebraHost

20 years of ZebraHost History

Our Founding

ZebraHost’s serial entrepreneur CEO Clive Swanepoel has always pioneered new connected technology. Back in South Africa, he was one of the first people to be granted access to the internet by the Council of Scientific Research. Seeking ways to connect South Africa, Clive founded one of South Africa’s first digital sign companies, Vivid Outdoor Media. Vivid Outdoor Media built some of the first digital signs that could be controlled remotely through wiring, which allowed for easy turnover of advertisements.

After Clive discovered that he was procuring most of the technology and material for Vivid Outdoor Media in the Midwestern United States, he decided to move to Des Moines, Iowa. Wondering how he could continue to deliver cutting-edge services, Clive looked towards his next tech venture. Upon witnessing the growing userbase of the internet, Clive realized that tech companies needed a service that would allow them to store massive amounts of data with little upfront expense. Capitalizing on this pain point, Clive founded ZebraHost – one of the first hosting providers.

ZebraHost was founded in 2000, right at the height of the dotcom bubble. The surplus of overvalued tech companies failing to deliver a profit eventually led to the greatest tech crash in US history. But the internet did not end with the crash. Companies that focused on solving a core pain point, delivered exceptional customer service, and grew at a sustainable rate continued to thrive.

These 3 factors were what Clive took away from surviving the dotcom crash. ZebraHost still to this day remains committed to growing sustainably without injections from venture capital groups, prioritizing exceptional customer service, and delivering result-oriented solutions. But as the internet continued to grow, there was a demand for more granular solutions.

The Expansion of Our Product Lineup

Although ZebraHost now owns its own hardware and is co-located in industry-leading data centers around the globe, it got its start as a hosting reseller. Reselling is very common in the cloud infrastructure industry, with many smaller providers today still opting to rebrand existing hosting services as their own. Common resale providers include Rackspace and IBM.

ZebraHost spent several years as a shared hosting reseller. Shared hosting is a form of inexpensive cloud infrastructure hosting which has multiple tenants using the same hardware. But as security needs changed, ZebraHost began to offer more premium options like Linux and Windows dedicated servers. Dedicated servers offer enhanced security and flexibility over shared hosting because each server only has one tenant.

In 2007, ZebraHost wanted to bring a solution that combined value and security which led it to begin offering VPS hosting. VPS hosting uses virtualization to separate each tenant so that they can occupy the same server hardware but use resources separately. This creates a secure wall between tenants and reduces the hardware expense normally associated with dedicated servers.

Building a Global Presence

After building a successful US-based brand, ZebraHost continued to grow its data center presence outside of the United States. In 2008 ZebraHost began to offer UK-based data center solutions. This allowed ZebraHost to test the European market.

5 years later, ZebraHost expanded its presence not just in Europe, but in Asia and Australia as well. In 2013, ZebraHost expanded its data center offerings to include Amsterdam, Singapore and Melbourne, Australia. The result was that ZebraHost could now reach customers with strict data sovereignty laws, reduce latency for customers abroad, and build a truly globally connected network.

Purchasing our Own Hardware

While reseller hosting and renting infrastructure works for smaller providers who are trying to get a start, ZebraHost had long outgrown this phase and was looking for ways to reduce cost and grow more sustainably.

In 2018, ZebraHost started purchasing its own hardware in order to have more control over its hardware and reduce licensing costs. The result was unparalleled flexibility in market offerings allowing ZebraHost to bring bespoke solutions at unmatched value. Starting with Altoona, Iowa and Kansas City, Missouri, ZebraHost has continued to purchase hardware to meet growth needs.

As we continue to expand our footprint in data centers across the globe, we will be purchasing hardware that meets the performance, sovereignty, and security needs of our clients.

The Growth of our Leadership and Team

We wouldn’t be where we are without our exceptional team and leadership.

In 2019, Nate Battles joined Clive Swanepoel in leading the company as Managing Partner. The two constantly exchange ideas and visions for the future of ZebraHost while working personally with clients to make sure technical challenges are resolved.

Over the last couple of years, new additions have been made to the team to assist in sales, marketing, and systems administration. Together the team is focused on meeting existing and planning for future challenges while putting our clients first.

Where We’re Going

We aren’t just spending the day reminiscing about the last 20 years, we’re planning for the next 20! As ZebraHost has grown and changed so has the market for cloud services. There is increased demand for remote work services, enhanced security, compliance compatibility, and better data responsibility. And we’re planning to tackle all of those!

We will be spending these next 20 years revamping our current offerings to make sure they remain industry-leading while introducing new services that complement them. Stay tuned over the next few years to see the exciting changes we’re about to make!

Attribution

Image Template Attribution: Infographic vector created by pikisuperstar – www.freepik.com

01 Jul

What is the ‘Hybrid Cloud’?

What is the ‘Hybrid Cloud’ and Why are so Many Businesses Turning to it?

Try this,

Count how many digital services you rely on day to day. Now, count how many of those must connect to the internet and how many of those are subscription-based. After connecting to my CRM, email marketing manager, Adobe Creative Cloud for designing marketing material, and SEO tools, I realized my entire job relies on cloud-based services. I can’t be the only one either. It seems like all the services we rely on daily as professionals are moving towards the cloud. So, let’s try to understand it.

The cloud is mainly split between 3 different solutions: ‘Private’, ‘Public’, and ‘Hybrid’. But as more and more data is processed and stored in data centers, there is an increasingly hot debate amongst businesses about the pros and cons of using a private cloud versus a public cloud data strategy. And there is no clear winner. Both the public and private clouds have compromises, which cause many companies to also consider hybrid strategies.

The Private Cloud

Let’s start with the on-premises private cloud. The “on-prem” private cloud is a solution that is hosted on private hardware and typically accessed over a private network. Example: consider a desktop tower that contains all the data needed to run your business which is accessed over an internal intranet. Information can be accessed remotely, but that access is largely limited via different certifications and access levels. Although the private cloud is secure, scalability is challenging and private clouds often lack redundancy in case of hardware failure. Private clouds can also be immensely expensive as companies need to purchase their own hardware if they want to scale. That hardware must also be replaced regularly, leading to added maintenance and expense. This makes owned hardware hard to scale with company growth. This issue of scalability typically leads many businesses to use a public cloud.

The Public Cloud

The ‘Public Cloud’ refers to a cloud that is provided by a third party (like ZebraHost). With a public cloud, the cloud provider manages all the infrastructure and data that is hosted on their system. Some of the advantages include near-infinite scalability, lower costs for businesses due to not owning hardware, more flexible As a Service (aaS) payment structure, and security solutions backed by dedicated data experts and physical data centers.

Examples of other public cloud providers are big tech companies that store user data in their Software as a Service (SaaS) applications or Platform as a Service (PaaS) infrastructure. Example: Salesforce’s popular CRM platform. Salesforce is a platform for users to upload sales activity data. With a subscription to Salesforce, companies can store their sales data on the servers that Salesforce owns. Not only that, but Salesforce also acts as a platform for companies to implement custom APIs and or build their own customized platform on. All a company subscribed to Salesforce has to do is pay a monthly subscription fee. While paying a subscription, the company can customize Salesforce CRM to fit their needs. All backend data hosting and infrastructure is taken care of by Salesforce.

But again, there are cons to a public cloud.

For example, many do not like the idea of a 3rd party controlling all their data and having access to potentially sensitive information and trade secrets. Also, with nothing being hosted on-premises, there is no place to store the most sensitive data away from potentially prying eyes. This is especially important for industries like Healthcare IT and Finance which have regulatory requirements often disqualifying them from hosting data on a public cloud.

Because of the challenges presented by both the public and private cloud, more and more corporations are beginning to seek out a third option… the hybrid cloud.

The Hybrid Cloud

The hybrid cloud offers the best of both worlds. It offers a private cloud for storing base application infrastructure or extremely sensitive data while allowing near-infinite scalability, redundancy, and cost savings.

The hybrid cloud is an infrastructure strategy where a company will maintain its own private cloud as the core for its data while utilizing a public cloud to scale.

There are multiple ways to use a hybrid cloud strategy. Arguably the greatest strength of the hybrid cloud is its flexibility. Here’s a few ways companies can use the hybrid cloud strategy:

  • Storing the most sensitive data on a private cloud then storing more routine data on a public cloud
  • Storing core application infrastructure on a private cloud then building on it in the public cloud
  • Using a private cloud as the primary server then saving some data on the public cloud for redundancy.
  • Using the public cloud to augment a local private cloud for seasonal traffic surges.

The above are only a few common examples of ways companies might use a hybrid cloud, but they all boil down to a few key advantages:

Controlled Scalability – The hybrid cloud allows scalability, but at a level you decide is comfortable for your business. For example, your business might have HIPAA data that can’t be stored on a public cloud but might also have more routine, less sensitive data that would be more appropriate to store on a public cloud. You can keep the HIPAA data on your private cloud while saving cost by utilizing the public cloud for any other data which scales per your needs.

Cost Efficiency – Scalability feeds directly into the second main advantage of the hybrid cloud which is cost-efficiency. Because your business doesn’t have to purchase new equipment to host your remaining data and instead builds on an already established foundation, it’s much cheaper to outsource to a 3rd party. 3rd party providers use their own equipment and most charge a monthly Infrastructure as a Service (IaaS) fee. An IaaS fee will be more manageable than buying hardware outright for most businesses because hardware costs are split between multiple tenants using the cloud provider’s hardware. This means that for most businesses, paying a monthly fee is cheaper overall and splits costs over long periods of time. Some businesses prefer to have Operational Expenses over Capital Expenses for easier accounting.

Redundancy – Redundancy is generally poor for private cloud solutions. Having private hardware and no ability to transfer data to other hardware or another data center naturally means there is more risk involved with storing data. Both public and hybrid clouds offer more redundancy and, as a result, a safer environment for your business. If data is backed up to a public cloud as well as a private cloud, it’s accessible from two places. Even if separate data is kept between your public and private clouds, you still won’t lose everything if there should be a hardware failure. The hybrid cloud offers redundancy, and redundancy offers peace of mind.

Final thoughts:

Right now, there is no denying the cloud is going to continue to expand in importance to how we perform our daily tasks and how we run our businesses. The impressive scalability, redundancy, and cost savings from cloud services are too good to pass up in their entirety.

Many companies rushed to the cloud. Some are adopting cloud-first strategies that will see their entire business and data inhabiting the public cloud. But others are starting to adopt a more cautious approach, afraid of what could happen if they leave everything to a third party. It makes sense that businesses want to take an approach that allows them to hold onto the most critical data on-premises whilst scaling with a cloud provider as needed.

We will likely continue to see the hybrid cloud become an increasingly important part of how information and applications are stored and managed. But dedicated private clouds and public cloud solutions will also continue to be solutions with their own advantages.

24 Jun

The Importance of Hypervisors

Hypervisors are important. They are the main tool that helps your hosting provider manage all the virtual machines that connect you with YOUR sensitive data. At first, Hypervisors might seem like something only the hosting provider needs to worry about. But you should be aware of what features the hypervisor running your virtual machine has. Some hypervisors are very basic and just allow you to connect to your server. But others have features like backup snapshots that can save you time and money. Knowing what features a hypervisor has will give you more control over your server, and help you make sure you are getting a good value from your hosting provider.

A hypervisor is a layer of software that allows the hosting provider to create, provision, modify, and manage virtual machines (VMs). A Hypervisor provisions a set of real hardware and partitions it into isolated VM environments. The hosting provider can then add or remove resources, perform backups, and remotely control each virtual server with a few clicks. The hypervisor allows you or your hosting provider to access the virtual server through remote computing protocols. Common protocols are SSH, RDP, and VNC.

Hypervisors don’t just do basic hardware provisioning like picking how much RAM the server has or the CPU speed, but also let the hosting provider mix and match from a pool of hardware on the server rack. For example, when choosing storage, a hosting provider can choose which tier to provision. It could be an SSD tier or an array of spindle hard drives. ZebraHost currently provisions all new machines on SSD only. The more advanced the hypervisor, the more options involving hardware and network connectivity that can be customized.

Think of these hypervisors as ‘virtual machine management software’. The hypervisor is a layer that either runs directly on the server hardware itself (known as bare metal) or as a separate window on top of a host operating system (like Parallels on a Mac). These two types are known respectively as type 1 and type 2 hypervisors.

Hypervisor Types

Type 1: A type 1 hypervisor is installed directly on server hardware. Type 1 hypervisors are usually used in data centers and designed for professional data management use. A type 1 hypervisor is installed on a server in a data center much like an OS on a computer. Type 1 hypervisors allow more advanced hardware provisioning and a large performance increase over type 2 hypervisors because they are installed on the hardware itself.

One of the most popular options for type 1 hypervisors is KVM. KVM (Kernel-based Virtual Machine) is an open-source hypervisor built into the Linux kernel and installed directly on the hardware. These hypervisors come with the advanced functionality provided by the Linux kernel.
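As a minimal sketch (assuming an Ubuntu host; package names differ on other distributions), checking for hardware virtualization support and setting up KVM with its management tools might look like this:

# A non-zero count means the CPU supports hardware virtualization (Intel VT-x or AMD-V)
egrep -c '(vmx|svm)' /proc/cpuinfo
# Install KVM, the libvirt management daemon, and the VM installer tool
sudo apt-get install qemu-kvm libvirt-daemon-system virtinst
# List the virtual machines managed by libvirt on this host
virsh list --all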

Type 2: Type 2 hypervisors run on top of a host operating system (like macOS or Windows). A type 2 hypervisor utilizes the hardware of the host machine while creating a virtual version of an operating system, usually managed from a window. Type 2 hypervisors are typically for consumer and temporary use. Applications would include running untrusted software, development testing, or trying out an operating system. One of the most common examples is Oracle VirtualBox.

Why Hypervisors Matter for You

When inquiring about professional hosting services, make sure your hosting provider is using a professional-grade type 1 hypervisor. Some examples would be a KVM-based solution like Verge.io, VMware, or Microsoft Hyper-V.

But while hypervisors are conveniently split into two BROAD categories, not all of them are created equal. Some are inexpensive but lack critical controllability features. Others are expensive but offer features like built-in backup that alleviate the need for other service subscriptions. Some, like Microsoft Hyper-V, are proprietary and only accessible with licensed software.

Before getting into more detailed questions you will want to make sure your hosting provider is using a feature-rich hypervisor with built-in security and backup solutions (it might save you from disaster one day). Built-in backup will also make sure you are getting the most value possible out of your host.

Questions you will want to research yourself or ask your potential future hosting provider.

  1. What kind of VM management tools are available?

You will want to make sure you are choosing a host that has a feature-rich hypervisor. When an environment is virtualized, it needs tools to be managed remotely. Here’s an example. ZebraHost uses Verge.io. Verge.io is a cloud management provider that uses a KVM-based hypervisor. It allows for the remote management and creation of multiple, isolated tenants within a virtual environment. Need to spin up a new VM in a flash? In a feature-rich hypervisor like Verge.io you can do that. Need to perform a backup within seconds and be able to restore it in minutes? Again, a feature-rich hypervisor like Verge.io can do that. Capabilities like these can often save you money because you don’t need to purchase extra 3rd party software licenses for your server.

  2. High availability?

High availability (HA) is a hosting strategy where a stack always maintains free storage and redundant hardware. This means that if there is a hardware failure, your data can copy to the next available server. It also means that if hardware fails, there is another set of hardware to take its place. The result is less data loss and less potential downtime. Make sure your hypervisor works well with high availability hosting options.

  3. Does the hypervisor have a great support network?

Knowledge is power – especially in a world dominated by the public cloud. A hypervisor should have training available, a great community, customer support, and be intuitive enough to develop a good understanding of how it works. Having a great support network opens the door to more solutions.

  4. Is the hypervisor reliable?

The last thing you want is an unreliable hypervisor. It’s your gateway to your virtual machine. Consider researching how long the hypervisor has been around, whether people have had issues with security or malfunctions, and make sure it works like it is supposed to. Great hypervisors are reliable while also having cutting-edge tech.

  5. Cost?

Finally, the cost. The cost is usually something paid by your hosting provider. But knowing how much a hypervisor costs can help you understand if you’re getting a good value from your hosting. You should partner with a hosting provider that has an excellent, feature-rich hypervisor and will host your VM at a fair price.

Final Thoughts,

Hypervisors are an important technology for anyone that uses virtual machines. While the hypervisor will likely be a utility used more by your hosting provider than yourself, it is still important to understand how hypervisors work so you can be in control of your data. Understanding which hypervisor your hosting provider uses will also let you know if you are getting a fair deal or if there are cost-saving measures you can take advantage of.

17 Jun

What is the ‘Public Cloud’?

The Public Cloud

Public Cloud. It’s a puzzling term, isn’t it? Companies are moving applications and critical data to something called the ‘Public Cloud’ at a record pace. At first, ‘public’ cloud sounds well… public. It sounds like content designed for the entire world to see. SO WHY WOULD A COMPANY DO THAT WITH THEIR CRITICAL AND SENSITIVE DATA!?

Defining ‘Public’ Cloud

It turns out the public cloud isn’t as ‘public’ as you might think. The ‘cloud’ refers to storing data with the intention to access it remotely. And this cloud comes in several forms including:

  • Private Cloud
  • Hybrid Cloud
  • Public Cloud
  • Community Cloud

‘Public’ refers to the idea that the data is not on-premise, but instead, stored in a data center off-premise. Some data centers are owned by large companies with enormous data storage needs like Facebook or Salesforce.com. But for many small and medium businesses, a public cloud means renting space with multiple tenants occupying the same data center while a hosting provider manages their data. When looking at hosting solutions, think of the public cloud as a service where a 3rd party hosting provider like ZebraHost is responsible for storing data, maintaining the infrastructure, hardware, security, and day to day support.

Why the Move to the Public Cloud? Savings and Convenience.

Many companies and individuals are increasingly storing data in the public cloud because it’s easier and can make costs more manageable because of utility-style or monthly billing options. The hosting service provider maintains all infrastructure and oversees maintenance which results in less cost for clients using hosting services. And unless a cloud is on separate dedicated hardware, the cost of the hardware itself is also being split among other tenants. This saves money because each company is only paying a fraction of the hardware costs. These hosting providers provide equipment and maintenance as a service often referred to as Infrastructure as a Service (IaaS).

Hosts will usually charge a monthly flat rate based on hardware and storage needs. Some may even use a model called ‘utility’ billing, which is a pay-per-use model where companies are charged depending on the resources they use like RAM, CPU, storage, bandwidth, etc. For many businesses, paying in small increments makes the move to the cloud a lot easier and more affordable because they don’t need to purchase equipment upfront, build a server-safe environment, or hire dedicated IT maintenance professionals. The public cloud has thus made the cloud affordable for many businesses, which has accelerated the cloud’s growth.

Rise of the Public Cloud

You’ve undoubtedly seen the results of the public cloud’s rapid rise. Popular platforms like Office365, Salesforce.com, and Google products all store information in the public cloud. While these are all examples of large technology companies that have the means to purchase their own equipment, many smaller companies, startups, and individuals want to have their application be web-accessible or to have a safe place to store and access data. These are the groups that will usually turn to a hosting provider and or application building platform like AWS to move to the public cloud.

What you’ll notice is that the public cloud tends to be a sort of umbrella term. There are actually a few industry-specific terms to define which public cloud a business uses.

Here are the different services that make up the public cloud.

SaaS: Software as a Service – The example most think of first is Salesforce.com, which offers the leading web-based CRM platform. SaaS is software offered as a paid service with a recurring fee to use it. These applications are typically maintained by the vendor and accessed from the web. Ex. Office 365.

IaaS: Infrastructure as a Service – Hosting providers like ZebraHost would be considered IaaS. They offer the hardware, virtualized environment, and networking to maintain and host applications or services. This is a common route for many businesses looking to transition to the cloud.

PaaS: Platform as a Service – Amazon Web Services (AWS) is often used as an example. PaaS allows the development of software and applications without having to build or maintain the underlying infrastructure, which can include both hardware and software. This is a popular service because it makes development easier for the end user.

So… why would you hand your data over to the public cloud? Well, there are a few reasons.

  • Cost: As mentioned before, because someone else owns the hardware there isn’t a reason to buy your own. The cost of the hardware is split among tenants, which saves you money.
  • Scalability: Theoretically, a company using a public cloud can scale almost instantly because all it must do is find a provider with hardware already up and running.
  • Ease: Most of the work is outsourced to a third party, making it easy to get up and running.
  • Redundancy: Data centers and cloud providers can keep numerous backups for long periods of time. They can also split data among hardware and hosting locations.
  • Risk: Having your whole business on a tower on your desk might not be the best idea… it’s very vulnerable to day-to-day activities and accidents. A third party will have a well-secured environment for your data and backup hardware in case of failure.

But there are also disadvantages to consider:

  • Security: This depends on how much you trust your hosting provider. Data is sensitive, and while most hosting providers will respect your data, it is still in the hands of someone else.
  • Regulation: Industries like healthcare and finance usually handle data that is subject to legal regulations (like HIPAA). These companies must carefully monitor which data is stored outside their private cloud.
  • Configurability: Hosting providers don’t always provide flexible options such as OS choice, backup services, or hardware configurations. But some, like ZebraHost, do. You should ask your hosting provider what they allow you to configure.

In Sum,

The public cloud is quickly becoming a strategy used by businesses to outsource maintenance, save costs, and position services to be web-accessible. In an increasingly services-dominated market, the public cloud is not just one strategy among many; for many businesses it is the primary strategy for maintaining a competitive edge.

For many, the pros of implementing a public cloud strategy outweigh the cons. The low cost, scalability, and ease of use are attractive to organizations that don’t want to spend time managing their own infrastructure. But some companies still like keeping some data on-premises in a private cloud. These businesses are leading the move to the hybrid cloud, another strong hosting strategy aimed at combining the security of a private cloud with the scalability of a public cloud.

Share this
10 Jun

How To Install Docker on Ubuntu 19 “Disco Dingo”

What is Docker?

Docker is a popular containerization platform that allows you to run applications in their own isolated environments. Containerization is often seen as the next evolution of virtualization technology and is often run within virtual machine environments for extra security.

Ubuntu is one of the more popular Linux distributions due to user familiarity and stability.

Today we’re going to install Docker on Ubuntu 19 “Disco Dingo” and show you how to access a list of Docker commands.

The “Easy” Way

There is a convenience script that drastically shortens the time and number of commands needed to install Docker. Here’s how to use it:

Step 1: Type curl -fsSL https://get.docker.com -o get-docker.sh

Step 2: Type sudo sh get-docker.sh

And that’s really it. Those two commands should get Docker installed on your system.
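As a quick sanity check (not part of the convenience script itself), you can ask Docker for its version and, on a systemd-based Ubuntu install, confirm the service is running:

docker --version
sudo systemctl status docker

If the first command prints a version string, the client is installed; the second should show the Docker service as active.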

The “Traditional” Way

Preparations

Before actually installing Docker on Ubuntu, we need to prepare a few things, like adding the right repositories and making sure Docker is downloaded from the proper source so we get the latest release.

Steps

  1. If you are using the Ubuntu GUI, you will need to locate and open the terminal application. Sometimes it is hidden. If you can’t see it listed in the application drawer on the dock, just search for ‘terminal’ and it will show up.

2. Type sudo apt update in the terminal. What this does is update your package repositories, which is essential for finding and downloading the latest version of Docker.


3. Type sudo apt install apt-transport-https ca-certificates curl software-properties-common. These packages let apt use HTTPS sources and provide the curl and add-apt-repository tools used in the next steps.


4. The curl command lets us use the terminal, rather than a web browser, to download Docker’s GPG signing key and hand it to apt-key so the system trusts packages from Docker’s repository. This is convenient for GUI users and also works on systems without a GUI.

Type curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -


5. You’ll need to pay attention to this part. The repository we add has to match the version of Ubuntu we are running, so the command will differ depending on your release. You will also need to know the codename of your Ubuntu installation. For example, Ubuntu 19 is “Disco Dingo” and is abbreviated in the command as “disco”.

Type sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu disco stable"

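If you’d rather not look up your release’s codename by hand, a small variation on the command above (not from the original post) uses lsb_release, which ships with Ubuntu, to fill in the codename automatically:

sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

On Ubuntu 19 “Disco Dingo”, lsb_release -cs prints disco, so this expands to the same command shown above.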

Installing Docker

Now that the proper repositories are in place and the terminal is ready to download Docker from the internet, it’s time to actually install Docker.

6. With all our repositories updated and downloaded we’re all set to go; let’s just run a quick sudo apt update so apt picks up the new Docker repository.

7. It’s important to make sure Docker will be installed from Docker’s own repository and not from Ubuntu’s default repositories. The apt-cache policy command shows exactly where the docker-ce package will come from so you can confirm this.

Type apt-cache policy docker-ce


From here, you will see three things:

  1. Installed: whether Docker is already installed (at this point it will say none)
  2. Candidate: the docker-ce version apt will install; its version string should reference your Ubuntu release. If the repository was not added properly earlier, this will say none.
  3. Version table: a list of Docker versions available for this particular version of Ubuntu

8. Type sudo apt install docker-ce


Once that final step has completed, you should have Docker installed. A good way to check is to run the same apt-cache policy docker-ce command and see whether a version of Docker now shows as installed.

As you can see, after typing apt-cache policy docker-ce again, an installed version is now listed and it matches the candidate.


Another good way to see if Docker is installed is to bring up a list of commands that you can use to control and configure Docker. If Docker is properly installed, the terminal will start to accept Docker commands.

Commands

To get a list of commands:

Type docker


Now that Docker is installed, Ubuntu will recognize Docker commands in the terminal, allowing you to start containerizing and developing with Docker on your system.
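If you’d like one final end-to-end check, a common smoke test is to run Docker’s standard hello-world image (this pulls a tiny test image from Docker Hub, so it needs internet access):

sudo docker run hello-world

If Docker downloads the image and prints its greeting, the engine can pull and run containers and your installation is good to go.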

Share this
03 Jun

Domain Squatting And How to Protect Yourself

What is Domain Squatting or Cybersquatting?

These days, your business’s success is largely dictated by your online presence. And your Internet presence is largely dictated by your SEO and how you rank in Search Engine Results Pages (SERP).

Having a domain that represents your business well and is easy for users to remember not only increases your business’s search engine visibility but will make your business look professional and legitimate.

But have you ever browsed a domain brokerage looking for your perfect domain, like yourbusiness.com, only to find that it is either taken or for sale at an exorbitant price? Maybe the price isn’t even listed, and you’re asked to contact the owner? Unfortunately, you may have come across an instance of domain squatting.

Domain squatting, or as many call it, cybersquatting, is the act of purchasing a domain with the intent to hold it and sell it to the highest bidder, to take advantage of Internet traffic to make money on ads, or to purposefully block others from having access to the name.

Domain Squatting vs Domaining

Domain squatting is different from the similar practice of domaining, which is purchasing domain names that might be valuable for the purpose of reselling them or holding them for personal use. The difference is that domain squatting often has malicious or extortionary intent. Domain squatting used to be perfectly legal, but the Uniform Domain-Name Dispute-Resolution Policy (UDRP) from ICANN, an international nonprofit dedicated to keeping the domain name system fair and usable, has made domain squatting actionable in many cases and a legal grey area at best.

Why Would Someone Domain Squat? It’s profitable.

Though domain squatting or cybersquatting can have serious legal implications, many consider it an investment. Valuable top-level domains (like .com) are purchased first come, first served, then held, renewed, and sold at exorbitant prices. For example, a generic domain like thebestdatahost.com could be considered a valuable investment and sold to a company looking for a marketable URL that will generate traffic. A domain squatter can buy this domain and hold it until it sells at a profitable price.

Here’s where domain squatting becomes problematic

There have been high-profile cases involving celebrities and Fortune 500 companies where domains were either purchased and offered at extortionary prices or misused to capture web traffic. These cases are typically referred to as “bad faith” registrations.

Celebrity Cases

One of the most famous cases of a celebrity winning a bad-faith dispute was in 2000, when the singer Madonna won a dispute against Dan Parisi to gain the rights to Madonna.com. The singer won because the World Intellectual Property Organization (WIPO) ruled that the name had been registered to capitalize on Madonna’s fame and capture web traffic. Because Madonna is a famous trademark, she was well protected against domain squatting.

But even celebrities don’t always win domain squatting or cybersquatting cases. Such was the famous case involving Bruce Springsteen in 2001. The WIPO ruled that the registrant had legitimate interests in brucespringsteen.com after he argued points like the site being a fan site and Springsteen’s name having no trademark protections.

Steps to protect yourself

The takeaway from the Madonna and Springsteen cases is that despite ICANN’s protections against domain squatting, it still happens decades later, so Internet users and business owners need to protect themselves.

Here are a few suggestions that can help you protect your business against domain squatters or cybersquatters.

  1. Purchase domains in advance: If you are strongly considering starting a business, a new website, a product line, etc., you should ABSOLUTELY purchase a domain name for your business, plus similar domain names for good measure. Domain squatters like to monitor newly registered businesses so they can claim the matching domain and potentially auction it.
  2. Purchase a domain for as long as possible: Domains can be registered for a maximum of 10 years at a time. Consider purchasing a domain for the maximum period so you don’t have to worry about expiration for a while.

As a tip: you may sometimes be told that registering a domain for a longer period increases legitimacy in the eyes of search engines and leads to better SEO. There is no evidence of this. Focus your efforts on creating great content for SEO; a longer registration simply gives you peace of mind that the domain will remain yours.

  • Don’t be picky about the domain name: Just because you don’t get the perfect domain name doesn’t mean your business is dead. Be creative with your domain name so you don’t have to pay crazy prices.
  • Remember to renew or auto-renew your domain: Domains must be renewed when their registration period ends, and domain squatters will sometimes take over a domain if the original owner forgets to renew. Be vigilant about renewing domains, or set them to auto-renew if the option is available (see the quick expiration check after this list).
  • Consider all top-level domains (TLDs): the TLD is the part of a domain after the dot, like .com. While .com is the strongest TLD right now, consider others like .net or the emerging .io.
  • Know your rights: ICANN’s UDRP outlines what it considers to be domain squatting and how to file a case if litigation hasn’t resolved the issue. A link to its website is included at the bottom of this post.
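As a quick way to keep an eye on expiration dates, here is a small sketch using the standard whois tool (preinstalled on many Linux distributions, or available from your package manager); the domain below is just a placeholder for your own:

whois yourbusiness.com | grep -i 'expir'

For most TLDs this prints lines such as the registry expiry date, so you can see exactly how long you have before the domain must be renewed.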

When does domain squatting become malicious? When there is an attempt to harm or confuse

Domain squatting can go beyond simply holding a website in hopes of extorting owners or auctioning it off. Much of the time, domains are purchased because they look similar to legitimate, high-traffic sites. This is often referred to as typosquatting. For example, let’s use buzzfeed.com. A squatter might purchase the rights to officalbuzzfeed.com or another similarly named domain to hijack web traffic and confuse everyday users of the legitimate site. From there, those domains can serve ad pages to generate revenue or spread malware and phishing campaigns.

The ACPA and How to Protect Your Business

The Anticybersquatting Consumer Protection Act (ACPA) is a US law passed in 1999 designed to protect personal names and brands by preventing cybersquatters or domain squatters from profiting from a famous or trademarked name. The goal was to create a path of litigation if a business or brand feels its name is being extorted or misused for a cybersquatter’s financial gain.

The ACPA is different from the UDRP because the ACPA is a path of litigation that can award damages against squatters. Unlike the UDRP, these cases aren’t just about winning a domain name transfer; monetary damages can also be awarded.

When bringing a case under the ACPA, the court mainly looks for these factors:

  1. Intellectual property trademarks or other documented rights to a name
  2. Prior use of the domain for offering a good or service
  3. Whether the name falls under trademark fair use
  4. Any intent to divert web traffic from the legitimate name or brand
  5. Any offers to sell and transfer the domain for profit without having any other use for it
  6. The accuracy of the owner’s contact information when registering the domain in question
  7. A history of purchasing confusing or misleading domain names by the owner of the domain in question
  8. How famous or distinctive the name in the domain is, and whether it seems intended to confuse

ACPA: https://www.govinfo.gov/content/pkg/CRPT-106srpt140/html/CRPT-106srpt140.htm

After assessing the above, a court can award monetary damages and even order domain name transfers. But domain name lawsuits don’t always end with the transfer of a domain to its rightful owner. After trying the litigation process, the party seeking a domain can turn to ICANN and the UDRP to argue for the transfer of the domain.

What are the UDRP and ICANN?

In the early days of domains, domain squatting was a legal but emerging issue. This changed on April 30th, 1999, when the Internet Corporation for Assigned Names and Numbers (ICANN) published the Uniform Domain-Name Dispute-Resolution Policy.

ICANN is a nonprofit dedicated to keeping the Internet secure and competitive. It coordinates the Domain Name System (DNS), the system that lets humans find web pages by name.

What is DNS?

DNS refers to how we humans read URLs and domains as words, like www.zebrahost.com, instead of as the series of numbers assigned to a computer (its IP address). This system serves two purposes:

(a) it makes domains more human-friendly, because people remember words better than numbers;

(b) domains are no longer exclusively assigned to a unique computer IP and can be transferred between hardware.

You can see this name-to-number mapping for yourself with the quick lookup below.
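This minimal sketch uses dig, a standard DNS tool (on Ubuntu it is part of the dnsutils package); the domain is simply the example used above:

dig +short www.zebrahost.com

The output is the IP address (or addresses) the name currently points to, which is the series of numbers your computer actually connects to.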

Naturally, as competition for unique word-based domains formed, certain domain names became more valuable. This created a market opportunity to buy domain names in advance as investments or to purchase names similar to popular websites in hopes of diverting web traffic. The UDRP was created to address this issue.

ICANN is able to enforce its UDRP because it forms agreements with domain registrars, the companies that sell domain name registrations to the public. ICANN accredits each registrar and works to ensure a fair environment for the domain market. All accredited registrars must follow the UDRP.

The UDRP outlines what happens when domains are abused through squatting or bad faith. In bad-faith cases, the UDRP first calls for arbitration. If the case cannot be settled in court, ICANN can step in and potentially transfer, cancel, or suspend a domain.

You can read more about the UDRP and how to protect yourself from domain squatting here.

UDRP: https://www.icann.org/resources/pages/help/dndr/udrp-en

Final Thoughts

Domain squatting or cybersquatting is still an issue two decades after the passing of the ACPA and ICANN’s UDRP. The unfortunate truth about domain squatting is that it’s hard to eliminate because there is a very fine line between investing in domains and being a domain squatter or cybersquatter.

Many choose to invest in domain names for brand protection, future use, or as an investment to sell later. The important takeaway is that these are all legitimate uses as long as they aren’t extortionary. The easiest way to protect yourself is to be proactive about your domain registration and renewal. Making sure you have a domain name ready for when you launch your business online can save you the headache of having to purchase it from a domain squatter. And purchasing extra similar names can make it harder for domain squatters or cybersquatters to capitalize on your brand, hopefully saving you from an infuriating legal battle.

Share this

©2020 ZebraHost, LLC | All Rights Reserved | Powered by ZebraHost