07 Jul

Happy Birthday ZebraHost!

It’s Our Birthday!

On July 7th, 2000, ZebraHost was established by current CEO Clive Swanepoel. Over the last 20 years, we’ve continuously strived to bring our clients best-of-breed technology and exceptional customer service.

Speaking of our clients, we couldn’t have made it this far without you! Over these last 20 years, we’ve had the pleasure of working with you to host many of your wonderful and exciting ideas. These ideas have allowed us to become more creative, respond better to the needs of developers, and position ourselves as a premier provider of infrastructure solutions.

Because a lot has happened over 20 years, we wanted to take a moment to share the story of how we grew into the customer-focused boutique provider we are today.

History of ZebraHost

20 years of ZebraHost History

Our Founding

ZebraHost’s serial-entrepreneur CEO Clive Swanepoel has always pioneered new connected technology. Back in South Africa, he was one of the first people to be granted access to the internet by the Council of Scientific Research. Seeking ways to connect South Africa, Clive founded one of South Africa’s first digital sign companies, Vivid Outdoor Media. Vivid Outdoor Media built some of the first digital signs that could be controlled remotely through wiring, which allowed for easy turnover of advertisements.

After Clive discovered that he was procuring most of the technology and material for Vivid Outdoor Media in the Midwestern United States, he decided to move to Des Moines, Iowa. Wondering how he could continue to deliver cutting-edge services, Clive looked towards his next tech venture. Upon witnessing the internet’s growing userbase, Clive realized that companies needed a service that would let them store massive amounts of data with little upfront expense. Capitalizing on this pain point, Clive founded ZebraHost – one of the first hosting providers.

ZebraHost was founded in 2000, at the height of the dotcom bubble. The surplus of overvalued tech companies failing to deliver a profit eventually led to the greatest tech crash in US history. But the internet did not end with the crash. Companies that focused on solving a core pain point, delivered exceptional customer service, and grew at a sustainable rate continued to thrive.

These three factors were what Clive took away from surviving the dotcom crash. To this day, ZebraHost remains committed to growing sustainably without injections from venture capital groups, prioritizing exceptional customer service, and delivering results-oriented solutions. But as the internet continued to grow, there was demand for more granular solutions.

The Expansion of Our Product Lineup

Although ZebraHost now owns its own hardware and is co-located in industry-leading data centers around the globe, it got its start as a hosting reseller. Reselling is very common in the cloud infrastructure industry, with many smaller providers today still opting to rebrand existing hosting services as their own. Commonly resold providers include Rackspace and IBM.

ZebraHost spent several years as a shared hosting reseller. Shared hosting is a form of inexpensive cloud infrastructure hosting which has multiple tenants using the same hardware. But as security needs changed, ZebraHost began to offer more premium options like Linux and Windows dedicated servers. Dedicated servers offer enhanced security and flexibility over shared hosting because each server only has one tenant.

In 2007, ZebraHost wanted to bring a solution that combined value and security which led it to begin offering VPS hosting. VPS hosting uses virtualization to separate each tenant so that they can occupy the same server hardware but use resources separately. This creates a secure wall between tenants and reduces the hardware expense normally associated with dedicated servers.

Building a Global Presence

After building a successful US-based brand, ZebraHost continued to grow its data center presence outside of the United States. In 2008, ZebraHost began to offer UK-based data center solutions, allowing it to test the European market.

Five years later, ZebraHost expanded its presence not just in Europe, but in Asia and Australia as well. In 2013, ZebraHost expanded its data center offerings to include Amsterdam, Singapore, and Melbourne, Australia. As a result, ZebraHost could now reach customers with strict data sovereignty laws, reduce latency for customers abroad, and build a truly globally connected network.

Purchasing our Own Hardware

While reseller hosting and rented infrastructure work for smaller providers trying to get a start, ZebraHost had long outgrown this phase and was looking for ways to reduce cost and grow more sustainably.

In 2018, ZebraHost started purchasing its own hardware to gain more control over its infrastructure and reduce licensing costs. The result was unparalleled flexibility in market offerings, allowing ZebraHost to bring bespoke solutions at unmatched value. Starting with Altoona, Iowa and Kansas City, Missouri, ZebraHost has continued to purchase hardware to meet growth needs.

As we continue to expand our footprint in data centers across the globe, we will be purchasing hardware that meets the performance, sovereignty, and security needs of our clients.

The Growth of our Leadership and Team

We wouldn’t be where we are without our exceptional team and leadership.

In 2019, Nate Battles joined Clive Swanepoel in leading the company as Managing Partner. The two constantly exchange ideas and visions for the future of ZebraHost while working personally with clients to make sure technical challenges are resolved.

Over the last couple of years, new additions have been made to the team to assist in sales, marketing, and systems administration. Together the team is focused on meeting existing and planning for future challenges while putting our clients first.

Where We’re Going

We aren’t just spending the day reminiscing about the last 20 years; we’re planning for the next 20! As ZebraHost has grown and changed, so has the market for cloud services. There is increased demand for remote-work services, enhanced security, compliance compatibility, and better data responsibility. And we’re planning to tackle all of those!

We will be spending the next 20 years revamping our current offerings to make sure they remain industry-leading while introducing new services that complement them. Stay tuned over the next few years to see the exciting changes we’re about to make!


Image Template Attribution: Infographic vector created by pikisuperstar – www.freepik.com

01 Jul

What is the ‘Hybrid Cloud’?

What is the ‘Hybrid Cloud’ and Why are so Many Businesses Turning to it?

Try this,

Count how many digital services you rely on day to day. Now, count how many of those must connect to the internet, and how many are subscription-based. After counting my CRM, my email marketing manager, Adobe Creative Cloud for designing marketing material, and my SEO tools, I realized my entire job relies on cloud-based services. I can’t be the only one either. It seems like all the services we rely on daily as professionals are moving to the cloud. So, let’s try to understand it.

The cloud is mainly split between three different solutions: ‘Private’, ‘Public’, and ‘Hybrid’. But as more and more data is processed and stored in data centers, there is an increasingly hot debate among businesses about the pros and cons of a private versus a public cloud data strategy. And there is no clear winner. Both the public and private clouds involve compromises, which leads many companies to also consider hybrid strategies.

The Private Cloud

Let’s start with the on-premises private cloud. The “on-prem” private cloud is a solution hosted on private hardware and typically accessed over a private network. Example: consider a desktop tower that contains all the data needed to run your business, accessed over an internal intranet. Information can be accessed remotely, but that access is largely limited via certifications and access levels. Although the private cloud is secure, scalability is challenging, and private clouds often lack redundancy in case of hardware failure. Private clouds can also be immensely expensive: companies need to purchase their own hardware if they want to scale, and that hardware must be replaced regularly, adding maintenance and expense. This issue of scalability leads many businesses to use a public cloud.

The Public Cloud

The ‘Public Cloud’ refers to a cloud that is provided by a third party (like ZebraHost). With a public cloud, the cloud provider manages all the infrastructure and data that is hosted on their system. Some of the advantages include near-infinite scalability, lower costs for businesses due to not owning hardware, more flexible As a Service (aaS) payment structure, and security solutions backed by dedicated data experts and physical data centers.

Examples of other public cloud providers are big tech companies that store user data in their Software as a Service (SaaS) applications or Platform as a Service (PaaS) infrastructure. Example: Salesforce’s popular CRM platform. Salesforce is a platform for users to upload sales activity data. With a subscription to Salesforce, companies can store their sales data on the servers that Salesforce owns. Not only that, but Salesforce also acts as a platform for companies to implement custom APIs and/or build their own customized platform on. All a subscribed company has to do is pay a monthly fee and customize Salesforce CRM to fit its needs; all backend data hosting and infrastructure is taken care of by Salesforce.

But again, there are cons to a public cloud.

For example, many do not like the idea of a 3rd party controlling all their data and having access to potentially sensitive information and trade secrets. Also, with nothing hosted on-premises, there is no place to store the most sensitive data away from potentially prying eyes. This is especially important for industries like healthcare IT and finance, whose regulatory requirements often disqualify them from hosting data on a public cloud.

Because of the challenges presented by both the public and private cloud, more and more corporations are beginning to seek out a third option… the hybrid cloud.

The Hybrid Cloud

The hybrid cloud offers the best of both worlds: a private cloud for storing base application infrastructure or extremely sensitive data, combined with the near-infinite scalability, redundancy, and cost savings of a public cloud.

The hybrid cloud is an infrastructure strategy where a company will maintain its own private cloud as the core for its data while utilizing a public cloud to scale.

There are multiple ways to use a hybrid cloud strategy. Arguably the greatest strength of the hybrid cloud is its flexibility. Here are a few ways companies can use the hybrid cloud strategy:

  • Storing the most sensitive data on a private cloud while storing more routine data on a public cloud
  • Storing core application infrastructure on a private cloud, then building on it in the public cloud
  • Using a private cloud as the primary server while saving some data on the public cloud for redundancy
  • Using the public cloud to augment a local private cloud for seasonal traffic surges

The above are only a few common examples of ways companies might use a hybrid cloud, but they all boil down to a few key advantages:

Controlled Scalability – The hybrid cloud allows scalability, but at a level you decide is comfortable for your business. For example, your business might have HIPAA data that can’t be stored on a public cloud but might also have more routine, less sensitive data that would be more appropriate to store on a public cloud. You can keep the HIPAA data on your private cloud while saving cost by utilizing the public cloud for any other data which scales per your needs.

Cost Efficiency – Scalability feeds directly into the second main advantage of the hybrid cloud: cost efficiency. Because your business doesn’t have to purchase new equipment to host its remaining data and instead builds on an already established foundation, it’s much cheaper to outsource to a 3rd party. 3rd party providers use their own equipment and most charge a monthly Infrastructure as a Service (IaaS) fee. For most businesses, an IaaS fee is more manageable than buying hardware outright because hardware costs are split between the multiple tenants using the cloud provider’s hardware, and a monthly fee spreads costs over long periods of time. Some businesses also prefer operational expenses over capital expenses for easier accounting.

Redundancy – Redundancy is generally poor in private cloud solutions. Having private hardware and no ability to transfer data to other hardware or another data center naturally means more risk in storing data. Both public and hybrid clouds offer more redundancy and, as a result, a safer environment for your business. If data is backed up to a public cloud as well as a private cloud, it’s accessible from two places. Even if separate data is kept between your public and private clouds, you still won’t lose everything should there be a hardware failure. The hybrid cloud offers redundancy, and redundancy offers peace of mind.

Final thoughts:

Right now, there is no denying that the cloud will continue to grow in importance to how we perform our daily tasks and run our businesses. The impressive scalability, redundancy, and cost savings of cloud services are too good to pass up entirely.

Many companies rushed to the cloud. Some are adopting cloud-first strategies that will see their entire business and data inhabiting the public cloud. But others are taking a more cautious approach, wary of what could happen if they leave everything to a third party. It makes sense that businesses want an approach that lets them hold onto the most critical data on-premises while scaling with a cloud provider as needed.

We will likely continue to see the hybrid cloud become an increasingly important part of how information and applications are stored and managed. But dedicated private clouds and public cloud solutions will also continue to be solutions with their own advantages.

24 Jun

The Importance of Hypervisors

The Importance of Hypervisors

Hypervisors are important. They are the main tool that helps your hosting provider manage all the virtual machines that connect you with YOUR sensitive data. At first, hypervisors might seem like something only the hosting provider needs to worry about. But you should be aware of what features the hypervisor running your virtual machine has. Some hypervisors are very basic and just allow you to connect to your server. Others have features like backup snapshots that can save you time and money. Knowing what features a hypervisor has will give you more control over your server and help you make sure you are getting good value from your hosting provider.

A hypervisor is a layer of software that allows the hosting provider to create, provision, modify, and manage virtual machines (VMs). A hypervisor provisions a set of real hardware and partitions it into isolated VM environments. The hosting provider can then add or remove resources, perform backups, and remotely control each virtual server with a few clicks. The hypervisor also allows you or your hosting provider to access the virtual server through remote computing protocols; common protocols are SSH, RDP, and VNC.

Hypervisors don’t just do basic hardware provisioning, like setting how much RAM a server has or its CPU speed; they also let the hosting provider mix and match from a pool of hardware on the server rack. For example, when choosing storage, a hosting provider can choose which tier to provision – an SSD tier or an array of spindle hard drives. (ZebraHost currently provisions all new machines on SSD only.) The more advanced the hypervisor, the more hardware and network connectivity options can be customized.

Think of hypervisors as ‘virtual machine management software’. The hypervisor is a layer that either runs directly on the server hardware itself (known as bare metal) or as a separate window on top of a host operating system (like Parallels on a Mac). These two types are known respectively as type 1 and type 2 hypervisors.

Hypervisor Types

Type 1: A type 1 hypervisor is installed directly on server hardware, much like an OS on a computer. Type 1 hypervisors are usually used in data centers and designed for professional data management. They allow more advanced hardware provisioning and a large performance increase over type 2 hypervisors because they run on the hardware itself.

One of the most popular type 1 options is the KVM hypervisor: a kernel-based, open-source hypervisor installed on the hardware, which comes with the advanced functionality provided by the Linux kernel.

Type 2: Type 2 hypervisors run on top of a host operating system (like macOS or Windows), utilizing the host machine’s hardware while presenting a virtual version of an operating system, usually managed from a window. Type 2 hypervisors are typically for consumer and temporary use; applications include running untrusted software, development testing, or trying out an operating system. One of the most common examples is Oracle VirtualBox.

Why Hypervisors Matter for You

When inquiring about professional hosting services, make sure your hosting provider is using a professional-grade type 1 hypervisor. Examples include a KVM-based solution like Verge.io, VMware, or Microsoft Hyper-V.

But while hypervisors are conveniently split into two BROAD categories, not all of them are created equal. Some are inexpensive but lack critical controllability features. Others are expensive but offer features like built-in backup that alleviate the need for other service subscriptions. Some, like Microsoft Hyper-V, are proprietary and only accessible with licensed software.

Before getting into more detailed questions, you will want to make sure your hosting provider is using a feature-rich hypervisor with built-in security and backup solutions (it might save you from disaster one day). Built-in backup will also make sure you are getting the most value possible out of your host.

Here are some questions to research yourself or ask a potential hosting provider:

  • What kind of VM management tools are available?

You will want to make sure you are choosing a host with a feature-rich hypervisor. When an environment is virtualized, it needs tools to be managed remotely. Here’s an example: ZebraHost uses Verge.io, a cloud management platform that uses a KVM-based hypervisor. It allows for the remote management and creation of multiple, isolated tenants within a virtual environment. Need to spin up a new VM in a flash? A feature-rich hypervisor like Verge.io can do that. Need to perform a backup within seconds and restore it in minutes? Again, Verge.io can do that. Capabilities like these can often save you money because you don’t need to purchase extra 3rd party software licenses for your server.

  • High availability?

High availability (HA) is a hosting strategy where a stack always maintains free storage and redundant hardware. This means that if there is a hardware failure, your data can copy to the next available server. It also means that if hardware fails, there is another set of hardware to take its place. The result is less data loss and less potential downtime. Make sure your hypervisor works well with high availability hosting options.

  • Does the hypervisor have a great support network?

Knowledge is power – especially in a world dominated by the public cloud. A hypervisor should have training available, a great community, customer support, and be intuitive enough to develop a good understanding of how it works. Having a great support network opens the door to more solutions.

  • Is the hypervisor reliable?

The last thing you want is an unreliable hypervisor; it’s your gateway to your virtual machine. Consider researching how long the hypervisor has been around, whether people have had issues with security or malfunctions, and whether it works as it’s supposed to. Great hypervisors are reliable while also having cutting-edge tech.

  • Cost?

Finally, the cost. The cost is usually paid by your hosting provider. But knowing how much a hypervisor costs can help you understand whether you’re getting good value from your hosting. You should partner with a hosting provider that has an excellent, feature-rich hypervisor and will host your VM at a fair price.

Final Thoughts,

Hypervisors are an important technology for anyone that uses virtual machines. While the hypervisor will likely be used more by your hosting provider than by you, it is still important to understand how hypervisors work so you can stay in control of your data. Knowing which hypervisor your hosting provider uses will also tell you whether you are getting a fair deal, or whether there are cost-saving measures you can take advantage of.

17 Jun

What is the ‘Public Cloud’?

The Public Cloud

Public cloud. It’s a puzzling term, isn’t it? Companies are moving applications and critical data to something called the ‘Public Cloud’ at a record pace. At first, ‘public’ cloud sounds, well… public. It sounds like content designed for the entire world to see. SO WHY WOULD A COMPANY DO THAT WITH THEIR CRITICAL AND SENSITIVE DATA!?

Defining ‘Public’ Cloud

It turns out the public cloud isn’t as ‘public’ as you might think. The ‘cloud’ refers to storing data with the intention of accessing it remotely. And this cloud comes in several forms, including:

  • Private Cloud
  • Hybrid Cloud
  • Public Cloud
  • Community Cloud

‘Public’ refers to the idea that the data is not on-premises but stored in a data center off-premises. Some data centers are owned by large companies with enormous data storage needs, like Facebook or Salesforce.com. But for many small and medium businesses, a public cloud means renting space, with multiple tenants occupying the same data center while a hosting provider manages their data. When looking at hosting solutions, think of the public cloud as a service where a 3rd party hosting provider like ZebraHost is responsible for storing data and maintaining the infrastructure, hardware, security, and day-to-day support.

Why the Move to the Public Cloud? Savings and Convenience.

Many companies and individuals are increasingly storing data in the public cloud because it’s easier and can make costs more manageable through utility-style or monthly billing options. The hosting provider maintains all infrastructure and oversees maintenance, which results in less cost for clients using hosting services. And unless a cloud is on separate dedicated hardware, the cost of the hardware itself is split among tenants, so each company pays only a fraction of it. Hosting providers deliver equipment and maintenance as a service, often referred to as Infrastructure as a Service (IaaS).

Hosts will usually charge a monthly flat rate based on hardware and storage needs. Some may even use a model called ‘utility’ billing, a pay-per-use model where companies are charged for the resources they use: RAM, CPU, storage, bandwidth, etc. For many businesses, paying in small increments makes the move to the cloud easier and more affordable because they don’t need to purchase equipment upfront, build a server-safe environment, or hire dedicated IT maintenance professionals. The public cloud has thus made the cloud affordable for many businesses, which has accelerated its growth.
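As a toy illustration of utility billing, a monthly charge is just each metered resource multiplied by its unit rate, summed. Every rate and usage figure below is invented purely for illustration:

```shell
# Hypothetical utility bill: usage * unit rate, summed (all values made up).
ram_gb=8          # provisioned RAM in GB
ram_rate=5        # dollars per GB of RAM per month (invented rate)
storage_gb=100    # provisioned storage in GB
storage_rate=1    # dollars per 10 GB of storage per month (invented rate)

total=$(( ram_gb * ram_rate + storage_gb * storage_rate / 10 ))
echo "monthly bill: \$${total}"
# prints: monthly bill: $50
```

The point is simply that the bill scales with what you actually provision, rather than with the upfront cost of a server.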

Rise of the Public Cloud

You’ve undoubtedly seen the results of the public cloud’s rapid rise. Popular platforms like Office 365, Salesforce.com, and Google products all store information in the public cloud. While these are all examples of large technology companies that have the means to purchase their own equipment, many smaller companies, startups, and individuals want their application to be web-accessible or want a safe place to store and access data. These are the groups that will usually turn to a hosting provider and/or an application-building platform like AWS to move to the public cloud.

What you’ll notice is that the public cloud tends to be a sort of umbrella term. There are actually a few industry-specific terms to define which public cloud a business uses.

Here are the different services that make up the public cloud.

SaaS: Software as a Service – The example most think of first is Salesforce.com, which offers the leading web-based CRM platform. SaaS is when software is offered as a paid service with a recurring fee. These applications are typically maintained and accessed from the web. Ex. Office 365.

IaaS: Infrastructure as a Service – Hosting providers like ZebraHost would be considered IaaS. They offer the hardware, virtualized environment, and networking to maintain and host applications or services. This is a common route for many businesses looking to transition to the cloud.

PaaS: Platform as a Service – Amazon Web Services (AWS) is a common example. PaaS allows the development of software and applications without having to build or maintain the underlying infrastructure, which can come in the form of both hardware and software. This is a popular service because it makes development easier for the end user.

So…Why would you hand your data over to the public cloud? Well, there are a few reasons.

  • Cost: As mentioned before, because someone else owns the hardware, there isn’t a reason to buy your own. Hardware costs are split among tenants, which saves you money.
  • Scalability: Theoretically, a company utilizing a public cloud can scale almost infinitely fast, because all it must do is find a provider with hardware already running.
  • Ease: Most of the work is outsourced to a 3rd party, making it easy to get up and running.
  • Redundancy: Data centers and cloud providers can keep numerous backups for long periods of time. They can also split data among hardware and hosting locations.
  • Risk: Having your whole business on a tower on your desk might not be the best idea… it’s very vulnerable to day-to-day activities and accidents. A 3rd party will have a well-secured environment for your data and backup hardware in case of failure.

But there are also disadvantages to consider:

  • Security: This depends on how much you trust your hosting provider. Data is sensitive, and while most hosting providers will respect your data, it’s still in the hands of someone else.
  • Regulation: Industries like healthcare and finance usually have data subject to heavy legal regulation (like HIPAA). These companies must carefully monitor which data is stored outside their private cloud.
  • Configurability: Hosting providers don’t always provide flexible options such as OS choice, backup services, or hardware configurations. But some, like ZebraHost, do. Ask your hosting provider what they allow you to configure.

In Sum,

The public cloud is quickly becoming a strategy businesses use to outsource maintenance, save costs, and make services web-accessible. In an increasingly services-dominated market, the public cloud is not only A strategy for businesses, it is THE primary strategy for maintaining a competitive edge.

For many, the pros of a public cloud strategy outweigh the cons. The low cost, scalability, and ease of use are attractive for organizations that don’t want to spend time managing their infrastructure. But some companies still like having some data on-premises in a private cloud. These businesses are leading the move to the hybrid cloud, another strong hosting strategy aimed at combining the security of a private cloud with the scalability of a public cloud.

10 Jun

How To Install Docker on Ubuntu 19.04 “Disco Dingo”

What is Docker?

Docker is a popular containerization platform that allows you to run applications in their own isolated environments. Containerization is often seen as the next evolution of virtualization technology, and containers are commonly run within virtual machine environments for extra security.

Ubuntu is one of the more popular Linux distributions due to user familiarity and stability.

Today we’re going to install Docker on Ubuntu 19.04 “Disco Dingo” and show you how to access a list of Docker commands.

The “Easy” Way

There is a script that drastically shortens the time and number of commands needed to install Docker. Here’s how to use it:

Step 1: Type: curl -fsSL https://get.docker.com -o get-docker.sh

Step 2: Type: sudo sh get-docker.sh

And that’s really it. Those two commands should get Docker installed on your system.
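If you plan to script this, a small guard keeps the convenience script from re-running on a machine that already has Docker. This is just a sketch; it assumes curl and sudo privileges are available:

```shell
#!/bin/sh
# Sketch: fetch and run Docker's convenience script only when the
# docker client is not already on the PATH.
if command -v docker >/dev/null 2>&1; then
    echo "docker already installed; skipping"
else
    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh
fi
```

The `command -v` check is the portable POSIX way to test whether a binary is reachable, so the script is safe to run repeatedly.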

The “Traditional” Way


Before actually installing Docker on Ubuntu, we have to prepare a few things, like adding repositories and making sure Docker is downloaded from the proper source so we get the latest release.


  1. If you are using the Ubuntu GUI, you will need to locate and open the terminal application. Sometimes it is hidden; if you can’t see it listed in the application drawer on the dock, just search for ‘terminal’ and it will show up.

2. Type sudo apt update in the terminal. This updates the package repositories, which is essential for finding and downloading the latest version of Docker.


3. Type sudo apt install apt-transport-https ca-certificates curl software-properties-common


4. The curl command lets the terminal, rather than a web browser, download Docker’s signing key. This is convenient for GUI users and lets you download Docker with or without a GUI.

Type curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -


5. You’ll need to pay attention to this part. The repository we add must match the version of Ubuntu we are running, so the command you type will differ depending on your version. You will also need to know the codename of your Ubuntu installation. For example, Ubuntu 19.04 is “Disco Dingo” and is abbreviated in the repository line as “disco”.

Type sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu disco stable"

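Rather than hardcoding “disco”, you can build the repository line from a codename variable; on a live Ubuntu system the codename can be obtained with lsb_release -cs. A sketch (the helper name docker_apt_line is ours, not part of any tool):

```shell
# Sketch: build the Docker apt source line for a given Ubuntu codename,
# instead of hardcoding "disco". On a live system: codename=$(lsb_release -cs)
docker_apt_line() {
    printf 'deb [arch=amd64] https://download.docker.com/linux/ubuntu %s stable\n' "$1"
}

docker_apt_line disco
# prints: deb [arch=amd64] https://download.docker.com/linux/ubuntu disco stable
```

The same function works for any release, e.g. `docker_apt_line bionic` for Ubuntu 18.04.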

Installing Docker

Now that we’ve added the proper repositories and are ready to download Docker from the internet using our terminal, it’s time to actually install Docker.

6. With all our repositories updated and downloaded, we’re all set to go; let’s just do a quick sudo apt update.

7. It’s important to make sure your system will install Docker from Docker’s own repository and not another Ubuntu repository. For this reason, we are going to check the package policy to confirm the source.

Type apt-cache policy docker-ce

apt-cache policy docker-ce

From here, you will see three things:

  1. Installed: whether Docker is installed (it will say none)
  2. Candidate: the version of docker-ce targeted for installation. If any resources were not installed properly earlier, this section will say none
  3. Version table: a list of Docker versions available for this particular version of Ubuntu
Docker version table
Docker install candidates

8. Type sudo apt install docker-ce

sudo apt install docker-ce

Once that final step has been completed, you should have Docker installed. A good way to find out is to run the same apt-cache policy docker-ce command and check whether a version of Docker is now listed as installed.

As you can see, after typing apt-cache policy docker-ce, the installed version is listed next to Installed and matches the candidate.

apt-cache policy docker-ce
Installed and candidate versions match showing successful installation

Another good way to see if Docker is installed is to bring up a list of commands that you can use to control and configure Docker. If Docker is properly installed, the terminal will start to accept Docker commands.


To get a list of commands:

Type docker

Docker commands

Now that Docker is installed, Ubuntu will recognize Docker commands in the terminal, allowing you to implement containerization and begin developing with Docker on your system.
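As a final sanity check, this small snippet reports whether the docker command is actually on your system:

```shell
# If docker is on the PATH, print its version string;
# otherwise print a reminder to install it.
if command -v docker >/dev/null 2>&1; then
    msg="$(docker --version)"
else
    msg="docker not found - re-run: sudo apt install docker-ce"
fi
echo "$msg"
```

A "Docker version ..." line means the install succeeded and the terminal is ready for Docker commands.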

03 Jun

Domain Squatting And How to Protect Yourself

What is Domain Squatting or Cybersquatting?

These days, your business’s success is largely dictated by your online presence. And your Internet presence is largely dictated by your SEO and how you rank in Search Engine Results Pages (SERP).

Having a domain that represents your business well and is easy for users to remember not only increases your business’s search engine visibility but will make your business look professional and legitimate.

But have you ever browsed a domain brokerage for your perfect domain, like yourbusiness.com, only to find that the domain is either taken or for sale at an exorbitant price? Maybe the price isn’t even listed, and you’re asked to contact the owner? Unfortunately, you might have come across an instance of domain squatting.

Domain squatting, or as many call it, cybersquatting, is the act of purchasing a domain with the intent to hold it and sell to the highest bidder, take advantage of Internet traffic to make money on ads, or purposefully block others from having access to the name for a variety of reasons.

Domain Squatting vs Domaining

Domain squatting is different from the similar act of domaining, which is purchasing domain names that might be valuable for the purpose of reselling or holding for personal use. The difference is that domain squatting often has malicious or extortionary intent. Domain squatting actually used to be perfectly legal, but the Uniform Domain-Name Dispute-Resolution Policy (UDRP) by ICANN, an international body dedicated to Internet fair competition and human usability, has made the act of domain squatting illegal in many cases and a grey area at best.

Why Would Someone Domain Squat? It’s profitable.

Though domain squatting or cybersquatting can have serious legal implications, many consider it an investment. Valuable top-level domains (like .com) are purchased first come, first served, then held, renewed and sold at exorbitant prices. For example, a generic domain like thebestdatahost.com could be considered a valuable investment and sold to a company looking for a marketable URL that will generate traffic. Domain squatters can buy this domain and hold it until they sell it at a profitable price.

Here’s where domain squatting becomes problematic

There have been high level court cases such as those with celebrities and fortune 500 companies where domains were either purchased and offered for extortionary prices or misused to gain web traffic. These cases are typically referred to as “bad faith”.

Celebrity Cases

One of the most famous cases of a celebrity winning a bad faith case was in 2000, when the singer Madonna successfully sued Dan Parisi to gain the rights to Madonna.com. The singer won due to The World Intellectual Property Organization (WIPO) ruling that the name was purchased for the purpose of capitalizing on Madonna’s fame and gaining web traffic. Because Madonna is a famous trademark, she was well protected against domain squatting.

But even celebrities don’t always win domain/cybersquatting cases. Such was the famous case involving Bruce Springsteen in 2001. The WIPO ruled that a domain squatter had legitimate interests in the website BruceSpringSteen.com after the cybersquatter argued points like the site being a fan site and that Springsteen’s name had no trademark protections.

Steps to protect yourself

The takeaway from the Madonna and Springsteen cases is that despite ICANN trying to create protections against domain squatting, it still happens decades after the above cases, so Internet users and business owners need to protect themselves.

Here are a few suggestions that can help you protect your business against domain squatters or cybersquatters.

  1. Purchase domains in advance: If you are strongly considering starting a business, a new website, a product line, etc., you should ABSOLUTELY purchase a domain name for your business and similar domain names for good measure. Domain squatters like to monitor for new businesses being registered so they can claim the domain and potentially auction it.
  2. Purchase a domain for as long as possible: Domains can be purchased for a maximum of 10 years at a time. Consider purchasing a domain for the max period of time so you don’t have to worry about expiration for a while.

As a tip: Sometimes you may be told that registering a domain for a longer period of time can increase legitimacy in the eyes of search engines leading to better SEO. There is no evidence of this. Your efforts should be focused on creating great content for SEO while having a domain for a longer period of time gives you peace of mind that the domain will remain yours.

  3. Don’t be picky about the domain name: Just because you don’t get the perfect domain name doesn’t mean your business is dead. Be creative with your domain name so you don’t have to pay crazy prices.
  4. Remember to renew or auto-renew your domain: Domains must be renewed annually, and domain squatters will sometimes take over a domain if the first owner forgets to renew. Be vigilant about renewing domains or set up auto-renewal if the option is available.
  5. Consider all top-level domains (TLDs): The TLD is the part of a domain after the dot, like .com. While .com is the strongest TLD right now, consider others like .net or the emerging .io.
  6. Know your rights: ICANN’s UDRP outlines what they consider to be domain squatting and how to file a case if litigation hasn’t resolved the issue. A link to their website will be included at the bottom of this post.

When does domain squatting become malicious? When there is an attempt to harm or confuse

Domain squatting can go beyond simply holding a website in hopes of extorting owners or auctioning it off. Much of the time, domains are purchased because they look similar to legitimate, high-traffic sites. This is often referred to as typosquatting. For example, let’s use buzzfeed.com. A squatter might purchase the rights to officalbuzzfeed.com or another similarly named domain to hijack web traffic and confuse everyday users of the legitimate site. From there, those domains could serve as ad pages for generating revenue, or spread malware or phishing campaigns.

The ACPA and How to Protect Your Business

The Anticybersquatting Consumer Protection Act (ACPA) is a US law passed in 1999, designed to prevent cybersquatters or domain squatters from profiting from a famous or trademarked name. The goal was to create a path of litigation if a business or brand feels its name is either being extorted or misused for the cybersquatter’s financial gain.

The ACPA is different from the UDRP because the ACPA is a path for litigation to recover damages from squatters. Unlike UDRP cases, these cases aren’t just for winning a domain name transfer, but also for awarding monetary damages.

When bringing a case under the ACPA, the court mainly looks for these factors:

  1. Intellectual property trademarks or other documented rights to a name
  2. Prior use of a domain for offering a good or service
  3. If a name is part of trademark fair use
  4. The intent to divert web traffic from the legitimate name or brand
  5. Any offers to sell and transfer the domain for profit without having any other use for the domain
  6. The accuracy of the owner’s contact information when registering the domain in question
  7. A history of the owner purchasing confusing or misleading domain names
  8. How famous or distinctive the name in the domain is and if it seems like it was meant to confuse.

ACPA: https://www.govinfo.gov/content/pkg/CRPT-106srpt140/html/CRPT-106srpt140.htm

After assessing the above, a court can award monetary damages and even domain name transfers. But domain name lawsuits don’t always end with the transfer of a domain name to the rightful owner. After trying the litigation process, the party seeking a domain can turn to ICANN and the UDRP to argue for the transfer of a domain.

What is the UDRP and ICANN?

In the early days of domains, domain squatting was a totally legal but emerging issue. But this changed April 30th, 1999, when The Internet Corporation for Assigned Names and Numbers (ICANN) published the Uniform Domain-Name Dispute-Resolution Policy.

ICANN is a nonprofit dedicated to keeping the Internet secure and competitive. It coordinates the Domain Name System (DNS), the system that lets humans find web pages by name.

What is DNS?

DNS is what lets us humans read URLs or domains as words, like www.zebrahost.com, instead of as the series of numbers assigned to a computer (its IP address). The DNS system serves two purposes:

(a) it makes domains more human friendly because people remember words better than numbers.      

(b) domains are no longer exclusively assigned to a unique computer IP and can be transferred between hardware.
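You can watch DNS do this word-to-number translation right from a terminal. A minimal example using getent, which asks the same resolver the rest of the system uses:

```shell
# Resolve a hostname to its address the same way a browser would.
# "localhost" is used here because it resolves on any machine.
lookup="$(getent hosts localhost || echo 'lookup failed')"
echo "$lookup"
```

The output pairs an IP address with the name, which is exactly the mapping DNS maintains for every domain on the Internet.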

Naturally, as competition for unique word-based domains formed, certain domain names became more valuable. This created a valuable market opportunity to either buy domain names in advance as investments or to purchase names similar to popular websites in hopes of diverting web traffic. This prompted the creation of the UDRP to address the issue.

ICANN enforces its UDRP through the agreements it forms with domain registrars, the companies that sell domain registrations. ICANN accredits each registrar and ensures a fair environment for the domain market. All accredited registrars must follow the UDRP.

The UDRP outlines what happens when domains are abused through squatting or bad faith. In bad faith cases, the UDRP first looks for arbitration. If the case cannot be settled in court, ICANN can step in and potentially transfer, cancel or suspend a domain.

You can read more about the UDRP and how to protect yourself from domain squatting here.

UDRP: https://www.icann.org/resources/pages/help/dndr/udrp-en

Final Thoughts

Domain squatting or cybersquatting is still an issue two decades after the passing of the ACPA and ICANN’s UDRP. The unfortunate truth about domain squatting is that it’s hard to eliminate because there is a very fine line between investing in domains and being a domain squatter / cybersquatter.

Many choose to invest in domain names for brand protection, future use or as an investment to sell later. The important takeaway is that these are all legitimate uses as long as they aren’t extortionary. The easiest way to protect yourself is to be proactive about your domain registration and renewal. Making sure you have a domain name ready for when you launch your business online can save you the headache of having to purchase from domain squatters. And purchasing extra similar names can make it harder for domain squatters or cybersquatters to capitalize off your brand and hopefully save you from an infuriating lawsuit.

27 May

Containers vs Virtual Machines

Containers vs VMs

At first it seems like containers are the natural evolution of virtualization. VMs have been around much longer, which makes them seem like old tech compared to containers. Containers and VMs also tend to be written about as if they compete for the future of cloud computing in a battle to survive. The reality is that they aren’t mutually exclusive but are different technologies, each with its own uses depending on your needs.

What is a Virtual Machine?

Virtual Machines (VMs) are virtual computing environments that utilize provisioned hardware and their own isolated OS. They are themselves separate computing environments that can coexist when managed through a hypervisor.

Virtual Machines are basically computers themselves that sit either within a hypervisor or as a window on top of a host operating system. Each virtual machine is referred to as a guest while the hosting hardware or OS is considered ‘the host’. Because VMs are computers themselves, their performance is dictated by hardware specs.

Just like physical computers, VMs provision hardware to use and boot from. This means that just like a physical computer, boot times can depend on the hardware quality and specs. Fast hardware will generally boot a virtual machine faster. Also, because a VM has its own contained operating system it will have all the capabilities of a full operating system.

A big advantage of VMs is that operating systems can be installed separately. This means different OSs can run next to each other on the same hardware. Want to run a windows server next to a Linux server? No problem. Want to run a Linux server on top of windows? Again, you can do that.

The environment within a VM is totally separate so that multiple tenants can occupy the same hardware. VMs create separate environments for sensitive data and apps so they are secure even with multiple tenants using the hardware.

VMs are legacy, proven and trusted technology. They are relied upon by large organizations that require security and can be scaled for multiple functions.

But in the day of web apps, entire operating systems aren’t always needed which has given rise to a new technology called containers.

What is a Container?

Containers are often thought of as the next phase of the virtual machine: they are newer and create separate virtualized environments for applications, which makes them sound like they serve the same purpose as virtual machines.

Containers are isolated environments to run single applications. They package app code and include all the necessary tools and packages needed to run and manage an app. They utilize a host operating system and share the same kernel between container environments. This means that containers only use what they need for the application itself while the host OS and libraries all function underneath.

Containers are useful when you only need an application hosted because the container packages and runs only what is needed for the app. Everything else is hosted by the main OS. When hosting multiple containers with apps this leads to resources being allocated more effectively, faster boot times and portability.

Keeping an environment light weight and decreasing boot times is important in the day of SaaS apps, but so is security. A strength of virtual machines is that if an OS becomes compromised in any way, it doesn’t affect other VMs because they are managed through a hypervisor not an operating system. There are exceptions like the hypervisor itself being compromised but this is exceptionally rare.

On the other hand, because containers don’t have their own isolated OS, the entire system could become compromised if a container is infected.

For this reason, even Google, one of the most significant contributors to the open-source Kubernetes container project, keeps its clients’ containers within separate VMs to maintain extra separation. Google runs bare containers only for its own workloads, such as Google Search.

The big appeal of containers, other than fast boot speeds, is their flexibility. Containers are lighter on processes. Without an entire operating system to manage per app, building, testing and managing the application within its own environment is much slicker. Plus, container environments can run almost anywhere, making them easy to move between computers.

Should I use a Container or VM?

Though many consider VMs and containers to be competitors or an evolution in technology, that‘s not necessarily true. Both have use cases and will be prevalent in data server storage, application hosting and virtualization for the foreseeable future.

The use scenario for container technology is mainly web applications, SaaS applications and web app programs. For these programs it isn’t always necessary to have an entire virtual computing environment, and isolating the application is generally enough. Using this approach can mean lighter workloads for your servers and faster boot times, which is critical for services.

Virtual machines will find use mainly where security, isolated OS environments, and the flexibility of a full computer matter. Having your choice of a full operating system, and the ability to perform multiple functions within that VM, will be useful for anyone who wants to go beyond running single applications. Examples include managing a website, having a database and perhaps a web application. A virtual machine will also allow you to store data for longer periods of time, as opposed to containers, which don’t store data once the container is deleted. You can also partition multiple VMs in separate environments to create another level of security.

But again, VMs and containers can work together and will likely be increasingly coupled in the future. Large IaaS providers who own the hardware will likely implement VMs to separate clients for additional security and streamlined management while using containers to run each customer’s services.

Both containers and VMs are critical components to the infrastructure of the modern web. New innovations are being pushed out every year bringing these technologies to the forefront of modern computing resulting in more efficient apps and services for end users.


Virtual Machine and containers both have their own advantages and disadvantages. Many developers are beginning to incorporate both extensively for added security, increased speed, increased scalability and the ability to isolate various computing functions.

Here’s a breakdown of both:

Virtual Machines:

Pros:

  • Entire operating system functions
  • Isolated OS
  • OS choice
  • Proven history

Cons:

  • Hardware dependent
  • More computing load to process

Containers:

Pros:

  • Lightweight
  • Support from the open-sourced community
  • Speed
  • Streamlined for application testing
  • Less computing load to process
  • Easy scalability

Cons:

  • Newer technology that’s not as proven
  • Potential to infect OS if compromised
  • Designed to have more limited functionality than an OS
20 May

How To Install a Plex Media Server On Ubuntu (4 Ways)

Plex Media Server

Plex Media Server allows you to build your own private multimedia cloud. The Plex service ranges from a free subscription with few limitations to a monthly paid service or perpetual license that provides extra mobile app functionality.

Plex is a simple way for users to create their own cloud for streaming movies, photos, music and TV shows from either an on-premise private cloud or a rented server through a hosting provider.

Ubuntu is a Linux distribution that most people who are familiar with Linux will have either used or at least heard of. Canonical, the company behind Ubuntu, is credited with bringing Linux desktop environments aimed at everyday users to the mainstream with its popular Ubuntu distribution.

Due to people’s familiarity with Ubuntu, and Linux’s overall efficiency compared to solutions like Mac OS and Windows for running servers, it’s a very popular choice for setting up server environments such as Plex.

Many users installing Plex on a Linux server are more familiar with Windows and Mac which both use clickable program installers. Ubuntu is a great choice for users accustomed to application installers because it comes with a package installer much like Windows and Mac OS as well as a few other ways to install applications (like an app store).

This tutorial is going to teach you how to install and set up a Plex media server using the built-in package installer, Ubuntu app store, command console and Snapd so that no matter which method you are comfortable with you can get your server up and running!

The App Store

Ubuntu App Store

Something that many popular Linux distributions have been getting good at is including a pre-packaged app store. The advantage of an app store (especially for Linux) is that it’s a convenient place to quickly search for numerous popular programs that have Linux compatibility.

1. To install Plex using the app store simply search for Plex using the upper righthand search icon. Once you’ve located Plex, tap the icon and click install just like you would using any other app store.

Plex on Ubuntu App Store
Plex File

2. To make sure you have Plex installed, go to your app drawer on Ubuntu by locating the set of squares at the bottom left corner of your Desktop dock. Scroll through your applications and you should see the Plex icon.

Ubuntu App Drawer

3. Click the Plex Icon which will take you to the Plex web interface where you can begin by making a Plex account. After creating an account, you can begin to connect your media folders with Plex.

Package Installer

If you want to simply download your Plex media server straight from Plex’s website, Ubuntu comes with a handy package installer much like you’d find on Windows or Mac.

1. First go to this link to download Plex: https://www.plex.tv/media-server-downloads/

2. Select Linux then click Choose Distribution. Click your Ubuntu version. If you have the latest version, just go with the latest version Plex has listed.

Plex Downloads
Choose Ubuntu Version

3. Once it’s downloaded click the package and select Open with – Software Installer. Click OK.

Ubuntu Package Manager

4. Once Plex is installed, locate your app drawer by going to the bottom of your dock on the left side of your desktop and click the set of squares. This will bring you to your app drawer where all your Ubuntu apps are accessed. Scroll down till you find Plex Media Server.

Ubuntu App Drawer

5. Click Plex Media Server to be directed to the web interface where you can begin syncing your content with Plex.

Terminal (Command Window)

Another way to install Plex is via terminal. While the terminal can be confusing for certain tasks when you are new to Linux, many long-term Linux users enjoy using the terminal because of its sheer number of capabilities when maneuvering around the Linux operating system.

Installing Plex is a great way to get comfortable with one of the most useful commands on Ubuntu: sudo apt-get install. sudo apt-get install is the command on Ubuntu that lets you install apps straight from the web without downloading a package. It’s one of the most powerful and convenient tools on Linux.

1. First go to your app drawer and search for terminal. The terminal might not be visible from the normal selection of apps. If not, go to the white search bar and type in terminal and it will show up. Click on terminal which should bring you to a black box ready for you to type a command.

2. First, update your web repositories by typing sudo apt-get update

3. Then, type sudo apt-get install plexmediaserver

sudo apt-get install plexmediaserver

4. If you input the command correctly, you should be able to go to your app drawer and see the Plex Media Server icon.

Plex Icon Towards Bottom Right

5. Click the Plex Media Server icon which will bring you to the Plex web interface where you can begin to sync your media with Plex.
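If the icon doesn’t appear, you can also check on Plex from the terminal. A small sketch, assuming a systemd-based Ubuntu; it is harmless to run whether or not the install succeeded:

```shell
# Look for the plexmediaserver unit and report its state.
if systemctl list-unit-files 2>/dev/null | grep -q plexmediaserver; then
    status="$(systemctl is-active plexmediaserver || true)"
else
    status="plexmediaserver service not found"
fi
echo "plexmediaserver: $status"
```

An "active" result means the server is already running in the background and the web interface should be reachable.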

Install via Snapd

One of the greatest challenges Linux desktops face is the lack of a simple, universal way to install applications and keep them updated across the near infinite number of Linux distributions. Canonical, the company behind Ubuntu, developed Snapd in response.

Snapd is package-management software that can be installed on a variety of Linux distributions and helps keep apps consistently updated and compatible across distributions, even beyond Ubuntu. In fact, it is now the primary way of installing applications on newer Ubuntu versions.

Though installing Snapd is an extra step and isn’t strictly required if you just want to run a Plex server, Snapd and Flatpak (another popular install-management program) are arguably the future of Linux app installation. They make app installation uniform, allow apps to be uploaded and updated easily across app stores, and provide a simple install command in the terminal.

If you want to try Snapd here’s how you install Snapd and Plex via Snapd on Ubuntu.

1. In the terminal type sudo apt-get update. This will update the files you can download from the internet.

2. Then type: sudo apt install snapd

sudo apt install snapd

3. Finally, type: sudo snap install plexmediaserver

sudo snap install plexmediaserver

4. Plex will now be installed as a Snap file and can be accessed via the app drawer. Scroll down till you find the Plex Media Server icon. Click it to go to the Plex web interface and start syncing your media with Plex.

Plex installed on bottom right
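You can also double-check the Snap installation from the terminal. A quick sketch, assuming snapd is present:

```shell
# Ask snapd whether the plexmediaserver snap is installed.
if command -v snap >/dev/null 2>&1; then
    result="$(snap list plexmediaserver 2>/dev/null \
        || echo 'plexmediaserver snap not installed')"
else
    result="snapd not available on this system"
fi
echo "$result"
```

If the snap is listed with a version and revision, the install succeeded and the app-drawer icon should follow shortly.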

Setting up your Plex Media Server to Sync Content with Ubuntu

Plex Setup Screen

1. On the Server Setup page you’re going to want to click Add Library which will connect Plex with your media folder directories which it will use as server directories.

Select Music Category

2. In this example we’re going to add our music directory. First, click the Music icon, which should turn yellow when selected. This tells Plex that the directory you are connecting is related to music. (You have the option to rename it if you wish.)

3. Next you will be asked to browse for a media folder. Click this and you will be brought to a directory. You’re going to want to go all the way back to the / folder so that you can see the Music folder. Scroll through the right column to locate the Music folder, click it, then click Add.

Browse Media
Connecting Music Directory

4. Any media added to the Music folder will automatically sync with Plex. This means that if you download a song, for example, that same song will show up anywhere you have access to Plex, as long as your server is online.

5. To add your photos, do the same thing, but instead add the Photos library type, then repeat the same steps to locate your Pictures folder; like music, it will sync with Plex.

Connecting Pictures Directory

6. You should be all set to go with every library type you wanted to sync with Plex. But sometimes those library types aren’t immediately shown on Plex’s sidebar for easy access and are instead hidden. Scroll to the bottom of this article to see how to fix this if it is happening to your server on first use.

You should now have your media ready to sync with Plex. Anything added to the folders you’re syncing with Plex will also sync across any platform you have Plex installed on as long as your Plex server is online. If you shut off your server or it is offline for any reason – Plex will tell you it cannot find your media and you will simply have to turn on your server again to access your media.

Here’s what the music and photos will look like in Plex once synced.

Music Screen in Plex
Pictures in Plex

After you see all your media in Plex that you want synced, try going to either the Plex app on your phone or another computer to login to your Plex account. If your Plex server is set up correctly and is online, all the media you’ve synced in these steps will be accessible.

How to customize your Sidebar

Plex doesn’t always come pre-configured to make your music and pictures accessible from the sidebar. After setting up your Plex Media Server you might have to go in and manually add these. To do this:

1. Go to your media server name and click on it.

2. Underneath, you should see options such as Music, Pictures, or anything else not automatically added to your sidebar.

3 dots menu
The 3 Dots Will Give You the Option to Pin Music

3. Select the media folder you wish to add and click the 3 dots to the right. You should see the option to pin that folder.

Plex Media Server with Music in the Sidebar

4. Once the folder is pinned it will now be accessible from the sidebar when you login to your Plex Media Server.

As you can see, the music and photos icons are at the bottom of the sidebar when accessing the Plex media server.

Common Problems with Plex Servers

1. Cannot access server from a virtual machine despite the machine being turned on.

You need to make sure your virtual machine can connect to the internet. To start, go to your VM network settings in your hypervisor and make sure the VM can use the host machine’s connection.

Then ping the VM’s IP to make sure there is a response.

You will also need to make sure the proper ports are open (typically port 22 for SSH access, and port 32400 for Plex itself).

This will allow you to reach the web through your virtual machine, which is typically segregated from the main system and would otherwise prevent you from connecting to your Plex server.

Another solution is to forward the server’s IP through your router.
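The checks above can be scripted. A rough sketch, where VM_IP is a made-up example address you’d replace with your VM’s real one:

```shell
# VM_IP is an assumed example address - substitute your VM's actual IP.
VM_IP="${VM_IP:-192.168.1.50}"

# 1. ICMP reachability: one packet, 2-second timeout.
ping -c 1 -W 2 "$VM_IP" >/dev/null 2>&1 && ping_ok=yes || ping_ok=no
echo "ICMP reply from $VM_IP: $ping_ok"

# 2. Is Plex's web port (TCP 32400) open on the VM?
nc -z -w 2 "$VM_IP" 32400 >/dev/null 2>&1 && port_ok=open || port_ok=closed
echo "port 32400: $port_ok"
```

A ping reply with a closed port usually points at a firewall or Plex not running; no ping reply at all points at the VM’s network configuration in the hypervisor.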

2. Remote Access Disabled.

Remote access is what allows you to access your Plex server beyond your home network. This can be accessed by clicking on your server name in the Plex sidebar then going to the remote access tab and enabling remote access.

There are a variety of reasons remote access is disabled.

First, try to manually re-enable remote access on your Plex server, as sometimes the service can glitch.

Port forwarding can also sometimes be a solution for remote access not working.


If remote access is still not working you might have to specify the port so that an exception can be made in your firewall.

  1. In Plex remote access check the box that says specify public port. The public port for Plex is 32400.
  2. Go to your firewall settings and create a new rule that will allow for port 32400 to be exempt so that Plex can get through the firewall.
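On Ubuntu, step 2 is often done with ufw, the default firewall front end. A cautious sketch; it only makes changes when run as root with ufw installed, and otherwise just prints the command it would run:

```shell
# Plex's default public port.
PLEX_PORT=32400

# Add the firewall exception only when we can actually do so.
if command -v ufw >/dev/null 2>&1 && [ "$(id -u)" -eq 0 ]; then
    ufw allow "${PLEX_PORT}/tcp" || echo "ufw command failed"
else
    echo "would run: sudo ufw allow ${PLEX_PORT}/tcp"
fi
```

After adding the rule, re-test remote access in the Plex web interface; the port exception plus the "specify public port" checkbox covers the most common firewall blocks.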

The unfortunate reality of network connectivity is that there are numerous possible problems, and numerous solutions, when enabling network access. These are only a few examples that users sometimes find solve their remote access problems.

If none of these works there are numerous resources online that users can turn to such as forums, reddit and pages from cloud companies that write about issues involving network connectivity.

If you are an individual setting up a Plex server on a dedicated machine (like an old laptop or desktop) that you intend to keep turned on, you may not run into issues like the ones listed above because Plex has direct access to the internet. But if you are using a virtual machine, a hosting provider, or a firewall, you may.

If you are using a hosting provider, work with them to make sure Plex can access the internet through the VM it is installed on. Your provider should provision a server that is remotely accessible and has no trouble reaching the internet.

ZebraHost has 24/7/365 customer support via ticket, email, or phone, meaning that if you are hosting with us you can contact us at any time and our team will be happy to assist you with your server.

If you have questions, we’re happy to be reached at Sales@zebrahost.com.

01 Aug

How to build a membership-based site in minutes

The ZebraHost website builder, powered by Weebly, lets you easily set up a membership program on your site and open up a variety of opportunities: a photography site where members can download special prints, for example, or a registration portal where only conference attendees can access event information.

There are many possibilities.

With the Pro Plan you can have up to 100 members, and with the Business Plan, unlimited members.

The login and registration windows are sleek, streamlined, and are automatically added to your site navigation. You can also hide the login link if you wish. There’s also an option to link the login modal to a button placed anywhere on your site.

With the membership option, you can take this even further through group creation and page management.

There are two ways someone can become a member of your site:

  • By signing up through a registration link on your site
  • By you manually inviting them to become a member

More Membership Features:

  • Customize invitation emails and direct invite links
  • Bulk add members via CSV
  • Member and group search
  • Password reset for individual member accounts

Start building your membership site

01 Mar

Hyper-Converged Infrastructure

What is hyper-converged infrastructure (HCI)?

Traditionally, data centers have relied on a separate stack of hardware in order to function: a layer for compute, a layer for storage, and a layer for networking. With that stack come the experts necessary to set up, maintain, and troubleshoot these complex infrastructures; a deep level of technical knowledge is needed to run these technology silos. These traditional setups are still widely used, but hyper-converged systems are gaining momentum as IT decision-makers see the tremendous benefit these systems can bring to their departments.

The issue often comes down to scale

Traditional IT infrastructures are difficult to scale. Once the stacks reach the end of their expected life-cycle, or the storage layer fills to capacity, a forklift upgrade, ripping out and replacing the whole stack, is often the only way forward. This is costly, risky, and time-consuming.

As companies grow, so does the infrastructure needed to support that growth. Data storage inevitably increases, as does the need for constant availability. With increased availability comes a decreased tolerance for downtime, and depending on the business, the cost of downtime can be monumental. Growth also means more hardware, more technical silos, and more energy consumed powering and cooling those rows of cabinets.

Enter hyper-converged infrastructure

Hyper-convergence combines the layers of the traditional infrastructure into a single “box”. This has obvious benefits from a dedicated hardware perspective but also allows IT departments to utilize their resources much more efficiently. The reduction in hardware footprint means a much lower cost of ownership and much lower energy usage.

Hyper-converged infrastructures are gaining ground in data centers. The cost and resource savings are impressive, not to mention that these systems can be set up in a short amount of time and then managed from a single pane of glass.


©2020 ZebraHost, LLC | All Rights Reserved | Powered by ZebraHost