Cloudberry Backup review

Yes, I’ve decided to write another review.  It was never my plan when I started this blog to be a review site, but I felt this product was too good, and too few people had heard of it.  Cloudberry Backup is a personal backup product for protecting your systems that can use almost all of the public cloud providers out there as a storage target.  One look at the cloud storage selection screen below shows the dizzying array of options.


The interface is nice and clean too.  Let’s walk through how you would set up a backup to Amazon Glacier, a very cost-effective way to store data that you don’t need to retrieve frequently (or quickly), which makes it perfect for personal backup.

I realize it’s ironic for me to review a backup product, considering I work at a backup company.  However, Cloudberry targets the individual and small business segments, a much different market than our product serves.  It could also be a great fit for backing up a home lab, something I imagine many of my readers have.

Configuring Cloudberry Backup

First, you need to create an account with Amazon.  If you haven’t done that yet, Cloudberry placed a helpful link in the dialog box that will take you to AWS account creation.  Once that’s done, you name the storage account and enter the Access Key and Secret Key from your AWS account.  After inputting those keys, you’ll be able to select the Vault name from the drop-down box.  That vault is where data for this account will be stored.


Next, you’ll create a backup plan to store data in the account you created.  A really nice feature of the product is that it stores your backup plans with the data, so if you ever have to restore, you’ll have everything you need.  It can also synchronize with data already stored in the cloud, which was very helpful for me recently when I had to rebuild my computer and re-install Cloudberry: I didn’t have to upload all the data to the archive again.  Here is what my backup plans look like.  Obviously, you’ll create your own to match the data you have.


First, you’ll select the type of backup.  Cloudberry gives you the option to back up locally first and then replicate to the cloud, which it calls a Hybrid Backup.  We’ll be doing a Cloud Backup.


Then, you pick a backup storage account (or create one):


Name the backup plan and leave the box checked to store the plan’s configuration with the backups.


We’ll be doing a regular backup.  Cloudberry has the option of combining a lot of small files into what it calls an Archive Backup, but that won’t be needed here.  It can be useful if you have a lot of small files, because it reduces the number of requests to the cloud, which is something providers charge you for.  You can also select several options around how VSS (Volume Shadow Copy Service) treats your files.  VSS is a Windows service that allows open files to be backed up when they would normally be locked by the program that opened them.  For a Documents folder, I recommend turning on VSS and using the System VSS provider, as it ensures documents you have open still get protected.
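To see why bundling small files matters, here is a rough back-of-the-envelope sketch in Python.  The per-request rate is an assumption for illustration only; real request pricing varies by provider, region, and operation, so check your provider’s pricing page.

```python
# Hypothetical per-request pricing; real rates vary by provider and region.
COST_PER_1000_REQUESTS = 0.05   # USD per 1,000 upload requests (assumed)
SMALL_FILES = 20_000            # e.g. a photo library full of tiny files

# Uploading every file separately means one request per file.
individually = SMALL_FILES / 1000 * COST_PER_1000_REQUESTS

# An Archive Backup bundles them, so the upload is a single request.
archived = 1 / 1000 * COST_PER_1000_REQUESTS

print(f"Uploading each file: ${individually:.2f} in request charges")
print(f"One bundled archive: ${archived:.5f}")
```

The absolute numbers are small either way, but the ratio (20,000 requests versus one) is the point, and it grows with every incremental backup run.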


Select your backup source:


Next are a number of Advanced Options that you may or may not need.  Cloudberry gives you some fairly sophisticated options to skip files, or to only back up files of a given type (like PDF and DOC files).  The full version offers Compression and Encryption; the very good free edition for home use doesn’t have those options, but it does provide all the data protection functionality you might need.

After going through the Advanced filters and the Compression/Encryption options, you can set a retention policy for your data.  This allows you to keep multiple versions of your files, if needed.  For most home use, you’ll probably want to keep just the latest version.

Then the backup can be scheduled, as you’d expect.  The product also has a Real-Time feature, which constantly monitors a given backup set and copies changed files to cloud storage.  This might be useful for a Documents or Projects folder, where you wouldn’t want to lose a day’s work.  In backup terms, your RPO (Recovery Point Objective) would be effectively zero.
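Cloudberry doesn’t document how its Real-Time feature is implemented, but the core idea behind change monitoring can be sketched in a few lines of Python.  This is a hypothetical polling sketch, not Cloudberry’s code: it compares file modification times between scans and reports anything new or changed, which a backup agent could then copy to cloud storage.

```python
import os

def scan_mtimes(root):
    """Walk a folder and record each file's last-modified timestamp."""
    mtimes = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            mtimes[path] = os.path.getmtime(path)
    return mtimes

def changed_files(previous, current):
    """Return files that are new or modified since the previous scan."""
    return [path for path, mtime in current.items()
            if previous.get(path) != mtime]
```

A real agent would use OS-level change notifications instead of polling, but the before/after comparison is the same: anything in the changed list gets shipped to the cloud, keeping the effective RPO near zero.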

Your last option is to determine how you want to be alerted if backups fail.  The system can email using Cloudberry’s service, or you can specify your own server if you happen to have one (Gmail provides SMTP service to its users, for example).  The last screen before you create the plan is a summary screen to validate your settings.
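If you point the product at your own SMTP server, the alerting is just standard email under the hood.  As a rough illustration (this is not Cloudberry’s code, and the sender address and app password are placeholders), here is how a failure alert might be assembled and sent through Gmail’s SMTP endpoint in Python:

```python
import smtplib
from email.message import EmailMessage

def build_alert(plan_name, error):
    """Assemble a plain-text failure notification for a backup plan."""
    msg = EmailMessage()
    msg["Subject"] = f"Backup failed: {plan_name}"
    msg["From"] = "me@example.com"   # placeholder sender
    msg["To"] = "me@example.com"     # placeholder recipient
    msg.set_content(f"The backup plan '{plan_name}' failed:\n{error}")
    return msg

def send_alert(msg, password):
    """Send via Gmail's SMTP endpoint (requires an app password)."""
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login("me@example.com", password)
        server.send_message(msg)
```

Note that Gmail requires an app-specific password (or OAuth) for SMTP clients like this, rather than your normal account password.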


Cost Analysis

So, this is where a product like Cloudberry really shines.  I was previously paying a service over $50/year for a limited (300GB) amount of storage.  I was also completely locked into their service, at their mercy if they changed it or went out of business.  Years ago, I used a product that did exactly that, and I had to start my backups all over.  For a while, I was in a scary, unprotected state where if my system had taken a dirt nap I would have had no way to recover.

Anyway, I’m currently spending around $1 a month keeping my data in Glacier.  Considering that the product is either free, or $30 for the paid version, the savings add up after the first year.  As you can imagine, YMMV, but I expect the flexibility and savings will really appeal to my readers (who I imagine are a bit more on the technical side).
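The math is simple enough to sanity-check yourself.  The $0.004/GB-month Glacier rate below is an assumption based on published US-region storage pricing at the time of writing, and it ignores request and retrieval charges, so verify against the current AWS pricing page before relying on it:

```python
# Storage-only comparison; request and retrieval fees are not included.
GLACIER_RATE = 0.004   # USD per GB per month (assumed US-region rate)
STORED_GB = 300        # the amount my old service capped me at

glacier_monthly = STORED_GB * GLACIER_RATE
glacier_yearly = glacier_monthly * 12
old_service_yearly = 50.0

print(f"Glacier: ${glacier_monthly:.2f}/month (${glacier_yearly:.2f}/year)")
print(f"Old service: ${old_service_yearly:.2f}/year")
print(f"Yearly savings: ${old_service_yearly - glacier_yearly:.2f}")
```

At that rate, 300GB works out to about $1.20/month, which matches my real-world bill and leaves plenty of headroom under the old $50/year service even after the one-time $30 license.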


Cloudberry Backup Desktop Edition is a great product for backing up your home computers.  It offers amazing flexibility over where you send your data, including all the major cloud players.  If you need a product to back up your personal stuff (and we all should!), I highly recommend it.

Disclaimer: Cloudberry Labs provided me a free license to the paid edition as a vExpert.  However, I can honestly say I would continue using the product even without that consideration.

What is DevOps?

I realize this topic has probably been beaten to death, but I had to put together a presentation for a group of my peers and I thought it might work as a blog post.  Plus, adding it here helps me internalize my thoughts about a topic.  I hope some of it is a useful distillation of the information out there on this huge topic.  If you find it interesting, I highly recommend checking out one of the books I list below.

A Definition:

DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support.  It’s also characterized by operations using many of the same techniques as developers.

Think automated infrastructure provisioning.  You’ll frequently hear the phrase “infrastructure as code.”  What that means is that provisioning activities are driven by a recipe that can be treated like a program.  For example, the application Puppet has a concept called “manifests,” which are used to describe a machine’s desired configuration and to determine whether running machines comply with that specification.
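As a concrete (and entirely hypothetical) illustration, a minimal Puppet manifest might declare that a web server package should be installed and its service running.  Puppet then converges any machine that drifts from this declared state:

```puppet
# Hypothetical manifest: declare the desired state, not the steps.
package { 'nginx':
  ensure => installed,
}

service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],  # install the package before managing the service
}
```

The manifest lives in version control like any other program, which is exactly what makes it “infrastructure as code.”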

The Three Ways:

In “The Phoenix Project,” Gene Kim talks about the Three Ways, methods used to continuously improve IT operations.  These have been taken from manufacturing theories used in many organizations today.  (credit for the images goes to Gene Kim on his website)

The First Way


The First Way emphasizes the performance of the entire system.  It also encourages IT to look at Operations as a Customer of Development.  It consists of Dev creating services which are transitioned to Operations and then consumed by the Business.

The Second Way


The Second Way is all about feedback loops.  There should be continuous feedback about the results of the product delivered to Operations by Development.  This allows continuous improvement to be built in.

The Third Way


The Third Way is about the culture of the organization.  It’s about creating a culture that fosters two things: continual experimentation, and an understanding that repetition and practice are the prerequisite to mastery.  IT can be very resistant to change, and failures can result in finger-pointing, creating an “us versus them” environment.  I think this way is probably the hardest to implement, because it can require a real mind shift in the people of the organization.

Common DevOps Practices

Let’s talk about some of the more common practices organizations use to implement a DevOps culture.

Version Control

This is key to the concept I mentioned above around “infrastructure as code.”  You need some way to control the configuration of your systems, and the best way to do that is a version control system.  Many companies use Git and GitHub for this, although you might also see systems like SVN and CVS.  This is also where products like Puppet and Chef come in, as they provide a way to consume these “recipes” when building and maintaining systems.

Automated Testing

Instrumental in implementing the Second Way, some type of automated testing should be built into an environment so that continual improvements can be realized.  This will also help minimize issues creeping into Production.  Some examples of testing frameworks include Pester and Cucumber, both of which are designed around BDD, or Behavior-Driven Development.  A good read about what BDD is and why it can help improve your processes and app development is here.  You can also find a good intro to testing methodologies here.
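Pester (PowerShell) and Cucumber (Gherkin/Ruby) each have their own syntax, so to keep things generic, here is the shape of a behavior-style automated check written in plain Python.  This is a hypothetical example, not tied to either framework; the function under test and the 10% threshold are made up for illustration:

```python
def disk_free_percent(total_bytes, free_bytes):
    """The 'system under test': report free disk space as a percentage."""
    if total_bytes <= 0:
        raise ValueError("total_bytes must be positive")
    return 100.0 * free_bytes / total_bytes

def test_low_disk_triggers_alert():
    # Given a nearly full disk
    free = disk_free_percent(total_bytes=1000, free_bytes=50)
    # When we check it against the alert threshold
    should_alert = free < 10.0
    # Then an alert should fire before Production fills up
    assert should_alert

test_low_disk_triggers_alert()
```

The Given/When/Then comments mirror the BDD style: the test describes the behavior you care about, and running it on every change is what closes the Second Way’s feedback loop.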


Virtualization

This is almost an obvious one, but the advent of virtualization enabled the implementation of DevOps throughout organizations.  It made it much simpler to deploy systems automatically, based on a configuration described by code.  Technologies like containers and Docker have taken this to the next level by abstracting even further from the underlying hardware.  New tools like NSX and network virtualization extend the promise of “infrastructure as code” by allowing Ops to control not only the systems, but also the networks that connect them.

More Reading

Here are some good resources if you want to delve deeper into the world of DevOps and improve your environment.