Previous History:

I was very familiar with the 4.x Home version and used it extensively for all my backups. It covered over 10 personal workstations and servers in my household running multiple versions of Windows and Linux.

My infrastructure configuration contained 1 "server" that housed the backups of all my other machines. CrashPlan Home had the very powerful ability to store backups in the cloud (on CrashPlan's servers) and also to send backups to another machine in your account. Essentially you could designate a device of your own choosing to collect the backups of all your other devices.

I completely destroyed my 4.x infrastructure once it was announced that the Home product was being discontinued and used other methods for backup; I did not wait for the EOL date or the conversion to the reduced CrashPlan Pro rate.

The client is less powerful than the previous Home version; all peer-to-peer functions have been stripped out.

The client is slower, both in communication to Code42's servers over the internet and to local and remote drives within my own network. The program's throughput is less than what my disk and network subsystems can provide, so I have to conclude the code is either throttled or less efficient.

With the loss of the peer-to-peer functionality, the client does not play well with network destinations.

Mapped network drives do not work as backup destinations.



iSCSI does work, so a remote backup location is possible, but it is still slow. **This has been improved with optimizations, please continue to read.**

(NEG) Despite the hard-coded file exclusions, it backs up a lot of data. Personally I miss being able to back up virtual disk files like .vdi or .vmdk, but I know I can take alternative measures to back them up.

(NEG-20 Days) The slowness is annoying, especially when copying to a location local to the client, but once your dataset is built, keeping it up to date is a painless process.

(NEG-60 Days) Local and remote backups are capped at ~7 Mbps. Not great, but I am OK with it as long as it keeps working at that rate. At that rate the initial 1 TB backup takes on the order of two weeks, but after the initial backup, keeping my data up to date will be simple.
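A back-of-the-envelope check on that estimate, assuming the ~7 Mbps cap is megabits per second and ignoring any compression or deduplication (which can shrink the bytes actually sent):

```python
# Rough estimate: how long does an initial backup take at a capped rate?

def backup_days(dataset_bytes: float, rate_mbps: float) -> float:
    """Days to transfer dataset_bytes at rate_mbps (megabits per second)."""
    bits = dataset_bytes * 8
    seconds = bits / (rate_mbps * 1_000_000)
    return seconds / 86_400

# 1 TB at 7 Mbps
print(round(backup_days(1e12, 7), 1))  # ~13.2 days
```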

(POS) It defaults to backing up continuously, so once your data is backed up, keeping it up to date is an automatic process.

(POS) I really, really like using iSCSI as a local destination for my clients. It is very versatile and resilient. If my iSCSI disk is offline, local backups are not available; once it becomes available, CrashPlan starts right back up and backs up, with no reboot of the client and no restart of the service.

(POS) $10/month per client for unlimited backup to the cloud is cheap and really easy to plan for. You don't have to worry about the cost of upload/download transfer or storage.

(POS) (7.7.0-new) I started up the CrashPlan client after 7 days and it did a deep pruning and backed up 30 GB to my online and onsite repositories in under 7 hours; that feels like an improvement. I am trying to find evidence in the logs.

The Great Workaround

Set up an iSCSI target to hold all of your local data. It can be Windows, Linux, or BSD. I am using StarWind Software Virtual SAN because Windows is the primary OS I am familiar with; I was only doing 2 nodes and it was a proof of concept.

- Create a LUN for each machine you want to back up and set it up on each client. My test laptop has a 20 GB LUN, and my test desktop has a 1.05 TB LUN.
- Give each client you wish to back up a new drive letter; in my case with Windows clients, it was the R: drive.
- Configure CrashPlan with one backup set to the cloud and another backup set to the R: drive for local backups.
- Use a robocopy script to copy over all the files CrashPlan excludes by default, like *.vdi and *.vmdk. They won't be backed up to the cloud, but at least you will have a second copy of them.
- If you don't want to use CrashPlan for your local backup, robocopy everything over to the R: drive.
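On the client side, attaching the LUN can be scripted. This is a hedged sketch using Windows' built-in iSCSI initiator cmdlets; the portal address and drive letter are placeholders for your own environment, and the disk-initialization step only applies the first time:

```powershell
# Register the target portal (replace with your iSCSI server's address)
New-IscsiTargetPortal -TargetPortalAddress "192.0.2.10"

# Connect to the discovered target and make the connection survive reboots
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true

# First time only: initialize, partition, and format the new disk as R:
Get-Disk | Where-Object PartitionStyle -eq "RAW" |
    Initialize-Disk -PassThru |
    New-Partition -DriveLetter R -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "CrashPlanLocal"
```

Because the connection is persistent, the R: drive reattaches on its own whenever the client and the iSCSI server are both up, which is what makes the CrashPlan resume behavior described above work without any manual steps.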

Tweaks for Better Performance

Increase the CPU usage settings for Idle / Active from 80 / 20 to 100 / 100. After changing this setting I saw higher CPU usage, but nowhere near total CPU usage. I suspected, and a CrashPlan technician confirmed, that the engine is not multithreaded, so a 2-core system will max out CPU usage at 50%, and 4 cores at 25%.
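That per-core ceiling is simple arithmetic: a single-threaded engine can saturate at most one core, so the maximum total CPU percentage it can show is 100 divided by the core count.

```python
# Max total CPU % a single-threaded process can reach on an N-core machine.

def single_thread_cpu_cap(core_count: int) -> float:
    """One saturated core out of core_count, as a percentage of total CPU."""
    return 100 / core_count

print(single_thread_cpu_cap(2))  # 50.0
print(single_thread_cpu_cap(4))  # 25.0
```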

Exclude the backup directory (R:\backup_Crashplan) and the CrashPlan cache (C:\ProgramData\CrashPlan\cache) from Microsoft Defender or any other virus scanner. There is no need to scan the CrashPlan files; let Microsoft Defender focus on the source files.
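A sketch of those exclusions as Defender commands, run from an elevated PowerShell; adjust the paths to match your own backup destination and cache locations:

```powershell
# Exclude the CrashPlan destination and cache directories from real-time scanning
Add-MpPreference -ExclusionPath "R:\backup_Crashplan"
Add-MpPreference -ExclusionPath "C:\ProgramData\CrashPlan\cache"

# Verify the exclusions took effect
(Get-MpPreference).ExclusionPath
```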

Exclude large files that are already highly compressed, like .mkv, .pst, .rar, .ress (data files from the game Tacoma), and .zip, by modifying the file C:\ProgramData\CrashPlan\conf\my.service.xml. Here are my exemptions:

(7.7.0-new) The file my.service.xml is now gone, and I cannot find any XML file that offers similar control. I do know that my exemptions are still in place because I can see evidence of them in the following files:

C:\ProgramData\CrashPlan\log\service.log.0



C:\ProgramData\CrashPlan\log\app.log



Watching for Stalls, and Rectifying - 2019.10.23

Turned a corner and not looking back - 2019.11.06

Updated: 2020.02.26

This is a living review of CrashPlan for Small Business 7. Please refer back to it frequently, because the plan is to continuously review it and update the community that either relies on or is considering CrashPlan after its directional change to supply only the Small Business and Enterprise versions.

On Feb 20, I am adding my notes and updates for the 7.7.0 Build 883 update; all differences will be noted with (7.7.0-new).

I really liked the CrashPlan Home version. It was powerful. It had the ability to back up whatever I wanted to the cloud and also the ability to send backups to another device attached to your account. I had over 10 machines sending backups to 1 device, and that became my "server". Yes, it was a peer-to-peer infrastructure, but it had a lot of advantages. It was fast, it encrypted all my data, and it was resilient; the software just worked, and everything I needed was backed up in one location. Restores were fast, and it did save me at times when I needed to recover data.

I have accepted that the new product that Code42 has given us is different. From my 60 days evaluating it, I have noticed the following things: I wanted to love the new CrashPlan for Business. After 20 days of usage, I could only sort of like it. After 60 days of usage, it is not bad and I can learn to live with its direction.

OK, so the new CrashPlan for Small Business is not perfect, but I have developed a workaround that works and is pretty clean. I want a solution where my data is backed up in 2 locations: first in the cloud, and second on-site on a machine other than the one where my data lives. This is my workaround.

Here is my robocopy script. I needed to add the attrib command because there is a bug where robocopy will sometimes hide the destination directory:

robocopy g:\ R:\backup_robocopy_G *.vdi *.vfd *.vhd *.vmdk *.vmem *.vmsd *.vmx *.vmxf /S /PURGE /XD $RECYCLE.BIN OneDriveTemp Recoverybin /A-:SH
attrib -s -h R:\backup_robocopy_G

I think this is a decent workaround.
I know that if you don't use CrashPlan for your local backups, then setting up an iSCSI target might be overkill, but if it is used for some machines, then the process is consistent, and with iSCSI you don't need to map network drives; it just attaches as long as the client and server are up.

UPDATE: 2019.10.24 After doing tweaks and overcoming my stall, which I documented below, it has been smooth sailing. Feel free to comment here or on reddit.

I am researching what will improve performance and speed up backups; here are my attempts and how they behaved. For local backups from my client to a remote drive via iSCSI, I sometimes get 35 Mbps for highly compressible data, but for mostly compressed data I get 6 to 7 Mbps. For cloud backups I normally see 5 to 7 Mbps. I see no improvement on backups to crashplan.com, so something on their side, or in how the engine is programmed, is slowing things down.

In my instance, local backups were just stopped at 29% for days and would not go anywhere. I could see that the engine was running, but nothing was backing up and nothing useful was being written to the log file C:\ProgramData\CrashPlan\log\backup_files.log.0.

Using Resource Monitor, I could see it was spending a lot of time reading the file sharedassets3.assets.ress in a directory for the game Tacoma. It was a 2 GB file and it never backed up, but the engine was always working on that file. I surmised that CrashPlan was trying to compress this file and was having difficulty. Once I exempted the .ress extension, the file was moved right into the destination without incident.

Since the 23rd, CrashPlan has continued the local backup, and just before the start of the month it finished. I now have my 1 TB of crucial data backed up on the CrashPlan servers and on a local iSCSI disk. And I have to say it is pretty good.
With average usage, I generate 7 to 15 GB of new data or overwrites of previous data, and the system can handle it without breaking a sweat. I have done test restores of remote and local data, and the process has been clean; my data is backed up and able to be recovered if the need arises. Granted, I only back up 2 machines, and the second will be decommissioned soon because it was always a proof of concept, but while the behavior of CrashPlan is different, I consider this tool much more of an asset than a liability for me to use.