Testing alternatives to Dropbox

Dropbox is the gold standard as far as reliable syncing goes, but it has its drawbacks. It seems that some people are able to get good performance for Scrivener project syncing on other services, but recommendations of “it works for me” are merely anecdotes.

I want evidence.

But how to collect it?

I’d like to mine the expertise here in designing some sort of testing script that produces & changes files in a manner that mimics Scrivener. I want an objective record of how reliably files are synced by various services so I (and others) can determine if they meet their needs. I hope that such a testing process would also benefit Lit & Lat in evaluating services that are worth attempting mobile sync integration with.

What do you think are some key file manipulations that a testing script would have to mimic? What are other issues to consider when designing such a script?

- random file names added
- random contents changed in a file
- frequent updates to XML elements
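To make that concrete, here is a minimal sketch of a script that performs those three manipulations against a watched folder. Everything specific in it is an assumption for illustration: the ~/SyncTest path, the file sizes, the project.scrivx name, and the fake XML contents are placeholders, not how Scrivener actually writes its package.

```python
#!/usr/bin/env python3
"""Sketch: the three manipulations suggested above, aimed at a sync folder."""
import random
import string
import time
from pathlib import Path

SYNC_DIR = Path.home() / "SyncTest"  # placeholder: point inside the service's folder


def random_name(ext=".rtf"):
    return "".join(random.choices(string.ascii_lowercase, k=8)) + ext


def add_random_file():
    """Add a file with a random name, like a new binder document."""
    path = SYNC_DIR / random_name()
    path.write_text(" ".join(random.choices(string.ascii_lowercase, k=200)))
    return path


def change_random_contents(path):
    """Insert an edit at a random offset, like an autosaved revision."""
    text = path.read_text()
    cut = random.randrange(len(text))
    path.write_text(text[:cut] + " EDIT " + text[cut:])


def update_xml():
    """Rewrite a small XML file often, standing in for the .scrivx binder index."""
    stamp = time.strftime("%Y-%m-%dT%H:%M:%S")
    (SYNC_DIR / "project.scrivx").write_text(
        f'<ScrivenerProject Modified="{stamp}"/>'  # fake structure, for churn only
    )


if __name__ == "__main__":
    SYNC_DIR.mkdir(exist_ok=True)
    for _ in range(10):
        doc = add_random_file()
        change_random_contents(doc)
        update_xml()
        time.sleep(2)  # roughly Scrivener's default idle autosave interval
```

Running this against the same folder on two machines and diffing the results afterwards would be the actual test.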

My real question to you is “why”.

If you are following the advice of the developer, you do not keep a live copy on an automated sync store. If you are only making static copies of a live project (cp -a, but with a mouse), then what does your script really need to do?

Why? Because Dropbox works for keeping live projects synced up. Using it this way is even enshrined in the “Scrivener Everywhere” section of the manual. We have anecdotal evidence that even careful use of other services like Google Drive will not work consistently, but no real proof of where/how it breaks down.

Well, I don’t really trust these things. But I think you should consider the points from the first post.

As a thought, you might want to see if KB already has a test bed for simulating I/O to a .scriv package. All you might need to do is tweak it a little.

Has anyone tried BitTorrent Sync for Scrivener files? I have been using it for regular files and it is fast, but I have not had the courage to try a .scriv file yet.

Niran: This is precisely why I want a test script of some kind: I don’t want to have to just try it and hope it works reliably. It’s fairly easy to try a new service out, watching the autosave & sync interactions for trouble, but it’s not easy to simulate actual work habits. I’d at least like to be able to identify the services that are unfit for Scrivener before experimenting further with an actual Scrivener project.

So far, I’ve considered the following scenarios that can have an effect on synchronization:

  1. Create a large file (10k+ words), then 2 seconds later, cut a chunk out (5k words). This simulates pasting into a document and then a quickly executed manual split just after adding a new file to the binder.
  2. Modify 100+ files in minor ways as fast as the script will run, then change something in the first file. This would simulate Convert Formatting to Default Text Style followed by a minor edit to the top file in the selection just after autosave kicks in. (A rough sketch of both scenarios appears below.)
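Here is a rough sketch of those two scenarios as disk-level write patterns. The folder path, word counts, and file names (100.rtf, 1.rtf through 100.rtf) are placeholders I made up; I’m assuming, not asserting, that this resembles what Scrivener actually does on disk.

```python
#!/usr/bin/env python3
"""Sketch: the two scenarios above as disk-level write patterns."""
import random
import string
import time
from pathlib import Path

SYNC_DIR = Path.home() / "SyncTest"  # placeholder sync folder


def words(n):
    """n space-separated nonsense words."""
    return " ".join(
        "".join(random.choices(string.ascii_lowercase, k=6)) for _ in range(n)
    )


def scenario_paste_then_split():
    """Scenario 1: a 10k-word paste, then a 5k-word chunk cut out 2 seconds later."""
    doc = SYNC_DIR / "100.rtf"  # arbitrary document name
    doc.write_text(words(10_000))
    time.sleep(2)  # long enough for autosave (and maybe the sync client) to fire
    remaining = doc.read_text().split()[:5_000]
    doc.write_text(" ".join(remaining))


def scenario_bulk_reformat():
    """Scenario 2: rewrite 100 files as fast as possible, then edit the first again."""
    files = [SYNC_DIR / f"{i}.rtf" for i in range(1, 101)]
    for f in files:
        f.write_text(words(50))
    files[0].write_text(words(50) + " edited")  # the quick follow-up edit


if __name__ == "__main__":
    SYNC_DIR.mkdir(exist_ok=True)
    scenario_paste_then_split()
    scenario_bulk_reformat()
```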

Something I don’t know about the inner workings of Scrivener, and that will take some experimenting (or Keith chiming in…): if I delete 3 files and empty Scrivener’s trash, and those files happen to be 3.rtf, 17.rtf, and 44.rtf, will those numbers be re-used for new files, or do the numbers simply continue to increment past the last one used?

Hitting the edge of the autosave interval (initially 2 seconds, but tests should include higher values when they fail at that setting) will hopefully tease out the kinds of scenarios that trigger sync conflicts even when only one computer is involved.
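One way to probe that edge: write to the same file at intervals straddling the assumed 2-second mark, then look for conflict copies. The interval list and the filename pattern are assumptions; Dropbox puts “conflicted copy” in the name, but other services use different conventions, so the glob is only a first guess.

```python
#!/usr/bin/env python3
"""Sketch: hammer one file at intervals around the assumed 2-second autosave mark."""
import time
from pathlib import Path

SYNC_DIR = Path.home() / "SyncTest"  # placeholder sync folder
DOC = SYNC_DIR / "edge.rtf"

# Straddle the assumed 2-second interval; append larger values (3, 5, ...)
# when a service fails at the tighter settings.
INTERVALS = [1.5, 1.9, 2.0, 2.1, 3.0]

if __name__ == "__main__":
    SYNC_DIR.mkdir(exist_ok=True)
    for gap in INTERVALS:
        for i in range(20):
            DOC.write_text(f"interval={gap} write={i}")
            time.sleep(gap)

    # Dropbox marks conflicts with "conflicted copy" in the filename; other
    # services name them differently, so widen this pattern as needed.
    conflicts = sorted(p.name for p in SYNC_DIR.glob("*conflict*"))
    print("possible conflict copies:", conflicts or "none found")
```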