Yes, to confirm: the symptoms you describe sound exactly like what can happen when some external system doesn’t load or copy parts of the project. Most often we see descriptions like that from someone using Dropbox without having set it up correctly, because by default it doesn’t keep files offline and only shows placeholder icons for things it can download. If Scrivener itself had deleted the content, you wouldn’t, for example, get an empty rectangle for an imported web page. But if you, or some system you use, deletes (or never copies) the “content.webarchive” file that corresponds to that web page, then that’s exactly the result you’ll get.
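If you want to survey the damage by hand, here is a rough sketch, assuming the Scrivener 3 on-disk layout where each binder item’s text lives under Files/Data/<UUID>/ as a “content.*” file (content.rtf, content.webarchive, and so on). The helper name is mine, purely for illustration:

```shell
# find_empty_items: list binder item folders that have no content file at all.
# Assumes the Scrivener 3 layout: ProjectName.scriv/Files/Data/<UUID>/content.*
find_empty_items() {
  local data="$1/Files/Data"
  [ -d "$data" ] || { echo "no Files/Data folder under $1" >&2; return 1; }
  for d in "$data"/*/; do
    [ -d "$d" ] || continue
    # ls fails when no content.* file matches, i.e. the item has no text on disk.
    ls "$d"content.* >/dev/null 2>&1 || echo "missing content: $d"
  done
}
```

Run it as `find_empty_items ~/Writing/MyNovel.scriv` (path hypothetical); any folder it prints corresponds to an item that will show up empty in Scrivener, such as that blank rectangle where a web page should be.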
Since it sounds like backups aren’t available: as I recall, Resilio saves deleted data, right? It’s been a few years since I’ve used it, but I seem to remember it archived everything removed from a synced folder. It might be worth looking up their help pages on how to recover from that. If it does, then at a minimum you might find your “content” files and piece things back together by hand, and if they’ve made it more sophisticated (say, with the ability to roll back the entire project-name.scriv folder to a set point in time) you may already essentially have a backup.
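To give you a head start on digging: if memory serves, Resilio Sync moves deleted files into a hidden “.sync/Archive” folder inside the synced folder (kept for 30 days by default, I believe), preserving each file’s relative path. A sketch along those lines, with the helper name and paths being mine rather than anything official, would be:

```shell
# list_archived_content: look inside Resilio Sync's archive for recoverable
# Scrivener content files. Assumes deleted files land in <sync folder>/.sync/Archive
# with their original relative paths preserved.
list_archived_content() {
  local archive="$1/.sync/Archive"
  [ -d "$archive" ] || { echo "no archive at $archive" >&2; return 1; }
  # Any "content.*" file in here is a binder item's text that was deleted.
  find "$archive" -name 'content.*'
}
```

Something like `list_archived_content ~/Sync` (path hypothetical) would then print every archived content file, along with enough of its original path to tell you which project, and which item folder, it came from.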
Beyond that, one trick that can usually, at the least, pull all of the data out of a damaged project is to:
- Make a new “Blank” project somewhere convenient.
- From it, and with the main project closed, use the
File ▸ Import ▸ Scrivener Project...
menu command.
This command is extremely liberal about what counts as a “Scrivener project”, and can usually recover even the most damaged project’s data. It might take some reorganisation to put everything back together, but it’s worth trying this first as an experiment.
If you are finding stuff that was missing, then how to continue with recovery is up to you. Here are some common options:
- Open up the other project and start dragging items back in from the recovery project until you’ve got it all restored. The advantage here is that all of your project settings, labels, status, links between items, compile settings and so forth are fine.
The potential disadvantage is that if the project is damaged somehow, it might not be a safe place to continue working from. To be fair, the likelihood of that is pretty slim. Missing data, as you describe, isn’t actually a form of corruption; that’s just what a project looks like before you type anything into its binder items. There is no difference between a project whose binder you’ve fleshed out but left empty, and one you typed a bunch of material into and then later deleted via the file system. The format is very “safe” like that: what exists in the project, in the right places, is what determines whether content exists, not elaborate and failure-prone indices and databases.
- Migrate to the recovery project. You will find the sections in the user manual that pertain to copying settings between projects useful. Most things can be copied easily, and depending on which features you use, you may well want to start over with a new blank project, copy settings first, and then import. Scrivener will discard metadata such as labels that don’t exist in the target project, but if the labels and section types are already there, it will map the imported items onto them.
I think the best index of topics on this matter is found in §5.4.2, Converting a Project to a Different Template, since that is the most common query leading to the matter of transferring settings between projects.
And to conclude on Resilio’s performance: as I said, I used to use it, and the only reason I stopped was that I wasn’t satisfied with the Linux client. It worked very well for me otherwise, though for full disclosure I’ve never been a huge fan of syncing live work of any kind. I’ve always used it more as a way of keeping archived static data and backups mirrored around, and use those to create volatile working data outside of sync areas. It’s more work, but it avoids problems like this one, which can occur in any system, whether through human error or machine error.
In short, I wouldn’t worry too much about one-off failures. At least with Resilio you aren’t risking your data on servers you don’t own or control, and trusting a third party to do right by your unencrypted work. Besides, from now on you’ll be keeping solid backups, I’m sure, which makes copy/transfer errors like this one harmless. It’s when you start depending on the synced copy as the sole master copy that you enter a new category of risk, as a default and baseline, regardless of the brand of sync you use.
In other words: it’s okay to use high-risk technology, but only if what you use it for is mirrored somewhere else in case it fails.