Backup with Assets

Can you please include an option to do a backup with assets?

I am using the Mac version and have backups set to iCloud. This seems to work fine, except that it does not include the assets (linked images), and since I use linked images, that could make things difficult if I ever have to rely on a backup.

My preference would be for you to add a new Tools menu command, ‘Backup with Assets’. Clicking it would offer two options: 1) Backup with linked assets, or 2) Backup with embedded assets. (Ideally both as a zip file, but the linked option would preserve the linked assets’ file locations.)


The problem is that Scrivener has no way of knowing where linked assets are without parsing the entire project.

A file that’s not contained in the project itself could be literally anywhere: it could be in the same folder as the project, it could be elsewhere on the local hard drive, or it could be on a remote drive connected to the user’s collaborator’s computer. The only record of a linked asset is the link itself; there’s no central repository in the project that could be used to drive such a backup.
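To illustrate the point, a backup routine could only discover linked assets by brute-force scanning every content file in the project for path-like strings. A minimal sketch of what that would entail (the file extensions and the assumption that links appear as plain absolute paths in the content files are hypothetical, not Scrivener's actual format):

```python
import os
import re

# Matches absolute paths ending in a common image/document extension.
# Purely illustrative: real link markup would need real parsing.
PATH_PATTERN = re.compile(r"/(?:[\w .-]+/)*[\w .-]+\.(?:png|jpe?g|tiff?|pdf)")

def find_linked_assets(project_dir: str) -> set[str]:
    """Walk the whole project and scan every text-like file for paths.
    There is no manifest of links, so nothing short of a full scan works."""
    found: set[str] = set()
    for dirpath, _dirs, files in os.walk(project_dir):
        for name in files:
            if not name.endswith((".rtf", ".txt", ".xml")):
                continue
            with open(os.path.join(dirpath, name), errors="ignore") as f:
                found.update(PATH_PATTERN.findall(f.read()))
    return found
```

Even this naive scan has to read every document in the project, which is exactly the cost of having no central record of links.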

It’s therefore the user’s responsibility to keep track of where linked items are stored and back them up appropriately.


When Scrivener compiles a script, book, or whatever, it has to find those image files. From a computer programming standpoint, it is not difficult to copy those files and create a new folder with the same tree structure, unless one is using multiple hard drives, and anyone with any sense does not do that. Or, as I said in my post, it could create a backup with the images embedded in the RTF files, which, while not the best solution, is better than nothing.

If you embed the images in the project instead of using links, the images will be in the backup. But then the project and all the backups will grow by the size of the images.

If you open a zipped backup with linked images, rename it, and move it to the folder where the original project was, the problem is solved: all links will work.

Requiring Scrivener to compile a manuscript containing potentially hundreds of external images in order to create a backup is a significant performance hit, and defeats the purpose of storing those images outside the project in the first place.

Note also that Scrivener can link to external research files that are not included in any output document, and therefore will not be found by the Compile command. For example, our academic users often link to primary research materials of various kinds: audio and video recordings, scans of historical documents, etc.


I think the main problem with this idea is that it is necessarily destructive no matter how you do it. There are two potential scenarios:

  1. The archive maps all the files back to their original locations. I’m not sure .zip can even do that properly, at least not from the GUI side of things when you just double-click on it. It’s something .tar can do, but most people don’t know what that is or how best to use it (again, I don’t think the GUI would help; you’d have to use the command line, and from the correct path, so that all of the files line up with their original locations). Worse, the very thing that makes .tar suitable also makes it risky to use: if the original project you are restoring is in the same location that .tar extracts the replacement to, you could end up with a corrupted project, as the two projects would be blindly and incompletely merged together. And that’s not even getting into the problem of overwriting assorted files all over the filesystem as part of restoring a .scriv backup. Consider: if you had made extensive modifications to a batch of images that you link to, and then unpackaged this backup, it would revert every single file on the disk that had been backed up along with the project.
  2. The other approach is safer in that it cannot destroy your existing files: gather all of the linked files, perhaps into a folder alongside the .scriv, and archive the two together. Of course, doing this would require updating all of the aliases and image links to point to the new location, and now you have a backup that doesn’t really restore its previous condition. You would be forced to work with this new directory hierarchy from that point on, and all of the files that were originally linked to would be decoupled.
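The risk described in point 1 is easy to demonstrate. The sketch below (hypothetical file names, using Python's `tarfile` in place of the `tar` command) archives a file at its absolute location and then restores it over a newer version:

```python
import os
import tarfile
import tempfile

root = tempfile.mkdtemp()
asset = os.path.join(root, "assets", "image.txt")
os.makedirs(os.path.dirname(asset))
with open(asset, "w") as f:
    f.write("v1")  # the state of the linked file at backup time

# Archive the asset at its real location, as an in-place backup would.
archive = os.path.join(root, "backup.tar")
with tarfile.open(archive, "w") as tar:
    tar.add(asset, arcname=os.path.relpath(asset, "/"))

# The user later edits the linked file...
with open(asset, "w") as f:
    f.write("v2")

# ...and restoring the backup to the filesystem root silently reverts it.
with tarfile.open(archive) as tar:
    tar.extractall("/")

with open(asset) as f:
    print(f.read())  # "v1": the newer edit has been blindly overwritten
```

This is exactly the "revert every single file that had been backed up along with the project" scenario, reduced to a few lines.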

In short, there are very good reasons to use links, but having them all treated as one collective whole that gets backed up together with the project is not compatible with the reasons for using them in the first place. The major selling point of linking is keeping the project small so that it can be swiftly and frequently backed up. If backups have to trawl through and rebuild gigabytes of data every single time, then you might as well have just imported everything into the .scriv in the first place.

The idea is not to back up assets every time, but periodically, when changes have been made; if the coding were smart enough, it would only back up the changes once a main backup already exists. Most people keep all assets in one main folder, so if that is the only requirement it is not difficult to keep the correct folder tree/structure, and updating a backup, even with gigabytes of files, would not be a big performance hit. Having a secure backup would outweigh any performance cost anyway. Also, the idea was for an option to back up with assets, separate from a normal backup.

That is a very poor excuse.

The issue with this is that if the main file fails and you need the backup, the assets are probably gone too, so the links would not point to anything.

You would of course have everything backed up somewhere.

How is this any different than the differential backup scheme you’re proposing just two posts above?

The way things are right now, each Scrivener backup is a complete backup of that project. If the user involves files from outside of the project, it is up to the user to protect them. Scrivener doesn’t have to go down the slippery slope of trying to determine if a given file reference is still in use or if the file on the other end has been changed or any of dozens of other error conditions. If you want Scrivener to manage it, you import it into the project and accept the hit on backup time/space.

Scrivener expects its users to be adults.

Whilst Scrivener may expect users to be adults in your opinion (sorry, teens, but you were just ruled out by Devinganger), I expect non-insulting, adult replies!

Any linked file is part of the project (when compiling a project, Scrivener does not cry “This is only a linked file, I can’t do that”), but Scrivener does not back them up. Most other professional software gives one the option to back up all content being used. Others in this thread talked about huge file sizes, yet when one imports files into Scrivener it backs up all of those imported files even if they are not used. That’s like MS Word backing up the entire dictionary and thesaurus with each document, even if it’s only a 2 KB page of text.

Scrivener’s “normal” backups copy the entire project, whether it changed or not. In fact, that’s the point: to create a version that the user can revert to no matter what disasters may have befallen their system or their data. Especially given that having to restore from a backup is already an extremely stressful situation for most people, having an “asset” backup that’s not synchronized with the “normal” backup seems like a recipe for confusion and frustration to me.

Abundant tools for creating backups of arbitrary file system folders already exist. It’s still not clear what value incorporating such a function into Scrivener would add.


Linked files are not part of the project. That’s the whole point of having linked files.

If a user would like for their projects to be completely self-contained, they can do that. If they’ve chosen to link files instead, there’s probably a reason.

Sometimes that reason is that one of the linked resources is a multi-gigabyte primary source database, the integrity of which is the responsibility of another entity.

So I’m visualizing a user who has a linked folder of external images, but also an enormous database off on a server somewhere, and maybe somewhere else an archive of video recordings from field interviews. All linked to the project. What would an “asset backup” for this user entail?


I know someone creating a book about the universe, and they are sifting through 12 terabytes of data for a book that is a little more than 1 GB with images. So you are telling me you do not see why backing up 12 TB of data is an issue? Scrivener needs a way to distinguish between reference material and the assets that are used or will be used. This can be done in numerous different ways. It is not difficult.

I completely agree that backing up 12 TB of data is an issue. That’s exactly my point.

Scrivener already has a way to do this: import the assets that should be backed up as part of the project into the project.


So you are saying don’t use the reference function, which defeats the point of that, and one still has to back up large amounts of data, around 1 GB, to the cloud, which defeats the point of that. Overall, you are implying: just don’t use this program. That’s how it comes across to me and to others.

Oh, but then you could easily fix this yourself with some quick coding. :slight_smile:

This is a willful misinterpretation of what I actually said, which is that one of the main purposes of the external reference functions is to allow the user to link to materials which are too large to incorporate into the project.

As Scrivener currently works, none of these external materials are backed up as part of the project. It is assumed that the user will design appropriate backup strategies of their own.

You are proposing that instead some portion of these external materials be backed up as part of the project. But not all of them. (No one wants a 12 TB backup! So there needs to be a mechanism to designate which external files to back up.) And not every time. (Only files that have changed, you said, and only when the user specifically requests an “asset backup.”)

To which my response is that creating a backup of some, but not all, external materials sounds like a recipe for confusion and frustration, especially given the inherent stress that’s always involved in recovering a backup. Abundant tools for backing up arbitrary folders already exist. If the user finds those tools to be inadequate, and wants the files to be part of a Scrivener project, they can certainly do that. But I don’t see the advantage of adding confusing and duplicative external folder backup functions to Scrivener.


Quite. Backblaze costs around a dollar a month and backs up your entire hard drive to the cloud, while CarbonCopyCloner (or SuperDuper) will create a bootable clone of your hard drive on a local external drive. I use both along with Time Machine. I would have to try very hard to lose something. In my view, it is better to have a suite of programs, each of which does one thing well, rather than one program that tries to do everything and does all of them less well.

The breaking of links when a file is moved or renamed has always been a problem. People are trying to solve it, for example with Hook, and DEVONthink provides a more robust way of linking to files when dealing with large amounts of research material (by using item links that seem to be basically UUIDs). In my view, Scrivener is for writing, and if you are having to cope with a very large amount of reference material, it is worth using another program to manage it. See the comment above about having programs that do one thing well. When I spent five years on a PhD, I kept all my research material in DEVONthink. It amounted to several million words in thousands of files of all kinds. I used Scrivener to write the text of my thesis. The system worked very well for me.

Edit: there is a detailed discussion of using Scrivener and Hook here: