How does it handle large research libraries?

I’m currently evaluating Scrivener and so far it has been a pleasure to use.
I need to know how well Scrivener handles large quantities of research material; I’m especially concerned about performance and stability.

Most of the time I write technical material, which involves compiling loads of text clips, URLs, code snippets, PDFs, CHM files, web pages, and sometimes even entire websites.
In the end, the ratio between this reference and support material and my own writing is absurd: roughly 100 pages of research material to 1 of my own.
Usually each document I write ranges from 10 to 50 pages, which means that I end up collecting a lot of material.

I’ve been using DevonThink with OmniOutliner, but going back and forth between two apps isn’t working for me.

So if any of you have had a similar experience, I would appreciate some insight!

Thanks in advance!

Ricardo wrote:

I don’t do techie answers, ’cos I’m not that smart, but I may be able to help in this case. What follows is just one piece of research material I have in just one Scriv project. I have at least three more projects containing similar amounts of material. Like you, I have sometimes wondered if Scriv will blow up if I put much more in it.

Keith describes Scriv as a [i]shed[/i]. It’s more like the Library of Congress or the Bibliothèque nationale de France. I don’t think you have a lot to worry about.

Actually, Balzac is a bad example, because they’re only links to Project Gutenberg. I’ve actually copied and pasted five of the following from Gutenberg into Scriv’s research folder. I’ve also done the same with two or three much weightier tomes, in their entirety.

I doubt you’ll sink Scrivener.

Take care

Balzac, Honoré de, 1799-1850
• Wikipedia
• Adieu (English)
• Albert Savarus (English)
• The Alkahest (English)
Plus a hundred or so more (links).

I’m not that geeky either, nor have I created any huge projects, but here’s the answer from the FAQ:

To the best of my knowledge, no one has complained of instability, but I think it worth paying attention to the second paragraph. On the other hand, I know that many people on the forum swear by the use of Devon Think as a repository for massive amounts of research material ready for pulling into Scrivener projects. :slight_smile:


I think the general consensus here (see many previous posts) is:

  1. Use DevonThink for storing research files.
  2. Use Scrivener for brainstorming, outlining, and writing projects.

The Research folder in Scrivener is handy for storing files you might like to see in split-screen: for example, the text of a novel you are converting into a screenplay.

But if you have massive research data, which you may use in many writing projects, keep it in DT. Think of DT as the filing cabinets, and Scriv as the little tabletop where you’re creating.

Thanks, everyone, but my problem is exactly that: I don’t want to go back and forth between two apps.
DT handles huge amounts of data beautifully, but it is severely handicapped in the classification/categorization area, which I depend on to be able to write anything.
I’m not a writer, and the only way I’ve found to write something structured is to begin with a list of topics, add subtopics to each one, and continue that several levels down until I have covered all aspects. Then I start expanding these key phrases into paragraphs or longer blocks of text, beginning at the lower levels and working my way up.
That’s why I use OmniOutliner, which is far from being the best place to write.

I’ve read the FAQ and tons of other posts here, but my concerns are not about the upper limits so much as how much it can handle before slowing to a crawl.

Think of iPhoto. Although they don’t list any limits now, you’ll see a performance drop after a few hundred photos and after importing a few thousand, it becomes almost unusable.


There’s no real reason that Scrivener should slow to a crawl if you import lots of information. One of the reasons it uses a package-based format is so that it only ever needs to load into memory files that you use during the session. So, if you had a 500MB project, you may only ever load a couple of MB of that during one session.
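To make the load-on-demand idea concrete, here is a minimal sketch in Python of how a package-based project might only read files into memory when they are first opened. This is purely illustrative (the class and method names are my own invention); it is not Scrivener’s actual implementation, which is a native Mac app.

```python
import os

class LazyProject:
    """Illustrative sketch of a package-style project: files sit on
    disk inside the package directory and are only read into memory
    the first time they are requested during a session."""

    def __init__(self, package_dir):
        self.package_dir = package_dir
        self._cache = {}  # name -> file contents, loaded on demand

    def open_document(self, name):
        # Read the file from disk only on first access; afterwards
        # it is served from the in-memory cache.
        if name not in self._cache:
            path = os.path.join(self.package_dir, name)
            with open(path, encoding="utf-8") as f:
                self._cache[name] = f.read()
        return self._cache[name]

    def loaded_bytes(self):
        # Only documents actually opened this session occupy memory,
        # no matter how large the package is on disk.
        return sum(len(text) for text in self._cache.values())
```

So a 500MB package with two small documents opened costs only those two documents’ worth of memory, which is the behaviour Keith describes.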

The main point is to keep it structured, as it says in the FAQ. You will notice a slowdown in the corkboard or outliner if you have thousands of files in one flat list in a directory. But if they are broken up into subdirectories you can avoid this.

That said, Scrivener wasn’t designed for data management, so it has never been optimised for such use (my usual caveat).

Hope that helps.
All the best,

Keep in mind what Keith says, but also what happens if you go back and forth between a lot of your data. I did start noticing slowdowns (while still using Tiger) when going back and forth between a lot of different files, since more gets loaded into memory the more files you open. I was using a LOT of research files in one particular project, some of them rather large, and I would occasionally have to restart Scr. to get rid of the slowdown. I had 1.25 GB of RAM in my iBook.

So I think it really depends on your machine (if it’s Intel and how fast) as well as your RAM (how much you can load into it before you start running out of it). Also, what other programs you have active can make a difference. The more loaded into the RAM, the more likely you’ll have to use virtual memory, which is what caused the slowdowns for me.

I too wanted just one program to do it all. I now have a modified system: I store all PDFs, some RTFs, and all audio and video files in the Finder (organized by folders) so they can be shared between Scr. project files, and I keep all other data not specific to a project, such as things I grab from the Internet, in DT Pro. I either transfer project-specific files into Scr. or link to things (PDFs, media files, etc.) as external references (via the reference panel in Scr.) and load them only when needed. The system took a while to evolve, but it’s pretty stable right now and is working well.

Hope this helps! Alexandria


Was it lots of text files you had open? Only text files stay in memory. I’m thinking of optimising this: maybe clearing from memory text that hasn’t been opened for a while, or some such. Not quite sure how yet.

All the best,

Had you thought of using something like LinkBack? I think it might be owned by the Nisus people; Martin could probably tell you. I’m not sure whether it would work with Scriv. It’s a similar thing to OLE in Windows. As an alternative, instead of importing the research file, dropping it in the research folder could create a link (alias/symlink) rather than import the whole file.

It would mean that research files would have to stay where they were, or else the link would be lost, and I guess the files would open in the creator app rather than Scriv, but it might be a way round the problem of large volumes of research data.
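At the filesystem level, the link-instead-of-import idea is straightforward. Here is a minimal sketch (a hypothetical helper of my own, not a Scrivener feature) that symlinks a file into a research folder rather than copying it, with exactly the trade-off mentioned above: the link breaks if the original moves.

```python
import os

def link_into_research(source_path, research_dir):
    """Create a symlink in the research folder instead of copying the
    file in, so the project stays small on disk. If the original file
    is later moved or deleted, the link dangles."""
    link_path = os.path.join(research_dir, os.path.basename(source_path))
    os.symlink(os.path.abspath(source_path), link_path)
    return link_path
```

The project only ever stores a few bytes per linked file, however large the originals are.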

I’m getting to that situation myself, as I tend to take photos of archive docs where possible. I can optimise them for the web in Photoshop and then import them to keep the size down, but it still builds up.

LinkBack has been mentioned before, but it hasn’t been updated in a looong time.

Anyway, research files really aren’t a problem. They are only in memory so long as they are open on screen. Once you switch to a different document, they are cleared out of memory. It’s only text files that stay in memory, so if you opened hundreds in a session, or left Scrivener running for weeks at a time, this could start slowing things down. I just need to optimise this so they get cleared out of memory occasionally…
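The optimisation Keith describes, clearing long-idle text from memory, is essentially a least-recently-used cache. A rough sketch of the idea in Python, assuming a simple count-based limit (illustrative only; the names and the eviction policy are my assumptions, not how Scrivener is actually written):

```python
from collections import OrderedDict

class OpenTextCache:
    """Keep recently used text documents in memory, evicting the
    least recently used ones once a limit is exceeded, so a weeks-long
    session doesn't accumulate every text file ever opened."""

    def __init__(self, max_open=100):
        self.max_open = max_open
        self._docs = OrderedDict()  # name -> text, oldest first

    def get(self, name, load):
        # `load` is only called when the document isn't cached.
        if name in self._docs:
            self._docs.move_to_end(name)  # mark as recently used
        else:
            self._docs[name] = load(name)
            while len(self._docs) > self.max_open:
                self._docs.popitem(last=False)  # evict the oldest
        return self._docs[name]
```

With something like this, opening hundreds of text files in one session would only ever keep the most recent handful in memory.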


Hi Keith,

Yes, I was experiencing this kind of slowdown when I had everything related to my project in one Scr. file and was doing heavy research within it while writing. So yes, I was going back and forth between a lot of text files as well as referenced files (which I now understand are cleared out). Shutting Scr. down and reopening it always worked. Also, I tend to put the computer to sleep and only restart every so often to clear things out, so I’m guessing I had a lot of stuff loaded in over time. Optimizing would be a good thing, I think, for cases like this.


I’ve come around to the same method as Alexandria (and no, it’s not just a Portland thing): store files (rtf, pdf, webarchives) in the Finder if they’re likely to be used for multiple projects, then import as needed into Scrivener projects. It also serves as a kind of backup to the Scrivener imports and just makes me feel more secure. Still, some of those projects are pretty big, including a book in progress, and I’ve seldom encountered any hitches.

I did set a few beach balls a-spinning occasionally when I had multiple projects open; I need to have several open at once because even when I’m working on a single project, I frequently clip info from web pages and emails to ongoing projects such as my regular columns and other work for various publications. So while this has only happened a few times, I’m glad to hear Keith is considering instituting some kind of automatic memory clearing process. Oh, and this is all under Tiger, although that might change RSN.

Nevertheless, don’t hesitate to use Devon with Scrivener if that turns out to be the most efficient combo. I used the remarkably inexpensive (I think I paid $10 on a sale, and it’s been given away in various bundles and sales recently) and powerful DevonNote with Scrivener for a year with no difficulties and recommend it. I did want to reduce the number of apps I was using but just because I like things streamlined, not because the Devon-Scrivener combo didn’t work fine. It did.