Has anyone reached a functional limit for the number of docs/notes in the binder/Scrivener file? I have a project with 2,000 or more mostly small notes, and I’m wondering if it would be silly to try transferring them all into Scrivener from FileMaker and DEVONthink.
Any experiences or opinions out there?
I’m running Scrivener on both an iMac Core 2 Duo 2.16 GHz with 2 GB RAM and a PowerBook G4 1.33 GHz with 768 MB RAM.
I think I recall Keith saying he had a 1,000-document, one-million-word Scrivener file back in the early days, and that it was slow with that. Since 2,000 small notes would probably not add up to a million words, it is probably an entirely different situation. I did a lot of stress testing back when B1 came out. To be funny, I loaded the entire PHPMySQLAdmin web application into Scrivener, along with several other web sites. The only real performance problem I ran into was images in the Corkboard, which has since been fixed. I was very impressed with how quickly Scrivener assembled a 150,000-word Edit Scrivenings session. There was definitely a delay, but nothing outside the bounds of what one might expect. Edit Scrivenings is really the only place I ran into speed issues (besides that old Corkboard bug), especially when used in conjunction with full screen. Under normal usage, though, it didn’t seem to matter much at all how much I threw at it.
Scrivener should be able to handle 2,000 notes - even large ones - quite comfortably. The number of files isn’t really an issue, to be honest - more of an issue is how you organise them. If you throw all 2,000 notes inside one folder, then obviously when you click on the corkboard or outliner, Scrivener has to calculate how to show 2,000 index cards/rows. This could cause a little lag. But on the other hand, if you keep most of the notes in subfolders, there should be no hit at all. And the only thing that Scrivener really keeps in memory is the texts themselves. Whenever you click on a text item to open it in the editor, Scrivener loads that text into memory. That way, a large text that takes a few seconds to open the first time will open immediately the next time. Text is fairly lightweight, though, so you shouldn’t notice a hit even if you open hundreds of documents during one session.
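For illustration only, here is a minimal Python sketch of the load-once-then-cache behaviour described above. The class and method names are my own invention, not Scrivener’s internals:

```python
class DocumentCache:
    """Hypothetical sketch: read a note's text from disk on first
    access, then serve it straight from memory on later opens."""

    def __init__(self):
        self._texts = {}  # path -> cached text

    def open(self, path):
        if path not in self._texts:           # first open: hit the disk
            with open(path, encoding="utf-8") as f:
                self._texts[path] = f.read()
        return self._texts[path]              # later opens: instant
```

The trade-off is exactly the one described: memory use grows with each document you touch, but text is cheap, so even hundreds of cached notes stay lightweight.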
As AmberV has said, the only real area that would cause a delay is if you tried to open hundreds of documents in an E.S. session, but then, how often are you likely to do that?
In short, Scrivener should be able to handle all of the files you want to throw at it. If it doesn’t, let me know. And just to reiterate, for best performance, don’t place 2,000 files inside one folder - split them up into subfolders. I very much doubt anyone would ever even want 2,000 documents flat inside one folder anyway, though, as it would be hell to navigate under any system.
All the best,
Thanks to both of you for the helpful replies. Since the idea doesn’t seem to be a non-starter, I have a few more specific questions:
Would search take an appreciable performance hit? With that many documents - or more - I’m bound to do a fair amount of searching by multiple tags, which is the main reason I’d transfer them in the first place.
AmberV, while I’ve got your attention: have you ever tried this in Tinderbox? I’ve finally renewed my TB licence and am fiddling. I might move the notes in there first for a quick massage.
I’ve been considering diving in to MMD - is there any chance I could use it to get my equivalent tags/labels/status fields from either an FMPro export or a TB html export and into Scrivener?
Finally, if I decide to do this, it’s going to take a fair amount of time and effort to clean up, reorganise and tag my notes. Is it worth waiting for beta 5 for somewhat better DT/drag-and-drop integration?
Thanks again, Eiron
p.s. Keith, if you need some help with the help file (metahelp?) I was a magazine editor in a former life and a fresh set of eyes never hurts. Let me know.
Search will be slower depending on the number of files, but it shouldn’t slow to a crawl. Actually, I’d be interested to hear how it fares.
And thanks for the offer of taking a look at the Help file, Eiron. It’s about half-done at the moment…
Keith: Sorry to insist: is it worth waiting for beta 5 for somewhat better DT/drag-and-drop integration?
Yeah, I suppose you could wait a couple of hours.
Ditto here. I’ve written instruction manuals for scholarly text-editing and spent years explaining software to students. If nothing else, you could probably use an experienced proof reader!
Check out the Help file in beta 5. It’s not finished… If anyone wants to start throwing a few paragraphs together for the parts that have nothing as yet, I can’t tell you how grateful I’d be…
Tinderbox will have no problem handling 2,000 notes. I have put as much as 120,000 words into it, and I know of other users who have thousands of notes in their documents. You will run into slower load and save times, but once it is up and running, it is fast. Where it can get slow is if you have a lot of really complicated agents. I have my GTD file in Tinderbox, and it has about 40 agents, half of which contain a lot of logic sequences and such. I have to optimise it a lot to keep it from halting for a second at a time to process the file. Without a lot of agent processing, though, it is quite fast. Rules are faster because I think they are threaded: it might take a while for them to update, but they will not slow down the interface. Since agents can affect your outline, they are not threaded. It works best with small notes, so like I said, you should be fine.
Scrivener’s MMD import is limited to recreating a hierarchy based on header levels. Tags, labels, and all that will have to be manually set. That is true for any kind of import into Scrivener. Having some codes you could insert into a text or RTF file, which update Scrivener’s meta-data would be a nice 2.0 feature (nudge nudge).
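Just to illustrate the idea (this is my own rough sketch, not Scrivener’s actual importer), recreating a hierarchy from header levels amounts to walking the headers and attaching each one under the nearest shallower header:

```python
import re

def outline_from_markdown(text):
    """Build a nested outline from '#'-style header levels.
    Returns a list of (title, children) tuples."""
    root = ("ROOT", [])
    stack = [(0, root)]                        # (level, node)
    for line in text.splitlines():
        m = re.match(r"(#+)\s+(.*)", line)
        if not m:
            continue                           # body text: ignored here
        level, title = len(m.group(1)), m.group(2)
        node = (title, [])
        while stack[-1][0] >= level:           # pop equal or deeper levels
            stack.pop()
        stack[-1][1][1].append(node)           # attach to nearest parent
        stack.append((level, node))
    return root[1]
```

So `# A` followed by `## B` nests B under A, which is essentially all the structural information an MMD file carries - hence the need to set labels and status by hand afterwards.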
I have given some thought as to getting Tinderbox to export MMD files. As long as you double space your paragraphs, it should be all right. I have a tendency to single space and use Tb’s paragraph spacing ability to set them apart – so my older files would be difficult. But you can adjust most of the formatting codes to be MMD codes.
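For what it’s worth, a single-spaced file like the ones I described could be massaged into MMD-friendly double spacing with a trivial script (a sketch under the assumption that each paragraph sits on its own line):

```python
import re

def double_space(text):
    """Separate newline-delimited paragraphs with blank lines,
    as MultiMarkdown expects. Assumes one paragraph per line."""
    # Normalise any existing blank-line runs to a single newline,
    # then rejoin every paragraph with a blank line between.
    normalized = re.sub(r"\n{2,}", "\n", text.strip())
    return "\n\n".join(normalized.split("\n"))
```

Older files that rely on Tb’s paragraph-spacing attribute instead of actual line breaks would still need more work than this, of course.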
Somehow it amazes me that such a big, single XML file (with all those attributes!) could still run smoothly, but I’m glad to hear it. Now I just have to figure out if TB’s organisation features are worth the side trip between FM/DT and Scrivener.
I’ve been toying with GTD again myself lately and Tinderbox does seem like a good tool for that purpose, so I may very well dive right in. Thanks for the reassurance; after a shaky start I’m slowly regaining my trust in TB.
On the other hand, Scrivener is getting so good that I may just choose to live in it and scrap all the rest - except DT for research. As for MMD, I guess it’s a bit beyond me for the moment: I just can’t seem to get my head around the cognitive shift it requires.
Grateful as always,
Well, the trick is that it only has to deal in XML for input and output, hence the longer load and save times. Tb doesn’t have any kind of auto-save (probably for that very reason), so once things are loaded it is all optimised in memory. How it manages to remain speedy with so much data running around, I don’t know - I’m sure that is one of their closely guarded secrets.
I have a couple of hundred files and 500k for my total project (I can’t delete anything EVER, so I just cut and paste all of my deletions into the research folder) and searches are still basically instant. I do love search on the Mac!