Is anyone using Scrivener on the new Mac Studio?

Good to know that there are users running Scrivener on the Studio.

I have found that running these larger projects is a sensible solution for my purposes, and I must repeat that I still find Scrivener performs excellently under the given conditions.

I do quite a bit of photography and some video, so the Studio may be a nice, capable ‘oversized’ solution for me :star_struck:

How much SSD storage does your Studio have?

2 TB

Yeah, if you’re editing photos and videos, I can see this being a nice machine. I rarely do that, though. So far, anything I’ve thrown at it has been dealt with swiftly as if it’s asking me “is that all you got?” It’s the version with the M1 Max, btw, not even the Ultra.

If I’m reading your stats right, you have 53 MILLION words of text in your draft folder alone!? That explains the long indexing; the index is an unformatted copy of the text of your entire project (even stuff in your research and trash folders), so it’s having to go through and strip out formatting and then copy all that text, and then store it somewhere (I’m not sure where, or how the index is used generally).

You could benefit from a larger amount of RAM, but quite frankly, I think a Mac upgrade is going to give minimal benefits to performance. Scrivener can handle what you’re throwing at it to an extent, but your usage looks like someone using a (very good) spreadsheet to run Amazon.com’s inventory system.

You really should look into alternative tools for your warehouse of data, such as Devonthink (or others, but I can’t think of what alternatives there are at the moment). You’ll get a much better performance boost if you offload your research, or whatever all that data is for, to a tool or set of tools designed for that volume of data.

2 Likes

I completely agree… I had that expectation on several occasions, that now was the time for a REAL database tool… but every time I have been amazed at the simplicity of the way Scrivener handles it.
So yes, I am totally outside the box… and it is my perfect tool.

I know of Devonthink; I tested it for a while, and then Scrivener came along, because I needed a good tool to write ‘THE BOOK’ … and it came naturally to use Scrivener also for gathering background info… and now… it has grown :disguised_face:

I have worked with ‘real’ database tools in my earlier PC incarnation, where I guess I spent most of my effort setting up the database rules, only to come back to my core competence, the spreadsheets.

I am fairly sure it was never on the mind of the Scrivener team that anyone would want to write a book of 157,725 pages… and the quality of Scrivener’s design accommodates this too. Amazing to have such a tool at hand.

Thank you for your comments.

Let’s see what will land in my stocking next month :santa:

Well, you can never have too much memory.

But if a project is 53 million words and 87 GB, our advice is almost always going to be to slim the project down before spending money on hardware. That’s probably the largest project I’ve ever seen “in the wild.”

For images in particular, you might want to take a look at any of the purpose-built image database tools that exist.

1 Like

It has become huge, and I see it as a challenge (for the software): who blinks first, me or Scrivener?
But bear in mind that this huge project represents a ‘total’ of about 5 years. The day-to-day projects contained in it are relatively smaller in word count.
I rely heavily on the use of custom metadata, and those are the only ones that may be added/changed in ‘older’ projects.

For a more two-dimensional overview I sync the project to Aeon Timeline. In Aeon it is possible to make queries based on boolean-like combinations of these metadata types.
I don’t do that on the big one; even if it could be done, I can’t take in such an enormous matrix… but otherwise the combo is great. Syncing may go both ways.

When I feel the time is available, I find the documents with the huge photos, scale them to 400 pixels, and take a screenshot of that. Then I replace the original photo with its screen twin, and the whole project is trimmed that way.

Suggesting DEVONthink as the proper research tool for you is what came to my mind too. Using it together with Scrivener makes a very fine combo I have been using for many years.

If you are interested, you should not wait until next month, because this month you can get a 25% discount (but no stocking).

1 Like

By splitting the humongous project into its four constituent projects, perhaps you could do that same search using Spotlight. If that doesn’t provide what you want, then some KWIC utility run over the project files would. FSF/GNU’s ptx utility (part of their coreutils package) does a reasonable job of producing a concordance. Otherwise you could use a corpus linguistics tool such as LancsBox (Lancaster University’s corpus toolbox), which can read RTF files as standard.
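For anyone unfamiliar with the idea: a KWIC (keyword-in-context) index shows every word lined up in a fixed column with its surrounding context. This toy Python version is only an illustration of the concept, not a substitute for ptx or LancsBox:

```python
def kwic(text, width=30):
    """Tiny keyword-in-context (KWIC) index: every word is shown in a
    fixed column with up to `width` characters of context on each side."""
    words = text.split()
    lines = []
    for i, word in enumerate(words):
        left = " ".join(words[:i])[-width:]    # context before the keyword
        right = " ".join(words[i + 1:])[:width]  # context after the keyword
        lines.append(f"{left:>{width}}  {word}  {right}")
    return lines

for line in kwic("the quick brown fox jumps over the lazy dog"):
    print(line)
```

Real tools add sorting, stop-word filtering, and file handling on top of this, but the core transformation is just that alignment.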

I suspect that much of your problem, with almost 58 million words in this project, is the images you have in it. Last time I looked, Scrivener would convert images to RTF files, reconstituting them as required (without loss of quality). But the problem is that Scrivener will then try to index the “text” it finds in those RTF files.

By the way, 58 million words is over half of what many of the very best available corpora contain. The two British National Corpora (BNC1994 and BNC2014) each have 100 million words in them!

This here is good advice. Scrivener is not a good tool to manage files in.

Here is one of my databases running smoothly on an old iMac 2015 that I recently brought back from the dead.

1 Like

I don’t see any image-to-RTF conversions or indexing happening! And I find it difficult to see what benefit Scrivener would get from doing such an extraordinary job. Nor does Activity Monitor show any peaks in CPU use.

Just tested one document with 625 words counted in Statistics, then added one full-size photo of about 40 MB.
Quit Scrivener and started it again; the document was still 625 words.

Maybe you meant something different?

I know that Devonthink is a fine tool. But in this specific case it is not practical to use.
Overall, Scrivener is the perfect tool for me for these kinds of jobs.
:sunglasses:

1 Like

@GreyT you have my sympathy. Sometimes you ask the internet “where can I buy my favorite chocolate?” and you get a lot of “you shouldn’t eat chocolate!” and “Have you tried eating carob?” instead of folks actually helping with your problem.

I wish I could help, but I don’t have a Mac Studio – also, my Scrivener projects are very light and small.

I would be surprised if Scrivener was multi-threaded, because it’s really not designed to do very heavy lifting – though I welcome correction from @AmberV or anyone else in the know.

My guess (and it is just an educated guess) would be that you could improve performance with a faster CPU, more RAM and a bigger boot drive for scratch/virtual memory use.

There is probably a real limit to speed improvements that can be had by upgrading hardware. Your chosen workflow puts a big burden on Scrivener.

If at some point, improved speed & performance becomes more important than the comfort of your familiar workflow, you might look at DevonThink. Putting non-text files in DT and linking back to Scrivener would improve your system’s responsiveness dramatically.

Until then, see how much faster the Mac Studio is at single CPU tasks, and get as much RAM and boot drive space as you can.

For indexing, the main bottleneck is going to be the speed of the disk or SSD. In order to rebuild the search index the software has to go through every single file in the project that relates to text data, opening and closing them as it goes, and that’s one of the slowest links in the chain.

Getting over that bottleneck with consumer-grade hardware is going to be difficult. The answers to that problem typically involve specialised setups like RAID arrays, and the workstations that can handle them, which cost a lot to assemble. Frankly, the last time I encountered someone pushing Scrivener to a similar level, that’s exactly the kind of setup they had.

I would say in most cases, breaking things down into smaller projects and having a master index project that integrates with them via external item links is probably a more effective use of one’s time and wallet thickness. Unless you’re already into video production or something, that’s a lot of hardware and overhead to get into just to avoid a more fragmented workflow.

1 Like

It happens, but I don’t see anyone doing that in this thread.

Exactly! The workspace and time required by some algorithms grow much faster than linearly with the number (N) of items searched, indexed, or sorted. Every algorithm gets slower as N grows, and resources are always limited. That being so, reducing the number of items is often the best solution.

As for whether anyone should buy a Mac Studio, the answer, of course, is to buy the biggest, fastest machine you can afford.

It’s not as if users answering questions on the forum have any power to find and squash bugs in the program or rewrite the algorithms. Nor can Literature & Latte fix it in a day.

1 Like

Hi AmberV
The last part of your input is interesting to me; can you say a bit more? I gather you are not thinking of syncing with folders, but I would like some pointers on how such a project, with its master index, will function together with smaller projects. I took a glance at the manual but did not find anything about what I think you are talking about.

Maybe I am indeed hoping for the possibility to sync two or more projects from the Master project! So that changes made in one sub-project will be synced to the Master, and vice versa.

GreyT

Synchronisation is not the right concept for it.

Here is a post that describes the technique I am referring to that can be used to integrate multiple projects together.

There are downsides, of course: searches need to be run more than once if you don’t know where something is at all. But part of the idea here is a “meta project” whose function is to help you find where things are, and which can be embellished and made better as you find yourself looking for things.

1 Like

Perfect advice AmberV…
Now I know of Bookmarks, and I rediscovered some Layouts I designed some time ago. So now… goodbye to the mega-monster project and welcome to happy coexistence (my screen is 34")
The left layout is for the “live” project, and to the right, nicely stacked, are the 3 separate projects, called up by individual bookmarks (to a folder) when needed. :+1:

There is a bit of nostalgia over the demise of the monster… After all, I resonate with the Queen line:
“I want it ALL… and I want it NOW” :sunglasses:

1 Like

If N is the number of words being indexed, is the algorithm you use of order N^2? N^3? Either of those could be a problem if N = 53 million.

N log N? Not as bad, but there’s always a limit.
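A quick back-of-the-envelope calculation in Python, taking N from the project size discussed above, shows just how much the exponent matters at this scale:

```python
import math

N = 53_000_000  # roughly the word count discussed in this thread

ops_nlogn = N * math.log2(N)  # e.g. a good sort or index build
ops_n2 = float(N) ** 2        # e.g. a naive pairwise comparison

print(f"N log N ≈ {ops_nlogn:.2e} operations")
print(f"N^2     ≈ {ops_n2:.2e} operations")
print(f"N^2 is roughly {ops_n2 / ops_nlogn:,.0f} times more work")
```

At 53 million items, the quadratic algorithm does on the order of two million times the work of the N log N one, which no amount of hardware upgrading can paper over.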

1 Like

Just as an afterthought on the file size bloated by huge photos: I have created a small routine, including a macro, whereby any selected picture/photo/graphic file is scaled down to 500 × 333 pixels, keeping the project file somewhat slimmer (a full-size photo adds about 40 MB to the project file size). Each trimmed item gets a ‘footer’ stating ‘Trimmed…’, making it easier for me to see whether trimming might still be needed.
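The macro itself is system-specific, but the “which files still need trimming?” half of the idea can be sketched with the Python standard library alone. The 10 MB threshold and the extension list here are my own assumptions, not anything from Scrivener:

```python
import os

SIZE_LIMIT = 10 * 1024 * 1024  # flag anything over ~10 MB (an assumption)
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".tif", ".tiff", ".heic"}

def oversized_images(folder):
    """Walk a folder tree and return (path, size) pairs for image files
    large enough to be worth scaling down, biggest first."""
    hits = []
    for root, _dirs, files in os.walk(folder):
        for name in files:
            if os.path.splitext(name)[1].lower() in IMAGE_EXTS:
                path = os.path.join(root, name)
                size = os.path.getsize(path)
                if size > SIZE_LIMIT:
                    hits.append((path, size))
    return sorted(hits, key=lambda item: item[1], reverse=True)

# Example usage:
#   for path, size in oversized_images("/path/to/exported/project/files"):
#       print(f"{size / 1_048_576:6.1f} MB  {path}")
```

Run against a folder of exported project files, this produces a worklist of trimming candidates instead of hunting for them by eye.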

Hash it! And ignore all the nonsense about using primes for the size of the hash table. Hopgood and Davenport (1972) showed conclusively that making hash tables a power of 2 makes them easy to expand. Hash lookups are also ~O(1).
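To illustrate the power-of-2 point: when the table size is 2^k, the bucket index is a cheap bit-mask (`hash & (size - 1)`) rather than a modulo, and doubling the table preserves that property. A minimal sketch of the general technique (nothing to do with Scrivener’s internals, to be clear):

```python
class PowerOfTwoTable:
    """Minimal open-addressing hash table whose capacity is always a
    power of 2, so bucket selection is a bit-mask instead of a modulo."""

    def __init__(self):
        self._size = 8                 # always a power of 2
        self._slots = [None] * self._size
        self._count = 0

    def _index(self, key):
        return hash(key) & (self._size - 1)  # mask replaces `% size`

    def put(self, key, value):
        if self._count * 2 >= self._size:    # keep load factor under 1/2
            self._grow()
        i = self._index(key)
        while self._slots[i] is not None and self._slots[i][0] != key:
            i = (i + 1) & (self._size - 1)   # linear probing, wraps via mask
        if self._slots[i] is None:
            self._count += 1
        self._slots[i] = (key, value)

    def get(self, key):
        i = self._index(key)
        while self._slots[i] is not None:
            if self._slots[i][0] == key:
                return self._slots[i][1]
            i = (i + 1) & (self._size - 1)
        raise KeyError(key)

    def _grow(self):
        # Doubling keeps the size a power of 2; expansion is just a rehash.
        old = self._slots
        self._size *= 2
        self._slots = [None] * self._size
        self._count = 0
        for slot in old:
            if slot is not None:
                self.put(*slot)
```

For what it’s worth, CPython’s own dict uses the same power-of-2 sizing with open addressing, which is some vindication of the anti-prime position.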

1 Like