Adding AI tools to Scrivener

The amount of knowledge “to be had” never changed. Only that you don’t have to dig it out and assimilate it first for yourself.

The last five words of the previous sentence are where the “big” of the problem lies. (My opinion.)

EDIT: Quote’s source fixed.

To clarify, I didn’t actually say that; it was part of a quote in my post (though maybe you’re having a crack at how the LLM quoted it incorrectly!).

But on that note, my point was that even the mainstream services, which are way outside any individual’s capacity to run on their own hardware, make so many mistakes about the things you do know that anything they generate about the things you don’t know becomes suspect. If you have to fact-check everything you aren’t intimately familiar with, then the gains are greatly diminished. You might as well just do the research instead of hoping a language model isn’t essentially telling you that glue is a good pizza sauce ingredient.

5 Likes

But they are all the same.

3 Likes

(Agreed)
Doing the research is how you learn.
Knowing your topic is how you write something worth reading. (Even for fiction – less so, perhaps, but still…)

5 Likes

Yup, exactly. From what I have seen, using GPT is a step below using Wikipedia as your primary source. Both require you to do more research to verify the claims, but with the former you get no citations to support the claims being made, and so you’re making more work for yourself.

That’s in opposition to one argument people use to support it, I’m aware. The other argument is the “buddy” point: maybe it isn’t always correct, but it is easier than human collaboration; you can pretend to chat with it whenever you want. Maybe there is some merit in that, I don’t know.

2 Likes

Actually, it’s worse than that. If you ask, it will give you citations. Which may or may not actually exist.

My human collaborators usually make better suggestions, though.

3 Likes

Hilarious thread going on here, thank you.

I am reminded of the resistance to earlier tech: “No one will ever need more than 640K of memory,” or how about “the internet is just a passing fad.”

1 Like

I don’t see how anything I’ve said could be painted in that light, but maybe you’re referring to other comments. The notion that you shouldn’t trust a statistical model that simulates language to produce accurate results right now isn’t in the same category as saying the Internet is a fad. You would be closer comparing it to a statement, made in the early ’90s, that the Internet is largely of interest to geeks and isn’t yet a suitable replacement for libraries and so forth.

I suspect, by the time the technology does mature, that it won’t be something anyone will have to jam into their software. It’ll be like the way “Look up…” works on a Mac right now, or text-to-speech.

6 Likes

Just a further comment on the cost angle. Whatever it is, it won’t end up being “free.” There are also likely to be many different competing models out there, with strengths and weaknesses depending on the task. Some users will want general-purpose models running in “the cloud,” others will want specialized models running on their own systems or (more likely) in a corporate data center. Some models may be subscription services, others may bill based on the resources used by a specific task. Some may use a simple API key for access, others may have more layers of security.

So how is a hypothetical Scrivener AI tool supposed to integrate with that? Hitch our wagon to a specific tool, and leave our users hanging when/if its API breaks? (See SimpleNote, MathType, and a variety of other third party tools that have come and gone.) Incorporate the price into the cost of Scrivener, whether a specific user needs it or not? Or do something like we do with bibliography software, where we provide a generic interface that can accommodate whatever tool the user prefers?

I would say that last option is both more compatible with our overall philosophy and more flexible. And, as noted above, it already exists to a certain extent.
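
To make that concrete, here is a minimal sketch in Python of what such a generic interface could look like. Every name in it is hypothetical; nothing here is an actual Scrivener API. The point is only that the host application codes against one small contract, and the user supplies whichever backend they prefer.

```python
from abc import ABC, abstractmethod


class AIBackend(ABC):
    """The only surface the host application would need to know about."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Send a prompt to the configured backend and return its reply."""


class EchoBackend(AIBackend):
    """A trivial stand-in backend, useful for testing the plumbing."""

    def complete(self, prompt: str) -> str:
        return f"(no model configured) you asked: {prompt}"


def run_query(backend: AIBackend, prompt: str) -> str:
    # The host never cares whether the backend is cloud, local, or a stub.
    return backend.complete(prompt)


print(run_query(EchoBackend(), "Suggest three chapter titles."))
```

A cloud service, a local model, or a corporate endpoint would each just be another subclass, chosen by the user, exactly as with the bibliography-manager approach.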

6 Likes

There seems to be a huge misunderstanding of how AI might be useful, despite all the flaws everyone keeps hammering on here.

I had told myself I was done with the walls of text, but I’ll try one last time.

AI Integration and Content Generation: The primary concern seems to be the confusion between AI for content generation and AI for support tasks. AI integration doesn’t necessarily mean generating content. It can be useful for brainstorming, maintaining consistency in world-building, timeline management, and providing early editing support. These are tools to enhance, not replace, the writer’s creativity and effort. This is not about generating content or research, which can be hallucinated (although I’ve still had a lot of success with this).

I’ve mentioned this type of AI usage a few times now, but the go-to arguments remain along the lines of ‘not trustworthy,’ ‘do your own research,’ etc. Why is it so hard to accept that with the above type of application, you can leave those concerns at the door?

Legal Concerns: There is no question about copyright if you don’t generate copy with AI. There are concerns about the data the AI is trained on, but this is an issue for OpenAI and others to address, not yours. They are addressing it accordingly, partnering legally with many sources now, including the New York Post, Reddit, and StackOverflow, to name a few. There’s just no way their training data can hit you legally, because you would be among thousands of companies that can claim plausible deniability if it ever came to it (which it also won’t, just imagine the burden of proof for the DOJ to prove you used improperly sourced training data).

Research with AI: Research with AI is not about generating knowledge and then having to double-check it (in which case you might as well have skipped the AI part). No, research with AI is about orientation: finding out what you don’t know, proposing outlines, discovering different angles and alternatives, and suggesting topics for further research. The point is to accelerate your research, not to do the actual research for you. Wikipedia doesn’t do this, but you still need Wikipedia, dictionaries, and research papers.

Cost: Bringing in the cost of training the model or what profit Microsoft makes is disingenuous and has nothing to do with AI support in Scrivener. OpenAI exposes stable APIs for AI integration at an increasingly low cost for commercial companies like Literature & Latte. Unless you don’t want to support AI because of its environmental impact, in which case: be clear about it and just say you don’t want this because you’re green. (And maybe also plant a tree for every sale you make?)
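
For a sense of the integration surface, this is roughly what a single request looks like with OpenAI’s current (v1) Python SDK. The model name and prompt are placeholders, and error handling is omitted; treat it as a sketch, not a recommendation.

```python
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # a user- or app-supplied key

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat model would do
    messages=[{
        "role": "user",
        "content": "Suggest three complications for a locked-room mystery.",
    }],
)
print(response.choices[0].message.content)
```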

Various Models and What to Choose: This is a non-issue. Scrivener users don’t care about this or that model, local or not. They care about ‘I want AI support yes/no’ and ‘I’m fine with my data in the (AI) cloud yes/no.’ Simply choose a partner known for stability and an active community, and have users opt in. Don’t go with Google; they have a terrible track record with software support and shelf life (see Google’s Graveyard). Microsoft, however, has proven to be a very reliable partner for decades.

Cost and Accessibility: AI services are becoming more affordable, and while they aren’t free, this is not a problem without a solution, as evidenced by the many software packages out there that already offer it. Scrivener could explore various models: subscription-based access for those who want AI features, pay-per-use so that users only pay for what they use, or bring-your-own-key. Keep it simple: choose one and stick to it. For example, TheBrain just released v14 specifically for AI integration, which supports GPT models (included in the subscription) and allows for a custom OpenAI key. OneNote offers AI at an extra subscription cost (Microsoft 365-wide). If your customer base comprises corporations that want to keep things on-prem, perhaps that is something to look into later (or not at all), but right now your priority could be your larger user base: regular people (I imagine; I have no idea, really).

Leaving Customers Hanging: This is what SLAs and circuit breakers are for. You realize thousands of companies rely on OpenAI’s endpoints to be highly available, right? Microsoft especially understands this very well, given their Azure cloud architecture. The chance of an outage on OpenAI’s side would be small, but outages will happen, just like network issues in general. For those rare occasions, simply inform the user that AI support is temporarily unavailable. It sucks, but this is an age-old pattern and people will understand. And it wouldn’t be a core function of Scrivener anyway. FYI, OpenAI just this week posted an update saying they are working hard on releasing SLAs.
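
For anyone unfamiliar with the pattern, here is a deliberately bare-bones circuit breaker in Python. The thresholds are illustrative and a production version would track more state, but it shows the idea: after repeated failures, stop calling the upstream service for a cool-down period and tell the user AI support is temporarily unavailable.

```python
import time


class CircuitBreaker:
    """Fail fast after repeated upstream errors, then retry after a cool-down."""

    def __init__(self, max_failures: int = 3, cooldown_s: float = 60.0):
        self.max_failures = max_failures
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = 0.0

    def call(self, fn, *args, **kwargs):
        if self.failures >= self.max_failures:
            if time.monotonic() - self.opened_at < self.cooldown_s:
                raise RuntimeError("AI support temporarily unavailable")
            self.failures = 0  # cool-down elapsed; allow a trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # open the breaker
            raise
        self.failures = 0  # success closes the breaker again
        return result
```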

Development Focus: While spreading resources thin is a valid concern, the software development landscape has plenty of examples where balanced integration of new features has been successful without detracting from core functionality. Incremental improvements can be made without overextending the team. You could start small: a function that summarizes a scene or chapter into the outline, one that lets you ask a question about the currently active scrivening, or one that generates a character profile from a template based on a given description.
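
As a sketch of the first of those ideas (nothing here is a real Scrivener function), a summarize-into-the-outline helper can stay completely agnostic about which model sits behind it:

```python
from typing import Callable


def summarize_scene(complete: Callable[[str], str],
                    scene_text: str, max_words: int = 60) -> str:
    """Ask whichever completion function the user configured for an
    outline-length synopsis of a single scene."""
    prompt = (
        f"Summarize the following scene in at most {max_words} words, "
        f"written as a neutral outline synopsis:\n\n{scene_text}"
    )
    return complete(prompt)
```

Whatever the user configures as `complete` (a cloud client, a local server, a stub), the feature code never changes.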


All the concerns about local models, privacy in the cloud, and hallucinations, each valid on its own terms, overlook the fact that, for those who are comfortable with their data in the AI cloud, the benefits remain incredibly valuable. Many, including myself, would be willing to pay for such functionality (monthly even, or use my own key).

So:

  1. Environmental concerns? Sure, but be clear about it. Don’t hide behind hallucinations.
  2. Too small a team to take on the extra development effort? Okay, fine, still questionable, but it’s a clear and acceptable stance.
  3. Morally against using AI in any shape or form, even if hallucination and privacy are not an issue? I’d like to convince you there are clear morally acceptable use-cases, but I can accept if you reject the concept altogether.
  4. Privacy concerns? Understandable, but if you’re transparent about this and leave it up to the user to opt in, isn’t that acceptable? Still a clear stance and ultimately acceptable, if a little opinionated.
  5. “But hallucinations!” Well, it should be clear by now that hallucinations a) can be left out of scope completely (don’t generate content) and b) are a function of inventing facts, which is squarely in the fictional domain (useful for fiction writers, not for research).

Really, this shouldn’t be about hallucinations and privacy but about types of users, use cases, types of AI applications/integrations, acceptable/avoidable risks (for the user), resource management, and AI partner stability. Why are we getting bogged down so much with arguments like “AI is bad because of hallucinations” and “I’m too old for this shit”?

2 Likes

And because I can only post a maximum of two links per post: :man_facepalming:t3:

Google’s Graveyard
OpenAI just this week posted an update

But I was there when the AI “revolution” started — just before a UK government decided to pull funding — and I still do not want AI features in Scrivener.

4 Likes

Report yourself to your creator and stop intruding on our writing time.

3 Likes

Alfred or BetterTouchTool or a range of other tools can provide a contextual in-app interface for using LLMs in Scrivener or elsewhere. I use a local LLM (LM Studio works brilliantly on Apple Silicon, with so many models to choose from) as well as online APIs; you can use OpenAI / Anthropic / other APIs and “fine-tune” your experience (variable system prompt, temperature, etc.) in a way hard-coded support may not easily provide. While current LLMs remain pretty disappointing (at least to a scientific researcher), LLMs are obviously here to stay in a way a grammar tool is not. But I would argue that the writing app itself is not the best place to put the LLM, and cross-app integration tools like Alfred (or, as Microsoft is doing, the OS itself) are where this should end up…
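
To illustrate how interchangeable this is: LM Studio serves an OpenAI-compatible endpoint locally (port 1234 by default), so the same client code covers both the hosted and the local case. The model name and prompts below are placeholders, and LM Studio ignores the API key, so any string will do.

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="local-model",  # whichever model is currently loaded
    messages=[
        {"role": "system", "content": "You are a terse line editor."},
        {"role": "user", "content": "Tighten: 'He walked slowly, in a slow manner.'"},
    ],
    temperature=0.2,  # the sort of knob hard-coded support may not expose
)
print(response.choices[0].message.content)
```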

4 Likes

Mac users have had rather different experiences than PC users with the reliability of Microsoft as a partner. There is a long history of Microsoft software for Mac being unreliable, poorly supported, and lagging well behind what’s available on the PC platform. Nor is Microsoft seen as a particularly trustworthy company in the Mac world. (And yes, the same is true of Apple tools in the PC world.)

Scrivener users are an extremely varied bunch, with a correspondingly varied range of potential use cases. Any sentence that begins “Scrivener users want…” is immediately suspect.

3 Likes

Yes, exactly. It’s possible that a standard protocol for querying AI tools will evolve, just as there’s a standard protocol for referencing web pages. But at the moment the landscape is changing far too quickly.

2 Likes

Oh well, I tried.

Meanwhile, there’s hope yet with the GPT desktop app coming out soon™. At least that should add some context to my queries. But I tell you this: ten years ago, there were only a few authoring alternatives to Scrivener. Five years ago, this started to change. Now, these tools are beginning to shape up as proper alternatives, and soon they will have the edge for the new wave of AI-accustomed users, which I think is going to be damn near everyone below 45 and a few more above (my 74-year-old mother even uses it in her Bible studies).

Point taken, I was not aware of the Mac side. However, it was just an example. Partner reliability should not be a showstopper.

Probably true. The smartphone enabled people to leave their computer at home. Next time it’s their brain.

10 Likes

This is why I love Scrivener and will stay with it through any upcoming version.

10 Likes