There seems to be a huge misunderstanding of how AI might be useful, despite all the flaws everyone keeps hammering on here.
I had told myself I was done with walls of text, but I’ll try one last time.
AI Integration and Content Generation: The primary issue seems to be confusion between AI for content generation and AI for support tasks. AI integration doesn’t necessarily mean generating content. It can be useful for brainstorming, maintaining consistency in world-building, timeline management, and providing early editing support. These are tools to enhance, not replace, the writer’s creativity and effort. This is not about generating content or research, which can be hallucinated (although I’ve still had a lot of success with that too).
I’ve mentioned this type of AI usage a few times now, but the go-to arguments remain along the lines of ‘not trustworthy,’ ‘do your own research,’ etc. Why is it so hard to accept that with the above type of application, you can leave those concerns at the door?
Legal Concerns: There is no question about copyright if you don’t generate copy with AI. There are concerns about the data the AI is trained on, but that is an issue for OpenAI and others to address, not you. They are addressing it accordingly, now partnering legally with many sources, including the New York Post, Reddit, and Stack Overflow, to name a few. There’s just no way their training data can hit you legally, because you would be among thousands of companies that could claim plausible deniability if it ever came to that (which it won’t; just imagine the burden of proof a rights holder would face to show that you, as one API customer among thousands, knowingly benefited from improperly sourced training data).
Research with AI: Research with AI is not about generating knowledge and then having to double-check it, in which case you might as well have skipped the AI part. No, research with AI is about orientation: finding out what you don’t know, proposing outlines, discovering different angles and alternatives, and suggesting topics for further research. The point is to accelerate your research, not to do the actual research. Wikipedia doesn’t do that for you, and you will still need Wikipedia, dictionaries, and research papers.
Cost: Bringing in the cost of training the model, or what profit Microsoft makes, is disingenuous and has nothing to do with AI support in Scrivener. OpenAI exposes stable APIs for AI integration at an increasingly low cost for commercial companies like Literature & Latte. Unless you don’t want to support AI because of its environmental impact, in which case: be clear about it and just say you don’t want this because you’re so green. And maybe also plant a tree for every sale you make?
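To put ‘increasingly low cost’ in perspective, here is a back-of-envelope sketch. The token counts and per-million-token prices below are purely illustrative placeholders, not actual OpenAI pricing; plug in whatever the chosen provider really charges.

```python
def estimate_call_cost(input_tokens: int, output_tokens: int,
                       usd_per_million_in: float, usd_per_million_out: float) -> float:
    """Cost in dollars of a single API call, given token counts and per-million-token prices."""
    return ((input_tokens / 1_000_000) * usd_per_million_in
            + (output_tokens / 1_000_000) * usd_per_million_out)


# Illustrative example: a ~3,000-word scene is roughly 4,000 tokens in,
# with ~300 tokens of summary out. At placeholder prices of $0.50 / $1.50
# per million tokens, that is a small fraction of a cent per call.
print(estimate_call_cost(4_000, 300, 0.50, 1.50))
```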
Various Models and What to Choose: This is a non-issue. Scrivener users don’t care about this or that model, local or not. They care about ‘do I want AI support, yes/no’ and ‘am I fine with my data in the (AI) cloud, yes/no.’ Simply choose a partner known for stability and an active community, and have users opt in. Don’t go with Google; they have a terrible track record with software support and shelf life (see the Google Graveyard). Microsoft, however, has proven to be a very reliable partner for decades.
Cost and Accessibility: AI services are becoming more affordable, and while they aren’t free, this is not an issue without a solution, as the many software packages that already offer it demonstrate. Scrivener could explore various models: a subscription for those who want AI features, pay-per-use so users only pay for what they use, or bring-your-own-key. Keep it simple: choose one and stick to it. For example, TheBrain just released v14 specifically for AI integration, which supports OpenAI’s GPT models (included in the subscription) and allows a custom OpenAI key. OneNote offers AI at an extra subscription cost (Microsoft 365-wide). If your customer base comprises corporations that want to keep things on-prem, perhaps that is something to look into later (or not at all), but right now your priority could be your larger user base: regular people (I imagine; I have no idea, really).
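To make the bring-your-own-key idea concrete, here is a minimal sketch of how such a setting could be wired up. The environment variable name and fallback behaviour are hypothetical; only the official OpenAI Python client (v1+) is a real library.

```python
import os

from openai import OpenAI  # official OpenAI Python client, v1+


def make_client(user_key: str | None) -> OpenAI:
    """Prefer the user's own API key (bring-your-own-key); otherwise fall back
    to a key supplied via a hypothetical app-level setting/environment variable."""
    api_key = user_key or os.environ.get("SCRIVENER_OPENAI_KEY")  # hypothetical name
    if not api_key:
        # AI features stay opt-in: without a key, nothing is ever sent anywhere.
        raise RuntimeError("AI support is disabled: no API key configured.")
    return OpenAI(api_key=api_key)
```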
Leaving Customers Hanging: This is what SLAs and circuit breakers are for. You realize thousands of companies rely on OpenAI’s endpoints being highly available, right? Microsoft especially understands this very well with its Azure cloud architecture. The chance of an outage on OpenAI’s side is small, but the fact is it will happen, just like network issues in general. For those rare occasions, simply inform the user that AI support is temporarily unavailable. It sucks, but this is an age-old pattern and people will understand. And it wouldn’t be a core function of Scrivener anyway. FYI, OpenAI just this week posted an update saying they are working hard on releasing SLAs.
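For the curious, here is a minimal sketch of what ‘fail fast and tell the user’ could look like behind the scenes. This is the generic circuit-breaker pattern, not anything from Scrivener’s codebase; the message text and thresholds are placeholders.

```python
import time


class AICircuitBreaker:
    """After repeated failures, stop calling the AI endpoint for a cool-down period,
    so the rest of the app keeps working and the user gets a clear message."""

    def __init__(self, max_failures: int = 3, cooldown_seconds: float = 60.0):
        self.max_failures = max_failures
        self.cooldown_seconds = cooldown_seconds
        self.failures = 0
        self.opened_at: float | None = None

    def call(self, request_fn):
        """Run request_fn (the actual API call) and return (result, user_message)."""
        if self.opened_at is not None:
            # Breaker is open: fail fast instead of hanging the UI on a dead endpoint.
            if time.monotonic() - self.opened_at < self.cooldown_seconds:
                return None, "AI support is temporarily unavailable. Please try again later."
            self.opened_at = None  # cool-down over, allow a fresh attempt
            self.failures = 0
        try:
            result = request_fn()
            self.failures = 0
            return result, None
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # open the breaker
            return None, "AI support is temporarily unavailable. Please try again later."
```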
Development Focus: While spreading resources thin is a valid concern, the software development landscape has plenty of examples where balanced integration of new features has succeeded without detracting from core functionality. Incremental improvements can be made without overextending the team. You could start small: a function that summarizes a scene or chapter into the outline, one that lets you ask a question about the currently active scrivening, or one that generates a character profile from a template based on a given description.
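As a rough idea of how small that first step could be, here is a sketch of a ‘summarize the active scrivening into the outline’ helper using the OpenAI Python client. The model choice and prompt wording are assumptions, not recommendations, and nothing here reflects how Scrivener is actually built.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_scene(scene_text: str, max_words: int = 60) -> str:
    """Condense a scene into a short synopsis suitable for an outline card."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder: any capable, inexpensive model would do
        messages=[
            {"role": "system",
             "content": "You summarize fiction scenes for an outline. "
                        "Do not invent events that are not in the text."},
            {"role": "user",
             "content": f"Summarize this scene in at most {max_words} words:\n\n{scene_text}"},
        ],
    )
    return response.choices[0].message.content.strip()
```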
All the concerns about local models, privacy in the cloud, and hallucinations, each valid on its own terms, overlook the fact that for those who are comfortable with their data in the AI cloud, the benefits are still very real. Many, including myself, would be willing to pay for such functionality (even monthly, or using my own key).
So:
- Environmental concerns? Sure, but be clear about it. Don’t hide behind hallucinations.
- Too small a team to take on the extra development effort? Okay, fine, still questionable, but it’s a clear and acceptable stance.
- Morally against using AI in any shape or form, even if hallucinations and privacy are not an issue? I’d like to convince you there are clearly morally acceptable use cases, but I can accept it if you reject the concept altogether.
- Privacy concerns? Understandable, but if you’re transparent about this and leave it up to the user to opt in, isn’t that acceptable? Still a clear stance and ultimately acceptable, if a little opinionated.
- “But hallucinations!” Well, it should be clear by now that hallucinations (a) can be left out of scope completely (don’t generate content) and (b) are a matter of inventing facts, which sits squarely in the fictional domain (useful to fiction writers, not for research).
Really, this shouldn’t be about hallucinations and privacy, but about types of users, use cases, types of AI applications and integrations, acceptable and avoidable risks (for the user), resource management, and AI partner stability. Why are we getting so bogged down in arguments like “AI is bad because of hallucinations” and “I’m too old for this shit”?