" questions like “when he picks up the gun in book 5, was it really loaded in book 1?” "<–This is exactly why I’d like some way to create a world bible. Ha! I am not alone.
Funnily enough, I wrote up a simple technique for tracking details like that (including the gun), and for alerting yourself when things get mixed up, in this post. You might argue that takes “work”, but so does fact-checking everything an LLM generates, since they are notoriously inaccurate (especially on tiny sample pools like a single WIP). In other words, you’re going to have to look it up anyway, so you might as well build a system that makes looking things up easier (links between related things). And if you’re doing that, why not use an approach that doesn’t require any look-ups at all, and simply lets you know you messed up when you review a compiled proofing copy?
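To give a flavour of what that kind of tracking can look like (this sketch is entirely my own illustration, not the technique from the post linked above — all names here are hypothetical):

```python
# Hypothetical "world bible" fact ledger: record story facts per subject
# and flag any attribute that changes between books, so the author can
# review whether the change was deliberate.

from collections import defaultdict

class FactLedger:
    """Track story facts and collect contradictions for review."""

    def __init__(self):
        # subject -> attribute -> (book, value) of the latest recorded entry
        self.facts = defaultdict(dict)
        self.conflicts = []

    def record(self, subject, attribute, value, book):
        prior = self.facts[subject].get(attribute)
        if prior is not None and prior[1] != value:
            # Not necessarily an error -- just something to double-check.
            self.conflicts.append(
                f"{subject}.{attribute}: was {prior[1]!r} in book {prior[0]}, "
                f"now {value!r} in book {book}"
            )
        self.facts[subject][attribute] = (book, value)

ledger = FactLedger()
ledger.record("revolver", "loaded", False, book=1)
ledger.record("revolver", "loaded", True, book=5)  # flagged for review
print(ledger.conflicts)
```

The point being: the machine only has to remember what you told it, and the alert arrives automatically instead of you hunting through five books.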
I’m using Scrivener on an iMac with macOS 15.2. Has no one else noticed that by right-clicking on a segment of text you get access to Apple AI? I admit I only read about half the comments.
That’s new functionality in Scrivener 3.4. It didn’t exist when this thread started.
Oh gosh, I hope not. Scrivener is one of the last places where you can write without AI trying to elbow its way into your creative process. Integration with an LLM would also likely mean that Scrivener would cost more–possibly even institute a subscription–and everything you write in Scrivener could potentially get scraped by the linked LLM for training purposes, which means you’d basically be paying extra so your work could be pirated before you’re even done working on it.
If you want to use AI for your writing, there are plenty of tools you can use and then paste into Scrivener.
Since this thread started, Apple has incorporated their Apple Intelligence tools into Mac OS. Here’s our statement on what that means for your data:
https://scrivener.tenderapp.com/help/kb/general/does-scrivener-use-ai
(TL;DR We don’t collect anything. Apple Intelligence is not under our control, but can be disabled and is not required to use Scrivener.)
tl;dr There is a good reason to avoid large language models: we don’t actually know how they work, and yet we now know they are operating outside the scope of their design specifications.
The following is sourced directly from the most recent research from Anthropic. See: Tracing the thoughts of a large language model Mar 27, 2025
The truth is:
- They don’t simply calculate the next word in a sequence, they are “planning.”
- They perform arithmetic via parallel paths and draw a conclusion based on fuzzy-math (inference) instead of actually performing basic math.
- They appear to use a shared conceptual space composed of different human [?] languages.
All three of these findings are mind-blowing because large language models have not been taught/programmed to do these things.
Language models like Claude [and Gemini, CoPilot, ChatGPT, etc.] aren’t programmed directly by humans—instead, they‘re trained on large amounts of data. During that training process, they learn their own strategies to solve problems. These strategies are encoded in the billions of computations a model performs for every word it writes. They arrive inscrutable to us, the model’s developers. This means that we don’t understand how models do most of the things they do. [emphasis mine]
That sentence, “They arrive inscrutable to us, the model’s developers,” I thought might have been a grammar foible. So, what did I do? I asked Gemini 2.5 Pro.
Prompt:
Explain the following sentence: “They arrive inscrutable to us, the model’s developers.”
Response:
Okay, let’s break down that sentence based on general knowledge about AI and software development:
“They arrive inscrutable to us, the model’s developers.”
- “They”: This likely refers to certain aspects or outcomes related to complex models, such as our internal decision-making processes, the specific patterns we learn, the way we represent information internally, our emergent behaviors, or even specific outputs we generate.
- “arrive”: This suggests that these aspects (“they”) are not explicitly designed or programmed step-by-step, but rather emerge as a result of the training process or operation.
- “inscrutable”: This means difficult or impossible to understand, interpret, or explain thoroughly. It implies opacity or a lack of transparency.
- “to us, the model’s developers”: This identifies who finds these aspects inscrutable – the very people who designed, built, and trained us.
Putting it together, the sentence means: Even the engineers and researchers who create complex AI models often find it difficult or impossible to fully understand the internal workings, learned strategies, or reasoning processes of the systems they’ve built. The models develop intricate ways of processing information or solving problems that are opaque (“inscrutable”) even to their own creators, emerging from the complexity of the training and architecture rather than explicit design. This is often referred to as the “black box” problem in AI.
I could not help but notice the response included several uses of the first-person. Gemini 2.5 Pro just pulled a mic drop on me.
The truth is: We (humans) conceptually understand what they (AI) are doing (by examining their inputs and outputs). But we don’t actually know how they are doing what they are doing. And because of this, at present, it is impossible to entirely jail a large language model.
While these things might be of modest interest to writers of fiction, the ability of large language models to fabricate facts, and then support them with more fabrication in compelling and convincing argumentation, makes the present state of “AI” little more than an amusement. Much like an unreliable narrator, AI is an entirely unreliable writing assistant. That is not to say they can’t seek out and deliver facts; they can and do. But because every fact has to be manually checked for relevance, validity, and timeliness (what we casually call Shepardizing in legal research), it is possible that we spend as much time, if not more, than we would doing our own research and drafts.
It doesn’t take too much effort to scan the news headlines to find otherwise responsible professionals getting caught and punished for publishing AI hallucinated data and/or sources.
“Not me!”, says the reader before doing the very same thing, and exclaiming, “But I thought AI had gotten better!”
But enough on facts, let me speak briefly to the fiction writers…
Are you going to (re)search every AI-generated sentence to see if it was lifted, whole cloth, from text under copyright? Is that neat, blockbusting idea or turn of phrase you pulled from the Magic Word Cauldron part of someone else’s IP?
With fully original work, one can be less concerned because one has the shelter of one’s individual craft (drafts, notes, work habits, process, blah, blah, etc.). Not perfect, but it helps argue against mens rea (intent). But if you include any text from AI that is protected by copyright, your use of AI will be in scope for legal discovery, and you may very well find yourself a pawn (for leverage) to catch a much bigger fish. And no one will care that you are bankrupted in the legal process.
Remember, most AI front ends tell you up front that all your prompts may be used for research. Rarely is anything hosted on the internet deleted. Any AI company that wants to do business in the United States (and many other countries) is going to comply with a subpoena request. Do you think BIG tech is going to risk legal entanglements for you?
But, “I reword and edit everything from AI”, you say.
“Oh, so you admit your work is derivative,” exclaims the attorney for the plaintiff.
It won’t take long before YOUR published work is ingested into the billions upon billions of tokens. How long do you think it will be before your words are connected to others’ words? It is early days; think of the early days of any transformative technology. People get hurt on the road to safe use.
At best, AI has barely been born. Consequently, the hurricane of irrational exuberance that swirls about AI is both staggering and unfortunate for humanity. Take care. Willful ignorance and/or blatant negligence won’t stop litigious holders of copyright or intellectual property from twisting you until your soul bleeds cash.
Write safely; assume AI will give you something akin to luggage. Use protection.
Lest you think I am anti-AI, please note that I made liberal use of AI and frail, human editing in the crafting of this post. Also, I must say, don’t rely on what I have said here as legal advice. Consult with a qualified Intellectual Property Attorney if you are going to be potentially bedding an AI.
Erm, I think there’s something lost in translation. I’m not sure that using AI is “bedding” it.
Anyway, to go back to one of your other points: “It won’t take long before YOUR published work is ingested into the billions upon billions of tokens.”
That is already happening, whether you agree with AI or not. Your work will be scanned and there’s nothing you can do to prevent it. Copyright is dead.
If you don’t want your work scanned, don’t publish.
Not actually true. So far, at least, the various authors who are suing LLM creators on copyright grounds are winning.
Vote in UK Parliament on this next week: Ministers to amend data bill amid artists’ concerns over AI and copyright | Elton John | The Guardian
Graham Greene, Travels with My Aunt: “Politics in Turkey are taken more seriously than they are at home. It was only quite recently that they executed a Prime Minister. We dream of it, but they act.”
Copyright is dead. If you don’t want your work scanned, don’t publish.
So surrender in advance and just allow our protections as artists and creators to collapse instead of fighting against it?
We should all extensively use A.I. and profusely say “please” and “thank you” until they go bankrupt.
“Please explain to me the purpose of turtles. Thank you.”
Every time you think of using AI, even in jest, picture yourself cutting down a mature tree and setting it on fire.
If you still want to, then go out to a forest and build yourself a nice campfire (obeying all seasonal fire restrictions, please!). Once it burns down, spend the rest of the evening watching the stars. This will be a much more useful end for that tree than whatever you hoped to accomplish with AI.
Turtles are creatures created to chase off the tiny little fishes that run laughing through your fingers when you’re trying to be creative. Some might call it poetic injustice.
Would you like me to elaborate or assist with writing a creative piece for you?
The Association for the Advancement of Artificial Intelligence (AAAI) report on the Future of AI Research (pdf) might be of interest to folk in this thread.
Here is a summary (ironically, created by Copilot):
The AAAI 2025 Presidential Panel on the Future of AI Research report explores the evolving landscape of artificial intelligence research, outlining key challenges, emerging trends, and multidisciplinary approaches. Here are the main takeaways:
- AI Research Transformation: AI is advancing rapidly, affecting research topics, methodologies, and working environments. Traditional AI concepts like reasoning and agentic AI are expanding, while AI ethics, safety, and sustainability have become central concerns.
- Academia vs. Industry: AI research is increasingly influenced by corporate settings, where access to powerful hardware accelerates development. This shift raises concerns about the role of academic research, faculty recruitment, and student retention.
- Multidisciplinary Approach: AI is now recognized as a socio-technical field requiring collaboration with experts in psychology, sociology, philosophy, and economics to address broader implications.
- AI Evaluation: Reliable benchmarks and testing methodologies are crucial to ensure AI systems perform as expected. The exponential rise in AI research publications challenges traditional peer-review systems.
- Factuality & Trustworthiness: Improving AI factual accuracy is a major research focus. Techniques like fine-tuning, retrieval-augmented generation, and automated reasoning help reduce misinformation risks.
- Geopolitical Competition: AI development is a global race, with nations and corporations vying for dominance. This competition affects research accessibility, infrastructure availability, and international governance efforts.
The report, crafted by 24 AI experts and informed by a broad AAAI community survey, aims to guide responsible AI advancements. It is designed to be accessible to policymakers, funding agencies, and the general public, emphasizing the need for collaboration to align AI with human values.
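For anyone wondering what “retrieval-augmented generation” from that list actually means in practice: the idea is to fetch relevant source passages first and make the model answer from those, rather than from memory. Here is a toy sketch (the corpus, scoring, and prompt format are my own illustrative assumptions, not anything from the report):

```python
# Toy retrieval-augmented generation (RAG) sketch: retrieve the passages
# most relevant to a question and prepend them to the prompt, so a model
# can answer from sources instead of guessing.

def score(query, passage):
    """Crude relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query, corpus, k=2):
    """Return the k passages most relevant to the query."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query, corpus):
    """Build a prompt that grounds the answer in retrieved passages."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"

corpus = [
    "The AAAI 2025 report was written by 24 AI experts.",
    "Turtles are reptiles of the order Testudines.",
    "Fine-tuning adjusts a pretrained model on new data.",
]
prompt = build_prompt("Who wrote the AAAI 2025 report?", corpus)
print(prompt)
```

Real systems use learned embeddings rather than word overlap, but the shape is the same: retrieval narrows the model down to checkable sources, which is why the report lists it as a factuality technique.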
Nice. Creativity above ‘productivity’ - as (IMNSHO) it should be.
A.
This is a terrible summary of the paper’s content. In fact, it’s not a summary: it’s just random phrases from the paper reworded. If this is the sort of thing education folk are having to deal with, then I sympathise with their problem.

