Adding AI tools to Scrivener

It’s not a show stopper until it is. My point was that the landscape is changing so quickly that it’s hard to determine who the reliable partners are now, much less next year or three years from now. And past experience shows that if a partner becomes unreliable, people will still be upset with us.



Well, this is Moravec’s paradox: “Computers find things that we humans find hard, easy. They find things we find easy, hard.” This was beautifully framed in my academic field (cognitive neuroscience) when, in the 1960s, one of the founders of the field of AI, Marvin Minsky, gave a student an “easy” summer project to work on visual perception while they worked on the “hard” problems. We intuitively open our eyes and see; it feels effortless to us. Yet more than 50% of the networks that comprise our brain are active whenever we do this “effortless” thing. The student failed, as did thousands of others for subsequent decades.

We intuit that art is hard and washing up is easy. In fact, the cognitive control required for planning and execution, plus the physical control and dexterity necessary, means that washing up is orders of magnitude harder than composing a pleasing shorter-form narrative or visual artwork (it is of course unlikely, perhaps impossible, that the models can compose “revolutionary” works of art). Yes, current vision diffusion models / LLMs are mostly statistical tricks: they take prior knowledge and recompose it with some randomness. But this follows the same statistical trick we humans use. If you took Picasso or García Márquez back 20,000 years and placed them with a Paleolithic tribe, they would never develop any of the art that they did. Human brains and bodies today are not recognisably different from those of our Paleolithic ancestors; it is just that our brains are trained (statistically primed) by millennia of cultural development, analogous to how current AI models are trained on that cultural development through language or image datasets.


I believe the current state of cognitive neuroscience rejects the idea that human creativity depends on “statistical tricks” or that what current AI models are doing is qualitatively similar.

I also think you’re being unduly dismissive of Paleolithic art and classical literature.


I don’t mean for “statistical tricks” to be seen as something negative or dismissive. A hugely popular theory of how our brain works, called “The Predictive Mind”[1], posits that we use past experience to build a complex internal model of the world. Thought involves a generative engine that fuses memory with incoming experience to create what we perceive. Evolution built a whole bunch of complex statistical tricks, each one far from optimal, but together they allow us to take our learnt understanding of the world, statistically encoded in the innumerable synapses of the brain, and forge it into something new.

A recent review of the neuroscience of human creativity[2] summarises this: “The evidence to date suggests that creativity is an emergent property of the dynamic interplay between spontaneous and controlled processes in the brain.” — this is in fact what LLMs / DNNs are doing as they combine seed noise and structured data to create new cultural artifacts other humans can debate the merits of on forums like this.

I find a lot of art generated by Stable Diffusion and other models fascinating, and aesthetically and conceptually pleasing. The models took our cultural visual knowledge and recombined it in ways that can be totally surprising. You can argue this is ephemerally different from human creativity, but we must guard against the unconscious cognitive biases that blind us to the sources of our own novel ideas, and against what surely motivates some people: a fear of losing our special place in nature.

TLDR: I don’t think we cognitive neuroscientists reject the notion that creativity can be explained by the (IMO beautiful) statistical combinations that our generative and predictive brains are capable of. There is still much to learn about the complex emergent dynamics across the brain (see my comments below; these all play into the creative process) and the myriad tricks evolution endowed us with.

Of course, if you are a dualist, then you can argue that “creativity” and our very “consciousness” cannot be encapsulated in our understanding of the brain and instead rely on something we can never properly understand (with our current tools anyway)…

I fully agree with you here. Our brains encapsulate many more cognitive structures than the LLMs do. I’ve argued[3] that even leading vision models miss basic features that continue to elevate our perception above any AI model of vision. Yann LeCun, one of the lead researchers of the current AI wave, is currently dismissive of this wave of generative AI and is telling students to go and study other models. Those cognitive structures are discernible and approachable (see the beautiful work of developmental psychologists like Liz Spelke and her collaborator Josh Tenenbaum): things like executive control, intuitive physics, episodic memory, mind wandering, embodied cognition, curiosity, attentional shifting, etc. Each piece of the puzzle is actively being tackled by cognitive scientists and then integrated into AI models. How many additional pieces of the puzzle do we need to fill in? That is in debate, and people like LeCun (with something like predictive world models) and others are actively building models that fill in several of those pieces.

BUT I also reject something you may be implying: that there is a categorical difference. DNNs / LLMs are comprised of artificial neurons that wire and connect together based on learning reinforced by what is right or wrong (supervised by curated datasets, or trained without labels using other methods). The processes in these neural networks broadly reflect those in our own brains, and in some cases the artificial neurons even begin to show the same sorts of preferences we measure from biological neurons. There is an analogy between digital and biological learning, and we would be foolish to dismiss it as categorically different.
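As a toy illustration of that learning-by-feedback idea (my own minimal sketch, not anything from the references above, and with made-up example data): a single artificial neuron nudges its weights whenever it classifies an example wrongly, the primitive ancestor of the error-driven training that modern DNNs scale up with back-propagation.

```python
# Minimal sketch of error-driven learning in one artificial neuron
# (a classic perceptron update rule). The task and values are invented
# purely for illustration.
import random

random.seed(0)

# Toy task: output 1 when the two inputs sum above 1, else 0.
data = [([0.2, 0.1], 0), ([0.9, 0.8], 1), ([0.4, 0.3], 0), ([0.7, 0.6], 1)]

weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = 0.0
lr = 0.1  # learning rate: how strongly each error nudges the weights


def predict(x):
    # Weighted sum of inputs plus bias, thresholded at zero
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0


for _ in range(100):  # repeated exposure to the same examples
    for x, target in data:
        error = target - predict(x)  # feedback: was the answer right or wrong?
        weights[0] += lr * error * x[0]
        weights[1] += lr * error * x[1]
        bias += lr * error

print([predict(x) for x, _ in data])  # → [0, 1, 0, 1]
```

This toy task is linearly separable, so the classic perceptron convergence result guarantees the neuron eventually gets every example right; real DNNs stack millions of such units and propagate the error signal backwards through all of them.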

:star_struck: in fact, it may be that if we had a time machine, could go back, and could share a language with our Paleolithic ancestors, they would have much to share with us! But if they were making great works like Picasso’s Guernica, those have sadly been lost to the mists of time…

[1] Clark A (2013) “Whatever next? Predictive brains, situated agents, and the future of cognitive science.” Behavioral and Brain Sciences 36(3), 181–204. He has also published some great general-audience books on predictive coding, well worth a read!
[2] Vartanian O (2019) “Neuroscience of Creativity.” In Kaufman JC & Sternberg RJ (eds.), The Cambridge Handbook of Creativity (Cambridge Handbooks in Psychology), pp. 148–172. Cambridge: Cambridge University Press
[3] Hao W, Andolina IM, Wang W, & Zhang Z (2021) “Biologically Inspired Visual Computing: The State of the Art” Frontiers of Computer Science 15, 151304


Well, human brains use about as much power as a light bulb to do tasks that are out of reach for data centers consuming more power than entire cities. More generally, evolution has optimized human brains for efficiency and portability (able to fit inside our bodies and consume no more resources than are readily available with Paleolithic tools), while machine learning is optimized for accuracy. Almost all advances in AI since the 1980s are attributable to (1) bigger datasets and (2) the ability to throw more hardware at the problem. It seems reasonable to me that the different constraints would lead to different solutions.

(For more discussion along these lines, see C. Frenkel, D. Bol and G. Indiveri, “Bottom-Up and Top-Down Approaches for the Design of Neuromorphic Processing Systems: Tradeoffs and Synergies Between Natural and Artificial Intelligence,” in Proceedings of the IEEE, vol. 111, no. 6, pp. 623-652, June 2023, doi: 10.1109/JPROC.2023.3273520.)

Many of the cave paintings and sculptures that have survived are impressive as art. And of course our appreciation of Guernica is filtered through the same millennia of cultural experience that led to its creation: we don’t know what our Paleolithic ancestors would have thought of it.

(Edit to add: Also, our Paleolithic ancestors lacked experience with large scale war and especially aerial bombardment. Which, of course, were part of the inspiration for Guernica. I’m not sure we “win” that argument.)

Our earliest literature is of course much younger, dating only to the invention of writing, but Homer is still being read and appreciated and reinterpreted today.


I don’t want to start getting into the weeds here, but the biggest tangible advance in AI, back-propagation and the resultant convolutional neural nets, consisted of just seed ideas in the late 80s. It was the persistence of Geoffrey Hinton, Yann LeCun, Yoshua Bengio and others that kept the idea moving forward for the next two decades. DNNs of course piggyback on better access to data and faster hardware, but there were significant conceptual shifts in how networks could be trained that allowed this. It is hard to overstate how little attention academics in the 90s paid to DNNs; by the 2010s, DNNs just wiped the floor with the alternatives. A single grad student could outperform decades of accumulated trad-AI progress. This was an accumulated conceptual and algorithmic revolution for which Hinton, LeCun and Bengio won the Turing Award (there is some contention about the original sources for these ideas).

Right, and even cooler is that our computing device runs on energy we literally harvest from the environment (don’t have enough power, eat a banana!). But again, the question is what happens as we apply inspiration from evolution, and start thinking about optimisation.

But as horrible as it is, the fact that we had built so many empires and cities, held competing political ideologies, and designed machines that could fly and bombs that could kill is also a cultural testament to the beautiful tragedy of our accumulated abilities. My point is exactly that “our Paleolithic ancestors lacked experience”: knowledge accretes slowly over centuries, encoded in the synapses of each brain as it comes into the world, changing the statistical milieu and propelling our brains to do things that were literally unimaginable to our anatomically identical ancestors.

I gain as much satisfaction reading Aristotle as I do Galen Strawson. But we also can’t deny that thousands of years of human thought have opened possibilities that were probably unthinkable to Pythagoras, Anaxagoras, and all the other great and creative minds around the dawn of written thought. The Greeks mostly considered slavery acceptable (with some Stoic exceptions), as did many other cultures at the time. They didn’t have the conceptual tools to explore the natural world as later natural scientists did, and couldn’t really test the ideas they did have.


I’ve been working with an editor for five years on a project that’s taken me seven. He’s survived two rehabs and one triple heart bypass. But he’s finally gone AWOL, and I’m not waiting around for him. Professional editors are of no use to me now (he was a professional editor) because of my grammar, punctuation, and generally idiosyncratic writing style (which he’d learned and was familiar with). It’s also over 180K words. After much research it looks like I’m going to do it myself with a combination of ProWritingAid and Grammarly. Frustrating. But if a company came along and made something like Scrivener, but with AI editing tools, I’d jump over there quicker than you could say: “Wow, you really jumped over there, didn’t you?”


The upcoming Apple AI is system-wide and will offer proofreading writing tools.
Perhaps it will suit your needs.