With Sequoia 15.2 out for a couple of weeks now, I was wondering if anyone has incorporated Writing Tools into their workflow. Is it useful or is it cheating?
Apple Intelligence Writing Tools.
That's not mutually exclusive. (Kind of like entering a bank with a gun in your hand.)
I don't think using it as an adjunct (editing, assisting with rephrasing) is necessarily cheating. PWA uses AI, and as with all those apps, it's up to you to decide whether you accept their suggestions, use them as a starting point to rephrase, or reject them.
The greater concern is the rise in the number of "authors" rushing almost entirely AI-generated rubbish onto Amazon. That's cheating (and a few other words likely to get me told off by the admins).
I don't think using assistive AI is cheating. I've used Apple Intelligence's Writing Tools to proofread only; a couple of times or so to rewrite (but I've not always accepted the suggested revision).
I noticed Apple Intelligence (sic) was made available here in the UK with macOS 15.2, but I've not downloaded it.
I mean, I'm curious, but so far I've been underwhelmed with the reality of the LLMs I've tried. I'm also sceptical of Apple's ability in this area, especially considering Siri, which, admittedly, I've not tried for a couple of years; though when I did, all it seemed useful for was setting a timer.
All that aside, I wouldn't consider it cheating to use an LLM for assistance any more than I would a spellchecker; auto-correction remains so laughably bad, I'm surprised anyone sticks with it.
Why not cheating? Well, LLMs aren't going to create. That's not what they were designed to do, nor something they are capable of doing. Sure, they might provide some interesting prompts, but that's about their limit. But then everything we experience is a potential prompt, so I see it no differently.
Things might, and probably will, change at some point. But for now, they are just an interesting plaything.
At the end of the day, we are writers, and the words that end up on the page are our choices, however we come by them.
I use it for editing before sending to human editors.
Let's take a minute here and compare terms, shall we?
The hue and cry against AI is usually for "GENERATIVE" AI, where the AI "creates" (mimics, or your verb of choice). Even then, if you get into the weeds, if the LLM draws from training materials it has proper rights to, it's at least tolerated, at least in business settings.
Where Generative AI runs into trouble is when you feed it all of the Heinlein books, and all of the Sturgeon books, and all of Asimov… and ask it to spit out a Young Adult coming-of-age Sci-Fi story set in a new colony away from Earth. You don't (presumably) have the rights to use those works to do this. And once the LLM considers it, it doesn't surrender the rights for its next task.
PWA, AutoCrit, and Apple Intelligence (presumably Gemini and CoPilot too) either have a Generative mode or might gain one, but most of the time they are in Analytical mode, where they analyze just what you provide to them. In this sense, it is little different from the first spell checkers from fifty years ago. The difference is in the size of the reference model they compare against.
Now we get to the crux… Who trusts these massive corporations not to use all the data that comes to them? Precisely no one. How can we prove or disprove that our data isn't added to the LLM for someone else to Generate from? We can't, at least not at the current state of the art. For services like PWA and AutoCrit, even if they capture your data for their future use, it is about as instructive as reading the genre you want to write in to gain an understanding of your intended audience and their expectations. When they start trying to sell tokens for generative work, that's going to be a problem.
As for stopping it? Fat chance; that ship sailed when the masses were granted access to the Internet and allowed to blog about their every whim on Twitter, LiveJournal, Dreamscape, Facebook, MySpace, Instagram, Snapchat… I think you get the idea. None of those sites work unless they have the rights to rebroadcast what you write there, and that was the camel's nose in the tent.
TL;DR: Analytical AI isn't as bad as people fear. Generative AI is as bad as people fear, and too popular with the Business types that want to sell subscriptions to the three people that use the service and the 97 that dream of one day using the service.