Readers and AI: a thought experiment

I’ve been following recent discussions, here and elsewhere, about the benefits of AI for writers. It’s clear that there are many different views about this, and it’s probably going to stay that way. But it got me wondering about another perspective: we are all readers, after all, and it’s not axiomatic that our approach to AI should be the same whichever hat we’re wearing.

So, a thought experiment…

At some point in the near future every book published will contain the following checklist, which the author will have completed honestly.[1]

“In writing this book I…”

  • used an AI[2] spelling/grammar checker and accepted the corrections
  • used an AI style checker and accepted the corrections
  • asked the AI to suggest ways of making my characters or dialogue more realistic
  • asked the AI to help with world-building or background research
  • had conversations with the AI about my plot and incorporated its suggestions into the plot
  • asked the AI to construct a plot for me, either in part or from the beginning
  • asked the AI to write part or the whole of the story
  • asked the AI to generate the artwork

To repeat: the author will complete this checklist with scrupulous honesty, and you see the checklist prominently before you buy the book.

Without taking anything else into account, how many ticks would it take for you NOT to buy the book? Is it a matter of numbers, or would any be deal-breakers in themselves?

The list is obviously aimed at fiction, but if you made the necessary changes for non-fiction (‘research’ instead of ‘plot’, for example), would your answer be the same?

I have my own intuitive answer to this question, but this post is already long and I don’t want to start the discussion off in any particular direction, so I’ll save that for later. In the meantime, it struck me as an interesting way of thinking about how I should (or shouldn’t) employ AI in my own writing, and I wondered what others thought…


  1. Yes, I do know this checklist is never going to happen. And yes, I know that this checklist isn’t scientifically rigorous. Perhaps I should have used AI to generate it… ↩︎

  2. I’ve deliberately used AI loosely as an umbrella term, rather than trying to distinguish between generative and non-generative etc – the point is the effect on the reader’s expectations, not the precise terminology. ↩︎

2 Likes

“used an AI spelling/grammar checker and accepted the corrections” would be okay. Until I spot the first error …

(It’s annoying enough in general, but I’m willing to cut a human proofreader at least a little bit slack, because … well, humans make mistakes.)

1 Like

The last three are all dealbreakers. The first and maybe the second are okay. In between, more ticks would lead to increasing amounts of skepticism and, well, there are plenty of other books to read.

6 Likes

Why is the last one (AI generated artwork) a dealbreaker for you? Would it still be a dealbreaker for you if the AI generated artwork was being used by the author to help them develop the story but not being published?

1 Like

I can’t speak for Katherine, but I have the same reactions to that list.

AI generated artwork is a dealbreaker for me because I know artists who create book covers.

I do not want to see them lose even more work, so I wouldn’t support a publication with an AI generated cover.

For me, it’s a principled stand in solidarity with other beleaguered creatives.

8 Likes

AI-generated art that isn’t published is part of “worldbuilding,” not a separate tick of its own.

AI-generated art that is published is bad for the same reasons AI-generated words are bad. Objectively poor quality, plus I’d rather use my dollars to support creative humans.

7 Likes

Thanks for the comments so far…

Personally, I feel as though any tick other than the first two would raise questions for me, because those go beyond asking the AI to show where a text breaks rules that are accepted by consensus (though it’s often not very good at that – hence the separate tickbox for style), into making value judgements about what would be an improvement. I know advocates say that the author makes the choice to accept or reject the suggestions in the end, but the black-box nature of the algorithm and the lack of transparency over the training material feel to me qualitatively different from employing a human editor.

Some of the tickboxes are more clear-cut than others, though. I suspect most people would refuse to buy books where the AI wrote part or the whole of the story, but is asking the AI to help with world-building or research any worse than going to the library or doing web searches? Again, for me, it depends on how active the assistance has been.

Anything which asks the AI to actively suggest improvements feels a step too far. E.g. what is a human doing asking an algorithm to make a human character more realistic? Doesn’t that show a lack of competence in the author? Isn’t the ability to come up with a coherent plot basic table stakes for an author who wants to be read? That’s even before we reach the ethical consideration of where the training data came from.

Advocates will say that this doesn’t matter as long as the finished product is the best that it can be. But that’s why I couched the thought-experiment the way I did: would it make a difference to the reader if they knew in advance the role AI played in the book’s production?

And that’s really the point: the checklist will never exist, and the reader can never know how much AI help the author used.[1]

That seems to me to create an ethical pull on a writer to be explicit about the methods they used to write the book, though of course many will do no such thing. Don’t readers deserve to know?

I don’t for a minute claim my take on this is either right or the only logical one and I’m open to persuasion – I just thought that the honest checklist idea was a good way for me to start thinking about the issues and I’m grateful for others’ thoughts.


  1. I don’t find the ‘it doesn’t matter because you can always tell if an AI wrote it because it’s so bad’ justification particularly persuasive: fraud is still fraud even if the signature on the cheque is obviously a fake. ↩︎

4 Likes

Honestly, we should just call apples “apples”. Unfortunately, big corps have forced this AI bullshit down our throats.

There is no “copilot” or “buddy-writer”; that’s all just crap the corps say to justify their crazy investments.

There is no intelligence in what they sell as AI. It’s just a buzzword for a moniker that’s far harder to sell: “machine learning”.

At the end of the day, what they call AI is just an algorithm that recognizes patterns and finds the most probable next pattern based on the previous ones. It sounds crazy, but it’s a bombastic autocompletion feature.

Whenever you “ask” AI to “generate a plot”, it just guesses the most probable chunk of text that would come next in a similar conversation, out of the gazillions of conversations previously analysed by this monstrosity. It has no creativity about ANYTHING. It’s only as smart and creative as the source material it’s trained on. No more. No less.

So, many people out there fear that human writers will disappear. They won’t, at least not the good ones. Instead, we’ll see an insane amount of AI-written crap in stores that will drown the good stuff in an ocean of mediocrity. But, honestly, that was already happening before “AI”.

So, my take on the subject is to stop thinking about an exotic intelligent entity and start thinking about what it really is.
Using AI as a spelling/grammar checker? Sure, why not. Microsoft Word has had a spell checker for ages, for example. The “machine learning”-powered spell checker is just a more powerful version of it.
Using “AI” to find information in a text is interesting. A query like “did character X injure himself somewhere in my novel?” is interesting. That’s just pure “machine learning” capability at play.

Everything else involving a sense of “creativity” is madness. Whenever you think the answer it provided you is “creative” or “original” in any way, just take a thousand steps back, look at the whole world, and ask yourself: “There are gazillions of petabytes out there. Am I really sure the answer it gave me isn’t just a rehash of pieces of data spread across the world, written in dozens of different languages, that this supposedly ‘intelligent’ system glued back together in a nearly random, probabilistic manner?”
In short, writing with machine learning is like asking a person who has read EVERY line of text ever produced by mankind to combine random stuff together to suit your prompt. To my eyes, it will look like this person is creating something new. In reality, they’re just rehashing Harry Potter with Twilight in a combination that never existed before.
It’s something we have done for decades. These tools are just superfast at doing it.
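The “guess the most probable next pattern” idea above can be sketched in a few lines of Python. This is a deliberately crude toy (a bigram word model over a made-up twelve-word corpus, not anything resembling a real LLM, which works on sub-word tokens with billions of learned parameters), but the underlying principle is the same: count what tends to follow what, then predict the likeliest continuation.

```python
from collections import Counter, defaultdict

# Tiny made-up corpus for illustration only.
corpus = "the cat sat on the mat the cat ran on the grass".split()

# Count which word follows which.
followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word` in the corpus."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' follows 'the' twice, 'mat' and 'grass' once each
```

A real model replaces the raw counts with learned probabilities and conditions on far more context than one word, but there is no step anywhere in the pipeline marked “be creative”: it is frequency-driven continuation all the way down.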

Furthermore, the quality of this kind of “AI” will not improve much in the future, because the technology is only as smart as its source material. At the moment, the source material is produced by humans, but with the flood of AI-produced crap onto the internet, that output will be fed back in as material to train the next generation of AI. So, if AI cannot create anything out of the blue, how can it improve if the pool of human-made source data shrinks over the years?

3 Likes

The first and second I have no issue with, though any author worth considering would surely only use AI spelling/grammar output as suggestions, just as we used pre-AI spelling/grammar checkers – wrong as often as right.

I have no real issue with AI-generated cover artwork provided it looks okay and is relevant to the title. I’ve seen plenty of pretty ordinary human-generated covers over the years, even on name authors’ work.

Everything else – I hope that never happens. I have no interest in seeing a lengthy checklist when evaluating a book purchase. The reality of how authors work would turn this into a nightmarishly long list.

Used an AI spelling/grammar checker and used it as the basis for considered decisions
Used an AI spelling/grammar checker and decided it was rubbish…

How many options for each item? How many pages do we want this checklist to run to?

1 Like

Well, the checklist is never going to happen, of course, and is never intended to happen, just as no-one puts cats in boxes waiting for poison gas to be released. Thought experiments are cheap and don’t need to take account of practicalities or animal rights.

The point is really to ask how confident a writer is that readers would not be put off their work if they knew the extent of the AI support the writer had used. My feeling is that if you wouldn’t be happy for the reader to know what you’d done, you shouldn’t use those tools.

2 Likes

Interesting take. As a reader, I would consciously avoid anything created with any box after the first two ticked. I use MS Word’s spell and style checker in my own writing (mostly ignore the latter). I’m not a Luddite, just wary of a headlong rush to adopt new technologies given the tech sector’s terrible record of safeguarding its users, and the clear danger generative AI presents to the artistic community’s financial future.

I read yesterday that Lionsgate has licensed its video vault for AI training. The report included a quote from a Lionsgate exec, who said that AI would help develop “cutting edge, capital efficient content creation opportunities”. No thanks.

3 Likes

asked the AI to generate the artwork

I have a question about this. I am preparing a presentation that addresses the national teacher shortage. I’m making the point that, given our criteria for hiring teachers, we fish in a small pond even as the ocean of candidates is shrinking. I worked with AI to generate an image I could use on a slide to illustrate this point. I have a caption on the image that says, “AI-Generated Image,” but the caption is not visible on the slide as I don’t want to distract the audience from my point.

I’m curious if anyone considers using such an image inappropriate.

It looks “AI”-generated, so don’t worry about the disclaimer part. And since presentation audiences usually tend to sleep or play on their phones…

1 Like

Not during my presentations! :rofl:

3 Likes

I can give a relatively short answer to that. I use and implement LLMs where they are useful for my workflow and improve quality. I write my novels myself, I sometimes get plot suggestions generated, but so far there hasn’t been anything useful among them. I create illustrations and covers with generative AI and sometimes use randomly generated styles for inspiration. As can be seen here in the forum, there is a lot of resistance and criticism of AI support in the writing and publishing process, but that doesn’t play a role in my decisions. I am always looking for ways to improve my workflow and that definitely includes the use of AI.

If I had the choice between a good novel by an AI and a bad one by a human, I wouldn’t have to think twice about which I would choose.

The more someone sees themselves as an artist and what they write as art, the more this author will reject AI.

The more someone has agreed with themselves that they write and sell in order to earn a living, the more relaxed they will be about using AI, because it was never their intention to win the Pulitzer Prize.

“Quality” doesn’t matter as long as the book is bought. Quantity, on the other hand, does, because that’s how average authors who produce average books make a living. And that’s exactly what average readers who want to pass the time with such books want. So it’s all right.

Some of us who write for a living consider it unethical to present work that we did not write as our own.

8 Likes

Even if it is declared? The initial question was about reading (buying), not writing.

So if the author admits that the AI helped and the reader knows that … is it then unethical? I find it rather … strange, embarrassing, slightly absurd.

But I must confess that I haven’t thought about it in depth because it doesn’t concern me. My texts are about content that AI doesn’t have.

Well, if they’ve declared it, they’re not seeking to pass the work off as their own, are they?

For me, there are ethical issues for an author about using some forms of AI, but the primary one is to be honest about how you’ve used it, so the reader can make their own choice. That was the point behind the thought experiment really: is undeclared use of ‘active’ AI a breach of the reader’s trust or not?

Now, I know this is all hopelessly naive: some authors won’t care a jot and very few will declare that they have used AI extensively. But that doesn’t mean it’s unreasonable to ask whether this is a good thing.

1 Like

It is certainly a breach of trust and your thought experiment is very valid. What if you take it even further? How far could / should such a self-declaration go? Would one also have to disclose “help of the normal kind”?

In writing this book I… used some friends

  • to suggest ways of making my characters or dialogue more realistic
  • to help with world-building or background research
  • and had conversations with them about my plot

I do not know :man_shrugging: