It’s curious that when the response to using ChatGPT4 to help write a work is ‘Go ahead, but just declare it’, this is met with animus and insult. I think that is quite revealing.
Requiring disclosure and transparency when using AI is a pretty low bar. If there is unwillingness to even entertain that…
Of course all online channels can be sources of rubbish, HOWEVER from my reading of the numerous discussions online, and here, I’d have to have grave doubts about your ‘grave doubts’.
Amazon often makes no official announcement; they simply implement their latest decision and leave authors to try to convince them to reverse it.
When an author points to their Amazon email that specifically mentions images, I tend to believe the author. That has nothing to do with ‘Graphic artists with extreme protest attitude’. I’ve seen endless posts with confusion over exactly what is or is not allowed with images from the likes of Canva, and reading their T&C I can see where the confusion arises, and where some authors get caught. ‘Confusion’ doesn’t seem to be a valid defence when fighting a block or ban.
The current Amazon T&C need no changes to allow them to make a unilateral decision on the acceptability of an image, and their content guidelines specifically mention their public domain stance on text.
At this time they’ve said nothing on AI-generated text, but expect that when they do move and decide against it, the blocks and bans will come before, or at the same time as, any statement — it’s just how they work. As stated above, traditional publishers are likely to consign anything they consider may be AI-generated, or a potential source of plagiarism claims, to the bin rather than take a risk.
Given an Amazon ban is usually firm and lifetime, why on earth would anyone choose to take a risk until there is a resolution to the court cases and/or a firm Amazon statement?
I get you have staked a position and appear unwilling to contemplate your own position could be based on ‘nonsense, half knowledge and fake news’.
Considering that AI has only recently come out, none were written with the assistance of AI except his last one; I saw that many had been published years ago.
Oh, I know. And I can see him struggling with appropriate use of AI like the rest of us. It’s going to trigger debate; it’s just best if it remains civil. Some of the professional editors on this forum would be well placed to tell us how they define the difference between writing and editing, and the risks and benefits of AI.
I believe that until the courts rule and appeals are heard, pretty much ALL the opinions above are just that: opinions. Hanging your self-publishing career on your best guess of the outcome is like playing chicken with a hand grenade.
I would also say that if you can’t write better than an AI, you need to find a different line of work.
Which is not intended as a value judgment, simply an observation. If corporations in particular can use an AI for their press releases, annual reports, and so on, they will.
After spending some time with GPT4 it is a useful tool but it is not going to replace original thought or a good writer, at least in non-fiction. I can’t speak for fiction writers.
It will be used for the drudgery type of writing tasks that you have pointed out: “press releases, annual reports, and so on.” It can also be useful for writing a synopsis or perhaps an abstract, as long as you know the subject and can spot “hallucinations.”
I suspect that those who think that AI spells the doom of writing have not worked with it enough to see how actually stupid it is. It is only as good as the human who operates it.
And that brings us back to the programming concept of GIGO, “Garbage In, Garbage Out,” which completely applies to AI because it is just a sophisticated piece of programming. AI is not going to turn a pedestrian writer into a literary giant.
But in the hands of a great writer it will be a useful tool. So I would suggest that writers learn how to use it to improve their work and get it to do the drudgery tasks so that they can focus on the creative part.
I think what scares some people is the fact that this is not the end of some technical evolution, it’s barely the start. Even if they don’t understand it, they feel it. Ask some random strangers on the street some random questions — “artificial intelligence” at this point is likely already surpassing their natural stupidity in a functional way. (I wouldn’t call ChatGPT an AI, but let’s just pretend.)
Recently I involuntarily participated in one of those infamous support hotline marathons. You know, where you talk half of the time to a machine that can’t understand what you say, and the other half of the time to… a human who can, but doesn’t know shit.
Five different people. Five different answers, contradicting each other, none of them correct. Not even remotely. They also had a hard time forming coherent sentences, despite being native speakers, trained for this job, and likely all adults. ChatGPT might not come for the creatives (at the moment), but this technology is in principle ready to replace those poor souls in no time.
Drudgery or not, those sorts of tasks pay humans (currently) pretty well. More than one writer has funded their more creative, less commercial output with this kind of work.