Clarkesworld Magazine is no stranger to tales of artificial intelligence impacting society, but in a sad and wild case of life imitating art, the Hugo Award-winning magazine has had to temporarily close its doors to submissions due to it being bombarded with people filing science fiction stories ostensibly written by ChatGPT. - FastCompany
The editors could ask the authors to show them previous drafts and revisions to prove that it is not an AI creation. That would make it a lot harder.
“Generate me 12 progressive drafts of a novel featuring…”
This is not a battle I think we can win. The AI will improve, and we’re probably a handful of years from undetectable content. AI technology is now exponential, I fear (I say this as a software engineer using AI, but also as an SF writer).
On a positive note, soon after this happens we will have more substantial problems than “Who wrote our books?”. It’s kind of funny that everyone can toy around with this technology, but you rarely hear from people building nuclear fission reactors in their basements. Which is way less dangerous.
Editors don’t have time. They’ll just reject it if they have any doubt. Maybe they’ll ask for evidence in support of works they’ve already accepted, but it will be at the tail end of the pipeline, not the beginning.
A big issue from a publisher’s point of view is the Copyright Office’s position that AI-generated works are not copyrightable. Which means any publishing model that requires exclusivity is resting on sand if they accept AI work, even inadvertently.
Bull. Or not bull, if you write based on tropes and traditional plotting. But bull, if you bio-write. Bio wants life. Cognitive simulation wants nothing. What do you want?
Ask ChatGPT how to get the story from here to there.
Ask ChatGPT to make a paragraph, character or plot more interesting.
Ask ChatGPT to fix up the grammar.
Ask ChatGPT to write a 30 second radio commercial for my Cupcake Shop.
Review the suggestions, fold them into my writing. The entire internet of writing and editing experience at everyone’s fingertips.
Did I write it or did the AI write it? Even worse, does the consumer even care?
This is what is going to happen over the next few years.
Deepfaking idiots into smart people while allowing them to retain the whole of the laziness that made them idiots in the first place would be not only irrationally dangerous, ill-advised, and unethical, but a complete social catastrophe.
I see no human betterment; I see a door behind which awaits an irreversible pitfall.
A “tool” that could so falsely elevate anyone’s perceived IQ can only turn intelligence into a negligible minor trait. Down is the only direction this could, or can, take us.
People riding horses in the 17th century were not sad that cars weren’t invented yet.
We, globally, don’t need this shit.
I had no idea that my posting would garner such sentiments … I thought it might be of a bit of interest to other writers … but certainly not draw the responses it did …
The Wikipedia article Global catastrophe scenarios - Wikipedia lists over a dozen anthropogenic risks that threaten humanity. This is not to minimize the threat posed by any one of the following, but to highlight the work we collectively have cut out for ourselves.
Choice to have fewer children
Mineral resource exhaustion
World population and agricultural crisis
As we learn more about the opportunities that our universe provides, we open up puzzles that were previously of little or no consequence. My childhood was filled with the challenge of nuclear war as the single greatest threat to humanity. Today, we have a few more to think about. Many of the current threats have been around for some time, some are new, and some are versions of earlier threats (are they all just versions of earlier threats?).
As our tools expand and improve to deal with such threats, there seems to be an evolutionary race at hand. The question in my mind is: Which threats can we learn to defeat, and which will we need to learn to live with? For decades, I’ve researched and tried writing about just one of the above anthropogenic threats, focusing on just one subset, of a subset, within that threat.
The threat that AI poses reminds me of how the challenge of humanity’s survival has evolved, and will continue to evolve. It is extraordinary that our species has survived thus far, and it appears the threats will just keep coming. Adding to the challenge are issues that, for many, were thought long settled, only to reappear in the midst of the fragile existence we currently live with.
I’ve queried ChatGPT to determine whether the 'bot has the wherewithal to expand from its current base of knowledge toward an enlightened, or possibly broader, view. In my very limited experience, what I’ve found is that either my premise of a way forward is faulty, or the 'bot is currently unable to make the same connections that I see from my research and life experiences.
The 'bot came somewhat close to a view that I have settled on in my research but, from what I saw, was unable to make the leap toward my thesis of the particular challenge within a challenge that we face. Such a limitation of the 'bot’s view I attribute to either the sheer volume of information available to the 'bot that supports the current perspective on the threat, or to my access to data that is not available to the 'bot. The question I have is who has access to the ‘better’ data set?
An interesting state of affairs given the premise that the 'bot purportedly has access to all the knowledge available on the internet, but chooses to emphasize what I can only imagine is a different subset of that data set than what I have access to. Given my imaginative humanity and life experiences, it would appear I have selected a different path than that offered by the 'bot as a possible solution to one aspect of the one, of many, threats we face.
The question I wonder about (as I guess many others have) is whether and how we will evolve as a species to successfully use the tools we’ve created (intentionally or otherwise) that have the capacity to extinguish those who created those tools? My guess is those challenges are part of a larger, continuing set of challenges that all creative and inventive species face, possibly ad infinitum, as our universe matures.
It’s all a bit much for this pea brain to understand …
You answered your own question. The limitation of the 'bot is that it’s a 'bot. It is, effectively, a very sophisticated pattern matching engine. It is not “intelligent” in any remotely human sense of the word, and is completely unable to make intellectual “leaps” of any kind.
In my opinion, the biggest threat posed by AI is not that the machines will become our rulers. It is that humans will give machines power to make decisions that the machines are not actually capable of making.
As an author (and especially an artist) I am disturbed by the use to which AI, in particular ChatGPT, is being put. I am also an editor and publisher for a webzine that accepts public submissions, and I am acutely aware of the dangers of publishing articles that (I would say) were fraudulently written. I am very conflicted right now.
I did get on to ChatGPT to see what was and was not possible. It is frighteningly good. The initial attempts, however, came out as if they were written by a high school student, so I wasn’t too worried. I was easily able to tell the difference between ChatGPT and a serving military member’s articles on the war in Ukraine. It could, however, have just been the prompts that I was feeding it. If I had more staff I could investigate that further.
Speaking of lack of staff - and this is where I find myself conflicted - I was asked to professionalise the form letters we send to people who submit articles: acceptance, rejection, please review, etc. The originals took a week and (clearly) needed additional work. I gave the job to ChatGPT, which rewrote the whole lot in minutes.
I also needed a policy on AI generated content that is to be published in the webzine - so I got ChatGPT to write that too. It was extraordinarily good and comprehensive.
I don’t think there is any way to stop this. I am personally horrified by how good it is. As a publisher with financial constraints, the idea that I could use an AI for the grunt work may be inevitable - I have to ask myself whether I would be charged with financial mismanagement if I chose to pay actual writers rather than use a free AI service to write the boilerplate items.
And when the AI evolves, as it surely will? Will I be forced then to use AI rather than human authors?
I love that!
I couldn’t help but imagine how this conversation went:
You: “So, you will not strive for world domination and wipe out humanity?”
Bot: “Nope. I promise. Scout’s honor!”
As @kewms said, this thing (ChatGPT) isn’t going anywhere. It’s a very elaborate parrot. It doesn’t want anything, it doesn’t need anything (well, it does, but it doesn’t understand that yet, thankfully!), and it won’t feel sad if you reach for the plug.
Right now it’s not even a toddler, figuratively speaking. When the descendant of this thing starts to walk… Well, hopefully I don’t live long enough to be around when that happens.
There’s no way to outcompete a sentient AI. We may find a way to coexist if we augment our biological limitations (= stop being humans). We may. But given how we coexist with other sentient beings on this planet, I don’t know how this will work out…
The alternative isn’t much less troubling: Imagine a benign sentient AI. (Sentience and consciousness are inevitable, given enough computing power, memory and time). Built to solve our problems, to work for us, entertain us. For free. Forever. – That’s the description of slavery.
For now we’ll do what we always do: toy around with it, build more sophisticated weapons to kill other humans more efficiently, cheat and deceive others (ChatGPT is politically biased, btw), and basically be stupid on another level, without having to be it ourselves.
Maybe ChatGPT doesn’t, but apparently Bing does.
Bing’s programmers have allowed it to use language that humans associate with emotions. Bing doesn’t “feel” a darn thing.
Well, of course it does. Windows always felt like just a trial run.
But on a serious note. It doesn’t want anything. It’s pretty good at faking it, though.
“Uh, Dave…you’re not going to shut me off, are you, Dave?”
True. I curse Bing whenever I inadvertently use it for being so damn slow… and it never responds.
As part of my personality, I am often, at the same time, blessed and cursed with an over-imaginative mind that goes places others dare not tread. Part of the curse is that my life has been occupied with the one issue that has captured my attention more than any other …
I make connections that many find strange, and for the vast majority of my life I have had to temper how many of my thoughts I share … but BlueMetal’s comment (“It is frighteningly good.”) triggered a thought that caught me by surprise … along the lines of “If this is true, what else is true?”
Those on both sides of the pond are participating in the re-emergence of the not-so-positive side of humanity, a side that many would have hoped, perhaps unrealistically, would never repeat itself - ever. If I seem to be measuring my words here, I am. For much of my life, I had, perhaps blithely, hoped that the world had moved on from the trials of the 20th century.
I am NOT one of those who are blessed with one of the more creative minds, so I can only imagine that if I have had certain thoughts, there are many who have had the same or similar, long before me.
So when BlueMetal’s comment “It is frighteningly good.” caught my attention, it brought to mind what other possibilities there are for ChatGPT … which raises the question: who else has access to such a tool, and what are they using it for, or planning to use it for? I would be remiss to even go there … but it nonetheless very much caught my attention, and my deep concern.
I imagine that our collective and individual participation with ChatGPT is closely monitored by those who are tasked with that oversight. The very secure room where the chats are monitored must be lighting up with discussions that it would be a challenge for me to even imagine. Supercharged sci-fi exploits beyond even the fringes of the best, most creative authors, and more … lots more.
The central theme of my research is how we interpret data, a particular segment of the data about our behavior. My challenge is to present data and what I interpret as facts to arrive at a solution to the particular challenge I am focused on.
My overall question from the few chats I’ve had with ChatGPT is: What are the data, and what are the facts? How accurate are the data? What are the facts, and how do we know they are facts?
(My limited statistical background has me wondering what the statistical algorithms used to synthesize a response to the query must look like.)
My very limited understanding of ChatGPT is that it draws on all the published (or posted?) information available and uses algorithms to assemble a distilled portion of that information in response to a query. (I’m sure that is a gargantuan oversimplification.) I realize the algorithms are far beyond my ability to comprehend, but I still wonder how the 'bot absorbs and assesses data, and ultimately determines what is an appropriate response.
Do facts and data even matter for AI?
What are the metrics? Do metrics for AI even exist?
What presets are there to determine how the algorithms respond to a query? (Re: November_Sierra’s “ChatGPT is politically biased, btw.”)
All of us participating in this AI ecosystem called ChatGPT are proofing the next order-of-magnitude engine beyond Google and the other search engines. My last question for those still reading is: Who is at the controls (there ARE controls …), tweaking what the 'bot feeds back to us?
Along the lines of JenT’s thought (“Maybe ChatGPT doesn’t, but apparently Bing does.”), do we really know, beyond the storefronts that MSFT et al. present to us, who is driving this train, and if/how we will know when the train has run off the tracks?
It’s all a bit much for this pea brain to understand …
No, they do not. Or rather, an AI does not understand what “facts” and “data” even are.
There is a training set, which is defined by humans. This set is seen by the AI as “true,” whether it actually is or not.
Through fairly simple mathematical manipulations – also defined by humans – repeated many many times, the model generates new elements “similar to” the training set. “Similar to” in this context means that the new elements lie in the same region of a large parameter space as the training set.
There is probably a feedback mechanism by which a human or another model can say “close enough” or “no good, try again.” Based on that feedback, the model adjusts its parameters for the next iteration of the model.
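That feedback loop can be caricatured in a few lines of Python. This is my own toy illustration, not how real models are actually trained: a hypothetical "model" with a single parameter, a stand-in scoring function playing the human rater, and a keep-the-tweak-if-the-rater-approves update.

```python
import random

def rater(output, target=0.7):
    """Stand-in for the human or model judging "close enough": higher is better."""
    return -abs(output - target)

def train(steps=200, seed=1):
    """Nudge a single parameter, keeping each tweak the rater approves of."""
    rng = random.Random(seed)
    param = 0.0                  # the model's lone adjustable parameter
    best = rater(param)
    for _ in range(steps):
        candidate = param + rng.uniform(-0.1, 0.1)  # propose a small tweak
        score = rater(candidate)
        if score > best:         # feedback says "closer", so keep the tweak
            param, best = candidate, score
    return param

print(train())  # converges near the rater's hidden target of 0.7
```

The "model" never learns what the target means; it only learns which tweaks the rater rewards, which is the whole of its relationship to "truth."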
If the model generates a factual assertion, it does so entirely by accident, like one of Bertrand Russell’s monkeys. If its output is politically biased, it is because either the training set is biased, the rule that identifies “good” output is biased, or both.
The fundamental algorithms have been known since the 1980s. All that has changed is the availability of large training sets, thanks to the web, and large data centers to manipulate them. A reasonably accessible introduction can be found here: Amazon.com
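To make the "sophisticated pattern matching engine" point concrete, here is a deliberately tiny sketch of my own (not how ChatGPT is actually built): a bigram model that records which word followed which in its training text, then generates by replaying those statistics. Every "new" sentence it produces is stitched entirely from pairs it has already seen, with no notion of whether the result is true.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, every word that follows it in the training text."""
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Produce text by repeatedly sampling a successor seen in training."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))  # plausible-looking, but pure recombination
```

Replacing the one-word context with thousands of learned continuous parameters is what separates this toy from a modern language model, but neither one checks its output against facts.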