I certainly would not be in a hurry to share personally identifiable information with ChatGPT. At a minimum, it’s a reasonable assumption that user input becomes part of the system’s “understanding” of the world. (And indeed there are many examples where chatbots have been twisted by users to give shockingly racist/sexist/otherwise biased responses.)
Call me when ChatGPT can initiate an intelligent conversation. So far it only responds to your input: no input, no response. It is not sentient and never will be.
Agree!
After what was my second or third ChatGPT session, I was required to input a real telephone number in order to continue. Given how all identities are basically interlinked on the 'net, any anonymity was essentially extinguished at that point, so I try to be careful about using the 'bot for anything more than research. A VPN and/or LS is of little help in preserving privacy when such a connection exists.
Me, it asked for a picture of my credit card and the three useless numbers on the back.
Are you saying I shouldn’t have ?
And what about my birth date and social security number? Was that a mistake too?
. . . . . . .
But seriously,
…that is somewhat worrying.
→ Me puts on his foil hat and looks for microphones in the thermostats.
It might be interesting to hear whether others have been required to input identifying information in order to access the 'bot?
Perhaps it didn’t like my firewall, VPN and/or use of LS …
Isn’t the thing designed to acquire information?
That’s what it does, right? And it spits information back at you, at your request, right?
Else, why would it care about whether you use a VPN or not.
It actually asked because it is designed to.
Tell it nothing.
Gotta go, Arnold’s at my door.
Anyone not using LS gives away VOLUMES of information every second they are online … I am not aware of an LS version that operates on platforms other than the Mac …
We Windows users have a much better built-in solution anyways:
there is nothing interesting about us.
Does LS = Little Snitch?
As far as I am aware … only available for the Mac platform …
But nothing is foolproof …
I heard a story from an old jeweler who was regularly burglarized by thieves with varying degrees of success. When asked if there was a way to stop the break-ins, he responded, “You can’t stop them, just slow them down.”
The NY Times today ran an Op-Ed piece by Noam Chomsky titled “The False Promise of ChatGPT” that may be of interest to those interested in ChatGPT …
For those without a Times subscription, there’s an Archive.org capture at: Noam Chomsky: The False Promise of ChatGPT. (Unfortunately, the Archive.org capture does not appear to include the exchange between Dr. Watumull and ChatGPT )
P.S. Thank you November_Sierra for the reminder on Archive.org as a non-subscription repository.
I want folks to be realistic about how this will impact all forms of art over the next few years. What we’re seeing now is the tip of an exponential blooming of AI tech, and honestly, pretty hard to see all facets of it. It’s a real threat, I think. It’ll start with the tropey trash because that’s easy, just like those awful “how many fingers!?” AI images. But it’ll outgrow that in a blink of an eye. Just like the finger problem has now been resolved (it took a few months…)
Anyone whose income depends on copywriting should start looking for a second or new job, imho.
I doubt the copyright infringement question will reach that domain anytime soon.
But that A.I. thing will get paid nothing (instead of whatever you charge) to come up with
“Enjoy whiter teeth with Weiter teeth!tm”
It’ll likely be no fun for anyone though – writers, visual artists, you name it.
I still write so much better than ChatGPT. But I like sparring with it.
Yes, transcriptions, web texts, info synthesis: that’s gonna go fast. And then it’ll come for translations; it’s already quite good. It even got a precise metaphor translation right today. But what it cannot do yet, I think, is emulate period language. And it cannot write poetry, and it never will: poems and truly felt sensations from landscape, interactions, people, situations, ideals, etc. And love is personal; we must not forget that. And I am sure our loved ones will not let us forget that!
> we’re probably a handful of years from undetectable content.
You’re talking trope-filled stereotype and cliché, and if that is your gambit, maybe you should compete some more with AI…
I don’t think we will see original content BY an AI. Sparred, prompted ideas, synopsis, shape maybe, but personal writing is detectable. You feel it - that it matters to the writer. I don’t get that from gpt or other bots. Theirs is sterile, correct, perfect.
All forms of non-fiction reportage are safe, I think. Even once it stops inventing sources, it’s a long way from being able to conduct intelligent interviews or dig through primary sources.
Fiction, well … there are lots of really terrible human writers out there. A good bot is probably better than a bad human, but not remotely comparable to a good human.
Almost two decades ago, I worked for a number of years creating a custom application in a large, well known publisher to document all stages of publication. I learned a LOT about publishing.
What impressed me about the industry was their attention to the details, particularly with regard to sources, specifically who wrote what, and most importantly, who gets paid, and what they get paid.
What I see now is a complete disruption of the publishing industry, in that AI has introduced a completely new, obscure set of sources using a proprietary, obscure algorithm, with the need for a completely new set of definitions as to who ‘wrote’ what (notice that wrote is in quotes).
I cannot even imagine where all this will lead, but the above discussions portend a massive disruption in the ENTIRE publishing industry where the source of much of what AI does is a complete black box, at least for the moment.
I suspect, however, that the obscurity may not last; I don’t believe such obscurity of source will remain.
As a rule of thumb, I use the five-word rule whenever I lift a piece of text: duplicating more than five consecutive words requires attribution. I have NO idea if AI follows the five-word rule, or any rule. I’ve not seen any AI output/product that includes ANY attribution. (Am I out of line here?)
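Concretely, the five-word rule is mechanical enough to check automatically. A toy sketch (my own illustration, not any standard tool) that flags every run of more than five consecutive words shared verbatim between a draft and a source:

```python
def shared_runs(draft, source, n=6):
    """Return word sequences of length n (i.e., more than five words)
    that appear verbatim in both texts, ignoring case."""
    def ngrams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(draft) & ngrams(source)

src = "the quick brown fox jumps over the lazy dog at dawn"
draft = "I saw the quick brown fox jumps over the lazy dog yesterday"
for run in sorted(shared_runs(draft, src)):
    print("needs attribution:", " ".join(run))
```

Whether any AI system runs anything like this over its training data is, as far as I can tell, unknown.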
Just as clever as AI seems to be in creating ‘new text’ from a user-supplied source prompt, I believe there will be parallel development to delve into how and where any ‘new text’ is created. (Yes, I know AI also creates much more than just text, but I ask for your indulgence to use ‘new text’ as a proxy for just a bit.)
At the moment, very creative people at the bleeding edge are testing the myriad of ways that their writing lives can be embellished by AI. Many will profit from the chaos that follows as they push the boundaries; many of those boundaries have not even been defined. But those pseudo-boundaries need to be tested before they can be established, as happens in any revolution.
Make no mistake, we are at the precipice of a revolution. Some may understand the extent of the revolution that is simultaneously coming and happening. Many will benefit (and profit) from the revolution. But it will take time for the vast majority to understand the extent of the earthquake that is coming in the experiment that is humanity.
We are at a precipice that cannot be reversed and, despite what Musk et al. think, cannot be slowed. The potential power of AI is just too enticing, alluring, appealing (choose whatever words you like) to stop it now … just like the power that fossilized carbon represented to the pre-WWI militaries of Europe.
The discourse that we are witnessing in the dialogue above is just the tip of the avalanche that is coming …
From my research over the last few years, I see a parallel between what those in the AI industry see today and what the burgeoning militaries of pre-WWI Europe saw back then. Fossilized carbon was a new source of power with which to conquer and impose their view of the world on others, much as many see AI today. The tools to attain that power may differ, but after all, it’s all about power.
Beginning toward the end of the 19th century, that thirst for power blossomed over decades as the evolution, development, and implementation of fossilized carbon grew as a source of both energy and feedstocks. This new source of energy metastasized to transform every aspect of life during the 20th century, finally becoming cancerous, and now threatens humanity in the 21st century.
The only difference with AI is that mature AI implementation will likely occur (at least) an order of magnitude faster than the development of fossilized carbon that extended over the last century and a half.
But make no mistake, AI will impact every aspect of humanity, initially with few rules-of-the-road to guide us as we all stumble forward. The fortunes, careers and profits to be made are incalculable, as are the mistakes we will make as we learn about what humanity has, and is, creating.
In the meantime, perhaps we can have some fun with AI ?
cheers,
scrive
There are examples of these systems inventing “sources” that don’t exist. The key phrase is that they “generate text.” They don’t look up facts and summarize them, they use a fancy pattern matching algorithm to generate sentences and paragraphs that resemble whatever existing sentences and paragraphs on the subject happen to be in their dataset.
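To make “generate text” concrete, here is a deliberately tiny toy sketch (nothing like OpenAI’s actual implementation, just the principle): tally which words tend to follow which in a corpus, then repeatedly sample a likely next word. Note that nothing in the loop ever checks a fact, which is why such systems can invent “sources” that read plausibly.

```python
import random

# Toy "language model": for each word in a tiny corpus, record which
# words follow it. Real systems use neural networks with billions of
# parameters, but the core move is the same: predict the next token
# from the preceding context, append it, repeat.
corpus = "the cat sat on the mat the cat saw the dog".split()
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, length=8, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        candidates = follows.get(out[-1])
        if not candidates:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

print(generate("the"))
```

Every output looks like the corpus, statistically, whether or not it is true of anything.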
This is one of many reasons why calling this sort of product “artificial intelligence” is misleading.
Hi kewms,
I hope that my last post is not construed as any sort of a recommendation for AI.
As my research is non-fiction (with the emphasis on the NON), I simply cannot trust anything that any AI will output/produce.
That said, the hype I see surrounding AI (if you can call it that), parallels the hype surrounding the application of fossilized carbon for warfare. Given the disastrous results of not just one, but two, fossilized-carbon powered world wars, me thinks ‘thar be trouble brewing’.
My guess is the militaries of the world have already taken notice, and likely (more likely definitely) already have significant R&D dedicated to AI.
So strong was the fever pitch that in the years leading up to WWI, Winston Churchill supervised the conversion of the British navy from coal to petroleum fuel to power the fleet. This enabled the warships to build up steam and be underway in hours, as opposed to the day-long process of building up a head of steam when powered by coal. This was an enormous advantage for the British fleet, the largest navy in the world at the time.
To many, AI holds the promise of transforming humanity in ways we have yet to understand, and likely won’t understand until the creatives push the boundaries to the limits that most of us haven’t even thought of.
But the genie is out of the bottle …
Although I don’t trust anything that AI will output/produce, there are those who control the purse strings, are desperate to one-up others, or who see a future we may not. They are willing to put everything on the line, just as the powers of Europe did before WWI, (and again prior to WWII) to best others …
That kind of hysteria toward power, whether it be from carbon or AI, is what terrifies me.
‘Damn the torpedoes, full speed ahead!’
Our predicament with our environment today has its roots in that pre-WWI frenzy and military buildup … we are still living with the remnants of those decisions!
scrive
I am curious whether this is a self-imposed rule or an actual rule I’d never heard of before. I did a brief search and could not find any such rule.