NYT's article on how writers are using ChatGPT

Hi there… I would suggest that in the entire span of humanity you cannot find a single time when mankind actually tried to do this. Small pockets tried but always fell back into their old ways quickly. Mankind appears to be incapable of mature co-existence with anything. Full stop.

I admit to full pessimism today. I need to go back to the water for a decade or so. Where are my sailboats and bottles of rum?!?


Pattern matching is not intelligence.

I’ve been wrestling with how we define “intelligence” and “sentience”. The problem I keep running into is that many people would fail most of the definitions. If we use these terms for “AI” the same way folks apply them to humans, e.g. all humans are intelligent and sentient, then LLMs are already intellectually superior to most humans and likely qualify as sentient by comparison.

I really need to buy more rum.

I like the questions described in the article as a step toward a potential sentience detector.

Certainly AI as a field has a long history of achieving a major “intelligence” milestone and then saying, “nope, that’s not it,” but that’s in part because we don’t really have a good understanding of human intelligence, either.


A central issue: the implicit assumptions we all make are immense.

Many fields have experienced major earthquakes once they started being accessible to more diverse groups. Archaeologists who work with the descendants (and traditional knowledge) of the groups being studied have drawn radically different conclusions from archaeologists who don’t. The idea of “universal grammar” in linguistics becomes shakier the more languages you look at. And so on.


Recent experience with Almost Intelligence has highlighted AI’s susceptibility to what AI developers refer to as ‘hallucination’.

Until AI came along, the concept of ‘hallucination’ was a distinctly human peculiarity.

One wonders to what degree the tactile nature of human senses and perception, as a constant reinforcement of reality and actual facts, serves as a continual reset for our thinking.

This raises the question of whether our constant contact with reality is a crucial component of what we think of as human intelligence, continually correcting whatever permutations our thinking may serve up as we look for new ways to evolve and survive, as painful as that may be at times.

scrive
:thinking:

Phrased another way, “Physics will kill you whether you believe in it or not.”


The difficulty is that we may assume “reality” and “facts” can be objectively and commonly (similarly) understood. I live with someone with bipolar disorder. She sometimes talks about multiple hard drives, which appear to me, through observation, to operate in “different parallel universes”. If that is the case for one person, then each person in our objective world has a different reality. It seems that AI may promote the spawning of additional hard drives (parallel universes) for each of us, in SPITE of the tactile connections we have. Said at a time when her hard drives are being serviced as part of a recent hospitalisation, so perhaps I am being cynical.


It would appear that Almost Intelligence can now read our minds!

Strap in and buckle up!

Cheers,
scrive

The White House will host leaders of AI’s top companies Thursday for a meeting with administration officials, Axios has confirmed.

Details: Vice President Kamala Harris and other senior administration officials will meet with the CEOs of Alphabet/Google, Anthropic, Microsoft and OpenAI, per an invitation obtained by Axios.

  • The meeting is meant to underscore the responsibility of developing safe and trustworthy AI that mitigates potential harms, part of a broader effort to engage with different industries about AI, according to a White House official.
  • The official said the CEO meeting builds on previous White House efforts such as the Blueprint for an AI Bill of Rights and the AI Risk Management Framework.

Why it matters: When some of the smartest people building a technology warn it could turn on humans and shred our institutions, it’s worth listening.

scrive

Farahany’s The Battle for Your Brain is a good read on this stuff. Companies are well down this route already.

“Corporations are getting in on the act too. Before long, computers will interface with our brains directly, allowing companies to know what products we want before we do – and which pitches we are the most primed to love. L’Oréal, the beauty and fragrance world leader, has even launched a strategic partnership with Emotiv, a large neurotech company, to target fragrance selection to individual brains. It now offers in-store consultations to help consumers find the “perfect scent suited to their emotions” by asking the customer to wear a multi-sensor EEG-based headset to detect and decode their brain activity through powerful machine learning algorithms. Will people willingly trade their brain data for customized perfume? Will those of us who refuse the technology have advertisements targeted to our brains based on the data amassed from other people? And is this just the tip of the iceberg of corporate collection and commodification of our brains?” Link


I think the following suggestion may provide the greatest impetus toward an accountable rendition of Almost Intelligence:

“The fourth is liability. There’s going to be a temptation to treat A.I. systems the way we treat social media platforms and exempt the companies that build them from the harms caused by those who use them. I believe that would be a mistake. The way to make A.I. systems safe is to give the companies that design the models a good reason to make them safe. Making them bear at least some liability for what their models do would encourage a lot more caution.” (emphasis added)

Of course, from what I’ve read, considering that AIs can generally pass whatever ‘Bar’ exams there are (and may even potentially replace all attorneys), the challenge will be proving that AI had anything to do with a bad outcome.

scrive

Full interview: “Godfather of artificial intelligence” talks impact and potential of AI

scrive

That one’s easy. You sue the person/organization providing the objectionable content. If they don’t want to be held liable, they will open up their own records and help you prove that the AI did it.


“Geoffrey Hinton nurtured the technology at the heart of chatbots like ChatGPT. Now he worries it will cause serious harm.”

How Oppenheimer of him, or a blatantly opportunistic attempt to insert himself into the history books.
You’d think he could come up with more fear-fuelled words than “worries it will cause serious harm”.
It’s obvious he got his bot to write it for him.
Now he’s screwed by his own invention, which stole a line from a cigarette packet.


I don’t know.
But if I were in his seat and worried about the result of my own doing, I’d fix it, not quit.
Or maybe he looked ahead just a tad too late.

“Yay. Let’s do it and see if it blows up. We’ll get the helmets from the car once we’re done.”


I was previously very nervous about the introduction of AI, but now that I’ve seen this AI-generated advert for a pizza restaurant I have to say… I would totally eat here (and I absolutely love their slogan).


Looks more like pepperoni bug spots. I feel quite ill just looking at it.


Perfect adjective! Thank you!
