Why A.I. Isn’t Going to Make Art – Ted Chiang

That’s a pretty condescending statement. I’m open to new ideas. I’m not open to stupid new ideas.

4 Likes

It seems to me that posting an image as a demonstration of “what AI can achieve” kind of invites criticism of that image.

I will remind people on all sides of the conversation, though, that personal attacks are out of line.

Having an opinion – pro or con – about AI tools is fine. Attacking others for having the opposite opinion is not.

6 Likes

True. And it breaks the immersion. That’s one area where “AI” could in theory be useful and add something positive: “reality checking”. But it can’t.

I’m not.

These I am open to.

2 Likes

I think it’s pretty evident that we’re likely only glimpsing the very beginning of what AI will eventually be capable of. It is (I believe) too early to say that it will or won’t crack anything, including art.

We’re coming up on a new Industrial Revolution. Unfortunately, many of us are on the wrong side of it, with the rare(ish) skills and talents we pride ourselves on being put in the hands of anyone and everyone with an online connection.

We are going to lose.

I am really starting to understand what Armand was talking about in Interview With The Vampire.

4 Likes

I don’t think it works that way. “Stupid new ideas” are a subset of “new ideas”.

1 Like

And which ones were stupid is often only known in hindsight.

4 Likes

We will see. I think this is what people used to think about blockchain; remember that?

We will end up with a set of tools that help creatives do their jobs better; in that sense, ThoRab is not wrong. But talent with the written word and pen strokes will still matter.

I simply cannot see the art posted here as professional. It isn’t. Now, take a professional artist and give them AI tools, and perhaps that works. But we are not there yet. There are far too many “AI-created creative works” that are extremely uncreative and are simple money grabs off the backs of legitimate self-publishing (as an example).

But audiences won’t be fooled for long. They will demand higher standards, not lower.

2 Likes

You mean that thing that powers Bitcoin?

But, yeah, I’m not surprised that blockchain, a technology that takes power and control away from a historically centralised model and democratises it, has not met with the same push for widespread adoption (from the people who occupy that centralised position) as AI, a technology that is trying to remove that same centre’s reliance on talented (and so expensive) individuals.

Ha! I so hope you are right. However, a quick glance at the “trending” tab of any social media site will likely quickly disabuse you of that notion.

Blockchain and Bitcoin are related, but not the same. I suspect @JasonIron is referring to the brief moment in which NFTs were supposed to usher in a fabulous new funding model for creators.

Yup. Hence:

You mean that tiny use case that was an obvious bubble from the get-go, which no one at all took seriously unless they were trying to exploit being on the early tier of a Ponzi scheme?

You and I clearly run in different circles. The investment and IT sectors were all clamoring to blockchain everything. It is obvious in hindsight, of course, or at least to those of us in IT who had been using audit trails for decades.

We will see. Different expectations for different products…

It was most definitely obvious with foresight too.

Those hype cycles are not the result of shortsightedness; they’re created to extract money from shortsighted people. And it works every single time.

1 Like

To you and me, yes; to others? Not so much.

Resurrecting this thread to add a paper that might be of interest to those following along. It’s by Mehrdad Farajtabar (a research scientist at Apple, formerly of DeepMind) et al., entitled GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models. Or, more simply: “Can Large Language Models (LLMs) truly reason? Or are they just sophisticated pattern matchers?” Here is his Twitter-thread announcement.
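
For a concrete sense of the paper’s method: it rewrites grade-school benchmark problems as templates and samples surface variants, then measures how much a model’s accuracy swings across those variants. A minimal sketch of that idea (my own illustration, not the authors’ code; the template, names, and number ranges are invented):

```python
import random

# Hedged sketch of the GSM-Symbolic idea (not the paper's actual code):
# turn one word problem into a template, then sample variants that differ
# only in surface details. A system that truly reasons should score the
# same on every variant; a pattern matcher often does not.

TEMPLATE = (
    "{name} picks {x} apples on Monday and {y} apples on Tuesday. "
    "{name} gives away {z} apples. How many apples are left?"
)
NAMES = ["Sophie", "Liam", "Mei", "Omar"]  # illustrative values only

def sample_variant(rng: random.Random) -> tuple[str, int]:
    """Return (problem_text, ground_truth_answer) for one random variant."""
    x, y = rng.randint(5, 40), rng.randint(5, 40)
    z = rng.randint(1, x + y)  # keep the answer non-negative
    text = TEMPLATE.format(name=rng.choice(NAMES), x=x, y=y, z=z)
    return text, x + y - z

rng = random.Random(0)
for text, answer in (sample_variant(rng) for _ in range(3)):
    print(answer, "<-", text)

# In the paper's setup, each variant goes to an LLM and the spread of
# accuracy across variants is reported; big swings from changing only
# names and numbers are evidence of pattern matching, not reasoning.
```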

1 Like

Spoiler: they are sophisticated pattern matchers, a hypothesis that this and other recent papers are quantifying with increasing precision…

…and yet the newly minted Nobel laureate and “godfather” of AI, Geoffrey Hinton, is still convinced that LLMs truly understand, possibly have subjective experience, and have intuitions that supersede human thought…

Well, pattern matching is quite explicitly what their code actually does. So to claim that they are anything more is to argue that “intelligence” itself is simply an extremely sophisticated form of pattern matching, maybe with a little randomization thrown in.
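
To make “pattern matching, maybe with a little randomization thrown in” concrete, here is a toy sketch of temperature sampling, the standard way a token is drawn from a model’s learned statistics (the token scores below are invented for illustration; real models score tens of thousands of tokens):

```python
import math
import random

# Toy illustration (not any real model's code): next-token choice as
# pattern matching plus randomization. The scores stand in for a model's
# logits; temperature controls how much randomness is mixed in.

def sample_next_token(logits: dict[str, float], temperature: float = 0.8,
                      rng: random.Random | None = None) -> str:
    rng = rng or random.Random()
    # Softmax over temperature-scaled scores -> a probability distribution.
    scaled = {tok: s / temperature for tok, s in logits.items()}
    m = max(scaled.values())
    exp = {tok: math.exp(s - m) for tok, s in scaled.items()}  # numerically stable
    total = sum(exp.values())
    probs = {tok: v / total for tok, v in exp.items()}
    # The randomization: draw a token in proportion to the learned pattern.
    r, acc = rng.random(), 0.0
    for tok, p in probs.items():
        acc += p
        if r <= acc:
            return tok
    return tok  # floating-point safety net

# Hypothetical scores for the continuation of "The cat sat on the ..."
print(sample_next_token({"mat": 3.2, "sofa": 2.1, "moon": 0.3},
                        rng=random.Random(1)))
```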

Far be it from me to argue with a Nobel laureate, but that seems like an overly reductive definition.

1 Like

I’m currently reading an excellent book (“Noise: A Flaw in Human Judgment” by Daniel Kahneman, Olivier Sibony and Cass Sunstein) which (amongst many other points) makes the case, backed by data, that even simple models are likely to be more reliable in making judgements than the humans they model.
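
As a hedged toy version of that claim (my own illustration; the cue weights and noise levels are invented, not taken from the book): fit a simple linear model to a noisy judge’s own ratings, and the model tracks the truth better than the judge does, because fitting averages the judge’s noise away.

```python
import random
import statistics

# Toy simulation of the "simple models beat the judges they model" result
# discussed in Noise (all numbers invented for illustration).
rng = random.Random(42)
N = 20_000

a = [rng.gauss(0, 1) for _ in range(N)]  # cue 1
b = [rng.gauss(0, 1) for _ in range(N)]  # cue 2
truth = [x + y for x, y in zip(a, b)]    # the outcome we want predicted
# The judge weights the cues imperfectly AND adds occasion noise.
judge = [0.8 * x + 0.2 * y + rng.gauss(0, 1.0) for x, y in zip(a, b)]

# Fit a linear model OF THE JUDGE. Because the cues are independent here,
# each coefficient is simply cov(judge, cue) / var(cue).
def coef(cue):
    return statistics.covariance(judge, cue) / statistics.variance(cue)

wa, wb = coef(a), coef(b)
model = [wa * x + wb * y for x, y in zip(a, b)]  # the judge, minus the noise

print(f"judge vs truth: r = {statistics.correlation(judge, truth):.3f}")
print(f"model vs truth: r = {statistics.correlation(model, truth):.3f}")
# Prints roughly r = 0.55 for the judge and r = 0.86 for the model:
# the noise-free version of the judge's own policy beats the judge.
```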

Combine that case with the lessons on cognitive bias and heuristics from Kahneman’s earlier book (“Thinking, Fast and Slow”, also an excellent read) and you may come to the conclusion that humans themselves are simply flawed Large Language Models that could be improved upon.

EDITED TO CLARIFY: I should point out that the potential conclusion above is mine, not one proposed by Kahneman et al. in their books.

DOUBLE EDIT: *at least I don’t think so; I’ve not finished “Noise” yet!

TRIPLE EDIT: there may be something to this flawed judgement thing after all

1 Like