There’s a whole bunch of problems there, and not all technology-related.
That said, it really is important to separate LLMs from AI – LLMs are a type of neural network, which are a type of AI. The article I posted is referring to ChatGPT, which is an LLM, of course, and the general point is that LLMs are a dead end, but not AI in general.
Demis Hassabis has done pioneering work in AI. What I was objecting to in @AmberV’s comment was lumping in, say, AlphaFold with LLMs. AlphaFold is not a SillyThing – and nor are many of the projects DeepMind are working on.
With all respect, you’re still putting words into my mouth. At first it was to say that I was sceptical about whether LLMs are limited, and dismissive of the person who posted the link, of the article, of the comment about it, and of the CEO doing the speaking. Now it’s about a technology for predicting how proteins fold, even though that has nothing to do with the original article, the comment linking to it, your comment linking to that, never mind the topic of this thread, anything the CEO had to say, and most certainly nothing I had to say. Can we agree that might be a stretch?
It was a very simple and very narrowly constrained statement, in the form of a joke, on whether announcements made by CEOs should be taken as anything other than what CEOs do when they get on a stage and talk into a microphone. I.e., is this really about finally coming out and saying what we all feel or know to be the truth? Or is this about selling something?
To me the latter is more likely: it seems Alphabet may be on the cusp of announcing a new project, probably to do with world modelling. My guess is they don’t have much to show for it yet, but want to get their narrative into the open sooner rather than later, ahead of the waning momentum of LLMs in the world at large, in the hope of becoming the next money pit.
My disappointment is that Demis didn’t pipe up earlier.
I’m sure nothing was said about it because it would have been very bad business to basically call your own product (Gemini) a dead end. Hence my speculation that they have something else to promote on the horizon, and as to whether or not that is a SillyThing2, I don’t think anyone outside of their labs can really say. But if anyone is interested in whether world models (which are actually the topic of the statement, and of mine) are the next dead end, or something that might matter, the discussion on the linked post above is quite interesting and worth reading.
In fairness to Alphabet, it’s mostly their own money they’re burning, rather than investor dollars like OpenAI. Which in this case may be a good thing, as it may help them see more clearly.
I definitely agree with that sentiment, and wish more of these endeavours were born from such criteria. Perhaps progress would have been a little slower and more deliberate, and, as you say, that isn’t a bad thing. It also wouldn’t feel like we were on the precipice of a frighteningly huge economic bubble right now.
That said, I wouldn’t ever accuse Alphabet of not wanting to be a money pit. But more seriously, there are many other reasons to want to strategically “win”, even if the win isn’t the kind of jackpot OpenAI has been: a company with a pervasive presence is well positioned to take advantage of it, or to be the thing left standing when an adjacent bubble collapses.
@SedonaSam imagine a hypothetical person who feels an instinct towards storytelling, who wants to write a story themselves. they have an itch for a story of a particular type or protagonist that doesn’t exist, so they have to write it themselves. once, they would have written that story. it would likely have failed as a story, at least on the first attempt, but they would get better as they worked on it, and they would have injected their own unique sensibility into it.
nowadays, they can go, “I would like to read a story with X and Y, without Z type of protagonist, please write it for me” and the LLM writes it, and whenever they want to read another story, they can do so. they don’t have to write a story themselves; they don’t even have to read the work of great writers who could really show them the way.
I see it as an okay outcome for a low-talent writer to get into “writing” stories with LLMs. but if potential writers with high talent start using LLMs, they might get stuck there and never do anything more with their talent.
The difference between the low talent writer and the high talent writer is that the high talent writer is willing to do the work. No one starts out good.
This. The only thing an LLM will do for a low-talent writer is prevent them from becoming a high-talent writer. It is the act of writing, failing, and writing again–ad nauseam–that turns one into the other. All an LLM will do is hinder that progress.
a writer of low talent may work and work at it but they will not produce work of substantial quality. they may even get someone to pay for their work.
some years ago, I knew a painter, who helped run a gallery, who taught art classes at the gallery, who painted every day, and had no talent whatsoever. I could provide proof of how badly she painted but to do so I would have to show you actual paintings, and thus, reveal her identity. but as I said, she loved painting and she painted every day.
conversely, high talent doesn’t ensure anything. it doesn’t mean perseverance. it only means potential. higher potential.
Talent and skill are synonymous, not different things: both are improved by practice and growth, not by asking a third party (human or not) to create anything for you. The only way for a person to grow creatively is to create.
I don’t know this person, obviously, but developing technique is only part of the “work” that needs to be done.
Back to the topic of the thread, I would say that the part of writing (or painting, or whatever) that is not about technique is probably both the most important and the least accessible to machines.
absolutely! I fear that the people who do possess that special quality (or whatever you choose to call it) will get seduced away from creating their own work and having AI create inferior work instead.
the point of bringing up the bad artist: work as they might, the majority of artists or athletes or what-have-you will reach a point they can’t get beyond. they will have reached their capacity, and no amount of hard work will push them past it.
you know this, and I know it, but a (say) 19-year-old may not, especially if leaning on AI is socially normalized.
my sister teaches writing at the college level and tells me that her students use AI and try to get away with it, but the college says they can get away with it three times without formal consequences. pretty certain she’ll tell them on the first day of classes not to use AI, but they’ll use it anyway.
At the risk of potentially devolving this discussion: Words mean things and you’re intentionally subverting their meaning to fit your argument.
As an academic, I agree 100%. AI in our spaces is the closest thing to an untreatable cancer that we have ever seen. There is a lot of discussion about how to curtail its usage, and I’m a fan of the idea that academia return to a textbook-oriented pen, paper, and notepad learning environment. Is it cumbersome? Yes. But I think in the long run it will benefit both students and faculty by forcing us into campus libraries and into peer study in community spaces, and by making us more hands-on and present, even just in the lecture environment, having to physically take notes and listen.
This in turn raises all kinds of accessibility issues. For better or worse, “physically taking notes” is outmoded, IMO. Between 5% and 20% of us are dyslexic or dyspraxic or somewhere on that spectrum. (And school is not supposed to be just about stuffing heads with data?)
the students want to break into writing movies and TV, and, if anything, they should take their writing more seriously than a hobbyist would. even if they only wanted good grades, they ought, for their own sakes, to have higher standards than said hobbyist.