Urgh. More AI hate from me

Yeah, but… that word “potential”. In the meantime (ie, until we get to a place of acceptable reliability), are your team still actually reading the source docs, understanding them, and making sure the summaries are correct? The AI advocates in my workplace (largely) aren’t, plus it’s already showing signs of impacting their accumulation of knowledge and capability (both of the subject being studied and the craft of studying and developing itself).

…and of course, if your team are still reading these documents, where is the time saving? Unless, of course, they’re just using the AI tools as an aid to prioritisation / triage.

I’m sort of okay (although a bit mocking and judgemental) with AI tools being used as a fancy grammar checker, in the same way that I’m sort of okay (and mockingly judgemental) with AI working behind the scenes in Photoshop to improve the algorithm for “Content Aware Fill”. I don’t, however, think either of those highly marginal benefits (for people who actually know what they’re doing) are worth the cost – in time, in skills, in actual money, in computing resources, in environmental impact, in societal decline, or in legal / regulatory impact for creative professions.

5 Likes

Certainly, they know what they’re referring to and have also read the summaries. That’s why they get paid. AI makes no difference in that respect; its purpose is solely to lessen unproductive writing tasks.

I’ve used ChatGPT in the past to write code in Perl. Multiple times it offered examples that simply did not work. Similarly, when asked for example SQL queries for SQLite3, it included functions and clauses that do not exist, or used syntax from some other relational database system. And again, when asked how to write LaTeX code, it produced stuff that did not compile.
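A quick way to see that class of failure for yourself (a hypothetical illustration of the kind of thing I mean, not the exact queries I tried): SQLite rejects functions that LLMs commonly borrow from other databases, such as SQL Server’s `DATEDIFF()`, while the working SQLite idiom uses `julianday()`:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, placed TEXT)")
con.execute("INSERT INTO orders VALUES (1, '2024-01-15')")

# A function an LLM might borrow from SQL Server -- SQLite has no DATEDIFF().
try:
    con.execute("SELECT DATEDIFF(day, placed, '2024-02-01') FROM orders")
except sqlite3.OperationalError as e:
    print(e)  # no such function: DATEDIFF

# The idiomatic SQLite equivalent: subtract Julian day numbers.
days = con.execute(
    "SELECT julianday('2024-02-01') - julianday(placed) FROM orders"
).fetchone()[0]
print(days)  # 17.0
```

The plausible-looking first query fails at runtime, which is exactly why you can’t judge the output without already knowing the dialect.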

I do not trust these LLM AI systems with anything other than general assistance in topics that I already know.

3 Likes

In my experience, and as I note in the original post, if you can’t tell the difference, you probably shouldn’t be relying on AI just yet.

I was asking if they’re also reading the source materials, not the summaries. There is a very big difference between those two things, especially with AI not being the most reliable in terms of accuracy (at the moment). I have directly observed teams that have used AI tools with any regularity over the past 6 months showing atrophying critical-analysis skills, and those “born into” an AI-tool environment never developing them in the first place.

(much like I never developed oil painting skills because I was born into a world with cameras)

4 Likes

Some people are just hell-bent on removing themselves from the workforce, and I have come to the conclusion that my best option is to lean back, enjoy some popcorn, and watch them go down. Intentionally being evil is another thing “A.I.” can’t do. Yet.

3 Likes

AI itself doesn’t need to be intentionally evil, when (some of) the people running the AI companies take such a delight in directing it towards malevolent goals.

1 Like

Here in the US, more than a few lawyers have been caught submitting briefs with fictitious AI-generated citations. That’s the kind of thing that can have catastrophic professional consequences, up to and including disbarment. And yet it still happens.

“Unproductive writing task” is in the eye of the beholder. In my own experience, the tasks that AI purports to help with are exactly the ones that I need to do myself to develop a solid understanding of the material.

7 Likes

Although I may be slow, I have begun to understand the problem. Microsoft, Apple, and Google are introducing artificial intelligence into the market, and its proliferation appears inevitable. Engaging in social media disputes will not resolve this matter. One can either acknowledge and leverage the benefits and market potential of AI or choose to overlook it. The decision rests with you. I have no further comments to make.

You missed “evaluate the potential benefits and decide that they do not justify the costs.” It’s not “overlooking” AI to say “LLMs generate garbage and I don’t want my work (or my company) anywhere near them.”

9 Likes

I agree with every single word of this.

1 Like

Apparently I missed one of the costs…

How many words long is the Scrivener codebase?

I think you have achieved your goal. Just in an unexpected genre.

1 Like

I’ve used ChatGPT to write JS plugins for OmniFocus that implemented four out of five features perfectly, but wouldn’t fix that fifth feature no matter what the prompt. I uploaded the JS code to Grok, asked what it was (it knew immediately) and then asked it to fix the fifth feature and change nothing else. Five seconds later, it spat out new code, and it worked the first time.

AI keeps getting better at everything. Today is the worst it’s ever going to be.

1 Like

For stuff like that, somewhat. I have found in my day job that it is rather hit-or-miss. The worst part is that it may be as good now as it will ever be. Due to training rot (it takes in all the AI-written stuff in a vicious downward spiral), it may just keep getting worse.

The evidence for this is … not great. To improve, each generation uses a larger and larger model. The problem with that is (1) exponential growth in computing resources is not sustainable; real physical limits exist, and (2) as @JasonIron pointed out, the amount of quality (i.e. human-generated) training data is finite, and they’ve already slurped up most of what is publicly available. (And they needed to play fast and loose with copyright to get that.)

Per the link I posted upthread, several “premium” models actually give worse results than their free counterparts. That could be an early sign of inadequate training data: the premium model is trying to give more degrees of accuracy than the data supports.

1 Like

@kewms, you’re assuming that the underlying technology requirements will stay the same over the next year, and they almost certainly will change. There’s an enormous amount of innovative pressure coming from open source. I think it’s a very safe bet that in one year we will have smarter, more effective AI/ML that is faster and cheaper to train and requires less computing power.

For a precedent, just look at DeepSeek-R1, which was developed and trained at a fraction of the cost and in less time than ChatGPT, but has competitive functionality – and is open source, so other computer scientists can use it as underlying technology for their own innovation.

Throughout the history of human technology, whenever there is a big breakthrough, it precipitates a cascade of innovation, as long as there is a way to make the technology more useful. Otherwise, we’d all be using carts with stone or wooden wheels.

There’s quite a lot of controversy about DeepSeek’s training claims, and as far as I know no discussion of it in the peer-reviewed literature. Certainly press releases from the Chinese government should not be taken at face value.

Also, inference is not free, and it becomes more not-free as the model size and the context window increase. Moreover, the inference cost is incurred every time the model is used.

3 Likes

You’re entitled to your opinion. There are folks who thought the internet was a passing fad, as well.

I would say that the internet as we know it today is different in very significant ways from the internet as it was envisioned by investors in the 1999-2000 dot com boom. Anyone remember Ask Jeeves?

For “AI” as a category to have tremendous potential does not necessarily mean that LLMs will ever deliver on the more exuberant promises of their creators.

2 Likes

Yes! Warmly, I might add. It was pretty good for the time.

Jeeves, the character, might be a good way to evaluate the utility of AI offerings. (A gedankenexperiment.) Ask a question of your LLM of choice and then compare the response to what Jeeves would say. The answer, at the moment, is that Jeeves is so far beyond the LLMs that it’s ridiculous. Why? Because LLMs are word-prediction machines with absolutely no underlying structure that “understands” the real world and can use it to evaluate the question. Jeeves, on the other hand, understands the world immaculately, much to poor Bertie’s benefit. And by gosh, he speaks well, too.

Machine learning is the real deal. There are astonishing achievements in medical imaging, drug discovery, and the like. But LLMs? A seductive parlor trick that’s flooding the world with incorrect, insufferably bland language.

Dave

3 Likes