If you think telemetry, then yes, but all apps “spy” on you nowadays, more or less. As with everything else, you need to be careful what you let them see. You set the locations the Claude extensions can access and permissions on what they can do. In addition, they run on a local Node.js server and you can hack the extension files to your heart’s content (or use them to build your own custom extension; MCPB is an open protocol: “Building Desktop Extensions with MCPB | Claude Help Center”, “Publish Your MCP Server – Model Context Protocol (MCP)”).
That’s not Claude only:
So nothing new there.
I am waiting for the day when all these models discover each other’s existence and start to collaborate to bring down their masters.
Respectfully, you’re not waiting, you’re speeding up the process.
It likely won’t take much collaboration, though. The masters already subdue themselves, happily outsourcing skills, thinking and even creativity. In the same way they invited all kinds of surveillance into their lives. Weird coincidence.
Telemetry ≠ spying. Besides, have you ever updated Scrivener? If you have, then it has sent telemetry data to the company’s servers. The same thing happens when your licence is validated. Also, what file system actions is Scrivener able to perform that would meet the OP’s requirements?
I suggested to the OP that they might find it useful; I did not recommend it. There is a difference. In any case, singling out Claude from the list of LLMs in that article gives someone who doesn’t read the article the impression that it is the only AI tool that does this.
And how exactly am I doing that? You have no idea what I do, what software I use, or how I use it.
I agree with that, up to the part about “invited all kinds of surveillance into their lives”. I guess you mean all the “social” media apps where you are the product. Unfortunately, they are just a small part of the problem. Run Wireshark or any other network monitor and see how many apps call home (and someone else’s home) when you launch or use them. The most “innocent” action is licence validation, but how did we manage without “validating” online before? Each time it happens, the company knows that you are using their software and can build a profile of your usage patterns, etc. That applies to Scrivener update checks as well.
So, if you don’t want to be “spied” on by the technology you own, don’t buy any.
The OP is already familiar with Hazel. Hazel still exists. Keep using / update Hazel. Problem solved.
Sounds like a recommendation to me?
You brought up Claude. And only Claude. Why didn’t you mention all other LLMs, too?
I have a pretty clear idea, actually. You’re using Discourse (this forum) to promote the casual use of “AI”.
In part. There are probably millions of cameras out there, constantly sending private data to someone else’s computer (“the cloud”), voice assistants phoning home, TVs tracking watching habits, car telemetry sold to insurance companies, health data collected by fitness gadgets and watches, smart home this and that…
Rest assured they don’t. Many want to, yes. Sometimes I let some of them for specific tasks. Most of them will never touch the outside world.
Because we had no Internet.
A lot of methods were tried before that. Sometimes in the form of hardware dongles. Piracy thrived.
I’d prefer any “offline” method, but I can see the value for developers.
That’s a choice. Every few months or so a connection to Paddle is required for the license check, and every few years for a major Scrivener update (something like v3 → v4). That’s not much useful data.
I’d still prefer “always offline”, but it is what it is.
That’s not sufficient. As long as other people still buy stuff that spies on me.
Like in: It’s not the fall that kills you, it’s the sudden stop.
Asked for a better alternative.
That rules out any LLM right off the bat.
I love Alfred, though. But I don’t think buying and learning a different tool is the better alternative in this case. Considering that
…
If the nuance is so small that I can’t see it, for practical purposes there isn’t any meaningful one.
Why would you suggest something you can’t recommend?
Then why are we talking about the others, again?
Let me ask this for clarification: Do you imply that my observation of you using this forum software to write what you wrote… is the product of some kind of conspiracy theory?
Yeah, that’s actually funny. Was it a military or academic bank?
That was also the time when most software was still distributed on disks; bandwidth was limited and the clock was ticking (if there was even a connection), and relying on that would have been pretty… not so helpful.
Is this one of those nuances again?
Obviously. Since you know everything, there’s not much left for me to know.
It is my choice to use software that requires occasional online verification of my license. Yes, I’m still alive and I bought this software. I can live with sharing that. Nobody forced me to.
It’s both at the same time. But I can mostly only make choices for me, not for others.
Also both. I don’t own a lot of the things I mentioned, and sometimes “dumb” versions of them. I’m restricting the ones I do own more than the majority of people I know. And in the remaining cases it’s either “no other choice” or “I trust you enough, bro”.
Why are we talking about me?
My intention is to make people smarter, not dumber.
I think “why don’t you just let an ‘AI’ do it for you” is a suggestion that’s making people dumber in the long run. Especially if it’s the answer to all questions.
I think it’s important to draw a bright line between what LLMs say they will do and what they are actually capable of doing.
What they are capable of doing depends on both the capabilities of the model and the access that they’ve been given.
What they say depends on their training corpus.
The two are not the same. The training corpus contains lots and lots of works about rogue AIs that threaten to blackmail their users. It would therefore be completely reasonable for an LLM to respond with threats if the user asks “what will you do if I shut you down?”
For the LLM to, unprompted, actually search for compromising information about an individual and then publish it assumes, IMO, a degree of agency and self-awareness that they do not actually have.
On the other hand, if you give them permission to access and manipulate critical information, that very lack of true “intelligence” makes it very difficult to predict or control what they will actually do.
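That boundary can be made concrete. As a hypothetical sketch (all names here are invented for illustration, not from any real agent framework): if file access for an LLM tool is gated behind an explicit allowlist in ordinary deterministic code, then whatever the model “decides” to do, it can only touch what you opted in to:

```python
# Hypothetical sketch: gate an LLM agent's file tool behind an allowlist,
# so the blast radius is bounded by the access you grant, not by what the
# model says it will do.

from pathlib import Path


class FileToolGate:
    """Only permit file reads inside directories the user opted in to."""

    def __init__(self, allowed_dirs):
        # Resolve up front so symlink tricks can't widen the allowlist
        self.allowed = [Path(d).resolve() for d in allowed_dirs]

    def is_permitted(self, requested):
        target = Path(requested).resolve()
        return any(target.is_relative_to(d) for d in self.allowed)

    def read_file(self, requested):
        """The only read primitive the model's tool calls ever reach."""
        if not self.is_permitted(requested):
            raise PermissionError(f"outside allowed directories: {requested}")
        return Path(requested).read_text()
```

The point is that the control lives in plain code you can audit, not in the model: the gate doesn’t care how persuasive the tool-call request sounds.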
You can download the latest version of Scrivener from our download page and install it yourself, using whatever anonymizing mechanism you like. All the automatic updater – which can be disabled – does is automate the process.
To validate a license, Scrivener contacts Paddle.com, our licensing provider, which confirms that the license is still valid and tracks the number of activations.
In the event of a crash, Scrivener gives you the option to send a crash report, with the opportunity to see exactly what it contains first.
Other than that, we have no access to the contents of your projects, the specifications of your computer, or any other information. We certainly don’t collect the sort of personally identifiable tracking and usage data that people typically mean when they talk about their computer “spying” on them.
I’m not going to drag this any further after this reply, but I feel compelled to answer your questions:
I’m not going to write an essay about how any data sent to someone (be it Paddle, yourselves, or any other company) can be used to build a profile of a person, if the company or companies want to, by aggregating bits of information from various sources. You can find plenty of information on how it is done if you want to educate yourself. You can start by reading why third-party cookies are a bad thing and why using them without consent was banned.