The Great Big Scrivener Survey

Oh, yes - when I said that I’d put together a blog post, I didn’t mean that it would include any major conclusions, such as, “From the results we have decided to turn Scrivener into a basic Markdown editor that only runs on ChromeOS.” :slight_smile: But it would be interesting to tease out some general trends. For instance, from the 3,500+ responses we’ve had so far, there are nearly equal numbers of people using iOS, macOS and Windows, but when asked which platform they prefer to write on, very few choose iOS.

Ah. You really are struggling.

I’m sorry.

My response was perhaps a little juvenile. But honestly, I was baffled by your response as my initial one was quite friendly. But then it always baffles me when people are keen to criticise but not so keen to accept that their criticisms may have been unjustified, and so instead change tack to criticising something else entirely.

3,500 responses, wow, that’s a lot in just 24 hours or so. Lots of feedback to digest, best of luck with it.

Thanks! Over 4,000 now. :slight_smile: We sent a newsletter out a few hours ago, and the response has been phenomenal. A good range of responses, too, including 13% from people who decided against using Scrivener. It’s great to have people like that responding too, because (a) it shows they still have some interest and (b) it allows us to get feedback from them on why Scrivener didn’t work out for them (even if it turns out only to tell us that what they were hoping for isn’t the sort of app we want to make anyway). The 18 people who ticked “I have never used Scrivener and don’t know what I’m doing here” have me somewhat baffled, though…

That sounds like something I would do. Regularly. Because it is easy to spoof access controls on certain survey sites. I’m not saying I [b]did[/b] do it. It just sounds like my kind of thing :slight_smile:

That number is now up to 19. I think we have our culprit. :slight_smile:

Maybe that says more about personality than the survey? I finished in less than 10 minutes, mostly because I didn’t stop at each question to find the perfect answer. A reasonable answer from lots of people is sufficient.

Why would you spend half an hour on the survey, when your answer is one among several thousand?

The conditional branching structure (cited above as an obstacle to showing estimates of time remaining) suggests that some paths through it may be shorter than others.

Absolutely, but the average time spent on each question probably has a significant effect. Ten seconds versus 30 seconds or even a minute to choose an answer will affect the total time more than branching: with, say, 60 questions, ten seconds each comes to ten minutes, while 30 seconds each comes to half an hour, whereas a branch that skips a handful of questions saves only a minute or two.

Besides, it’s something you do voluntarily. Why care if you didn’t like it? Why not just drop it and forget it?

That would be a pity – it’s a good product and a useful exercise.

Potentially useful, I think, to report difficulties and make suggestions if there is any reasonable possibility that there are feasible adjustments which might lead to fewer abandoned responses.

(Your question, though, may be more relevant to this thread than to feedback on the questionnaire itself :slight_smile: )

I did the survey and shared it in some author groups. At least one person has noted (and I’d agree) that what would have been useful, though, is maybe giving an explanation of some of the terms, like “scrivenings”.

I’ve heard the word but I have no idea what it refers to. The word doesn’t appear on any of the menus or in the UI of the older Windows version. I see it is in 3 now, though I only know what it is because it replaced what was called Document View?

There were a few others like that as well - where an explanation, or even just a quick reminder, might have helped.

Thanks for the feedback. Someone else said that about “Scrivenings”. This honestly came as a surprise, because “Scrivenings” is one of the core features and is covered in the tutorial (where it has its own section), so in theory ( :slight_smile: ) it’s one of the first things people learn when using the software. (It should appear in the menu, but it alternates with document view depending on what mode the editor is in - so it’s a good point that you may not see it there.)

To be clear, I’m not saying anyone is wrong for not knowing this; it just genuinely didn’t occur to me that users would be unfamiliar with the term, given its intended centrality. To clarify, “Scrivenings” is the app’s name for viewing any number of documents as though they are a single document in the editor.

Unfortunately, I don’t think I can now edit the survey to annotate this - from my tests, it seems that if you edit a survey in SurveyMonkey, anyone currently taking the survey will be booted back to the beginning. For the future, though, it’s useful to know that not all users know what “Scrivenings” mode is.

A more sensible survey than many that I get asked to fill in. I don’t mind a long survey so long as I feel that the answers could be helpful, and your questions mostly seemed well thought out. There were one or two where the possible answers did not cover my responses, but nothing serious.

It was only after I finished that I thought of a feature suggestion that would be very useful: being able to export, and ideally import, paragraph styles that could be read as named styles in word processors.

Yeah, but it’s one of those things that’s so core to the way the software works that it doesn’t feel like a “feature” that would have a name, in the same way that the ability to add punctuation in amongst the letters doesn’t feel like a “feature”.

It’s a compliment, really.

Filling in the survey took me more than 15 minutes, yes, but it was a real pleasure!

This very thorough survey reminds me of another thorough one, held two years ago by Joe Kissell, the publisher of the excellent Take Control Books. In that case, 85.3% of the respondents turned out to be 51 or older, and 49.1% even 66 or older. Moreover, 85.5% of the respondents were male. Which made Joe sigh “it looks like our audience is mostly a bunch of old guys”.

But those were user guides, this is an application. I’m looking forward to (some of) the results!

I answered the survey on my phone, and without a keyboard it’s not easy to write at length.
I didn’t know this thread existed when I posted my suggestion to use Wizards to address issues that users may have with perceived complexity. My post, with a poorly drawn example of how a Wizard might look/work, is in my recent thread in the Wishlist forum:

https://forum.literatureandlatte.com/t/just-finished-the-survey-how-about-wizards/92985/1

After my first failed attempt, I revisited it, and this time managed to complete it. I didn’t bother about timing how long it took. The only things I found were that some selections had “often” as the most frequent choice available, where I wanted to say “always”, and that there were some questions with no space to elaborate (in my case, questions about the Windows beta) where none of the answers were actually relevant to me.

Mark

I have a few of their books; very useful. I missed this survey, but that’s a very interesting read, especially about the subscription model for their business. Thanks for posting.

Putting aside the relationship between thoroughness and efficiency, was there, in fact, a bit of a gap in the survey?

(where some focal curiosity about learning and feature-discovery might have been expected?)

A slight blind patch in the exercise is suggested by the exchange between @AnmaNatsu and @KB above.

Did the survey not probe how users acquire their knowledge of the app?
Or how long they persisted (if at all) with the tutorial?

@KB seemed rather baffled by the possibility that knowledge of the app, and discovery of its features and terminology, were not primarily determined by:

* Their "intended centrality" in his own thinking
  • the provided tutorial

It’s very common for users of any application to rely primarily on the discoverability of features in the course of experimental use.

Equally, it’s not at all easy to engineer a successful ratio of effort-to-reward (for a wide range of users, with diverging needs, experience and cognitive styles) in tutorial material.

It would be interesting to know:

  • What proportion of users read the tutorial at all
  • What the average tutorial completion rate appears to be
  • What the feedback on the tutorial is
  • What the feedback on feature (and terminology) discoverability is.