Safari/Scrivener perplexity

Probably this is more a Safari problem than a Scrivener one, but I can’t be the only writer to be vexed by it.

While researching an article, I usually save web pages with relevant information in page source format (rather than web archive format) and add them to the story’s Research binder in Scrivener. That way, I’ve always got a copy even when I’m not on-line (it happens!) or if the page gets taken down. Also, when I’m sourcing a quote or fact, I can create a Scrivener link from the quote in my article to the page in the Research binder AND highlight the quote in the research page – a big help when you’re quoting one line from the middle of a very long page.

However, once I enter the hell of fact-checking, I’d like to have the “live” URLs for those web pages to send to the fact checker. As far as I can tell, Safari does not record or save the URL of a web page when you save it in Page Source format, so I have to locate the page all over again to obtain the URL. Very aggravating, as pages are not always findable by an easy Google search. As a three-year-old I know often asks, “Why, why, why?”

So far, the cumbersome solutions I’ve found to this are:

  1. Copy the URL; save the page as page source; locate the document in the Finder and paste the URL into its Spotlight Comments; import into Scrivener’s Research binder and hope I don’t lose the original.

  2. Copy the URL into the “Document References” pane of Scrivener’s Inspector drawer; save the page as page source; copy the page source document into Scrivener’s Research binder.

There are problems with both of these that would so easily be solved if Page Source documents automatically recorded the URL of the web page saved. It seems to me that someone must have realized this by now, so if I’m missing a trick, can somebody clue me in?
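For what it’s worth, the missing piece could be faked by stamping the URL into the saved page source yourself, much like the “saved from url” comment some browsers prepend. Here is a minimal Python sketch of that idea; the function names and the comment format are my own invention, not anything Safari actually does:

```python
def save_page_source(html, url, path):
    """Write page source to disk with the source URL embedded as an
    HTML comment on the first line, so it survives offline."""
    stamped = "<!-- saved from url: " + url + " -->\n" + html
    with open(path, "w", encoding="utf-8") as f:
        f.write(stamped)

def read_source_url(path):
    """Recover the URL from a file written by save_page_source,
    or return None if no stamp is present."""
    with open(path, encoding="utf-8") as f:
        first = f.readline().rstrip("\n")
    prefix, suffix = "<!-- saved from url: ", " -->"
    if first.startswith(prefix) and first.endswith(suffix):
        return first[len(prefix):-len(suffix)]
    return None
```

Since the stamp is just an HTML comment, the saved page still renders normally, and the URL travels with the file into Scrivener’s Research binder.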

Well, I’m frustrated by the same thing, but I don’t have a good answer. I usually print web pages to PDFs and catalog them with Bookends right away. (I’m an academic and use Bookends as my reference manager, and like you I want to have a stable, always-viewable version of the web page that will never change, for future reference purposes. This practice has actually saved me once or twice, when critical web pages got taken down without warning; I still had the info I needed, and the proper citation information, without which it is worthless in my world.) What I would like is to have the URL securely associated with the PDF right away (e.g. as a Spotlight comment, or even better, appended or prepended to the PDF), so even if I don’t have Bookends, I will still know where I got the PDF from. I want that to happen automatically, without copying and pasting.
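Short of scripting Spotlight comments, one portable way to get the URL “securely associated with the PDF right away” is a sidecar file written at save time. A hedged Python sketch; the sidecar naming convention is mine, not part of any existing tool:

```python
import os

def record_source_url(pdf_path, url):
    """Write the source URL into a sidecar text file next to the saved
    PDF (e.g. article.pdf -> article.pdf.url), so the page's origin
    survives even outside any reference manager."""
    sidecar = pdf_path + ".url"
    with open(sidecar, "w", encoding="utf-8") as f:
        f.write(url + "\n")
    return sidecar

def lookup_source_url(pdf_path):
    """Return the URL recorded for a PDF, or None if no sidecar exists."""
    sidecar = pdf_path + ".url"
    if not os.path.exists(sidecar):
        return None
    with open(sidecar, encoding="utf-8") as f:
        return f.read().strip()
```

The same sidecar could later be read by a script that pushes the URL into a Spotlight comment or into Bookends, but the plain text file alone already answers “where did this PDF come from?”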

Ideally, I would also like to be able to print a web page to PDF with one click, but I can’t figure out how to do that. I use Red Snapper sometimes, but I prefer the pagination created with print-to-PDF over Red Snapper’s endless single page.

I know that when you import a web page into Scrivener, it associates the URL with the web archive, which is a fantastic function, but it doesn’t really help with my particular workflow, at least not for my academic work.

So yes, this is not a Scrivener question (at least not for me) but I would like to know how other people deal with it.

Oh, I don’t know what you mean by saving as page source. Would you mind explaining what that is?

OK, just realized what Page Source was. I forgot it was called that. Sorry :)

What might be nice for you, I guess, is to import as a web archive into Scrivener, then convert to text, and have Scrivener hold onto the URL somewhere in or associated with the text file. I don’t think Scrivener does this right now. If you could even copy the URL from the bottom bar while the page was still in web archive format, then convert to text, then paste it into your text file, that would be good. But I can’t seem to do anything with the URL in Scrivener, except click on it to open the web page again.

I use EagleFiler to save pages.

Once you’ve saved the Web page with F1, you can:

Open in Safari

Open Source URL


Not as nifty as saving the URLs in Scrivener perhaps, but easier for me. I hate cluttering up my Scrivener files anyway, so for me it’s the ideal solution.

(I’m not associated with EagleFiler; I just find it very useful.)



Sorry for hijacking the thread, but I also use Bookends, and noticed that they have a new version out (BE 10). Have you upgraded? Is it worth the upgrade?

It’s good to know I’m not the only one baffled by this weird aspect of Safari, or of any other application that saves web pages in a stable format.

My dream would be to have a service menu command in Safari that would enable me to save the foremost web page to Scrivener in either page source or web archive format, but that would retain the URL somewhere accessible in either case.

I just can’t believe more people don’t find this frustrating!

As bluebird pointed out, you can just import the web page directly into Scrivener (either by using File > Import > Web Page in Scrivener, or by dragging the icon next to the URL in Safari’s address bar into Scrivener’s binder). If Scrivener takes the web page directly from the internet, it will associate the URL with the web page and show it in the footer.

Bluebird has given me an idea, though - it would be nice if, when converting a web page to text in Scrivener, any URL associated with it were placed in the document references rather than lost.


The difficulty with importing a web page is that the result is not editable.

I think explaining a usage scenario might make the problem more clear.

If you can’t edit a web page imported into Scrivener, then there’s no way to flag the parts of the page you’re considering using in the draft. For example, in writing a profile, I want to pull all previous interviews with my subject off the Web and into my Research folder. Most likely, I’ll need to quote a line or two from some or all of them. Then, while preparing to write, I read through the source material and highlight the lines that sound quotable. That way, when I’m actually writing, I can quickly and easily locate those passages. Otherwise, I’m forever clicking back and forth between documents in the Binder and scrolling up and down, searching for that one sentence or two I thought I saw somewhere…

(Also, because a lot of newspaper web pages are rife with lots of junk like animated ads and navigation sidebars, it’s nice to be able to delete that stuff – which also requires an editable page.)

Conversely, if you save the web page as text, which means you can edit it and use the highlighter, etc., you lose the URL. Then, when it’s time to fact-check, you have to go rooting around on the web for it all over again, unless you had the foresight to copy the URL into the comments pane in the Inspector.

BUT… If Scrivener could hold onto that URL even when saving the Web page as text, it would be kind of amazing – since it’s the sort of thing even the designers of Safari (or any other web browser?) apparently didn’t manage to think of.

I agree it’d be handy to have Scriv retain the URL when importing a web page as text. Meanwhile, here’s my workaround. I never import web pages. Instead I clip the text I want using the Scrivener item in the Services menu. If I want the URL, I just copy and paste that into the document. This takes at least one more step than importing the page itself would. (It would be easier still if Scrivener v. 2 adopted my number one request: making it possible to name the clipped article – instead of just calling it “Clipping [date]” – and also to designate the Scriv folder to which to save a clipped article, both when clipping it.)

Another possible option: import the web page, select all, then use the styles menu to format it in your standard writing or editing style. That will get rid of non-text elements but I dunno if it keeps the URL. I do this sometimes when I just copy a whole web page, paste into a Scrivener doc, and then find too much advertising or other graphical clutter.

Dunno if these will help you but they work OK for me.

Okay, this wasn’t difficult to add (and in the process I also found a bug in converting web pages to text that caused a crash, so that’s a good thing). In 1.11 (out soon - hopefully this weekend, but don’t hold me to that), when you import a web page and convert it to text (either converting it after you have imported it, or if you have “convert webarchives to text” set in the preferences), the original URL of the web page is set as a document reference, so that you can always locate the original web page. (Obviously this doesn’t work for webarchives that you have saved to disk and then imported to Scrivener, because they don’t contain such data - it only works for web pages you import via File > Import > Web Page or by dragging from the address bar into the binder.)


Well, I’m thunderstruck. I lodged my gripe on Monday, and by Wednesday, Keith figured out how to fix it. No wonder Scrivener is so awesome!

Thanks sooooo much for implementing this, Keith!

I mean, thanks! Really thanks! This is really, really great.

To Amaru, re Bookends 10: Yes, I upgraded, and no, for me it hasn’t been worth it yet. There might be some killer features that would make it worthwhile for more advanced users, but I am not that kind of user, and the main thing I notice is a change in the interface whereby you can view attachments at the same time as references; IMO it’s not that elegantly implemented and not really that useful for me, since I don’t read in Bookends. Notes and Keywords have been given bigger spaces accessed in separate tabs, which again doesn’t make that much difference for my particular workflow.

Sorry for the delay in replying, I haven’t been around for a while!

Brilliant! - I am absolutely loving Scrivener for touches like that.

I know feature requests have their own place but IMO this fits perfectly here.
Apart from the (already implemented) save URL feature, it would be great if Scrivener could extract META Keywords as keywords and META Description as document notes from imported web pages.