Password Protected Web Pages

I want to snag some pages for my Research folder which are behind a password – the Web site in question has you log in, sends cookies to validate you as you move through the site, the usual kind of thing. This doesn’t seem to be possible in Scrivener. Is there a workaround for this other than saving the Web page as a JPG or PDF and then importing it?

Web Archive?

Yeah, I could save it from within Firefox (or Safari) and then drag the result into Scrivener. But that’s an extra step in the workflow and I was hoping for simplicity. It won’t stop me from using the program or anything, though. 🙂

When AppleScript support arrives, that will be another way to automate/simplify the process.

I’m afraid that for password-protected pages, the extra step described is necessary. Scrivener’s web view is just a basic viewer, so it has none of the more advanced features such as password handling. It works for most pages, but as soon as you come up against something like this, you will need to use a dedicated browser to “unlock” the pages first, so saving out of Safari as a .webarchive is the way to go.

All the best,

Thanks! This was my suspicion but I didn’t know enough about WebKit to be sure.

I have seen widgets take advantage of Safari’s cookies before, but I’m not sure about anything else. That always struck me as a fairly risky move by Apple. Creating a framework that lets any software easily read cookies is just asking for trouble, privacy-wise.

wget has been able to use (almost) any browser’s cookies for a long time, as long as they’re stored in the fairly standard cookies.txt file format. Of course, those likely to use wget are also more likely to understand the security implications…
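As a sketch of what wget is parsing there, Python’s standard-library `http.cookiejar` reads the same Netscape-style cookies.txt format. The file contents, domain, and cookie values below are made up purely for illustration:

```python
import http.cookiejar
import os
import tempfile

# A minimal Netscape-format cookies.txt -- the same tab-separated
# layout wget consumes via --load-cookies: domain, subdomain flag,
# path, secure flag, expiry (Unix time), name, value.
sample = (
    "# Netscape HTTP Cookie File\n"
    ".example.com\tTRUE\t/\tFALSE\t2147483647\tsession_id\tabc123\n"
)

path = os.path.join(tempfile.mkdtemp(), "cookies.txt")
with open(path, "w") as f:
    f.write(sample)

# MozillaCookieJar understands this format directly.
jar = http.cookiejar.MozillaCookieJar(path)
jar.load()
for cookie in jar:
    print(cookie.name, cookie.value)  # prints: session_id abc123
```

With a real browser-exported cookies.txt, the equivalent wget invocation is `wget --load-cookies cookies.txt URL`, which is how it rides on an existing logged-in session.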

This was exactly the hint I needed, privacy issues aside (and I agree they’re important). WebKit shares cookies with all WebKit apps, so when I viewed the page in Safari I was able to add it to the archive seamlessly. Since the .webarchive file appears to store the full HTML of the page, I’m assuming/hoping the cookie expiration won’t make it go away. If Scrivener tries to reload the page when I reopen the project, well, I’ll just have to visit the site again before working on this particular document.
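On the expiration question: a .webarchive is a binary property list whose main resource embeds the page bytes themselves, so the saved copy shouldn’t depend on live cookies at all. A minimal sketch using Python’s stdlib `plistlib` (the file path, URL, and HTML here are invented; the key names follow the commonly documented webarchive layout):

```python
import os
import plistlib
import tempfile

# Build a minimal .webarchive-style plist. The WebMainResource entry
# carries the full page data inline, which is why the archive keeps
# working after the session cookies expire.
archive = {
    "WebMainResource": {
        "WebResourceData": b"<html><body>Saved page</body></html>",
        "WebResourceMIMEType": "text/html",
        "WebResourceTextEncodingName": "UTF-8",
        "WebResourceURL": "https://example.com/protected",
    }
}

path = os.path.join(tempfile.mkdtemp(), "page.webarchive")
with open(path, "wb") as f:
    plistlib.dump(archive, f, fmt=plistlib.FMT_BINARY)

# Reading it back: the HTML comes straight from the file, no network.
with open(path, "rb") as f:
    loaded = plistlib.load(f)
print(loaded["WebMainResource"]["WebResourceData"].decode("utf-8"))
```

So reopening the archive in Scrivener should render from the stored bytes; only following links out of the page would send you back to the live (password-protected) site.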

Off to buy a license. Thanks all!

I was going to suggest another way, though it involves extra steps and might not suit you: print the pages to PDF and store the PDFs in your research area. That gives me another thought, which will go in the suggestions forum.