Hi, I bought Scrivener yesterday. When I try to import some web pages, the following message appears: “Scrivener appears to be having trouble downloading the entire contents of this web page”. Clicking on “Import” or “Wait” does not help. It happens with some Wikipedia pages, independently of which output option (HTML, PDF, …) is chosen. Thanks, Michael
Yeah, it’s been my experience that some web pages load better than others in Scrivener’s browser. Pages that are script-heavy tend not to work so well. What you can do is grab an image of the website – that seems to work 99% of the time for me. There’s always copying and pasting the text, too, but it’s less sexy than importing the website.
Thanks for your answer. Yes, I thought of these possibilities, too. But as you said, it’s not very sexy.
This “Scrivener appears to be having trouble …” message was one reason for me to switch to the Mac version. All the “problematic” websites I had been stumbling over before could be added to the research folder of my project flawlessly.
I’m just beginning to get used to my new Mac, but I guess this might simply come down to OS X’s support for the different file types, or the lack of it in Windows.
Huh, I think it’s time to bring back my former Std. Disclaimer: “All of the above has to be turned into decent English.”
I see. Moreover, up to now I have only downloaded freeware from the internet, and it has worked very well. For example, yWriter is free (OK, it’s not as powerful), and Citavi is free (OK, it’s not exactly for the same purpose, and the free version has some restrictions). This is the first time I have paid for downloaded software, so I’m even more disappointed that a feature doesn’t work … I hope it will work in the next version. Otherwise, I’m really angry and wish I could have my money back.
OS support for file types? Not likely an issue. Websites are not simply “file types.” They’re arbitrary graphs of many kinds of things, some of which are built on the fly, some of which contain executable scripts, various kinds of media… and then there are HTML5 canvases…
Scrivener uses the Qt libraries, or so I’ve been given to understand, to deal with HTML. Perhaps the Windows Qt version that Scrivener currently uses (which I read somewhere on these forums is not the latest, though maybe it is now) is less capable than the one used on the Mac. Oh wait, right, the Windows version is based on the 1.54 Mac version, not the current one… so they don’t have all the same capabilities to begin with! “Broken” and “feature incomplete” are a matter of point of view, I suppose, depending on how important the feature is to you.
As far as website importation is concerned, I’m sure that there are websites that the Mac version can’t import, either. For instance, try to import the iPlayer page from bbc.co.uk, or YouTube, or your bank. Any “website grabber” software can easily be defeated by server side access control lists anyway. The point is that there will always be limits on that kind of function.
I don’t think the demand that Scrivener support arbitrary website import flawlessly is a reasonable one, considering both the inherent limitations of that activity as outlined above, and its brief as an organization and draft-production tool. If importing websites is that crucially important to you, then try something designed explicitly for that purpose… such as the Unix tool “wget.” It can be run under Cygwin, so you can use it on Windows.
There might even be a version of wget for Mac. I googled “wget on mac” and got most of a page of results, so I suppose it’s possible.
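If you do end up trying it, a minimal sketch would be something like the following (the Wikipedia URL is only an example, and the exact flags may vary a little between wget versions):

wget --page-requisites --convert-links "http://en.wikipedia.org/wiki/Main_Page"

--page-requisites also downloads the images and stylesheets the page needs, and --convert-links rewrites the links so the saved copy displays properly offline. The resulting .html file can then be dragged into Scrivener.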
Before you go to all the trouble of learning wget, take a look at your web browser’s File menu. There should be some way for it to save a copy of a web page to an archive of some kind. If you first do that, and then drag the archive into Scrivener, it might work somewhat better. I’m not speaking from experience, because even on the Mac side, importing web sites is imperfect, and sometimes downright unreadable. What I’ve taken to doing is using Safari’s “Reader” view, which strips out all the ads and uses a very nicely sized font in place of whatever the website uses… then saving that as PDF and importing the PDF. Maybe you can find a similar feature in Chrome, Firefox, the latest IE, Opera, or one of the other web browsers available for Windows…
magicfingers,
As an FYI, wget is already on all Macs. OS X is based on BSD and inherits all the goodies. Whenever you think of an OS X function, you can apply *nix concepts with no issues.
I know that web sites consist of many different files. Moreover, dynamic websites can’t be “saved” as a bunch of files, since the content comes from server scripts. But it should be possible to “print” them in another format (PDF, JPG …). Nothing more than that is what I expect of Scrivener. Of course there are many ways of grabbing, archiving, or PDF-ing a web page, but these are workarounds that have to be done outside of Scrivener. And as you have surely experienced, when you just want to concentrate on your text, every extra click and every extra piece of software is annoying. This was the reason why I bought Scrivener. I would not say that the feature is “crucially important” to me, but it is not fair to sell software that has known errors. (Sorry for my bad English.)
I agree with you that any ‘unnecessary steps’ can really jar you out of the writing zone. Only this morning I had a web page I wanted to import to my research folder stumble, leading me to copy and paste the text instead. You obviously have a very clear understanding of what works best for you in terms of workflow and how you need everything set up. I wish I was so self-aware at times!
I believe L&L’s approach is the right one here, though, and I want to highlight three reasons why:
Firstly, I wouldn’t classify the issue you’ve raised as a ‘known error’ with the software, but rather a constraint on what it’s programmed to deliver. In this particular case it’s a constraint imposed by the technical realities behind what you are trying to do (as described in other posts - including your own) rather than a conscious design decision from the team, or a lack of development effort to add an extra feature. But it’s a constraint nonetheless. No software can possibly hope to achieve everything. To attempt to do so would (a) delay release by years (if not permanently / indefinitely), (b) make the software very large, slow and inefficient, and (c) lose the focus and specialism that makes great software great.
Secondly, there are other ways to gain the same basic result of storing the information from an external webpage within your Scrivener project. Yes, copying and pasting text isn’t as smooth as when the website can be directly imported, but it’s hardly a significant drain on your focus; importing itself adds steps and time lags to the process. Perhaps a better way of thinking about this is that Scrivener allows you to store information from webpages in your Research folder by giving you somewhere to paste the information into… and in some cases there is an additional ‘over-and-above the normal service’ option to import directly.
Thirdly, and finally, I want to mention the trial period. It’s not possible for one piece of software to meet the ‘perfect’ workflow needs of all individuals. Scrivener has a very deliberate and focused design, and that’s what makes it stand out from its competitors. As such there will always be features that are absolute ‘must-haves’ for some, but rather insignificant issues for others. This is why L&L provide a long 30-day trial period, so you have plenty of opportunity to determine whether the design direction meets your workflow requirements before you have to decide whether or not to purchase.
Hopefully you found enough really positive aspects that suit you to make you want to stick with Scrivener. It is (after all) first and foremost software to help you write - not a web browser.
Thanks for your attempt to make me see positive aspects of the issue. I just wonder why L&L did not simply omit the menu item “import web page” if it does not work properly. Then everybody would copy & paste the text of a web page into a text file, and nobody would think that there could be a better way to store it. Nobody would ever have the idea, and the trouble, that something might not work as expected. And the developers would have all the time to develop new features that live up to their promise - or, if they don’t, well, not offer them.
I have tried so many programs which can be described as Scrivener-like. (Storybook, yWriter, Citavi, Rough Draft, WriteItNow, etc, etc.) Not only freebies, but also demos of paid software.
None of them come even close to what Scrivener can do, even in this early version. (From what I could tell, only Scrivener even deals with screenplay formats.) I was sold on it even when I beta-tested it, despite the glitches and occasional crashes.
The present version is nearly perfect for me.
I’m still EAGERLY waiting for the Final Draft export feature (the FD import works perfectly) and would love to have the Mac feature which allows importing a Word document into Scrivener, formatted automatically into separate Scrivener scenes. I’d also like to be able to see landscape Word docs and PDFs, and I have yet to be able to see images in an imported local HTML file.
However, it’s still an amazing organizing tool!
Is it the “be all and end all” writing software that allows one to do everything with one click? No.
But name me one that does.
My opinion, for what it’s worth,
That doesn’t actually exist on the Mac. There is no stylesheet support for either import or export—and if anything, that will be easier to bring to Windows than to the Mac, given the text engine differences.
What it does have a function for is Import & Split, which can use arbitrary strings to split on (like "Chapter ") as well as FDX per-element splitting.
Because it works properly for the majority of standards compliant websites and a good percentage of those that are not. There are sites that fail on the Mac as well. Nothing short of a massive full-featured web browser is going to intake everything web designers create—and I don’t think you’d actually want a full-featured browser in Scrivener because that would mean all development would cease, very likely forever, on all features that have anything to do with something other than browsing the web. Web browsers are either made by large corporate divisions, entire companies that do nothing but that, or huge global volunteer collaborations like the Mozilla project. And they still don’t work all of the time because anyone can buy some web space and write a crappy website that blows stuff up (sometimes intentionally, and protecting you against that is an important part of writing one’s own web browser).
For those cases where it doesn’t work, just save the file as HTML and drop it into the Binder. If you don’t want to disturb the writing process, then don’t. Drop the URL into the References table in the inspector and save the job for later.
Hi Amber
Perhaps I phrased it incorrectly, but there is a Scrivener tutorial on Vimeo called:
"Importing Word into Scrivener in under two minutes " which shows how to do what I tried to say
vimeo.com/31433040

magicfingers,
As an FYI, wget is already on all Macs. OS X is based on BSD and inherits all the goodies. Whenever you think of an OS X function, you can apply *nix concepts with no issues.
Wget is a GNU/GPL program that originated in 1996, after Linux was established, and as far as I know it was never part of the pre-1996 BSD system. BSD had a distinct aversion to GPL programs, since the GPL clashed with BSD’s own licensing, so there were legal issues. The whole topic is murky. There are a set of “BSD” interfaces (notably the TCP/IP stack) that are common to any Posix-style OS, including Linux and OS X. That’s why wget could be ported to OS X.
I’m not a Mac user, but at least one source:
http://www.mymacosx.com/terminal/wget-replacement-macos.html
suggests that there’s no “wget” in OS X, but something called “curl,” which is also on Linux, but is a very different program. You might want to let the author know if he is in error.

Drop the URL into the References table in the inspector and save the job for later.
Wow, I’d forgotten all about this, even though I’ve used it. There’s like, 3, 4 different ways to deal with web-based references. That’s what Scrivener is really like: you’re tearing your hair out and it says to you, quite calmly, “Here, let me help you with that…” and that, as they say, is that.

I’m not a Mac user, but at least one source:
http://www.mymacosx.com/terminal/wget-replacement-macos.html
suggests that there’s no “wget” in OS X, but something called “curl,” which is also on Linux, but is a very different program. You might want to let the author know if he is in error.
On a stock OS X install:
mbp13:~ jaysen$ which wget
mbp13:~ jaysen$ which curl
/usr/bin/curl
which backs up what you said 100%.
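For what it’s worth, curl can manage a rough single-file equivalent of the grab discussed above (just a sketch; the Wikipedia URL is only an example):

curl -L -o saved_page.html "http://en.wikipedia.org/wiki/Main_Page"

-L follows redirects and -o writes the page to a local file. Unlike wget’s --page-requisites, though, it won’t fetch the images and stylesheets, so the result may look a bit bare once it’s dropped into Scrivener.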
I sit corrected and humbly beg the pardon of all. In my defense, I was on a different Mac on which I had loaded wget. Which isn’t much of a defense.
Because it works properly for the majority of standards compliant websites …
Indeed, I was able to import w3.org/, so presumably the Wikipedia pages I tried do not comply with web standards. Anyway, it’s a pity it does not work with them, not even with the PDF option.
I still think that, from a user’s perspective, it is sort of a “bug”. Explaining why it can’t work may be interesting from a technical point of view, but it does not change anything for the user.
But many thanks for this hint:
Drop the URL into the References table in the inspector …
It works great and helps a lot! That settles the matter for me. (Do you say it like that in English?)
Because it works properly for the majority of standards compliant websites and a good percentage of those that are not.
Just one more thing: could you please name some websites it works on properly - without error messages and with all pictures included? Thanks.
None? Uhm, why do I have the impression that in this forum problems are eloquently sugarcoated instead of being admitted?