You may try saving the webpage as a PDF using the browser and import the PDF file, or import the URL as a dynamic webpage to preview its contents in Scrivener.
I already had this problem in the Scrivener 3 beta under Windows 10, and it persists in the current purchased version, 3.1.1.0.
The existing forum posts on this subject are old and do not solve my problem.
There is currently no way for me to store linked web pages in my research; I get the above error with every variant I try.
@Anori, I just tried it to be sure, and web page importing works, as it always has.
You'd be surprised at how unusual (and enormous behind the scenes) web pages can be; there's nothing simple nowadays about many of them.
It's not hard to imagine that the one you've chosen is simply beyond the abilities of whatever library Scrivener uses for importing, as a few no doubt are.
Have you tried other web pages?
And you might consider a screenshot (PrtSc will put one on your clipboard, or Win+PrtSc will save one to a file, if you don't have other tools).
There are a few different things that can cause issues when importing web archives, which sounds like what you're trying to do.
How are you importing the web archive? Are you dragging and dropping it into the Binder or using the Import → Web Page menu command? Sometimes dragging and dropping doesn't work, but the menu command does.
In File → Options → Sharing → Import, do you have the "Convert HTML files to text" option checked or unchecked? Does changing this setting affect your ability to import the web archive?
The import function can work in general while specific web pages still fail. Have you tried testing the import option with different URLs? Is the problem universal or specific to a certain web page?
Also, have you tried the alternative methods suggested in the error: either importing the URL as a dynamic webpage, or saving the page as a PDF and importing that, instead of importing the page as a web archive?
I have noticed that the problem mainly affects Wikipedia articles. Many other web pages can be saved, but Wikipedia basically cannot be saved at all, and the above-mentioned error message appears.
This is problematic from my point of view, because Wikipedia accounts for 60-70% of my research sources. Creating a separate PDF for each source just to have the data available offline is more than just annoying; it also adds two extra steps that make the workflow less smooth.
Immediate access to information and offline use are exactly why local storage of website content is necessary. If you live in Germany like I do, where network coverage outside the big cities is simply abysmal, you really appreciate offline data.
The option in File → Options → Sharing → Import is apparently not relevant to my problem. I tried both settings with the same results.
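For what it's worth, while this gets sorted out, the manual saving can at least be scripted. Below is a rough sketch (my own, not official Scrivener tooling) that fetches a list of articles as plain .html files, which Scrivener can then pick up in a single File → Import → Files… pass. The URL list, folder name, and user agent string are all placeholders, and note that a raw .html save does not embed images the way a single-file web archive does.

```python
# Rough sketch: fetch each article once and save it as an .html file
# so the whole batch can be imported into Scrivener in one pass.
# Assumes the third-party "requests" package is installed.
from pathlib import Path
import requests

urls = [
    "https://en.wikipedia.org/wiki/Offline_reader",
    "https://en.wikipedia.org/wiki/MHTML",
]
out_dir = Path("wiki_offline")
out_dir.mkdir(exist_ok=True)

# Wikipedia asks clients to identify themselves with a user agent.
headers = {"User-Agent": "personal-research-fetcher/0.1"}

for url in urls:
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    name = url.rsplit("/", 1)[-1] or "page"
    (out_dir / f"{name}.html").write_text(resp.text, encoding="utf-8")
```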
I think I just found a nice workaround… see if you agree.
I didn't have any luck with drag-and-drop from "some browsers"; I tried four of the usual suspects.
But I ended up on one we shall not name, which I reject on principle but would use for something like this if I had to. We don't have to, though, because the method I found there works just fine in Chrome, and in Brave as well; in Firefox, not so much…
It's very simple, easier than copy-pasting a link, which I've always found awkward, not to mention that the dialog still suggests http instead of https.
You just hit Ctrl-S, then choose "Save as type: Web Page, Single File (.mhtml)" at the bottom of the dialog.
Then click Save, choosing an easy place to put the file.
Now go to Scrivener. Either use File | Import | Files…, having first chosen the binder folder where it's to go,
or, much more easily, just right-click the folder, and Add | Existing Files will let you select your saved .mhtml, which comes in with its full appearance as a web page.
In fact, the Scrivener result looks truly nice, much better than most PDF attempts. That's because the import is stored directly in its .mhtml form: you can simply ignore the alert about conversion to RTF (and check the box so you won't see it again).
As an extra ability, you can change the suggested filename before saving; it becomes the title in Scrivener, which is useful if you want several pages from a site that keeps the same title for all of them.
And as soon as you've put the page into Scrivener, you can delete the .mhtml file from your laptop; everything is self-contained in the project.
That's it, and this feels better than any other method I've used.
The resulting pages are the real thing, except that their active complications are no longer included: there are no live internet accesses and no image downloads, as everything is encoded into the file itself. Links on the page will still work, another plus over some PDFs.
So not only does this pass @JimRac's fine test, it really should work with most anything. Fingers crossed, as it's still software.
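If you end up capturing dozens of pages this way, the Ctrl-S step can also be scripted. Here's a minimal sketch, assuming Python with the selenium package and Chrome installed; it uses Chrome's DevTools command Page.captureSnapshot, which produces the same single-file .mhtml that Ctrl-S does, and the results import into Scrivener exactly as above. The URL list and output folder are placeholders.

```python
# Minimal sketch: batch-save pages as single-file .mhtml via headless
# Chrome, then import the files into Scrivener as described above.
# Assumes Selenium 4+ and a local Chrome installation.
from pathlib import Path
from selenium import webdriver

urls = [
    "https://en.wikipedia.org/wiki/Scrivener_(software)",
    "https://en.wikipedia.org/wiki/MHTML",
]
out_dir = Path("mhtml_for_scrivener")
out_dir.mkdir(exist_ok=True)

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")  # no visible browser window
driver = webdriver.Chrome(options=options)

for url in urls:
    driver.get(url)
    # Page.captureSnapshot returns the whole page as one MHTML string.
    snapshot = driver.execute_cdp_cmd("Page.captureSnapshot", {"format": "mhtml"})
    name = url.rstrip("/").rsplit("/", 1)[-1] or "page"
    (out_dir / f"{name}.mhtml").write_text(snapshot["data"], encoding="utf-8")

driver.quit()
```

As with the manual route, once the files are imported you can delete them; the project keeps its own copy.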