You may try saving the web page as a PDF in the browser and importing the PDF file, or importing the URL as a dynamic web page to preview its contents in Scrivener.
I already had this problem in the beta of Scrivener 3 under Windows 10, and it persists in the current purchased version 18.104.22.168.
The existing forum posts on this subject are old and do not solve my problem.
Right now I have no way to store web pages in my research folder; every variant I try produces the error above.
There are a few different things that can cause issues importing web archives, which it sounds like is what you’re wanting to do.
How are you importing the web archive? Are you dragging and dropping it into the Binder or using the Import → Web Page menu command? Sometimes dragging and dropping doesn’t work, but the menu command does.
In File → Options → Sharing → Import, do you have the “Convert HTML files to text” option checked or unchecked? Does changing this setting affect your ability to import the web archive?
Specific web pages may have issues with import even though the function generally works. Have you tried testing the import option with different URLs? Is the problem universal or specific to a certain web page?
Also, have you tried the alternative methods suggested in the error—either saving as a dynamic webpage or PDF and importing that instead of importing the page as a web archive?
I have noticed that the problem mainly affects Wikipedia articles. Many other web pages can be saved, but Wikipedia fails entirely, producing the error message mentioned above.
This is problematic from my point of view, because Wikipedia accounts for 60-70% of my research sources. And creating a separate PDF for each source just to have the data available offline is more than annoying; it also adds two extra steps that make the workflow less smooth.
Immediate access to information and offline use are exactly why local storage of website content is necessary. If, like me, you live in Germany, where network coverage outside the big cities is simply abysmal, you really appreciate offline data.
The option in File → Options → Sharing → Import is apparently not relevant to my problem; I tried both settings with the same results.
I think I just found a nice workaround…see if you agree.
I didn’t have any luck with drag-and-drop from several browsers; I tried four of the usual suspects.
I eventually stumbled on the trick in a browser I reject on principle and shall not name, though I’d use it for something like this if I had to. But we don’t have to, because the same technique works just fine in Chrome, and in Brave too; Firefox, not so much.
It’s very simple, easier than copy-pasting a link, which I’ve always found awkward, not to mention that the dialog still suggests http instead of https.
You just hit Ctrl-S in the browser, and at the bottom of the save dialog choose “Save as type: Web Page, Single File (.mhtml)”.
Then click Save, choosing an easy-to-find location.
Now go to Scrivener. Either use File | Import | Files…, having first chosen the Binder folder where the page should go,
or, much more easily, right-click the folder; Add | Existing Files will then let you select your saved mhtml, which imports with its full appearance as a web page.
In fact, the Scrivener result looks truly nice, much better than most PDF attempts. That’s because you can simply ignore the alert about conversion to RTF (and check the box so you don’t see it again); in this case the import is saved directly in its mhtml form.
As an extra benefit, you can change the suggested filename before saving, and that becomes the title in Scrivener, which is useful if you want several pages from a site that uses the same title for all of them.
And as soon as you’ve put the page in Scrivener, you can delete the mhtml file from your laptop; everything is self-contained in the project.
That’s it, and this feels better than any other method I’ve tried.
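For the curious: an .mhtml file is just a MIME multipart/related message, the same format as an email with inline attachments, with the HTML and its resources bundled into one file. Here is a minimal sketch of that structure using Python’s standard email library; the URL, title, and HTML content are made up for illustration, and real browser-saved files carry more headers and parts.

```python
# Sketch: build a tiny .mhtml-style file by hand to show what the
# "Web Page, Single File" format actually is under the hood.
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Example Page"  # browsers store the page title here
msg["Snapshot-Content-Location"] = "https://example.com/"  # illustrative URL
# The first (main) part is the page's HTML itself.
msg.set_content("<html><body><h1>Hello</h1></body></html>", subtype="html")
# Wrap it in multipart/related, the container type MHTML uses; images,
# stylesheets, etc. would be added as further related parts.
msg.make_related()

with open("page.mhtml", "w") as f:
    f.write(msg.as_string())
```

This is also why the file survives Scrivener's import intact: everything the page needs travels inside that single MIME container, with no live network access required.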
The resulting pages are the real thing, except that their active scripts no longer run and no live internet access is needed, not even to download images, since these are encoded into the file. Links on the page will still work, another plus over some PDFs.
So not only does this pass @JimRac's fine test, it really should work with most anything. Fingers crossed, as it’s still software.