Scrivener failed importing the URL

Scrivener failed importing the URL.

You may try saving the webpage as a PDF using the browser and import the PDF file, or import the URL as a dynamic webpage to preview its contents in Scrivener.

I already had this problem in the Scrivener 3 beta under Windows 10, and it persists in the current purchased version. The forum posts on this subject are old by now and do not solve my problem.

At the moment there is no way for me to store web pages as links in my research. I get the above error with every variant I try.

Can anyone help?

If you just copy the URL to the editor, it works…




I know that embedding HTML links in the editor works. That was not my question either.

My question is about importing web pages. Why does this option exist if it doesn’t work under Windows, or at least always produces this error message?

@Anori, I just tried it to be sure, and web page importing works, as it always has.

You’d be surprised at how unusual (and enormous behind the scenes) web pages can be – there’s nothing simple nowadays about many of them.

It’s not hard to imagine that the one you’ve chosen is simply beyond the abilities of whatever library Scrivener is using to import. As a few no doubt are.

Have you tried other web pages?

And you might consider a screen shot (Win-PrtSc will put one on your clipboard if you don’t have other tools).


As a link, you said, and that’s what @Vincent_Vincent showed you.

I receive the above message when the URL is incorrect or incorrectly formed:


But if the URL is correct, the page imports:

@Anori, perhaps you could share a URL that is not importing, so we can test it.
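In the meantime, here is a rough way to rule out the “incorrectly formed” case outside Scrivener. This is just a sketch using Python’s standard library (the helper name is mine, and it only checks the URL’s shape, not whether the page actually exists):

```python
from urllib.parse import urlparse

def looks_like_valid_url(url: str) -> bool:
    """Rough well-formedness check: require an http(s) scheme and a host."""
    parts = urlparse(url.strip())
    return parts.scheme in ("http", "https") and bool(parts.netloc)

print(looks_like_valid_url("https://en.wikipedia.org/wiki/Scrivener"))  # True
print(looks_like_valid_url("htp:/en.wikipedia.org"))                    # False (typo in scheme)
```

If a URL fails a check like this, the error message above is expected rather than a bug.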



There are a few different things that can cause issues importing web archives, which it sounds like is what you’re wanting to do.

  1. How are you importing the web archive? Are you dragging and dropping it into the Binder or using the Import → Web Page menu command? Sometimes dragging and dropping doesn’t work, but the menu command does.
  2. In File → Options → Sharing → Import, do you have the “Convert HTML files to text” option checked or unchecked? Does changing this setting affect your ability to import the web archive?
  3. Specific web pages may have issues with import even while the function generally works. Have you tried testing the import option with different URLs? Is the problem universal or specific to a certain web page?

Also, have you tried the alternative methods suggested in the error—either saving as a dynamic webpage or PDF and importing that instead of importing the page as a web archive?
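On point 3, it can also help to check outside Scrivener what a problem URL actually serves before importing it. A minimal pre-flight sketch in Python (the function name is mine; a data: URL stands in for a real page so the example runs without network access, but any http(s) URL works the same way):

```python
from urllib.request import urlopen

def preflight(url: str) -> str:
    """Fetch a URL and report the MIME type it serves.
    For a real http(s) URL, resp.status is also worth checking."""
    with urlopen(url, timeout=10) as resp:
        return resp.headers.get_content_type()

# A data: URL stands in for a real web page so this runs offline.
print(preflight("data:text/html,<h1>hello</h1>"))  # text/html
```

If a page reports something other than text/html, or fails outright, that would go some way toward explaining an import error.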

I have noticed that the problem mainly affects Wikipedia articles. Many other web pages can be saved, but Wikipedia basically not at all; it always produces the above-mentioned error message.

This is problematic from my point of view, because Wikipedia makes up 60-70% of my research sources. And creating a separate PDF for each source just to have the data available offline is more than annoying; it also adds two more steps that make the work less smooth.

Immediate access to information and offline use are exactly why local storage of website content is necessary. If, like me, you live in Germany, where network coverage outside the big cities is simply abysmal, you really appreciate offline data.

The option in File → Options → Sharing → Import is apparently not relevant to my problem; I tried both settings with the same result.

Can’t you, in this case, use “Save page as”?
At least you’ll have access to its content while offline.

And then you can link to the created html file from within your project.
Perhaps not as good as you hoped, but functional nonetheless.

I was able to recreate this.



Hopefully the developers can test with this specific URL to determine what’s going wrong internally.


I think I just found a nice workaround…see if you agree.

  • I didn’t have any luck with drag-drop from ‘some browsers’ – tried four of the usual suspects.

  • but I ended up on a browser we shall not name, which I reject on principle but would use for something like this if I had to. We don’t have to, though, because the trick I found there works just fine on Chrome, and on Brave as well; Firefox, not so much…

  • It’s very simple, easier than copy-pasting a link, which I’ve always found awkward – not to mention that the import dialog still suggests http instead of https.

  • You just hit Ctrl-S, then choose Save as type: Web Page, Single File (.mhtml) at the bottom of the dialog.

  • Then click Save, choosing an easy place to put the file.

  • Now go to Scrivener. Either use File | Import | Files…, having first chosen a folder in the binder where the page is to go,

  • or, much more easily, just right-click the folder; Add | Existing Files will let you select your saved .mhtml, which imports with its full appearance as a webpage.

  • In fact, the Scrivener result looks truly nice – much better than most PDF attempts. You can simply ignore the alert about conversion to RTF (and check the box so you don’t see it again); the import in this case is stored directly in its .mhtml form.

  • As an extra option, you can change the suggested filename before saving; it becomes the title in Scrivener, which is useful if you want several pages from a site that uses the same title for all of them.

  • And as soon as you’ve put the page in Scrivener, you can delete the .mhtml file from your laptop; everything is self-contained in the project.

That’s it, and this feels better than any other way I’ve used.

The resulting pages are the real thing, except that their active complications (scripts and live internet accesses) are no longer included – even the images are encoded into the file, so nothing needs downloading. Links on the page will still work, another plus over some PDFs.
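For the curious: a single-file .mhtml is just MIME multipart/related packaging (RFC 2557), i.e. the HTML and its resources wrapped into one text file. A minimal sketch of that packaging with Python’s standard email library (the page content and image bytes here are made up, real browsers write more parts and headers, and I haven’t verified that Scrivener accepts hand-built files):

```python
from email.message import EmailMessage

def make_mhtml(page_url: str, html: str, png: bytes) -> str:
    """Package a page and one image as a minimal multipart/related file."""
    msg = EmailMessage()
    msg["Snapshot-Content-Location"] = page_url  # header Chrome uses for the origin URL
    msg.set_content(html, subtype="html")
    msg.add_related(png, maintype="image", subtype="png",
                    cid="<img1>")  # the HTML references this part as cid:img1
    return str(msg)

mhtml = make_mhtml("https://example.org/",
                   "<html><body><img src='cid:img1'></body></html>",
                   b"\x89PNG placeholder")  # stand-in bytes, not a real image
print("multipart/related" in mhtml)  # True
```

The image bytes get base64-encoded into the file, which is exactly why the saved page stays self-contained offline.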

So not only does this pass @JimRac’s fine test, it really should work with most anything. Fingers crossed, as it’s still software :herb: