Import Web Page

I can only import web pages as plain text. The other options throw this error:
Could not retrieve content at address.
Could not retrieve the content specified by the given address

Does anyone have an idea of how I could resolve this problem? I’m on Linux Mint 17, 64-bit.

Thanks for any help you can offer!

What page were you actually trying to import? I tried it on this page, and it worked fine. Does it fail on every web page? (That is, try something with a bit more text.)

ETA: even worked for me.

It wasn’t the address that I originally wanted, but I did try Google after I started testing random sites. None of them worked except as plain text. So Scrivener is capable of accessing the internet to fetch data, but something goes wrong for PDF/MHT.

I reinstalled, to no avail. And I can import PDF and HTML files without problems.

I also installed the Windows trial through Wine. I can import a web page as a PDF (with no problem at all) and as plain text (with some issues that aren’t present in the native Linux version). The MHT import just makes a link.

Thanks for your help!

Adding to an old thread. I have the same problem as the OP. It happens on any web page I try to import: plain text works, but any of the other options - PDF via WebKit, web page complete (MHT), or the IE option - gives me the FailedToStart error message. It seems the OP never really got a resolution except to run Scrivener in Wine. Does anyone have any idea how to troubleshoot this?


Same problem here. I tried installing under Wine (I have a Windows licence), but the same problem occurred. There is a workaround, though: print the web page as a PDF and then load it as a file in your research folder. I also maintain text pages where I list the websites that are part of my research by their URL. You can add a link by selecting a piece of text and then clicking Edit|Link…
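Since importing the page as a plain file does work, you can also script the "save it locally first" workaround. This is just a sketch (the function and filename scheme are my own, not anything from Scrivener): it fetches the page with Python's standard library and saves it as an .html file you can then import via File|Import|Files…

```python
# Hypothetical helper: download a web page to a local .html file,
# which Scrivener can then import like any other file.
import re
import urllib.request
from pathlib import Path

def filename_for(url: str) -> str:
    """Turn a URL into a safe .html filename (scheme dropped,
    non-alphanumerics collapsed to underscores)."""
    stem = re.sub(r"[^A-Za-z0-9]+", "_", url.split("://", 1)[-1]).strip("_")
    return stem + ".html"

def save_page(url: str, dest_dir: str = ".") -> Path:
    """Fetch the page and write it into dest_dir; returns the saved path."""
    path = Path(dest_dir) / filename_for(url)
    with urllib.request.urlopen(url) as resp:
        path.write_bytes(resp.read())
    return path
```

For example, `save_page("https://www.example.com/page")` would leave a `www_example_com_page.html` file in the current directory, ready to be imported.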


Same problem here under Ubuntu GNOME 16.10.
Only plain text import works.
Any ideas?