Request improvement to internal browser focused on blocking/removing cookie notices

Referring back to this tech support post: [url]

Hoping this gets solved. I really love the import web page feature and use it a lot. However, since all these cookie notices started appearing, it’s been affecting my ability to use Scrivener the way I used to.

None of the technical issues I raised in that thread have changed, and I don’t understand why we need a new thread. :slight_smile:

Even if it were possible to modify a loaded web page in real time, it would be very difficult to do: there is no standard “this is a pop-up” mechanism that websites share. These notices are generally built with the same mechanisms used to design the rest of the page; they aren’t true pop-ups, just layout components that are intentionally intrusive from a purely human perspective. The implication is that each site has its own layout design for how these are drawn on the page. We’d have to build tens of thousands of filters and constantly maintain them in order for this to work.
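To illustrate the maintenance burden described above, here’s a minimal sketch of how cosmetic filter lists (the approach ad blockers such as EasyList Cookie use) pair a site pattern with the CSS selector of that site’s particular consent overlay. All the rules and selectors below are hypothetical; a real list contains tens of thousands of entries and needs constant updates as sites redesign.

```python
from fnmatch import fnmatch

# Hypothetical per-site rules mapping host patterns to the CSS selector
# of that site's cookie banner. Every selector here is made up.
FILTER_RULES = {
    "*.example.com": "#cookie-consent-banner",
    "news.example.org": "div.gdpr-overlay",
    "*": ".cc-window",  # generic fallback; often wrong or incomplete
}

def selectors_for(hostname: str) -> list[str]:
    """Return the CSS selectors a blocker would hide on this host."""
    return [sel for pattern, sel in FILTER_RULES.items()
            if fnmatch(hostname, pattern)]
```

Because there is no shared markup convention for these overlays, each entry has to be discovered and maintained by hand, which is exactly the cost the reply above points out.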

Wouldn’t storing cookies be the way to solve this, and other issues? I assume Scrivener doesn’t do that because it’s already keeping so many files, and/or it’s more complicated to implement than one might think… But the issue hardly requires inventing new methods to deal with the cookie notifications, since those notices surely use cookies to record whether the notice has been clicked on/“accepted”.
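For illustration only, persisting cookies between sessions is well-trodden ground; here’s a minimal sketch using Python’s standard `http.cookiejar` module (the file name, domain, and cookie name/value are all hypothetical, and this says nothing about how Scrivener’s embedded browser actually stores state):

```python
from http.cookiejar import Cookie, MozillaCookieJar

# Hypothetical on-disk cookie store.
jar = MozillaCookieJar("scrivener_cookies.txt")

# Simulate the cookie a site sets once its consent notice is accepted.
consent = Cookie(
    version=0, name="cookie_consent", value="accepted",
    port=None, port_specified=False,
    domain="example.com", domain_specified=True, domain_initial_dot=False,
    path="/", path_specified=True,
    secure=False, expires=None, discard=False,
    comment=None, comment_url=None, rest={},
)
jar.set_cookie(consent)
jar.save(ignore_discard=True, ignore_expires=True)

# On the next launch, reloading the jar restores the consent flag,
# so the site would not need to show the notice again.
jar2 = MozillaCookieJar("scrivener_cookies.txt")
jar2.load(ignore_discard=True, ignore_expires=True)
```

The point is just that once the acceptance cookie survives a restart, the site itself suppresses the banner; no per-site filtering is needed.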

One of the issues I raised in the other thread is that .webarchive files don’t store cookies anywhere, to the best of my knowledge. But the main problem noted in that thread is that the dynamic code built into the website to dismiss the overlay wasn’t archived properly (or at all), meaning the buttons had no reaction to user input; that is what led to the request to block them in the first place.

Sorry, I only started a separate thread because I realized it might be better categorized as a feature/improvement request rather than a bug report.

For now I’ve been working around this by using the convert website to text feature. It seems to do all right at saving the important information, and it still attaches the image files. Still, it would be cool if there were a better way.