[BUG] Epub Compile Option Downscale and Resample Image to visible size not working

Hi there,

It looks like the epub compile option to downscale and resample images doesn’t work in the PC version.

The “full” version is 10MB; the one compiled with the option enabled in the PC version is about the same size.

The same file compiled with the mac version is 9MB full, and 1.3MB with the option enabled, so it works as expected on the Mac.

Also, a feature request in relation to this bug: could we please have a 96 DPI option? 72 DPI is fine for old readers, but 96 DPI would be better for more recent hi-res ones, without having to go all the way to 150 DPI, which more than doubles the size (for no visible gain). I think 96 DPI would be the sweet spot for many users.

This request applies to the Mac as well, please let me know if you’d like me to log it in the Mac forum as well or if it can be logged for both (assuming you agree it would be useful).

Thanks!

Hmm, I am getting a different result from you, one that is even worse: the image comes out at 1 x 1 px, with a variety of different settings tested. What are the precise settings you are using, and what image size should I be targeting (10MB doesn’t tell me much all by itself)? The image file type might also matter; I was testing with PNG-24.


Also, a feature request in relation to this bug: could we please have a 96 DPI option? 72 DPI is fine for old readers, but 96 DPI would be better for more recent hi-res ones, without having to go all the way to 150 DPI, which more than doubles the size (for no visible gain). I think 96 DPI would be the sweet spot for many users.

This is a fairly complicated matter, one that was discussed several years ago. Ultimately the DPI setting of an image is largely ignored by HTML-based rendering systems such as ebook readers and web browsers. All that matters is the physical raster size and the HTML declaring the intended visual size. One achieves “high resolution images” in an HTML environment by providing more pixels than you stipulate for the width. These two factors create an effective PPI on the display, but this does not come from the image’s own properties. And as noted, even the concept of a “pixel” is a bit abstracted when it comes to HTML/CSS size declarations, and to a degree the physical size of a pixel is determined by the relationship between the physical display and the software.
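To illustrate that relationship with a quick sketch (the numbers here are hypothetical, not from the report above): the effective pixel density is just the raster width divided by the declared width expressed in CSS inches, at 96 CSS px per inch.

```python
# Hypothetical numbers: a 600 px wide raster declared at 300 CSS px.
raster_px = 600        # physical pixel width of the image file
declared_css_px = 300  # width the HTML/CSS asks the reader to render at
CSS_PX_PER_INCH = 96   # the CSS reference pixel definition

effective_ppi = raster_px / (declared_css_px / CSS_PX_PER_INCH)
print(effective_ppi)   # 192.0 -> a "2x" (Retina-class) image
```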

The settings in the compiler are an abstraction, in other words, or a way of saying these two things using print jargon (why, I don’t know, but I guess it’s fairly common even in ebooks to think in terms of “DPI”). The DPI setting thus allows you to select how many pixels to overload into the image’s requested width, set to the right. The 72 setting creates a 1:1 balance between raster size and requested display size, which we could consider a “standard resolution” image. A 150 DPI setting would be closest to what we would consider a “high resolution” screen image, such as one optimised for Retina (144 DPI, or twice the size of 72). Amazon has been recommending overloading by an equivalent of 300 DPI for some years now, given how many eInk and mobile LCD/OLED devices meet or exceed that resolution.
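To make the abstraction concrete, here is a minimal sketch of the arithmetic as described above; the function name and numbers are mine, for illustration only, not Scrivener’s actual code:

```python
def raster_width(display_width_pts: float, dpi_setting: int) -> int:
    """Pixels to 'overload' into an image for a requested display width.

    72 pt/in is the print-standard baseline, so the DPI dropdown acts
    as a simple multiplier on the requested width.
    """
    return round(display_width_pts * dpi_setting / 72)

print(raster_width(300, 72))   # 300 px: 1:1, "standard resolution"
print(raster_width(300, 144))  # 600 px: 2x, Retina-class
print(raster_width(300, 150))  # 625 px: the compiler's 150 DPI option
```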

Whether or not there is a tangible gain from using higher resolution images is perhaps subjective, but I would not agree with you that there is no difference! Nobody would buy high-res screens if there was no difference, and displaying a standard-res image on a high-res screen at the same scale is noticeably blurry to me. No doubt the subject matter makes a big difference. A photograph of a lake may be passable at low-res on a high-res screen, but a wireframe figure of a machine part would look pretty bad in those conditions.

It’s worth reiterating that the discussion I linked to has less to do with the image’s physical properties, and more to do with how Scrivener should determine and set the HTML for its intended visual display size in the rendering agent. As noted there, it uses 72 pt/in based calculations for determining the size, which in my testing correlates very closely to visual parity between input and output, where possible. The argument being raised there is that HTML user agents might more commonly be using 96 pt/in calculations, and so Scrivener would be theoretically more accurate by using that to set the HTML sizes. As for why 72pt-based seems more accurate, it’s probably down to how Scrivener’s text engine is rooted in print-based scaling, where 72pt/in is the standard and has nothing to do with screen resolutions.
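For what it’s worth, the practical difference between the two bases is easy to see with a back-of-the-envelope sketch (hypothetical numbers again, not Scrivener’s actual code):

```python
# An image set to 300 pt wide in the editor. Points are defined as
# 1/72 in and CSS pixels as 1/96 in, so the same intended physical
# size maps to different pixel counts depending on the basis used.
width_pts = 300
width_in = width_pts / 72          # ~4.1667 in of intended display width

html_px_72_basis = width_in * 72   # 300.0 px (1 pt treated as 1 px)
html_px_96_basis = width_in * 96   # 400.0 px (CSS reference pixel)
print(html_px_72_basis, html_px_96_basis)
```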

In practice that did not seem to be the case back then, but it is again all somewhat irrelevant to the physical image itself.

Hi Amber,

First, re the feature request of adding a 96 DPI option between the 72 and 150 DPI ones: you are correct that settings over 96 DPI provide a visible improvement; I was only talking about physical e-book readers, up to an iPad Pro 11". I should have mentioned this. My pics are infocharts: some of them are slides exported in hi-res 300 DPI JPEG from PowerPoint (using a registry hack), the others created in InDesign and exported as JPEG at 300 DPI.

Re the size, I use only .jpg files that are dimensioned for print, so 300 DPI, because I compile the PDF file for print directly from Scrivener. It’s these print-quality images that I want to rescale using the option when I compile the EPUB file. The Mac does that great; the PC doesn’t.

The 10MB size I provided is only an indication of the size of the whole ebook file once compiled, not an indication of the picture size. There are a few dozen 300 DPI JPEGs in the book (many of them in B&W), and the whole ebook file with no compression/rescaling applied is around 10MB.

It remains 10MB when the rescale option is checked on PC, while it becomes 1.3MB when the option is checked on the Mac.

I don’t have the 1x1px issue you experience, but it could be because I use JPEG, not PNG.


Okay, thanks for the clarification on image size. I was going to say, if individual images were 10MB, then resizing them to better suit their purpose would be a start!

Well, I am still getting 1 x 1 images when testing a 300 DPI JPG set to display at around 300 pts wide in print. I might need a more technical reproduction in the form of sample data. Would it be possible to provide a small test project with one image that does not downscale, but also doesn’t drop to 1 x 1? This could be sent via PM to me if you can’t post the sample image publicly.

To circle back around though, on the whole if I were doing what you are doing, I would have a whole second set of images specifically for ebook publishing vs print. Having the compiler hack them down crudely isn’t the best in terms of quality, as it does no post-resampling sharpening and will recompress the JPG. What I’m getting at is that I’ve always viewed this setting in the compiler as more of a proofing-phase tool than a production tool. Plus you get full control over their scale rather than whatever we provide in a dropdown. This is just general advice though, not really related to the bug report.
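If it helps, here is a minimal sketch of what pre-producing an ebook-specific image might look like outside the compiler, assuming Pillow is available (pip install Pillow); the file names, target width, and JPEG quality are all hypothetical:

```python
from PIL import Image, ImageFilter

TARGET_WIDTH = 800  # assumed ebook display width in pixels

img = Image.open("infograph_print.jpg")
scale = TARGET_WIDTH / img.width
resized = img.resize(
    (TARGET_WIDTH, round(img.height * scale)),
    Image.Resampling.LANCZOS,  # high-quality downscaling filter
)
# Post-resampling sharpening, which a crude compiler rescale skips
sharpened = resized.filter(ImageFilter.UnsharpMask(radius=1, percent=80))
sharpened.save("infograph_ebook.jpg", quality=85, optimize=True)
```

The point being that you choose the resampling filter, sharpening amount, and recompression quality yourself, rather than taking whatever the compiler applies.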

I don’t have the time to create a test project at the moment, but I’ll do this when the devs are ready and willing to fix all the other issues in the PC compiler. There is really a lot more than that to fix. I’ve reported some of them but was told that the current beta wasn’t focusing on the compiler issues, so I’m ready when they are, I’m keeping a list that I update almost daily. I mentioned this option because it was fairly obvious and was also missing in the Mac version.

I’ll send you by PM an infograph which is around 550KB and only 240 DPI after cropping. It’s one of the more complex ones, where you do get compression artifacts at 72 DPI that almost disappear at 150 DPI, so I would expect them to be considerably reduced at 96 DPI.

I agree with you re the crudeness of the rescale/recompression provided with this option, but again, I’m not publishing art books or photo-heavy volumes. I’m publishing non-fiction: mostly text with some super simple text-based PowerPoint slides and a few basic infographs, also mostly text-based.

I can assure you that at 96-150 DPI they look absolutely fine and don’t warrant the hassle of creating separate versions for ebooks, then having to replace all the paths for each compile version… I’m really trying to simplify my process here, while keeping decent quality and an optimised file size to reduce costs.

I went the InDesign way for print and Scrivener for ebook (with ebook-specific images) for the first volume, and it was too much of a hassle compared to the benefits (very minor for my type of books). I published volume 2 (no illustrations) in Scrivener only, for both print and ebook, and I plan to do the same for the third volume, which does have a few dozen simple slides/infographs. Nothing fancy or particularly difficult to handle (no photographs or designs with intricate detail that would need sharpening or any kind of sophisticated post-processing).

I’m checking the quality of the infographs in the ebook on a 4K monitor, and the quality is satisfactory at 150 DPI. That’s over the top for any kind of ebook reader, which is where these EPUBs will end up. If I make a PDF, it’s with the high-quality pics at 240-300 DPI.

Strange, I am still getting 1 x 1 images with the same pictures you sent. So it may have more to do with the project’s compile settings, or the context in which the images are used.

Well, I’ll get the 1 x 1 output problem written up anyway, so that can get fixed. Obviously that’s a big problem, as it makes the feature unusable.

There is really a lot more than that to fix. I’ve reported some of them but was told that the current beta wasn’t focusing on the compiler issues…

(a) That doesn’t mean we are no longer investigating and writing up bug reports, nor fixing larger bugs! (b) That beta is specifically for testing the adoption of the new coding framework (which might involve the compiler). If we opened up the beta test itself to all bugs historic and present then it would take years to finish it, if ever, because bugs never stop being found. It is logical to set a scope for a project so that it can be completed.


What I meant is that I’ll spend some time putting another test project together when the devs are ready to focus on the compiler. It doesn’t seem to be a priority at the moment, which is fine by me; I just compile with the Mac version. The compiler in the PC version is simply not usable in its current state: too many issues.

Understood! I meant to convey that this might be fixed sooner rather than later though, given the feature isn’t working at all (one way or another).

I don’t know if we’ll ever solely focus on the compiler, though. We tend to prioritise based on other factors: a reproducible crash should always be fixed, for example, whether it is in the compiler or not. We just recently fixed a crash when searching for text in Snapshots under specific conditions, a bug that can be reproduced in prior builds as well.

No worries though, when you have time we’ll have a look at it!


Don’t turn things around. :slight_smile: I can find the time to help, but I’m not willing to waste it until the devs are available to spend some serious time on the compiler as a whole.

I’ve already sent a test project with 4 or 5 bugs and was told by support that it wasn’t the time, so I’m not wasting any more time helping resolve individual compiler bugs until there is a will to allocate resources to improve it.

Here is my list of bugs/missing features in the PC compiler as it stands currently:

BUGS IN THE PC COMPILER

MISSING FEATURES IN THE PC COMPILER

  • It doesn’t handle starting a section on a recto/verso page
  • The title page can’t be selected as it can be in the Mac version
  • Since 3.3, the Mac version also produces PDFs that are compatible with Ingram; before that (and in the PC version, I assume) we had to convert to a compatible format manually.

I’ve already reported half these issues in a support ticket as indicated above, and the others in separate threads, but as they are out of the scope of the current beta I was told that now wasn’t the time.

To be frank, the gap is so big, especially if you add the missing features, that I’ve given up on the compiler in the PC version for now.

So until/unless this is prioritized and there is a will to bring it to at least where it was in 2019 on the Mac, there isn’t much point wasting time on one issue or another.

I’m available whenever you guys are ready to dedicate serious time to making the PC compiler work. When there is such a will, I’ll put together another test project showing all the issues and help reproduce them.

In the meantime, I just have to go to the Mac version whenever I need to compile, which isn’t too often. I’m not in a hurry, just disappointed that I still have to keep the Mac version for half the workflow after all these years.

I understand the difficulties and the differences between the platforms, which is why I’ll wait patiently until there is a will to bridge as much of that gap as possible. :slight_smile:

I’m confused, but okay! :smiley:

Sorry to confuse you. :slight_smile:

What I mean is I’m ready to make the time to compile all these issues and help in reproducing them if there is a will to tackle them all on the development side.

Unless/until that happens, I’m not willing to waste time helping to reproduce each individual one, as the PC compiler is not usable – to me – in its current state, and my first attempt at helping to bridge the gap with the Mac compiler was rejected by support, as the compiler is not a current priority of the Windows devs.

What’s confusing about this? If there is still any confusion, please let me know more precisely what it is, because it’s pretty clear to me.

When/if the devs are ready/willing to tackle the PC compiler and make it a priority to bring it up to speed to at least what the Mac compiler was in 2019, I’ll be happy to dedicate some time to help.

Until then, I’ll use the Mac compiler as the PC compiler is not currently usable for my needs, and won’t be until all the issues are resolved and at least some of the missing features are implemented.

Clearer? :slight_smile: