Interface unusable on 4K monitor - menus too small (macOS)

That’s not how user interface rendering works.
Signed: a user interface developer.

macOS does its own thing.

Standard resolution as per system default: [screenshot]

Native resolution: [screenshot]

Both screenshots list 5120x2880 pixels at 72 and 144 pixels/inch respectively.
But: When I drag a frame for a screenshot, dimensions show 2560x1440 max while dragging.

Fair enough, fvsch. Explain what I am misunderstanding.

It’s hard to explain briefly because there can be many layers, and it also depends on how an application is developed, what graphics APIs it uses, and how it uses them. I don’t have access to PhotoLab’s source code to check what they do on macOS, so I can’t describe that exactly. Also, it looks like they might be using C# and .NET SDKs, which I’m not familiar with at all.

But for general purposes: all modern operating systems use some sort of “system pixel” or “system point” unit which is virtual and independent of the actual display’s pixels (“physical pixels”). Operating systems, or individual applications, will often render graphics at a size that is a multiple of their “system pixel” dimensions, in order to produce an image that matches the screen’s physical pixel dimensions (or sometimes a higher-resolution image that gets scaled down by the OS).
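
To make that concrete, here’s a minimal sketch in TypeScript, using the browser as the example environment because it exposes this mapping directly (on the web, “CSS pixels” play the role of system pixels, and `window.devicePixelRatio` is the scaling factor). The numbers are illustrative, not specific to PhotoLab:

```ts
// "CSS pixels" are the web's system/virtual pixels;
// window.devicePixelRatio is the scaling factor to physical pixels.
const logicalWidth = 1440; // width of something in system (CSS) pixels

const scale = window.devicePixelRatio; // e.g. 2 on a Retina display, 1 on a standard one

// How many physical pixels will actually be painted for that width:
const physicalWidth = logicalWidth * scale; // 2880 at 2x, 1440 at 1x

console.log(`${logicalWidth} system px → ${physicalWidth} device px at ${scale}x`);
```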

Also there are differences between UI rendering (rendering backgrounds and borders and buttons), bitmap renderings (icons, images) and text rendering (which often defers to the OS’s text rendering APIs but not always). So the devil can be in those details. :stuck_out_tongue:

But let’s take an example with a high resolution display.

  • My MacBook Air’s display has a native resolution of 2560x1600 “physical pixels”
  • Out of the box it comes configured with a virtual resolution of 1440x900 “system pixels”
  • Which means that macOS will render an image that is double that, i.e. 2880x1800, then scale it down to 2560x1600. You’d think that scaling down would make things a bit blurry, but in practice it doesn’t seem to do that (at least not noticeably without pixel-peeping a screenshot).
  • Personally I configure that screen to use a virtual resolution of 1280x800 “system pixels”, which is exactly half the “physical pixels” resolution, so that the OS can directly produce a 2560x1600 image and not have to do a second downscaling pass. Plus that way the text gets a bit bigger, and my eyes like that (it’s a 13" display, and 1440x900 is a bit too much on 13", at least for my eyes). The arithmetic is sketched in code right after this list.
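
Here’s that arithmetic as a small sketch (the values come straight from the bullets above):

```ts
const physical = { w: 2560, h: 1600 }; // the panel's native resolution

// Default config: 1440x900 system pixels, rendered at 2x, then downscaled.
const defaultVirtual = { w: 1440, h: 900 };
const rendered = { w: defaultVirtual.w * 2, h: defaultVirtual.h * 2 }; // 2880x1800
const downscale = physical.w / rendered.w; // ≈ 0.889: a second scaling pass is needed

// "Exactly half" config: 1280x800 system pixels at 2x map 1:1 to the panel,
// so the 2x image is already 2560x1600 and no downscaling pass is needed.
const halfVirtual = { w: physical.w / 2, h: physical.h / 2 }; // 1280x800

console.log(rendered, downscale, halfVirtual);
```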

To better explain what “OS or application will produce an image that is double the virtual size” means in practice, we can look at text rendering. Let’s say we’re rendering a single letter with a font size of 16 “system pixels”. Maybe a capital A. At a font size of 16 pixels, let’s say that its visible height will be 10px and its visible width will be 6px (note that this depends on the font we’re using, the font weight, etc.).

So we’re painting a capital A with pixels in a 6x10 rectangle. On a standard resolution display (not Retina, not high resolution, not 4k or 5k, not a smartphone screen, because almost all smartphones have high or very-high resolution displays), that means that we have a grid of 60 pixels to fill with black or white pixels, or different shades of gray, to paint a readable capital A. That’s doable, but it’s not a lot to work with.

On a high resolution display, with the OS configured to use that display at a 2x or 200% scaling factor, our software will still define the text size as “16px”, but our capital A will be painted in a 12x20 rectangle, which gives us a grid of 240 pixels to work with (four times as many as on a standard resolution display), letting us draw our letter with much finer details, and display that drawing as-is on the screen. Usually that leads to a much nicer or more readable rendering of text, especially with fonts that are designed to use thin strokes and/or a lot of fine details (such as serifs and variations of stroke thickness).
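
The pixel budget in that example is easy to compute; here’s a quick sketch (the 6x10 glyph box is the hypothetical one from the paragraph above):

```ts
// Physical pixels available to paint the same glyph at different scaling factors.
function glyphGrid(logicalW: number, logicalH: number, scale: number) {
  const w = logicalW * scale;
  const h = logicalH * scale;
  return { w, h, pixels: w * h };
}

console.log(glyphGrid(6, 10, 1)); // { w: 6,  h: 10, pixels: 60 }  standard display
console.log(glyphGrid(6, 10, 2)); // { w: 12, h: 20, pixels: 240 } 2x display
```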

If you use a high resolution display and a properly configured OS, you should have nice visual results. Especially on Retina, 4k and 5k displays, you’re not wasting pixels by setting the OS virtual resolution (or, on Windows, UI scaling factor) to something with a scaling factor of 1.5x (150%), 1.75x (175%), 2x (200%) or higher. Please configure your OS to use a virtual resolution or scaling factor that makes the system’s menus and text comfortable to read for you, and don’t worry about wasting pixels (there’s no such thing).

Some exceptions may apply:

  1. Some applications will sometimes require specific sizes or scaling factors to function properly. That’s mostly true with video games, but thankfully video games will usually render as full-screen applications and will take over the resolution of the screen. So you can worry about screen resolution for that game only, in the game’s menu, and not worry about how that interacts with the OS configuration (it usually doesn’t).
  2. Some applications use built-in bitmap graphics to render things like backgrounds or icons. If the application doesn’t ship with several versions of the same images (at the very least a 1x and a 2x version, but 1.5x and 3x versions can be useful too, to render properly at 150% and 300% scaling factors), you can end up with icons and other graphics that look blurry or blocky next to text that is rendered at high resolution. Thankfully that problem is becoming more and more rare, either because apps ship with several versions of their images or because they switched to using vector graphics for icons.
  3. Applications which render complex graphic elements like a canvas with bitmap or 3D graphics (Photoshop, Lightroom, PhotoLab, Blender etc.) usually decide on the size of those elements based on information relayed by the operating system. If the OS says “your window is 1000x800, and the current scaling factor is 200%”, the application will do its best to generate a 2000x1600 canvas or bitmap. But some older applications may not be updated to take scaling factors (also called “device pixel ratio”, among other technical names) into account, so in this situation they could produce a 1000x800 image that the OS will upscale to 2000x1600 pixels with a fast linear interpolation algorithm; that image will end up looking blurry or pixelated on your high resolution screen (compared to what other apps are able to render on the same screen). In PhotoLab’s case, I don’t know how well they make use of high resolution displays when rendering your photo or photo thumbnails; I haven’t checked.
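
For item 3, the web platform again offers a compact illustration of what “taking the scaling factor into account” looks like. This is a sketch of the common HiDPI canvas pattern, not what PhotoLab actually does:

```ts
// Size the canvas's backing bitmap in physical pixels, but keep its
// layout size in system (CSS) pixels.
const canvas = document.querySelector('canvas')!;
const ctx = canvas.getContext('2d')!;

const cssWidth = 1000; // "your window is 1000x800" in system pixels
const cssHeight = 800;
const dpr = window.devicePixelRatio; // "the current scaling factor is 200%" → 2

canvas.style.width = `${cssWidth}px`;
canvas.style.height = `${cssHeight}px`;
canvas.width = cssWidth * dpr;   // 2000 physical pixels at 2x
canvas.height = cssHeight * dpr; // 1600 physical pixels at 2x
ctx.scale(dpr, dpr); // keep drawing code in system-pixel coordinates

// An app that skips this and renders a 1000x800 bitmap gets the blurry,
// upscaled result described above.
```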

Both screenshots list 5120x2880 pixels at 72 and 144 pixels/inch respectively.
But: When I drag a frame for a screenshot, dimensions show 2560x1440 max while dragging.

What happens here:

  1. The OS is always producing an image that is 5120x2880 pixels to render on that specific screen, either because that screen’s physical resolution is 5120x2880 or because it’s a bit lower than that and the OS produces a big image that it will scale down (it’s easier to render everything at a clean 2x, 2.5x or 3x and then scale the result down a bit than to render graphical elements at odd factors like 1.9034x or 2.876x).
  2. The 72ppi and 144ppi information is just metadata, not pixel data, and is completely bogus. You can disregard that part. (It’s factually false, uses conventional values instead of factually correct values, and in this case it doesn’t have any practical benefit.) (A long time ago I wrote an article in French on this topic, mostly focused on images for websites, but it applies here.)
  3. Regarding “When I drag a frame for a screenshot, dimensions show 2560x1440 max while dragging”, I’m not sure what object you’re dragging and what software is showing you dimensions, so it’s hard for me to say what it means. But it’s likely that the dimensions shown are using the virtual resolution (dimensions before any scaling factor is applied) and not showing you the final resolution you will get in the image sent to the display (the likely conversion is sketched after this list).
  4. If you pixel-peep those two screenshots, using original screenshots in PNG (I can only see scaled-down JPEG versions here), you will be able to see that graphical elements and text in the “standard resolution” (2x) image have much finer detail than the same graphical elements and text in the “native resolution” (1x) image. When using the “standard resolution”, you are not wasting pixels but using more pixels to render the same information, resulting in a nicer image with finer details (if your OS and software are doing it right).
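
For item 3, if that guess is right, the conversion on this 5K screen at a 2x “standard resolution” setting would be:

```ts
const dragOverlay = { w: 2560, h: 1440 }; // reported in system pixels while dragging
const scale = 2;                          // "standard resolution" on a 5120x2880 display

// Dimensions of the pixel data the saved screenshot should actually contain:
const captured = { w: dragOverlay.w * scale, h: dragOverlay.h * scale }; // 5120x2880

console.log(captured);
```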

Regarding the fear of “wasting pixels”, our experience with digital photographs and sensor resolution is a good comparison point.

As a thought experiment, let’s say you have a 20MP sensor and an infinite supply of birds you can fit in front of your camera, arranged in a neat grid. How many birds should you try to fit in the frame?

  • The top limit is 20 million birds, using one pixel for every bird. Of course it’ll look like a gray mush, so that’s too many birds.
  • Maybe 1 million birds? Well, each bird will get a canvas of 20 pixels, or a 4x5 rectangle… you have to be very skilled to draw a good bird shape in a 4x5 rectangle.
  • A thousand birds? Now each bird gets up to 20 thousand pixels, or roughly a 140x140 area. Now that’s doable. You could make an image with a thousand birds, and that could look pretty good. You won’t be able to see much detail, since a bird’s eye will be a couple pixels wide at best.
  • Maybe 20 birds? Each bird will have up to one million pixels to be rendered on the sensor, or a square of 1000x1000 pixels. That’s good enough to render a nice bird with some nice details, and we can fit 20 of those in the image! Nice. If the result you want is “20 nicely rendered birds in one shot”, that’s a perfect use of those pixels!
  • A single bird, filling most of the frame, so that you use up all those 20 million pixels to render the fine details of that single bird? Now that’s a great option too, and if that’s the image you want it’s a perfect use of those pixels.

It’s up to you to decide if you want more birds in your image, but smaller and less detailed birds; or fewer birds or even just one, with much finer details. And beyond pixel-level details, the visual effect of those options on the final composition is different too, of course.
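
Here’s the pixel-budget arithmetic behind that thought experiment, as a quick sketch:

```ts
// Pixel budget per bird for a 20MP sensor.
function pixelsPerBird(sensorPixels: number, birds: number) {
  const budget = sensorPixels / birds;
  const side = Math.floor(Math.sqrt(budget)); // side of a square area per bird
  return { budget, side };
}

const sensor = 20_000_000;
console.log(pixelsPerBird(sensor, 1_000_000)); // 20 px each, about a 4x5 rectangle
console.log(pixelsPerBird(sensor, 1_000));     // 20,000 px each, about 141x141
console.log(pixelsPerBird(sensor, 20));        // 1,000,000 px each, 1000x1000
```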

Rendering software UI on a Retina, 4k or 5k display is similar. With a modern operating system you can decide if you want to render more stuff at small or very small sizes, or less stuff at bigger sizes but with much more details.

Any operating system and application that handle scaling factors correctly will make good use of a Retina, 4k or 5k screen’s pixels, whether you set the scaling factor to 100% (probably way too small to be usable), 150% (renders more things on the screen, and might be usable if you don’t mind the small text), 200% (usually best for Retina/4k/5k) or more.
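
To see that tradeoff in numbers, here’s the effective system-pixel workspace a 3840x2160 (4k) display gives you at common scaling factors. This is a generic illustration, not specific to any application:

```ts
const native = { w: 3840, h: 2160 }; // a 4k panel's physical resolution

for (const factor of [1, 1.5, 2]) {
  const virtual = { w: native.w / factor, h: native.h / factor };
  console.log(`${factor * 100}% → ${virtual.w}x${virtual.h} system pixels`);
}
// 100% → 3840x2160 (tiny UI), 150% → 2560x1440, 200% → 1920x1080
```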

Things were sometimes wonky in the 2010s when Retina/4k/5k was new or rare, but now in the 2020s, with a recent version of Windows or macOS or even Linux and a high-resolution screen, you should be able to select whichever scaling factor gives you the most readable results and not worry about the rest.

Hope this helps!

Not much new in today’s update to 5.2.1. Only:
“Incorrect scaling of Nik Collection plugins at 200% on a Windows 17inch 4k display”.

Apparently every combination of screen resolution and size is now being fixed step by step. But this does not solve the problem fundamentally.
And on my laptop you still need a magnifying glass.

Me too, I still need the magnifying glass.