
The underlying logic for rendering "hinted" line borders and UI widgets is a lot simpler than for hinting arbitrary text. It's a matter of snapping a few key control points to the pixel grid, and making sure that key line widths take up integer numbers of pixels. Much of the complexity you point out only arises because we now insist on having physically sized rendering for "mixed-DPI" graphics, like a single window spanning both a low- and a high-resolution display. That's not necessarily a very sensible goal, and it's not something that would've been insisted on back when achieving "pixel perfect" rendering was in fact a major concern, regardless of display resolution.
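As a rough illustration of what "snapping a few key control points" means in practice, here is a minimal sketch. The function names (`snap_width`, `snap_rect`) are hypothetical, not from any real toolkit; the idea is just to round edge positions and line widths to whole device pixels at a given scale factor.

```python
# Hypothetical sketch of "hinting" a UI border by snapping it to the
# device pixel grid. snap_width and snap_rect are illustrative names only.

def snap_width(logical_width: float, scale: float) -> float:
    """Round a logical line width to a whole number of device pixels (min 1),
    then convert back to logical units."""
    device = max(1, round(logical_width * scale))
    return device / scale

def snap_rect(x: float, y: float, w: float, h: float, scale: float):
    """Snap a rectangle's edges to the device pixel grid."""
    left = round(x * scale) / scale
    top = round(y * scale) / scale
    right = round((x + w) * scale) / scale
    bottom = round((y + h) * scale) / scale
    return (left, top, right - left, bottom - top)

# A 1px border at 1.5x scale lands on exactly 2 device pixels:
print(snap_width(1.0, 1.5))  # 2 device px expressed in logical units
print(snap_rect(10.2, 10.7, 100.4, 50.1, 1.5))
```

This is simple per-shape, which is the point being made; the hard part, as discussed below, is doing it uniformly across a whole layout without making spacing uneven.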

A similar concern is the demand for arbitrary subpixel positioning of screen content, which basically only matters in the context of on-screen animations. Nobody really cares if an animation looks blurry, but it's rather more important for static content to look right. Trying to have one's cake and eat it too will always be harder than just focusing on what actually matters for good UX.



> The underlying logic for rendering "hinted" line borders and UI widgets is a lot simpler than for hinting arbitrary text. It's a matter of snapping a few key control points to the pixel grid, and making sure that key line widths take up integer numbers of pixels.

This is exactly what I was “hinting” at when I mentioned the difficulty of coming up with a universal function that works for everything. You can’t just snap some/all things to a pixel grid; the result would look absolutely terrible, because it would make lines and whitespace uneven. Even font autohinting, which does exist, is more sophisticated than simply aligning key control points to a pixel grid.

> Much of the complexity you point out only arises because we now insist on having physically sized rendering for "mixed-DPI" graphics, like a single window spanning both a low- and a high-resolution display. That's not necessarily a very sensible goal, and it's not something that would've been insisted on back when achieving "pixel perfect" rendering was in fact a major concern, regardless of display resolution.

It’s not. Even under Wayland, which supports this, an application only renders a given surface at one specific resolution at any given time. Nothing I’ve been talking about relates to splitting a window across screens with different DPIs.

> A similar concern is the demand for arbitrary subpixel positioning of screen content, that basically only matters in the context of on-screen animations. Nobody really cares if an animation looks blurry, but it's somewhat more important for static content to look right. Trying to have one's cake and eat it too will always be harder than just focusing on what's actually important for good UX.

If you scale a UI that was designed for 96 DPI to a screen of around 160 DPI, you already have fractional pixel positions. If you then snap to the pixel grid instead of rendering elements at subpixel positions, you get uneven, ugly-looking UI elements.
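The arithmetic makes the unevenness concrete. A sketch, using made-up numbers: ten separators spaced 10 logical pixels apart, scaled by 160/96 ≈ 1.667 and snapped to the device grid.

```python
# Illustration: evenly spaced separators become uneven once snapped to the
# pixel grid at a fractional scale factor. Positions here are made up.

scale = 160 / 96  # ~1.667 device pixels per logical pixel

logical_positions = [i * 10 for i in range(10)]          # 0, 10, 20, ...
snapped = [round(x * scale) for x in logical_positions]  # device-pixel grid
gaps = [b - a for a, b in zip(snapped, snapped[1:])]

print(gaps)  # alternates between 16- and 17-pixel gaps, not a uniform 16.67
```

Gaps that should all be identical come out as a mix of 16 and 17 device pixels, which is exactly the kind of visible unevenness being described.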

This unevenness is arguably more tolerable for text than for UI elements, but Microsoft actually took the approach of avoiding it for text regardless: to make text look cleaner, Microsoft UIs use more aggressive gridfitting, fitting each glyph to the pixel grid. This is exactly why old Windows UI scaling led to cut-off text and other text oddities; the grid fitting produced text with different logical widths when rendered at different resolutions!
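To see why per-glyph gridfitting changes logical widths, consider a sketch with made-up fractional glyph advances (these numbers are purely illustrative, not real font metrics): rounding each advance to whole device pixels and summing gives different logical totals at different scales.

```python
# Hypothetical illustration of the Windows-style problem: rounding each
# glyph advance to whole device pixels makes the same string measure
# differently at different scale factors. Advance values are made up.

advances = [6.4, 5.8, 7.1, 6.4, 3.2]  # fractional advances in logical px

def gridfit_width(advances, scale):
    """Round each scaled advance to whole device pixels, sum, and convert
    the total back to logical pixels."""
    return sum(round(a * scale) for a in advances) / scale

w_96dpi = gridfit_width(advances, 1.0)    # width at 96 DPI
w_120dpi = gridfit_width(advances, 1.25)  # width at 120 DPI
print(w_96dpi, w_120dpi)  # 28.0 vs 28.8: a label sized for one clips the other
```

A dialog laid out for the 96 DPI width will clip the same string at 120 DPI, which matches the cut-off-text symptom described above.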

You can’t just wish away subpixels. Numbers that just happen to be whole numbers are the real edge cases in a world with arbitrary scale factors.


> it would make lines and whitespace uneven

Are we talking about single-pixel rounding errors, or something else? The former are already practically undetectable at 1080p, and nearly so at 768p. Given a high standard of "pixel-perfect" rendering, there's basically zero reason to push resolution any higher!

Of course one can even make pure subpixel-based rendering (no fitting-to-pixels at all) look correct, by starting either from pure vectors or from a higher-resolution raster and then using a Lanczos-style filter to preserve perceived sharpness near the resolution limit of the display. This gets us as close as practicable to "pixel perfect" rendering, without distorting spatial positions to make them precisely fit a pixel grid.
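For reference, a Lanczos filter is a windowed sinc. A minimal one-dimensional sketch (the function names are illustrative; real renderers apply this separably over 2-D rasters):

```python
import math

# Minimal sketch of Lanczos resampling (a=3) on a 1-D signal, e.g. one
# image row. Names are illustrative, not from any real graphics library.

def lanczos(x: float, a: int = 3) -> float:
    """Lanczos windowed sinc: sinc(x) * sinc(x/a) for |x| < a, else 0."""
    if x == 0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def resample_1d(samples, new_len, a=3):
    """Resample a 1-D signal to new_len points with Lanczos weights."""
    n = len(samples)
    out = []
    for j in range(new_len):
        # Map the output index back into source coordinate space.
        x = (j + 0.5) * n / new_len - 0.5
        lo, hi = math.floor(x) - a + 1, math.floor(x) + a
        num = den = 0.0
        for i in range(lo, hi + 1):
            w = lanczos(x - i, a)
            s = samples[min(max(i, 0), n - 1)]  # clamp at the edges
            num += w * s
            den += w
        out.append(num / den)
    return out

row = [0.0] * 8 + [1.0] * 8   # a hard edge
print(resample_1d(row, 12))   # the edge stays sharp, with slight ringing
```

The negative lobes of the kernel are what preserve perceived sharpness at a hard edge, at the cost of some ringing, which is the trade-off that makes it suitable for near-"pixel perfect" downscaling.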



