I'm using OSX.
On app startup in windowed mode, I'm seeing every Pico-8 pixel as a square block of N×N pixels, where N is a number the program chooses cleverly so that N*128, plus a variable black border, fits nicely inside the window dimensions set by the "video_mode" setting in config.txt. No matter what size I set in the config (as long as it fits inside the OS desktop) the pixels are perfectly crisp squares.
And that is great!
I wish it would always work like that.
Things get muddled if I then manually resize the window: it seems to keep rendering into a buffer sized according to the config at load time, and then scales that buffer to fit the window.
It would be nice if that worked differently (e.g. resizing the buffer according to window size, then recalculating N to fit the new buffer size), but I'm not experiencing it as a problem, so I won't complain much.
I'm not even sure it didn't work that way in a previous version.
What I am experiencing as a problem is the full-screen display. It seems to also render into that same buffer, then scale that buffer to the full-screen resolution, and that generally creates some anti-aliasing between Pico pixels.
There is approximately a total of 1 possible size that looks crisp: the size must be equal to the vertical screen resolution. But this size can't ever work well in windowed mode, because OS chrome takes up some vertical space, so app windows have to be a bit smaller.
("Approximately" because, technically, using an exact multiple of that size also works, and I guess one could add pedantic footnotes about horizontal size and portrait monitors.)
So in my case, if I set video_mode in config.txt to anything larger than 852×852, the windowed mode is blurry, because the OS resizes the window to a size smaller than the buffer, so the buffer gets scaled.
But if I set it to (almost) anything other than 900×900, the full-screen looks blurry, because a buffer that's not 900 pixels is being displayed at 900.
There's no setting that lets me switch between them comfortably.
I imagine one quick and dirty fix might be to render the buffer onto the full screen unscaled - just leave the rest of the screen blank. But the ideal fix would probably be to resize the buffer and recalculate the N, either according to the actual screen resolution, or to a separate setting from config.txt.
It seems the current system already has integer scaling from Pico pixels to the buffer (with scale = max(1, min(video_mode_x, video_mode_y) >> 7) ), and then a second step of fractional scaling from the buffer to the screen (or window).
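If I understand that pipeline right, the math works out like this (a sketch in Python; the names and the second step are my own guesses, not actual Pico-8 internals):

```python
# Sketch of what I think the current two-step scaling does.
# All names here are my guesses, not actual Pico-8 code.

PICO = 128  # the Pico-8 display is 128x128

def effective_pixel_size(video_mode_x, video_mode_y, screen_size):
    # Step 1: integer scale from Pico pixels into the buffer
    scale = max(1, min(video_mode_x, video_mode_y) >> 7)
    buffer_size = min(video_mode_x, video_mode_y)
    # Step 2: fractional scale from the buffer to the fullscreen display
    stretch = screen_size / buffer_size
    # Effective on-screen size of one Pico pixel
    return scale * stretch

# My setup: 780x780 in config.txt, 900-pixel-tall screen
print(effective_pixel_size(780, 780, 900))  # ≈ 6.923 (= 6 + 12/13), hence the blur

# The one crisp case: buffer equal to the vertical resolution
print(effective_pixel_size(900, 900, 900))  # 7.0, no fractional scaling
```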
In theory, replacing all of that somehow with a single integer-scaling step would solve everything, but I doubt that's practical, because it sounds like a major restructuring of the pipeline.
Now, changing the second step from fractional-scaling to integer-scaling (or even just a locked 1:1 "scaling") would solve the problem of muddled pixels, though it wouldn't be perfect.
For example, in my case (900-pixel screen, usually set with video_mode at 780×780), that would give me perfect 6×6-pixel squares in fullscreen, with 66-pixel letterboxing above and below the entire Pico display. (6×128 + 66 + 66 = 900)
This would be a great improvement over what I get now when I switch to fullscreen, which ends up trying to map each Pico pixel to (I guess) 6¹²⁄₁₃×6¹²⁄₁₃ system pixels, which obviously causes anti-aliasing.
It would be even better if it could smartly increase to 7×7-pixel squares, with just 2-pixel letterboxing. (7×128 + 2 + 2 = 900)
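To sketch the "smart" version I'm imagining (again, just my own illustration of the idea, not anything from Pico-8's code): pick the largest whole N that fits the screen and letterbox the remainder.

```python
# Hypothetical integer-only fullscreen scaling: largest whole pixel size
# that fits, with the leftover split into letterbox borders.

PICO = 128  # the Pico-8 display is 128x128

def integer_fullscreen(screen_size):
    n = max(1, screen_size // PICO)            # on-screen size of one Pico pixel
    letterbox = (screen_size - n * PICO) // 2  # border above and below
    return n, letterbox

print(integer_fullscreen(900))  # (7, 2): 7×128 + 2 + 2 = 900
print(integer_fullscreen(852))  # (6, 42): what a chrome-shrunk window would get
```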